The dark art of testing in agile

***I am now back at Software Education, teaching and consulting around testing and agile. I’ve just recently rewritten our Agile Testing course (ICAgile accredited) and this is my blurb on the course before I teach it for the first time on the 7th of October (see http://www.SoftEd.com). Also, I haven’t written anything here in a long time, and being prompted by something else I sent to Lee Hawkins, I thought I would post this here as well***

Testing is like the dark arts. It hides in the shadows of projects, probing silently, mocked openly and looked down upon with disdain by those who think they know better. But do they?

Testing was seen like this even by testing consultancies, as they espoused the rhetoric that testing is simplistic, mechanical and artifact driven. The implication that ‘anyone can do testing’ was prominent. Fortunately, two communities began to challenge these ideas. The context driven community broke the fallacies of mechanical, simplistic testing, replacing them with human-based, skill-based, thought-provoking investigations of self, product and relationships. The second was the agile community, who challenged heavyweight documentation and rigid adherence to process in favour of experimentation with short feedback loops. With this came a more technical approach to testing, using tools to assist (though sometimes there is an overreliance on these tools).

Our agile testing course combines these two communities by building key skills around critical thinking, using heuristics to build solid testing models, and focusing on quicker feedback and leaner documentation. In turn, building these key skills helps testers become better equipped to understand the agile context. Testing in an agile context requires quick, critical, skilled thinking which, combined with some technical understanding, enables testers to answer this question – how do I add value to my team today?

Our agile testing course is very hands-on and experiential, and looks to increase the testing skills that help you become better in your role and add value. Testing is no longer in the shadows – it is front and centre, and it is our mission to help you become indispensable to your team.

The question for you, though, is this – how much better do you want to be as a tester?

KWST#3 is coming – 5/6 July 2013!

It’s that time of year again – Kiwi Workshop on Software Testing (KWST) #3 will again grace Wellington, New Zealand.

This year’s theme is …

“Lighting the way; Educating others and ourselves about software testing – (raising a new generation of thinking creative testers)”
And this promises to be an excellent peer conference! We have invited test leaders from throughout New Zealand and from Australia, including Anne Marie Charrett.

More details to follow, but many thanks go to …

  • The Association for Software Testing
  • Software Education
  • The KWST crew (Aaron, David, Katrina, Oliver and Rich)

Star West Software Testing Conference 2012

During the first week of October 2012, I will be presenting at Star West at the Disneyland Hotel, Anaheim, California on Using Agile Techniques to Manage Testing – Even on Non-Agile Projects (http://www.sqe.com/StarWest/Concurrent/Default.aspx?Date=10/4/2012#T19).

It’s going to be an exciting testing conference and I’m looking forward to meeting fellow testers at such a prestigious event. Scanning the speaker list, I already see testers such as Michael Bolton, Dawn Haynes and Rob Sabourin – leaders in our craft – and I’m looking forward to meeting them (again) and talking about, what else, testing!

No doubt there are many more attending who are not on the speakers list, and I’m looking forward to meeting you too. 🙂

See you there!

Learning from the frustration of test case debates

What is a test case?

The reason I ask is that I have recently been following (and commenting on) a question in the LinkedIn group Software Testing & Quality Assurance – ” hi guys do u think that creating Test Cases is must? According to me, creating Test Cases is just a waste of time rather we should utilize the time in doing testing. what is your opinion? ”

At first glance, I thought it would be relatively *easy* to pick apart this question and the ensuing replies. However, after reading through the comments, I immediately felt frustrated. Why?

Upon reflection, I noticed a couple of things.

First, it helps to view the comments from the start. I had missed the fact that there were something like 100 comments before I jumped in. Realising this would’ve helped save the frustration, because Griffin Jones said it from comment one:

@Pradeep – I forecast that this conversation will become confusing because:

a. people will be using unstated assumptions about what is a “Test Case”. Some people’s working definition will exclude other people’s working definition of the term. This “shallow agreement” problem will not become obvious until comment # 78.

And Griffin’s prophecy came to pass.

Which led to *the* problem:

Comments were roughly divided. The first group talked of *a test case as a set of inputs with expected results* – a tangible artifact. The second group tended towards seeing the test case as an instance of a test idea, and generally speaking it was this second group that constantly challenged the assertions of the first.

And then it dawned on me.

The second group appeared to be aligned with the context driven school of testing and, as such, realised that there were *a lot* of dangerous assertions in the comments made by the first group. For example:

Testcases ensures the tester does not over do his testing and makes sure when and at what stage of his testing he could exit and say that the testing is complete.

If we look at the above statement, a number of questions spring to mind. First of all, how does a test case ensure that a tester does not overdo his testing? What does it mean to overdo testing, and if testing is *overdone*, compared to what is it deemed overdone? If the commenter means ignoring the risk of testing something else, or finding information outside the scope of the test case, then *overdone* has potentially risky consequences for him or her: they have now jumped outside of the test case box and may find interesting information… tsk, tsk, as now they may not meet their test case execution target because they are THINKING about what they are doing as opposed to just doing *something*.

An engaged tester would be aware of their coverage and risk model and would seek out information that might challenge that model. Notice that the engaged tester does not complete a test just because they have ticked off all of the steps; otherwise we end up blindly following a script, and we’re checking, not testing. This highlights the issue of the commenter viewing a test case as a tangible item when in reality it is an abstraction. It is an idea (or collection of ideas), and *passing* a test case does not guarantee that the idea is finished with. Rather, a good tester will most likely extend that idea into many more ideas.

Of course, we could critically pull apart the rest of the comment and show the fallacies in the statement (for instance, how does finishing your test cases mean that your testing is complete? It could in some circumstances, but I suspect the commenter meant completing testing – full stop). There are a number of comments like this, and they all follow the same theme: we write test cases so that we can cover the requirements, and we have repeatable tests so that we can teach others, and because the v-model aligns with Saturn and Mercury in the house of Leo – so it must be good!

But I digress…

AND this was frustrating for me. It seemed that no matter how many times (and in how many different ways) the second group (let’s call them Team CDT) highlighted flaws in the first group’s arguments (let’s call them Team Factory), another equally inane comment would appear. It made me realise that (to paraphrase James Bach)…

If you are frustrated then it means that something is frustrating!

Realising this made the rest of the journey… well… more fun! I realised that I could not wilfully change anyone’s mind except my own. I realised that, regardless of what I shared, others are free to disagree. I realised that no matter how many times I pointed out a fallacy in someone’s argument, it’s up to them whether they take heed or not.

AND I realised that I could actually benefit from this and not let the emotion of frustration take hold.

How, you say?

By looking for like-minded individuals and engaging with them, knowing that I’m most likely to get a meaningful discussion in return. By practising pulling apart a comment and challenging someone’s assertions. By applying James Bach’s Huh? Really? So? heuristic. What was initially a frustration quickly became a learning experience.

While it’s galling to see many testers fall into Team Factory, I am heartened to see a number of testers critical of the *status quo* and willing to challenge it (as demonstrated by their replies to Team Factory comments). It is through challenging that we grow the craft into something stronger, more assertive and more critical overall.

Career advice from New Zealand

Two years ago I created Software Testers New Zealand (a Google group). It has taken a little while, but there have been some fantastic discussions, especially in recent months.

Yesterday a member of the group *ranted* (his words) about an *approved* job ad that was posted.

It has spawned an interesting discussion on what it takes for a tester to get a foot in the door, and it has morphed into learning about the industry and certification.

I consider the discussions pure gold – check it out and comment if you wish. I think it would be helpful to hear other testers’ thoughts, ideas and experiences from around the world!

Entaggle – a website built on reputation

Elisabeth Hendrickson has created a new website called Entaggle, which allows users to tag another user with a reputation tag. I think this is a brilliant idea, and in our testing community this may hold more *weight* than a traditional CV.

Check it out!

Mr T and the Art of Box Painting

It’s funny how one can take different media and apply them to whatever one wants to… in this case, software testing. I recently watched a World of Warcraft ad featuring Mr T from the A-Team days (http://www.youtube.com/watch?v=bqJE5TH5jhc).

Mr T created a new character, a Night Elf Mohawk – the ‘directors’ of the ad said that he couldn’t do that. In Mr T’s own way, he boldly announced that he was ‘handy with computers’ and ‘hacked his own Night Elf Mohawk.’

As with most things in software, the developer is looking for a solution to a problem. A tester (in this analogy, Mr T) is looking for a problem in the solution – in other words, looking outside of the box.

Being *bound* by specifications and scripts is what I mean by the box. Now, I don’t mean that I am anti-specification and anti-script (they may be valuable resources – oracles, if you will – in the right context), but sole reliance on them leads to the box being painted (see http://viscog.beckman.illinois.edu/flashmovie/20.php for an example of *box painting*. INSTRUCTIONS FOR THE CLIP: count the number of passes made by the team in white. Record the number of passes and continue reading… the next set of instructions is at the end of this post, but don’t go there yet!).

In the ad, Mr T is looking outside of the box. He is thinking outside of the bounds of the requirements.

Why?

If the software delivers as per the requirements, has it not passed?

No.

Outside of the *bounds* are the areas testers love to tread, because that is where we look for potential bugs. When we find bugs and report them, they are resolved in some way. As they are resolved, the quality of the product potentially increases.

I once worked on an application where the requirement for an input field (stated in the specification) said “truncate 32 chars”.

This was a Java-based, browser-hosted financial application.

A colleague and I started testing. We typed into the input field and, try as we might, we couldn’t type past 32 chars.

So we created a very large string (thousands of characters) and copied and pasted it into the same input field.

BANG!

CRASH!

DEAD!

The application fell over completely!

The developer had followed the spec and coded for it, but he had not catered for copy and paste (let alone a large string!).

It took the developers about an hour or two to resolve it.

In this case, we thought outside of the box – we dared to push beyond the realms of the spec. We tested for something that wasn’t considered, and this is an important consideration for testers: to question and challenge what is in front of us. Challenge what we have been given, and the value that we add as testers will be made manifest (i.e. bugs!).
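The kind of boundary probing we did can be sketched in a few lines. This is only an illustration, not the original application’s code: `truncate_input` is a hypothetical stand-in for the field’s server-side handler, and the probe lengths (one below, at, one above, and far beyond the limit) are the classic boundary values we were exercising when the copy-and-paste crash appeared.

```python
# Hypothetical sketch of the "truncate 32 chars" boundary checks.
# truncate_input stands in for the application's input handler;
# the real system was a Java browser app, so names here are assumptions.

LIMIT = 32

def truncate_input(text: str, limit: int = LIMIT) -> str:
    """Truncate input to `limit` characters, as the spec required."""
    return text[:limit]

# Probe around the stated boundary: below, at, and just above the limit.
for length in (LIMIT - 1, LIMIT, LIMIT + 1):
    assert len(truncate_input("a" * length)) <= LIMIT

# The UI stopped us typing past 32 chars, but a *pasted* string bypasses
# keystroke filtering entirely - so probe far beyond the boundary too.
huge_paste = "x" * 10_000
assert len(truncate_input(huge_paste)) == LIMIT
```

The point of the last probe is that a client-side guard (the field refusing keystrokes past 32) is not the same oracle as the back-end honouring the limit; the crash we found lived in exactly that gap.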

Happy hunting!

**INSTRUCTIONS from the video clip continued – what did you notice? Was there anything interesting going on? If you haven’t found anything, review the clip and defocus your vision – in other words, look outside of the box.**

Software test leadership is alive in New Zealand!

I’ve been lamenting the state of testing in New Zealand – or, more specifically, test leadership. Now, I’m not talking about the number of test leads or managers – I’m talking about leaders in our community.

I felt there weren’t very many leaders, with most testers here settling for a *just do my job* mentality.

Until last night.

Last night, Software Education held a customer evening, inviting customers to view Software Education’s new premises. I met some interesting people and had some great discussions, and then it dawned on me: I had already met some strong testing community leaders, but I had thought of them individually, not collectively, and there are more test leaders than I’d realised. Now, when I talk about community leadership, I’m talking about context driven, let’s-discuss-and-debate-and-better-our-craft types of leaders (and this is irrespective of whether these leaders are part of the ISTQB certification program or what have you).

And so what I would like to do is highlight these leaders as testers to watch, because in their own way they are helping the craft grow in New Zealand.

Farid Vaswani – Test manager at Auckland University, associate editor for Software Test Professional and implementor of SBTM at Auckland University.

Oliver Erlewein – Performance Tester/Test Manager at Datacom Wellington, in the context driven space, will debate and challenge the status quo. Weekend Testers Australia New Zealand facilitator.

Trevor Cuttris – Team Leader at IAG – involved in mentoring and upskilling testers in many different ways (at work, SIGiST groups etc). We had a good discussion around ET and SBTM.

Rob O’Connell – Assurity Consulting – very similar to Trevor. Lots of passion. Not willing to accept the status quo if it provides no value. Mentoring, upskilling, uplifting and highlighting the craft.

Katrina McNicholl – AMI Insurance – Christchurch based – passionate about the craft, about learning and about sharing ideas and thoughts on testing at the local level.

Tessa Benzie – AMI Insurance – Christchurch based – the same as Katrina – involved in wanting to better the testing craft at a local level.

John Lockhart – Webtest Auckland – context driven test automation, a *guru* with FitNesse – first met John through the AST BBST series of courses.

Matt Mansell – DIA – is involved in many different areas that result in testing being given a higher profile particularly in the Wellington market.

Honorable mention: Aaron Hodder, Shawn Hill (what an awesome presentation at STANZ 2010!), Christo Bence, Andrew Black, Sophia Hobman, Richard Robinson, Jonathon Wright.

Is this an exhaustive list? No.

Are these the only community leaders in New Zealand? No – but these are testers I’m tagging as ones who will have an impact on the testing community, whether locally or nationally, and who will help improve the state of our craft here in New Zealand.

Have I missed some testing leaders? Most likely – BUT I hope you come forward, I hope you stand up and I hope you begin to share your passion for testing with us all (conferences, SIGiST groups, STANZ, blogs, Twitter – the list is endless).

To those whom I’ve *outed* – it’s time to highlight the incredible talent we have here in testing, and it’s time to share the passion that you have with everyone and become… leaders.

A student of the craft

I recently had a Skype session with James Bach. One of the topics we discussed was gurus. I told James that I tell my classes that I am not a guru – I’m just the dude at the front.

James said …

[22/06/2010 2:20:51 p.m.] James Bach: I have a name for that
[22/06/2010 2:20:58 p.m.] Brian Osman: whats that?
[22/06/2010 2:21:43 p.m.] James Bach: I say I’m a “student of the craft” and I want to connect with other students. I may be a more advanced student in some ways, and sure, I have a lot of opinions, but I’m still a student. That’s the attitude.
[22/06/2010 2:22:47 p.m.] Brian Osman: I like that – actually i remember you asking Lee Copeland something similar at STANZs last year. Do you mind if I share that title also?
[22/06/2010 2:23:10 p.m.] James Bach: no problem

So it’s *official* – I am a student of the craft, constantly learning in some way.

Teamwork – The value of a good team

How a good test team can help you become a better tester!

I’ve been watching New Zealand’s Junior Tall Blacks play at the U19 FIBA World Championships (Auckland, New Zealand), and what struck me most was the level of teamwork shown by the team. This was one of the contributing factors behind the team doing so well – I mean, undersized and under-gunned, but plenty of heart, a good coach, sound systems AND generally good teamwork. What it lacked was experience. Even though this was the U19s, a number of teams had professional basketballers on their rosters, and that experience helped decide close games.

When I think back to the software testing teams I have been on, I immediately think about the varying degrees of teamwork. I’ve worked on a team that was very hierarchical; there was a definitive pecking order, and if you upset the head honcho (or in this case, honcho-ess), you quickly became ostracised. This was regardless of skill, knowledge or enthusiasm, and when you were out, you were out. It meant that the peripheral testing activities became harder to accomplish until you got back “in”. You had little or no peer support, and pleas (subtle or otherwise) to management were fruitless. It didn’t bother me too much (either I was naive or ignorant), but one tester I saw felt this ‘pressure’ and it affected her ability to test. Why? Because she was so busy dealing with and thinking about her social status that she couldn’t concentrate on testing (AND I mean thoughtful, critical testing).

I’ve also worked as a sole tester, in which case, generally speaking, I never had to contend with team politics. I guess I was seen more as a project peer – an individual, not some anonymous member of an anonymous team. I was real and approachable, and I guess this made it easier to build a rapport. This is my experience, but obviously it may not be typical. We have ‘control’ over ourselves, but not so much over our environments.

I have also been part of a team that was supportive and encouraging, and in essence allowed individuals to experiment, to try different things, to expand and explore. Because these positive team attributes were in place, the opportunities to collaborate, share and test greatly increased. Whereas in the hierarchical team knowledge was gold, and he or she who had the most gold won, the supportive team wasn’t worried about which individual had the most gold but about how much gold the team had collectively. Testing thrived because it was allowed to!

I have felt the value of good teamwork. It goes a long way to helping you get up in the morning and enjoy your day rather than dread it. Testing is a human activity, and it’s not just our interaction with the software but also with those we work with that helps us become better testers!