Insufficient Testing

Is a test team ‘liable’ if the product or software fails in some way? A recent post to the Software Testing Yahoo Groups forum brought this to light and got me thinking.

Jared Quinert, a proponent of exploratory testing from Australia, asked: “…a lack of testing – that insufficient testing requires some co-conspirator to cause a project to fail? Sadly, nothing stops people trying. Googling ‘”insufficient testing” project failure’ goes some way to demonstrating this.”

So I did. Try googling “insufficient testing” and see what comes up. There are, according to Google, 493,000 references to insufficient testing. This raises the question: what is insufficient testing?

I worked recently within a test group that was fixated on exhaustive testing – they literally wanted to test anything and everything (and with good reason, I might add: the situation, i.e. the context, surrounding them was NOT conducive to a co-operative approach. The harder the test group tried, the more they got blamed). It was hard to change that mindset because they had literally been burnt in the past. The cost was a huge overhead in time. This group was the opposite of insufficient testing, because they wanted to do everything.

However, it is a fact of life (well documented in numerous articles and blogs) that software testers cannot find everything. Software is complex (ask NASA), software can be daunting, and despite testing, things do go wrong – just ask the US Air Force:

(http://en.wikipedia.org/wiki/F-22_Raptor#Recent_developments )

“While attempting its first overseas deployment to the Kadena Air Base in Okinawa, Japan, on 11 February 2007, a group of six Raptors flying from Hickam AFB experienced multiple computer crashes coincident with their crossing of the 180th meridian of longitude (the International Date Line). The computer failures included at least navigation (completely lost) and communication. The fighters were able to return to Hawaii by following their tankers in good weather. The error was fixed within 48 hours and the F-22s continued their journey to Kadena”

Was this fault the result of insufficient testing, or of other factors? In my experience of failed projects, insufficient testing usually isn’t the cause; rather, it’s a lack of cohesion between the PM, vendor, BAs, developers and testers – each group assumed a territorial stance and placed its ego in the way.

As Gen. Colin Powell (ret.) says, “Never let your ego get so close to your position that when your position falls, your ego goes with it.”

Often there was some sort of conflict or barrier (declared or otherwise) that the leadership group was unable to break through. Disharmony in a project team will definitely achieve less with more.

So then is insufficient testing clearly a fault of the test team?

Sometimes it is.

If the team was not aligned to the project goals and was off pursuing its own agenda, then yes. However, if external influences are involved, then insufficient testing may be a symptom of a bigger problem.

Testing the Mindset

I recently read an interesting blog post entitled ‘How can I become a better tester’ – http://thoughtsonqa.blogspot.com/2007/12/how-can-i-become-better-tester.html

This was the comment I left…

Hi John,

Enjoyed your article. I agree – it’s mindset (quality), it’s information gathering (read, read and more reading… asking questions… being involved) and finding that mentor you can click with. Sometimes, as new testers, we can be blinded by the bias of that mentor, so I would add: ‘When you are ready, question yourself, your understanding and your toolbox, and then define yourself in the testing space’ – the trick is knowing when you are ready!
When I first started testing I was sure that testing was <b>ALL</b> about test scripts, test documents, writing documents and more documents, because that’s how it was. Today, my thoughts and process have changed dramatically compared to when I first started, but those earlier experiences shaped the thought processes I have today!
Great blog, John!

Which got me thinking – how do our experiences shape our thought processes and ‘steer’ us towards one method or another? For me, embracing a more exploratory approach was a logical evolution in the testing space. It allowed me to be creative yet structured at the same time, it increased my toolbox, and I gain immense satisfaction from this approach to testing. Why? Because when I was involved in the more traditional form of testing, I got to the point of wondering what the point of what I was doing was… in other words, I began to question myself and re-examine the ‘tools’ I had. That’s when I became open to different approaches to testing.

If I hadn’t been as receptive, or hadn’t been at that questioning stage, I doubt that exploratory testing would have taken off for me as it has!

So sometimes it comes down to timing, as well as being open to new ideas!

Teamwork

I was reading a book by master coach Pat Riley entitled The Winner Within. The book has been around for a number of years (1993), and in it Coach Riley discusses the philosophies that make up a successful team. Coach Riley knows what he’s talking about – he was at the helm of the 1980s LA Lakers World Championship basketball teams, the New York Knicks and the 2006 Miami Heat championship team.

Most of us belong to some sort of team. And while we may not always get on with others, we are somewhat reliant on others doing their job and playing their role. It’s no different in testing. While we may be perceived as negative (‘your job is to break the system’), we still play a vital part. If we are able to look at the big picture and synergise with the (project) team as a whole, then we are able to produce quite efficient results.

You see this all the time in sports, where a very talented team just can’t seem to get it together. I once coached a basketball team whose players, individually, were quite brilliant for their age group, but as a team they just couldn’t take the ‘I’ out of team. A championship team (or, for that matter, any good team) is one that synergises well – there is no ego, only healthy respect for each other. BAs working with testers and developers in harmony can produce something extraordinary (one government department I worked for operated exactly like that; another didn’t, because there was a wall between the testers, BAs and developers – literally and figuratively).

This teamwork is founded on trust. Trust drives us, it helps us, it builds confidence in ourselves and in others. It enhances who we are, and if we trust and are trusted then we are more likely to build up than tear down. A divided house cannot stand.

As Benjamin Franklin said at the signing of the Declaration of Independence, “We must all hang together, else we shall all hang separately.”

Test Strategy vs. Test Plan

Recently I posted a couple of replies on the Microsoft Software Test forum – http://msdn2.microsoft.com/en-us/testing/default.aspx – click the link entitled Software Testing Discussion Forum.

There are a couple of posts there on test strategies and test plans and what they are. Quite often the two terms are treated as interchangeable and used indiscriminately. A Test Strategy (according to ISTQB) is the document that describes the methods of testing – the how. Whether this document is pitched at an enterprise level or a project level is open to discussion, but essentially the strategy projects ideas over a longer period of time.

The Oxford Dictionary defines Strategy as “… a plan designed to achieve a particular long-term aim” and as such looks at the ‘bigger’ picture.

A Test Plan describes the ‘how’ at a lower level. IEEE 829 (whose revision, currently being voted on, is the subject of much debate) sets out the structure of what should be incorporated in the plan. It is comparable to tactics, which, again according to the Oxford Dictionary, are “…an action or strategy planned to achieve a specific end.”

Whether you use either or both terms or documents is up to you. As testers we sometimes become involved in paper wars and become document-heavy at the expense of efficiency and effectiveness. Whatever process you follow, the key for any test document is effective communication.

Bj Rollison, a Test Architect at Microsoft (http://blogs.msdn.com/imtesty/about.aspx), sums up what happens if we stop thinking about how and why we use these documents: “The only testers who stop thinking critically about tools and the application of tools we can use in the appropriate context are testers who have a limited understanding of the overall responsibility of testing, and know even less about the tools they are trying to use.”

Great quote – I totally agree. Whether it’s a Test Strategy or a Test Plan, it is a tool whose purpose is to serve us and guide us in our testing activities.

If you are responsible for producing these documents, please be critical in your thinking and look for the best way to communicate with everyone in your sphere!

In the 2006 NBA season, the Phoenix Suns’ strategy against the LA Lakers was, per Yahoo Sports: “…Phoenix’s strategy against the Lakers this season has been to contain everyone besides Kobe Bryant, and it’s worked. Bryant had 39 points in the first meeting and 37 in the second, but no other Lakers player scored more than 17 in the two games and the Suns simply outscored Los Angeles, averaging 114 points.”

There is your strategy; the plan is how each Suns player played defense against his man.

“Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.” – Sun Tzu

The Test Plan – Format vs. Content


The test plan is the framework for your testing activities. However, what is most important is the content, not the actual format (whether it’s IEEE or an organisational template).

The test plan should outline what is being tested, how it’s being tested, what you do with the artifacts generated during testing, and what happens if… If we become fixated on the format, then what we get is a process- and document-heavy testing activity that detracts from the actual testing!

How many times (I’ve seen many) has a test manager or lead spent hours, days or weeks on a plan, received feedback (usually minimal) and had it signed off, only for it to be consigned to the top drawer, never to see the light of day!

If anything, the test plan should be a flexible document, in the sense that the unexpected usually happens during testing. For example, suppose one of your plan’s exit criteria says that testing is complete when 100% of the test scripts have been signed off, but a blocking defect occurs that blocks 2% of those scripts (and will not be fixed in this release). Does this mean the plan has failed? Were we wrong to make our exit criteria so strict? Or does common sense tell us that, because the business has made the call to defer fixing the blocking defect to a future release, our test plan is still on track?
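To make that common-sense reading concrete, here is a minimal sketch in Python of how such an exit criterion might be evaluated once deferred-blocked scripts are agreed to be out of scope. The function name and parameters are purely illustrative, not from any standard:

```python
# Hypothetical sketch: an exit criterion of "100% of scripts signed off",
# where scripts blocked by a defect the business has deferred to a future
# release are treated as out of scope for this release.

def exit_criteria_met(total_scripts, signed_off, blocked_deferred):
    """Return True if every in-scope script has been signed off."""
    in_scope = total_scripts - blocked_deferred
    return in_scope > 0 and signed_off >= in_scope

# 100 scripts, 2 blocked by a deferred defect, 98 signed off:
print(exit_criteria_met(100, 98, 2))  # the plan is still on track
```

The point of the sketch is simply that the criterion is evaluated against the agreed scope, not the original script count, which is exactly the call the business made in the example above.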


My point is that adherence to structure and format is secondary to the who, how, what, why and where. The structure gives a framework that we adapt to suit our situation (CONTEXT).

OODA

Observation-Orientation-Decision-Action (OODA) – Colonel John Boyd (USAF) was the theorist behind the OODA loop. Essentially, the quicker a fighter pilot can execute the OODA loop, the greater the advantage that pilot has over their adversary.

How can we relate this then to bug hunting? Well, simply put…

Observation – observe our environment. What is happening when we spot a defect? Has it happened before; is it known? If it’s new, there may be other related bugs nearby (bug clustering). Be alert to anything that looks out of the ordinary or just doesn’t ‘feel right’.

Orientation – let’s get our bearings. How did we get that bug? What were the steps? Is it easy or hard to reproduce? What are the triggers?

Decision – what are we going to do with the bug? How do we prove our case to management and the developers? How severe is the defect? Are there any other bugs lurking around?

Action – prove that it exists, show that it exists, and make it visible to those responsible for deciding to do something about it!
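One pass through those four steps for a single suspicious behaviour could be sketched roughly as follows. Everything here – the class, the fields, the severity rule, the placeholder repro notes – is invented for illustration, not a real triage tool:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """What we noticed while testing (hypothetical structure)."""
    symptom: str
    seen_before: bool

def ooda_pass(obs: Observation) -> str:
    """One trip around the OODA loop for a single observed defect."""
    # Observe: is this something we already know about?
    if obs.seen_before:
        return "link to known defect"
    # Orient: work out the steps and triggers that reproduce it.
    repro_steps = ["step 1", "step 2"]  # placeholder repro notes
    # Decide: how severe is it, and how do we prove our case?
    severity = "high" if "crash" in obs.symptom else "medium"
    # Act: raise it and make it visible to the decision makers.
    return f"raise {severity}-severity defect with {len(repro_steps)} repro steps"

print(ooda_pass(Observation("intermittent crash on save", seen_before=False)))
```

The value of the loop is the ordering: we gather and organise evidence before deciding, and decide before acting, rather than jumping straight to raising a defect report.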

The key is observation, which can take many forms. If we are able to observe what we are testing (not just look at it, but observe it in depth), then we can draw the conclusions that satisfy our curiosity about the system at hand. With that curiosity we can figure out where the bug appears, decide the appropriate way forward and act on it. Testing, by nature, is curiosity unleashed on a grander scale, with some sort of framework around it.

What better job is there than satisfying one’s curiosity?