An Expression of Thought – Testing Ideas

Having no way as way, having no limitation as limitation.

I have become a full-time trainer working for Software Education in New Zealand (www.softed.com), delivering software testing courses. As I'm now sitting more in the academic space, as opposed to the practitioner space, I have been given the opportunity to meet many different people, with different backgrounds, looking to gather new ideas to use in their testing jobs.

Some people won't do much, if anything, with this new knowledge (it's human nature, after all, especially when the work pressure comes on), but some will. It is these testers who will hopefully feel inspired to share their thoughts and ideas with us all.

The internet has made us a very small, very connected global community, and each thought expressed or shared (particularly in testing) is a thought worth considering. Maybe you have discovered a new idea with regard to testing, or maybe you are reaffirming an existing idea (and adding your own wrapper around it).

To those testers I have met, and to anyone reading this who hasn't yet considered creating a blog: please reconsider. Your thoughts are valuable and your ideas are at least worthy of expression and/or comment.

I would love to *hear* them – please let me know if you do!

Happy blogging!

The Power of Two

I am currently watching and listening to colleagues perform Exploratory Testing simultaneously. Instead of one working the keyboard and the other gathering oracles and recording paths, they are testing the application at the same time on different PCs.

WOW! What a synergy! There is a flood of ideas, debates, discussions, agreements and the beginnings of their conclusions on this particular application.

The idea that Exploratory Testing is a cheap approach that finds only quick, superficial bugs is completely untrue... in the last 30 minutes I have seen the converse of that argument! I am watching a creative collaboration of minds. Coverage obtained? Yes (I know that application well enough to understand the coverage of functionality). Diverse? Yes. Depth? Yes. Superficial? NO.

I have been involved in Exploratory Test sessions where the creative juices just absolutely flowed. Those who oppose Exploratory Testing with spurious arguments like 'it's monkey testing with a million monkeys at the keyboard' miss the point (maybe it's because they want to quantify creativity but can't... somehow... fit the square peg... into the... round... hole).

The point of Exploratory Testing is that the mind is the key to testing, for it is the mind that allows inspiration and ideas to be generated and then expressed onto the 'canvas'. It's not 'touchy-feely', and to suggest otherwise may also suggest that the spark of creativity is missing from that person.

Otherwise, how do you explain music? How do you explain that feeling of ‘being in the zone’? How do you explain the artist that adds the touches to their work of art guided by their inner feelings?

Testing may be part of computer science, but that doesn't mean we need to conform to the discipline like robots. Testing doubles its effectiveness when it's coupled with intelligent thought processes.

I’ve just seen it!

The Art of Championing Bugs – The Bug Advocacy Course

Well, it's been a while since I last had the opportunity to post, and there are a couple of things that I will comment on in due course. The first of these is the BBST (Black Box Software Testing) course 200A – Bug Advocacy. This course is part of the Association for Software Testing's course curriculum (http://www.associationforsoftwaretesting.org/drupal/courses/schedule).

There are a number of positive aspects to the method of delivery and to the content contained within the course. First of all, you (as a student) are connected with software testers around the world (I have 'met' testers from Australia, India, New Zealand, Sweden and, of course, the United States) and learning starts straight away. This is because my testing context in New Zealand may differ from someone's in India and will differ from others' in the US. This is valuable because you are now connected to some real thought leaders and people who have different experiences grounded in practicality.

Second is the quality of the instructors – Professor Cem Kaner (a leader in the testing world) and Scott Barber (a guru in the performance testing sphere), coupled with other quality instructors such as Doug Hoffman, Pat McGee et al. (refer to the Association for Software Testing website for the course instructors and then google their names for context). The instructors have *been around* (excuse the term 8) ) and are willing to share their knowledge and understanding freely. They critique with validity, meaning that what they have to say has substance and credence (I would cite the many examples from the course, but that may detract from future opportunities of growth for the next crop of course participants), and that allows the student to actually learn.

I can't do that from a multi-choice tickbox with no feedback given.

Thirdly, the questions in the exams/quizzes are designed to be read thoroughly and applied to the context at hand. I struggled with this. I could say that because I haven't been to university and received a degree in anything (other than life!) my exam-taking skills are outdated... but that didn't matter. See, you don't need a degree to be successful in this course – just listening eyes, observant ears (yes, that's exactly what I mean) and a thinking mind. I struggled because I'm a jump-in-and-do person – stepping back and thinking things through comes second...

While I didn't overcome this tendency, I did make progress, and we as students got some great instructor-led and peer feedback, so learning was maximised through collaboration and guidance.

And lastly, working together as teammates in some course exercises (and this may be dependent on the course content) allowed us to utilise other testers' thoughts, points of view and experiences, together with our own ideas, to deliver a stronger, better-framed answer to some of the questions we were given.

Learning was therefore continual, learning was shared and learning was amplified. The AST courses are some of the best courses I have ever been on and I highly recommend them (...and they are free!)

Part of my email to Cem Kaner and Scott Barber captures my thoughts thus...

“…I have learnt a lot from this course and I feel that I've done better this time around compared to Foundations. Cem, the recent discussion on grading and call of questioning was like a big light bulb going off in my head when I read it... being someone who has not attended university, these ideas were 'foreign' to me but refreshingly interesting (I think my mind has 'expanded' during these two courses).

Scott, your insights and answers were ones that I learnt a lot from and was drawn to (as well as Jeff's, Dee's and Anne's) – you were like a stealth instructor/student... I'm sure that if you were my PM, I would flourish under your guidance! The discussion of Question 5 was gold!

Bug Advocacy and Foundations – I have learnt more, made more mistakes, kicked myself and got mad at the questions, but came away with a feeling of actually learning something and achieving it. I compare this to a certain certification that is now prevalent in the marketplace (well, in this marketplace). I sat the course and passed the multi-choice exam very, very well... but I don't remember a lot of it (except the V-model, which is now ingrained in my head despite the fact that I don't know if I've ever worked in a V-model environment) and I'm not sure if I learnt much.

That certificate is, at this stage, my commercial ticket (in this marketplace), but the BBST courses are, for me, where the real growth and learning have come.

Thank you both, thank you Doug and Pat for your time, and thank you to all the participants on the Bug Advocacy course!”

The Pursuit of the Unreplicable Bug

I've recently been testing a web-based application that produced a very interesting defect. It seemed that in one particular screen, a user (with the right combination of keystrokes and mouse clicks) could actually enter a supposedly uneditable error message field and type text! At first I wasn't able to repeat this behaviour, but with the words of a James Bach article ringing in my ears about "...ignoring unreproducible bugs at your peril", I logged it, waiting for the right opportunity to attack it.

I had already spent time looking for this 'bug' but figured that I would put it to one side and come back to it with fresh eyes and clearer thoughts. Interestingly enough, the developers caught hold of this bug and attempted to replicate it in their dev environments – I was even 'challenged', in a joking way, that if I couldn't reproduce the bug within 5 attempts then it didn't exist!! Oh, did the competitive urges come out then! (This was done in good spirits – we have a tremendous rapport between developers, testers and BAs.) However, it was another developer who found the key/mouse strokes that generated the bug, and we discovered that it was a validation error on that web page!
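The post doesn't include the application's code, but the underlying lesson generalises: a 'readonly' attribute in the browser is only a UI hint, so the server has to re-validate whatever comes back. A minimal sketch, with invented field names, might look like this:

    # Hypothetical sketch - field names and structure are invented for
    # illustration; the original application's code isn't shown in the post.
    READ_ONLY_FIELDS = {"error_message"}

    def validate_submission(submitted: dict, rendered: dict) -> list:
        """Reject client-side edits to fields the server rendered as read-only."""
        errors = []
        for field in READ_ONLY_FIELDS:
            if submitted.get(field) != rendered.get(field):
                # The right keystrokes (or a crafted request) can defeat a
                # browser-side 'readonly' attribute, so check it here too.
                errors.append("Field '%s' is not editable" % field)
        return errors

    # A doctored submission slips past the browser but not this check:
    print(validate_submission(
        {"error_message": "text typed by the tester"},  # what came back
        {"error_message": "Invalid account number"},    # what was rendered
    ))  # -> ["Field 'error_message' is not editable"]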

So what were the lessons learnt?

  1. Exploratory testing found this bug – some may say the discovery was a 'fluke', but scripted testing would never have picked this bug up.
  2. Fresh eyes and a clearer head can aid tremendously in trying to replicate a bug (especially one discovered late in the day!)
  3. Having a rapport with developers helps in solving bugs – personal agendas and politics are put to one side for the greater good of the 'team'.
  4. Working alongside developers generally breaks down communication barriers (perceived and physical).
  5. Unreproducible bugs ARE best ignored at one's own peril – in this case, finding this bug led to a tightening of field validation for the application.
  6. Bugs are bugs are bugs... testers find them, developers fix them, the business decides what they want done with them – never give up on trying to replicate bugs that are difficult to reproduce!
  7. Teamwork – I honestly believe the power of many can be greater than the power of one.
  8. It's tremendously satisfying finding a bug that is difficult to find and reproduce – the testing equivalent of a three-pointer!

AST and the BBST Foundations Course

It has been a while since my last post, and it's because I (along with 19 other esteemed test colleagues from around the world) have been 'attending' the Association for Software Testing online course – BBST Foundations – see http://www.associationforsoftwaretesting.org/drupal/courses

(as well as doing work, of course!)

I have 'met' testers from Australia, New Zealand, India and the United States, and to share in their knowledge has been superb! I have learnt a lot and I have been challenged mentally with regard to my views on testing.

The instructors were Scott Barber (http://www.perftestplus.com/) and Cem Kaner (http://www.kaner.com/), and their knowledge and willingness to help everyone learn were outstanding. I highly recommend this course (actually a series of courses). The following is an email that I wrote to Scott...

Hi Scott,

Thank you very much... it was a privilege to have learnt from the 'best' – from the participants and, of course, our esteemed instructors! Yes, it would be fine to post my name on the website. Again, as I've explained in my course evaluation – I have sat ISTQB and passed well, BUT this means more to me – it was more challenging, stimulating and has me rethinking the way I approach things (either as good reminders or changes to my testing habits). Thank you once again and I hope we all can stay in touch.
Kind Regards
Brian

Exhaustive Testing

The following is a response I sent to Kit, who commented on my blog post 'Insufficient Testing'...

Thanks for your comment. It's almost a catch-22 situation. One of the principles of testing (according to ISTQB) is that exhaustive testing is impossible. I agree, but the question is: how much do you test, and when do you know enough is enough?
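To make that impossibility concrete, a back-of-the-envelope count of input combinations for even a modest form gets out of hand immediately (the form and its field sizes below are invented purely for illustration):

    # Hypothetical arithmetic - the form and its field sizes are invented.
    fields = {
        "username": 1_000_000,  # distinct strings worth distinguishing
        "age": 150,
        "country": 250,
        "account_type": 4,
    }

    combinations = 1
    for size in fields.values():
        combinations *= size

    print(f"{combinations:,}")  # 150,000,000,000 combinations
    # At 1,000 automated checks per second, that is roughly 4.75 years of
    # non-stop execution - before considering sequences, timing or state.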

For a complex system, my thoughts would centre around risk and priorities as your starting point. The approach or method used would ultimately rest on the level of auditability you must provide to the business (they ultimately make the go/no-go decision). Personally, I would still use Exploratory Testing (if I was 'allowed' to) because, in my experience, I am more likely to find something of value than through scripts.
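My reply doesn't spell out a mechanism, but one simple way to turn 'risk and priorities' into a starting point is to rank areas by risk exposure (likelihood x impact). The areas and 1–5 scores below are invented for illustration:

    # Hypothetical ranking - the areas and their 1-5 scores are invented.
    areas = [
        ("payments", 4, 5),  # (name, likelihood, impact)
        ("reporting", 2, 2),
        ("login", 3, 5),
        ("admin screens", 2, 3),
    ]

    # Risk exposure = likelihood x impact; spend testing effort top-down.
    for name, likelihood, impact in sorted(
            areas, key=lambda a: a[1] * a[2], reverse=True):
        print(f"{name}: exposure {likelihood * impact}")
    # payments: 20, login: 15, admin screens: 6, reporting: 4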

However, in saying that, if the test team is involved right at the beginning of the project through walkthroughs, reviews or inspections (or any other type of review), then clarification and understanding will no doubt increase amongst the testing team with regard to the system.

After doing a Wikipedia search on Dr. Deming, one of his quotes is quite applicable to software testing: "Acceptable Defects: Rather than waste efforts on zero-defect goals, Dr. Deming stressed the importance of establishing a level of variation, or anomalies, acceptable to the recipient (or customer) in the next phase of a process. Often, some defects are quite acceptable, and efforts to remove all defects would be an excessive waste of time and money." It is known that major commercial software often ships with known (and unknown) defects – MS Windows, Firefox v2.0, etc. It is reasonable, then, for the business to decide how much of the 'risk' they wish to carry. Testers should provide the necessary information to enable the business to make that decision (good or bad).

At one New Zealand bank that I worked in, the test team I became involved with tried hard to exhaustively test everything in a very complex application. The upshot was that one release took almost 12 months to 'complete' testing (there were other factors involved – personnel, political and management), BUT I guarantee that they could not say that the application was bug-free. So I guess that leads to the second question – how much is enough?

James Bach says “When I exhausted the concerns of my internal critic (and external critics I asked to review my work), I decided it was good enough” (refer http://www.satisfice.com/articles/how_much.shtml).

NASA's software safety standard, NASA-STD-8719.13A, September 15, 1997 (http://satc.gsfc.nasa.gov/assure/nss8719_13.html), says in Section 3.4.5: "The test results shall be analyzed to verify that all safety requirements have been satisfied. The analysis shall also verify that all identified hazards have been eliminated or controlled to an acceptable level of risk. The results of the test safety analysis shall be provided to the ongoing system safety analysis activity."

What, then, is an acceptable level of risk, and acceptable to whom? Risk is defined in the same document as "...As it applies to safety, exposure to the chance of injury or loss. It is a function of the possible frequency of occurrence of the undesired event, of the potential severity of resulting consequences, and of the uncertainties associated with the frequency and severity." And under Section 1.4, Tailoring, it says: "....The tailoring effort shall include definition of the acceptable level of risk, which software is to be considered safety-critical, and whether the level of safety risk associated with the software requires formal safety certification."

Therefore, at the end of the day, it's a business decision taken within the context of the business. As testers, we can test complexity within the context of the project and report back our findings – it is then up to those charged with making the 'big' decisions to make them – or not!

Insufficient Testing

Is a test team 'liable' if the product/software fails in some way? A recent post to the Software Testing Yahoo! Groups forum brought this to light and got me thinking.

Jared Quinert – a proponent of ET from Australia – said: "...a lack of testing – that insufficient testing requires some co-conspirator to cause a project to fail? Sadly, nothing stops people trying. Googling '"insufficient testing" project failure' goes some way to demonstrating this."

So I did... I tried googling "insufficient testing" to see what comes up. There are, according to Google, 493,000 references to insufficient testing. This then begs the question: what is insufficient testing?

I worked recently within a test group that was fixated on exhaustive testing – they literally wanted to test everything and anything (and with good reason, I might add – the situation, i.e. the context, surrounding them was NOT conducive to a co-operative approach; the harder the test group tried, the more they got blamed). It was hard to change that mindset because they had literally been burnt in the past. What this meant was a huge overhead in terms of time. This group was the opposite of insufficient testing because they wanted to do everything.

However, it is a fact of life (this has been well documented in a number of articles, blogs, etc.) that software testers cannot find everything. Software is complex (ask NASA), software can be daunting and, despite testing, things do go wrong – just ask the US Air Force

(http://en.wikipedia.org/wiki/F-22_Raptor#Recent_developments )

“While attempting its first overseas deployment to the Kadena Air Base in Okinawa, Japan, on 11 February 2007, a group of six Raptors flying from Hickam AFB experienced multiple computer crashes coincident with their crossing of the 180th meridian of longitude (the International Date Line). The computer failures included at least navigation (completely lost) and communication. The fighters were able to return to Hawaii by following their tankers in good weather. The error was fixed within 48 hours and the F-22s continued their journey to Kadena”
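The F-22's actual avionics code has never been published, but this class of failure is classically caused by naive arithmetic at the ±180° boundary. A purely hypothetical sketch of the trap, and the boundary-value check that would have caught it:

    # Purely hypothetical illustration - not the F-22's actual code.
    def naive_delta(lon_from: float, lon_to: float) -> float:
        # Fine everywhere... except across the 180th meridian.
        return lon_to - lon_from

    def wrapped_delta(lon_from: float, lon_to: float) -> float:
        """Normalise a longitude difference into (-180, 180]."""
        d = (lon_to - lon_from + 180.0) % 360.0 - 180.0
        return 180.0 if d == -180.0 else d

    # Flying west from Hawaii towards Okinawa crosses the date line:
    # longitude jumps from -179.9 (west) to 179.9 (east).
    print(naive_delta(-179.9, 179.9))    # ~359.8: a bogus round-the-world hop
    print(wrapped_delta(-179.9, 179.9))  # ~-0.2: the small step actually flown
    # A boundary-value test at +/-180 would have flagged the naive version.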

Was this fault because of insufficient testing, or was it the result of other factors? In my experience of failed projects, insufficient testing usually isn't the cause; rather, it is a lack of cohesion between the PM, vendor, BAs, developers and testers – each group assuming a territorial stance and placing their ego in the way.

As Gen. Colin Powell (Ret.) says, "Never let your ego get so close to your position that when your position falls, your ego goes with it."

Often there was some sort of conflict or barrier (whether declared or otherwise) that the leadership group was unable to break through. Disharmony in a project team will definitely achieve less with more.

So then is insufficient testing clearly a fault of the test team?

Sometimes it is.

If the team was not aligned to the project goals and was off on its own agenda, then yes. However, if there are external influences involved, then insufficient testing may be a symptom of a bigger problem.