Good Exploratory Testing Practices webinar

Today (14th February 2011 @ 1200 hours – 12pm – New Zealand time) I will be presenting a webinar on Exploratory Testing practices that I use to help put guidance around my testing.

To register for the webinar click here.

Also check here to see how New Zealand time compares with the time in your part of the world.

Look forward to having you tune in!

Weeknight Testing #04 – an experience report

I had the privilege of joining Weeknight Testing (Twitter #WNTesting). This was my first session, as I am generally not available for weekend testing sessions. (By the way, WTANZ session #12 is on this weekend.)

Ok – so what happened during the Weeknight testing session?

I was about 5-10 minutes late waiting for my laptop to boot up, and when I did log in there was a flurry of chatter (the testing session is held via IM over Skype).

Darren McMillan was the facilitator, who had the challenge of keeping up with the threads and multiple chats while at the same time guiding direction in a subtle way (mainly by quoting interesting comments).

I found the *noise* so challenging that I went *dark* (to steal a Tony Bruce phrase :)) for a while – in other words, I didn’t contribute to the discussion(s) until I had read the mission and the requirements document and got used to the rhythm of the session. I found that while the first two are important, the rhythm is vital: once I had the rhythm of the conversation(s), I was able to respond to questions or threads in *real-time*.

So – what was it all about?

The mission was to *test* a set of requirements for a fictional company called CRM-R-US by “…reviewing and feeding back on the initial requirements to help identify any gaps, risks or potential issues in them.” The document was a first draft, at an early stage of requirements gathering. The product is a marketing tool centered around Twitter.

Some of the participants mentioned they were off mind mapping, so I followed suit – except I hand-drew mine. I identified four major sections in the document but focused initially on one – the section on the Campaign Engine.

My reasons were threefold:

  1. The lack of *detail*
  2. The section was based on a vision, and
  3. A comment stating ‘Our CEO Patricia Elmer’s liked Brian’s idea so much she’s now seeing this as the key selling point of this feature.’ The CEO is someone who matters and has major influence and power, so almost by default the section had, to me, high risk.

So, I began to ask some questions – a few at first and then, once I got the rhythm, a lot more. By that time there were 40 minutes to go and questions and comments were coming thick and fast. There was a great question from Sharath B – “What’s in it for me if I follow?” This made me pause: I had been thinking from a business user/call centre point of view, whereas Sharath’s question made me think about the target audience and why they would want to follow our fictional company on Twitter. For me, Sharath’s question made me look at the broader picture and defocus my thinking. From a testing point of view, using a defocusing strategy helps you look at the problem from a broader point of view. This was one of many fantastic ideas, thoughts and questions – the transcript will be posted soon (http://weekendtesting.com/archives/tag/weeknight-testing), from which you can see some of the great thoughts and ideas that went on during the session.

Lessons Learned for me…

  • Sometimes pairing *may not* be the best option – some great pairs of testers working on a mind map tool weren’t able to pair as effectively as they might well have liked.
  • Tour the product
  • Ask ‘What is NOT being said’
  • Alert – if somebody who matters (e.g. the CEO) is mentioned throughout the document, flag it as a potential risk, as they have influence/power/authority
  • Mind mapping is a good idea generator and framing tool – see the mind map from Lisa Crispin and Mohinder Khosla and the mind map from Rakesh Reddy, who were all involved in this session.
  • Focusing AND defocusing strategies work well together (focusing on a section to get specific, defocusing by looking at the bigger picture.)

These are some of the thoughts running through my head – I was able to connect with some really good thinking testers, which in turn has helped me a lot – all in the space of an hour or so!

If you haven’t tried weekend or weeknight testing, give it a go – it is a worthwhile investment!

Collaborating with thinking testers in India

Something is happening to testing!

A number of forward-thinking testers in India have gotten together and formed Weekend Testers. Already there have been a number of blog posts about what an innovative idea this is – and these posts, referrals and conference talks come from industry leaders such as James Bach and Michael Bolton, which is high praise indeed.

I’ve been communicating with Parimala Shankaraiah, one of the founders of Weekend Testers, on Exploratory Testing (she has even taken the time to post some great comments on the Google group Software Testers New Zealand). If Parimala is an example of the thinking and passion towards testing in the Weekend Testers community, then the Indian testing discipline is in good hands!

It does seem to me that there are great, inquisitive testers coming through every single day, and the world-wide web is one way to keep track of and collaborate with these powerful thinkers!

STANZ 2009 Wellington New Zealand

STANZ (Software Testing Australia New Zealand) is the premier software testing conference this side of the equator! The conference kicked off in Wellington, New Zealand with Lee Copeland, James Bach, Karen N Johnson, Julian Harty and Brian Bryson forming the international cast of speakers, along with a host of talented local speakers.

Monday started with a keynote from Lee Copeland in which he outlined the innovations he sees coming. I found him warm, engaging and very humble.

James Bach was next, and what impressed me the most was the way he *prowled* the side of the conference room before being introduced and then ran and jumped on stage! I was wondering a whole bunch of “what ifs” then! His talk, Becoming a Software Testing Expert, was vintage James Bach: he discussed the plays of Euripides and other Greek tragedians and related them back to software testing. The point from my perspective is that testing is neither purely technical nor engineering, but that we can learn from multiple areas and disciplines (history, philosophy, psychology etc.). James also discussed his Huh-Really-So heuristic, which he uses when someone makes a claim about something. Huh means I don’t understand, please explain what you mean. Really asks what other approaches are there, what else could happen, what other tools could we use. So is to dismantle the argument, or to determine whether or not the idea is worth pursuing (I hope I got this right! :))

Unfortunately I didn’t get to speak to either Lee or James one on one, but I did manage to talk to Karen N Johnson and Julian Harty. Karen’s workshop on test pairing was very interesting, but even more so was the discussion we had (myself, Karen and Sharon Robson) afterwards. Karen also gave a wonderful keynote on storytelling, which I think is an area in which we, as testers, can improve. We may test, but how do we say what we see? How do we know who to talk to and how to talk to them?

The last highlight for me from a presentation point of view was Julian Harty’s presentation on security testing, which I found extremely interesting. I came away from the talk with these ideas:

  • Finding a mentor
  • Using tools
  • Threat modelling
  • Continuous learning (including self study or self learning)

I managed to talk to Julian afterwards, and what surprised me was that security testing is about 1% of what he does as a tester. However, when he did do security testing, he taught himself and found ways to make himself knowledgeable and very effective.

STANZ was a blast! Great speakers, a great conference and, more importantly, great people. I managed to catch up with a host of new/old friends and it was awesome to share STANZ with them!

The Power of Two

I am currently watching and listening to colleagues perform Exploratory Testing simultaneously. Instead of one working the keyboard and the other gathering oracles and recording paths, they are testing the application at the same time on different PCs.

WOW! What a synergy! There is a flood of ideas, debates, discussions, agreements and the beginnings of their conclusions on this particular application.

The idea that Exploratory Testing is a cheap approach that finds only quick, superficial bugs is completely untrue – in the last 30 minutes I’ve seen the converse of that argument! I am watching a creative collaboration of minds. Coverage obtained – yes (I know the application well enough to understand the coverage of functionality). Diverse – yes. Depth – yes. Superficial – NO.

I have been involved in Exploratory Test sessions where the creative juices just absolutely flowed. Those who oppose Exploratory Testing with superfluous arguments like ‘it’s monkey testing with a million monkeys at the keyboard’ miss the point (maybe it’s because they want to quantify creativity but can’t …somehow…fit the square peg…into the…round..hole).

The point of Exploratory Testing is that the mind is the key to testing, for it is the mind that allows inspiration and ideas to be generated and therefore expressed onto the ‘canvas’. It’s not ‘touchy-feely’, and to claim otherwise may suggest that the spark of creativity is missing from that person.

Otherwise, how do you explain music? How do you explain that feeling of ‘being in the zone’? How do you explain the artist that adds the touches to their work of art guided by their inner feelings?

Testing may be part of computer science, but that doesn’t mean we need to conform to the discipline like robots. Testing doubles its effectiveness when it’s coupled with intelligent thought processes.

I’ve just seen it!

The Art of Championing Bugs – The Bug Advocacy Course

Well, it’s been a while since I’ve last had the opportunity to post, and there are a couple of things that I will comment on in due course. The first of these is the BBST (Black Box Software Testing) course 200A – Bug Advocacy. This course is part of the Association for Software Testing’s course curriculum (http://www.associationforsoftwaretesting.org/drupal/courses/schedule).

There are a number of positive aspects to the method of delivery and to the content contained within the course. First of all, you (as a student) are connected with software testers around the world (I have ‘met’ testers from Australia, India, New Zealand, Sweden and of course the United States) and learning starts straight away. This is because my testing context in New Zealand may differ from someone’s in India and will differ from others’ in the US. This is valuable because you are now connected to some real thought leaders and people who have different experiences grounded in practicality.

Second is the quality of the instructors – Professor Cem Kaner (a leader in the testing world) and Scott Barber (a guru in the performance testing sphere), coupled with other quality instructors such as Doug Hoffman, Pat McGee et al (refer to the Association for Software Testing website for the course instructors and then google their names for context). The instructors have *been around* (excuse the term 8) ) and are willing to share their knowledge and understanding freely. They critique with validity, meaning that what they have to say has substance and credence (I would cite the many examples from the course, but that may detract from future opportunities for growth for the next crop of course participants), and this allows the student to actually learn.

I can’t do that from a multi choice tickbox with no feedback given.

Thirdly, the questions in the exams/quizzes are designed to be read thoroughly and applied to the context at hand. I struggled with this. I could say that because I haven’t been to university and received a degree in anything (other than life!) my exam-taking skills are outdated… but that didn’t matter. See, you don’t need to have a degree to be successful in this course – just listening eyes, observant ears (yes, that’s exactly what I mean) and a thinking mind. I struggled because I’m a jump-in-and-do person – stepping back and thinking things through comes second…

While I didn’t overcome this tendency, I did make progress, and we as students got some great instructor-led and peer feedback, so learning was maximised through collaboration and guidance.

And lastly, working together as teammates in some course exercises (this may be dependent on the course content) allowed us to utilise other testers’ thoughts, points of view and experiences, together with our own ideas, to deliver a stronger, better-framed answer to some of the questions we were given.

Learning was therefore continual, learning was shared and learning was amplified. The AST courses are some of the best courses I have ever been on and I highly recommend them (…and they are free!)

Part of my email to Cem Kaner and Scott Barber captures my thoughts thus…

“…I have learnt a lot from this course and I feel that I’ve done better this time around compared to Foundations. Cem, the recent discussion on grading and the call of questioning was like a big light bulb going off in my head when I read it… being someone who has not attended university, these ideas were ‘foreign’ to me but refreshingly interesting (I think my mind has ‘expanded’ during these two courses).

Scott, your insights and answers were ones that I learnt a lot from and was drawn to (as well as Jeff’s, Dee’s and Anne’s) – you were like a stealth instructor/student… I’m sure that if you were my PM, I would flourish under your guidance! The discussion of Question 5 was gold!

Bug Advocacy and Foundations – I have learnt more, made more mistakes, kicked myself, got mad at the questions, but came away with a feeling of actually learning something and achieving it. I compare this to a certain certification that is now prevalent in the marketplace (well, in this marketplace). I sat the course and passed the multiple-choice exam very, very well… but I don’t remember a lot of it (except the V-model, which is now ingrained in my head despite the fact that I don’t know if I’ve ever worked in a V-model environment) and I’m not sure if I learnt much.

That certificate for me is, at this stage, my commercial ticket (in this marketplace) but the BBST courses are, for me, where the real growth and learning have come.

Thank you both, thank you Doug and Pat for your time, and thank you also to all the participants on the Bug Advocacy course!

The Pursuit of the Unreplicable Bug

I’ve recently been testing a web-based application that produced a very interesting defect. It seemed that in one particular screen, a user (with the right combination of key strokes and mouse clicks) could actually enter a supposedly uneditable error message field and enter text! At first I wasn’t able to repeat this behaviour but, with the words from a James Bach article ringing in my ears about “…ignoring unreproducible bugs at your peril”, I logged it, waiting for the right opportunity to attack it.

I had already spent time looking for this ‘bug’ but figured that I would put it to one side and come back to it with fresh eyes and clearer thoughts. Interestingly enough, the developers caught hold of this bug and attempted to replicate it in their dev environments – I was even ‘challenged’, in a joking way, that if I couldn’t reproduce the bug within 5 attempts then it didn’t exist!! Oh, did the competitive urges come out then! (This was done in good spirits – we have a tremendous rapport between developers, testers and BAs.) However, it was another developer who found the key/mouse strokes that generated the bug, and we discovered that it was a validation error on that web page!

So what were the lessons learnt?

  1. Exploratory testing found this bug – some may say that discovery was a ‘fluke’ but scripted testing would never have picked this bug up.
  2. Fresh eyes and a clearer head can aid tremendously in trying to replicate a bug (especially one discovered late in the day!)
  3. Having a rapport with developers helps in solving bugs – personal agendas and politics are put to one side for the greater good of the ‘team’
  4. Working alongside developers generally breaks down communication barriers (perceived and physical)
  5. Unreproducible bugs ARE ignored at one’s own peril – in this case, finding this bug led to a tightening of field validation for the application
  6. Bugs are bugs are bugs… testers find them, developers fix them, the business decides what it wants done with them – never give up on trying to replicate bugs that are difficult to reproduce!
  7. Teamwork – I honestly believe the power of many can be greater than the power of one
  8. It’s tremendously satisfying finding a bug that is difficult to find and reproduce – the testing equivalent of a three-pointer!

Testing the Mindset

I recently read an interesting blog post entitled ‘How can I become a better Tester’ – http://thoughtsonqa.blogspot.com/2007/12/how-can-i-become-better-tester.html

This was the comment I left…

Hi John,

Enjoyed your article. I agree – it’s mindset (quality), it’s information gathering (read, read and more reading… asking questions… being involved) and finding that mentor who you can click with. Sometimes, as a new tester, we can be blinded by the bias of that mentor, so I would add – ‘When you are ready, question yourself, your understanding, your toolbox, and then define yourself in the testing space’ – the trick is knowing when you are ready!
When I first started testing I was sure that testing was *ALL* about test scripts, test documents, writing documents and more documents, because that’s how it was. Today, my thoughts and processes have changed dramatically compared to when I first started testing, but those earlier experiences shaped my thought processes today!
Great blog John!

Which got me thinking – how do our experiences shape our thought processes and ‘steer’ us towards one method or another? For me, embracing a more Exploratory approach was a logical evolution in the testing space. It allowed me to be creative yet structured at the same time, it increased my toolbox, and I gain immense satisfaction from this approach to testing. Why? Because when I was involved in the more traditional form of testing, I got to the point where I wondered what the point was to what I was doing… in other words, I began to question myself and re-examine the ‘tools’ I had. That’s when I became open to different methods of testing.

If I wasn’t as receptive, or I wasn’t at that questioning stage, I doubt that Exploratory Testing would’ve taken off for me as it has!

So sometimes, it comes down to timing as well as being open to new ideas!