A student of the craft

I recently had a Skype session with James Bach. One of the topics we discussed was gurus. I said to James that I tell my classes that I am not a guru; I'm just the dude at the front.

James said …

[22/06/2010 2:20:51 p.m.] James Bach: I have a name for that
[22/06/2010 2:20:58 p.m.] Brian Osman: whats that?
[22/06/2010 2:21:43 p.m.] James Bach: I say I’m a “student of the craft” and I want to connect with other students. I may be a more advanced student in some ways, and sure, I have a lot of opinions, but I’m still a student. That’s the attitude.
[22/06/2010 2:22:47 p.m.] Brian Osman: I like that – actually i remember you asking Lee Copeland something similar at STANZs last year. Do you mind if I share that title also?
[22/06/2010 2:23:10 p.m.] James Bach: no problem

So it's *official* – I am a student of the craft, constantly learning in some way.

Teamwork – The value of a good team

How a good test team can help you become a better tester!

I’ve been watching New Zealand’s Junior Tall Blacks play at the U19 FIBA World Championships (Auckland, New Zealand), and what struck me most was the level of teamwork shown by the team. This was one of the contributing factors behind the team doing so well – undersized and under-gunned, but with plenty of heart, a good coach, sound systems AND generally good teamwork. What it lacked was experience. Even though this was the U19s, a number of teams had professional basketballers on their rosters, and that experience helped decide close games.

When I think back to the software testing teams I have been on, I immediately think about the varying degrees of teamwork. I’ve worked on a team that was very hierarchical; there was a definitive pecking order, and if you upset the head honcho (or in this case, honcho-ess), you quickly became ostracised. This was regardless of skill, knowledge or enthusiasm, and when you were out, you were out. It meant that the peripheral testing activities became harder to accomplish until you got back “in”. You had little or no peer support, and pleas (subtle or otherwise) to management were fruitless. It didn’t bother me too much (perhaps I was naive, or just ignorant), but one tester I saw felt this ‘pressure’ and it affected her ability to test. Why? Because she was so busy dealing with and thinking about her social status that she couldn’t concentrate on testing (AND I mean thoughtful, critical testing).

I’ve also worked as a sole tester, in which case, generally speaking, I never had to contend with team politics. I guess I was seen more as a project peer, an individual, and not some anonymous member of an anonymous team. I was real and approachable, and I guess this made it easier to build rapport. This is my experience, but obviously it may not be typical. We have ‘control’ over ourselves, but much less so over our environments.

I have also been part of a team that was supportive and encouraging, and in essence allowed individuals to experiment, to try different things, to expand and explore. And because these positive team attributes were in place, the opportunities to collaborate, share and test greatly increased. Whereas in the hierarchical team knowledge was gold, and whoever had the most gold won, the supportive team wasn’t worried about which individual had the most gold but about how much gold the team had collectively. Testing thrived because it was allowed to!

I have felt the value of good teamwork. It goes a long way towards helping you get up in the morning and enjoy your day rather than dread it. Testing is a human activity, and it’s not just our interaction with the software but also our interaction with those we work with that helps us become better testers!

The one minute speed dribble syndrome

Rob Rangi is a very good friend of mine who happens to coach the St Mary’s Senior Girls Basketball team based in Wellington, New Zealand. He is blogging about his coaching experiences here.

He recently blogged about a session entitled Taking the Positives from the Failures of Drills. Coach Rangi is installing the Read and React offense, an offense based on principles rather than set plays.

Unlike a set play where, for example, player O1 passes to player O2 after O2 is screened by player O3 (i.e. a structured offensive set), the Read and React is based on a group of principles in which the offensive players move depending on what is happening. This leads to an almost infinite number of possibilities in which the offense can move, react and score. There is no blinkered approach whereby player O1 must do this or that in order to satisfy the pattern, potentially missing a scoring opportunity.

To quote Coach Rick Torbett (the Read and React creator): “…And that’s exactly what the Read and React Offense does: it provides a framework that can be used as an offensive system to develop players, teams, and programs. Or, it can be an offense for one team, an offense that builds upon itself, with a counter for anything any defense can throw at it.” Notice that Coach Torbett talks about a framework. There is no mention of the words structured, pattern or set. In essence, the framework provides the heuristics (and the principles are collectively the oracle); the players apply these heuristics and adapt them during game time.

Coach Torbett also went on to review stats from his past season and found that 80% of his team’s points came from principled basketball. Only 20% came from set plays – and yet at practice, his team had spent 80% of its time on the plays producing only 20% of the total point production!

Exploratory Testing is like the Read and React offense. It allows a creative (heuristics-based), flexible (adaptable) approach (principles) to software testing that enables a tester to test a product with a broader mindset.

On the other side of the coin, writing test scripts (or, if you like, using set plays) is a very common testing practice that enables the tester to set out in advance the steps he or she will follow.

One of the dangers of following a script is that the tester becomes a verifier of the steps, as opposed to a finder of bugs, flaws or issues in the product.

And yet isn’t finding bugs the goal of testing?

Finding bugs is the value testers add to a project, because by finding bugs and getting them fixed, the project team begins to increase the reliability of the system, and potentially its quality as well.

This is nothing new. Glenford Myers, in his 1979 book ‘The Art of Software Testing’, gives his definition of testing:

 “Testing is the process of executing a program with the intent of finding errors.”

It does not say that testing should ensure that the product performs as specified, or some similar activity.

This is an important distinction – having the relevant mindset will steer us in the relevant direction. If we are looking to confirm that the product meets the specifications, then it is likely we will do just that, and miss bugs. If, however, we are looking for bugs, then we will find them (and along the way we will raise false alarms, or ‘non-bugs’, but isn’t that potentially better than missing some important bugs?).

Professor Cem Kaner (Florida Institute of Technology) talks about this in the course Bug Advocacy, and also in the slide set that extends his book Testing Computer Software. Prof. Kaner refers to what is called Signal Detection Theory. SDT quantifies the ability to discern between signal and noise, and is one way psychologists measure how decisions are made under conditions of uncertainty. When we are testing, there is nothing more uncertain than software we have just been given!

This, of course, can be influenced by the rules, limits or bias we set on ourselves or on the group of testers we look after. Wikipedia has an excellent example of this bias:

“Bias is the extent to which one response is more probable than another. That is, a receiver may be more likely to respond that a stimulus is present or more likely to respond that a stimulus is not present. Bias is independent of sensitivity. For example, if there is a penalty for either false alarms or misses, this may influence bias. If the stimulus is a bomber, then a miss (failing to detect the plane) may increase deaths, so a liberal bias is likely. In contrast, crying wolf (a false alarm) too often may make people less likely to respond, grounds for a conservative bias.”

In testing, if we influence testers to make sure the product conforms to requirements, then we steer the bias in that direction. If we influence the bias towards finding bugs, then that is what will happen, and as Glenford Myers has already pointed out, we begin to add value (potentially greater value than if we are only looking to confirm that the product meets requirements).
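To make the bias/sensitivity distinction concrete, here is a small sketch of the two standard SDT measures: sensitivity (d′) and response bias (the criterion c). The hit and false-alarm rates below are made-up illustrative numbers, not figures from Prof. Kaner’s course.

```python
# Signal Detection Theory sketch: sensitivity (d') and bias (criterion c)
# computed from a hit rate and a false-alarm rate.
from statistics import NormalDist

def sdt_measures(hit_rate: float, false_alarm_rate: float):
    """Return (d_prime, criterion_c) for the given rates (0 < rate < 1)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    # c < 0 is a liberal bias (report "signal" readily: more false alarms);
    # c > 0 is a conservative bias (report "signal" rarely: more misses).
    c = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, c

# A "bug-hunting" tester: many hits, some false alarms -> liberal bias (c < 0)
print(sdt_measures(0.90, 0.30))
# A "confirm-the-spec" tester: fewer hits, few false alarms -> conservative bias (c > 0)
print(sdt_measures(0.60, 0.05))
```

Note that two testers can have similar sensitivity yet very different biases: steering a team towards “confirm the requirements” pushes the criterion in the conservative direction (fewer false alarms, more missed bugs), while steering towards “find bugs” pushes it in the liberal direction.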

Coach Rangi ran into an interesting dilemma at one practice. He asked his team to run a full-court drill involving the speed dribble and Read and React principles. This is what happened…

Coach : “OK Ladies we’re going to do a minute using the Speed dribble. Read the ball and react accordingly”

Players : “Yes Coach!”

Point Guard brings the ball to the top from our 2-man fast break. Our Wing running the outside lane, get her wing position and almost without hesitation cuts to the basket. So I stop the drill and pull her up.

Coach : “OK, What was your Read?”
Player : “Ah that was the speed dribble coach”
Coach : “OK So you made the cut although X actually hadn’t started the speed
dribble towards you”
Player : “Yeah, I was anticipating her doing the speed dribble at me”
Coach : “Why would you be anticipating it? You should be reacting to what she
does? What would happen if she drove or wanted to make a pass?”
Player : “But she wouldn’t do that Coach”
Coach : “And why is that?”
Player : “Cause you said we were running Speed Dribbles for a minute”

What an interesting sequence! Look at how Coach sets, or influences, the drill’s bias (just like following a script). The team then interprets his instructions and follows the “script” to achieve the aim of the drill (“OK Ladies we’re going to do a minute using the Speed dribble. Read the ball and react accordingly”). The player interprets the instruction without question, becomes inflexible, and doesn’t adapt to what the point guard was actually doing.

Coach Rangi then went on to say…
“…So after practice, I reviewed our training and was able to determine that the drills suffered from having a pre-conceived outcome based on a known condition eg we’re doing pass and cut for a minute then speed dribble for a minute then natural pitch etc. We needed to remove the pre-conception and make it random forcing the Wing to work.”

Fantastic! Much as in software testing, where we have an expected result based on a known condition, our ability to analyse, think critically and discover bugs is reduced by the bias surrounding our testing (test scripts, or in basketball, set plays). We can become almost paralysed by following and completing each step in the script (been there, done that) and lose potential ideas, thoughts and creative ways in which to discover bugs (I have personally experienced both mindsets, as most testers probably have at one stage or another).

How then did Coach Rangi fix this…

“We now have a new drill called “You make it up 2-man break”. We run 2 minutes using Circle movement options only – Dribbler drives, Dribbler drives and pitches, Dribbler drives and pivot pass to Safety valve. Then we run another 2 minutes using the other options – Pass and Cut, the Speed and Power Dribble. We also instigated a rule that says the next pair to go cannot do the same move as the pair in front has just done ensuring a different option each time down the court.”

Coach Rangi then finishes his blog by saying…

“In hindsight I should’ve seen this coming but there is nothing like getting it on the floor and letting players find the flaws for you. And honestly, I’m glad they did because it just made us a better basketball team!”

In software testing, Exploratory Testing is an approach that can make us a lot more flexible and help us avoid the “Cause you said we were running Speed Dribbles for a minute” syndrome!

Can’t Beat Experience!

It has been a while since I’ve written, mainly because I have moved into the “academic” side of testing – delivering software testing courses for Software Education in New Zealand. As a result, I have been busy travelling and delivering!

One of the main things I’ve noticed during course delivery is the degree of separation between a junior test analyst and someone with more experience. At the end of the day, it appears to come down to how many more stories and experiences someone who has been in the *game* longer has to draw on. This led me to think about a quote from W. Edwards Deming:

“Experience by itself teaches nothing.” This statement emphasizes the need to interpret and apply information against a theory or framework of concepts that is the basis for knowledge about a system. It is considered as a contrast to the old statement, “Experience is the best teacher” (Dr. Deming disagreed with that). To Dr. Deming, knowledge is best taught by a master who explains the overall system through which experience is judged; experience, without understanding the underlying system, is just raw data that can be misinterpreted against a flawed theory of reality. Deming’s view of experience is related to Shewhart’s concept, “Data has no meaning apart from its context” (see Walter A. Shewhart, “Later Work”). – http://en.wikipedia.org/wiki/W._Edwards_Deming

From my perspective, the more “battles” one has been in, the more experiences one has to draw from (even if one has “only” been in one organisation); to some extent, the specific approach the test analyst takes (Agile, Automated, Exploratory, Scripted, UAT etc.) may not be as relevant, since each serves to broaden the range of one’s knowledge.

When I first started testing, one of the theories I held/learnt was that testing breaks software. When I was asked what I did, I would often respond, “I test software by breaking it”.

Over time, working on different projects and talking to many people, I have come to view testing as archaeology – we sift through the dust and dirt, using whatever approach is necessary, to uncover the bugs. The software is already broken; we look for clues to find where the bugs may hide!

Therefore, as testers, it is up to us to find our theory – whatever or wherever that may be. Seek to learn, broaden your skills, and then apply that theory when getting your hands *dirty*. This will help to broaden our experience and maybe, just maybe, our individual value as testers!

BTW – I’m currently reading An Introduction to General Systems Thinking by Gerald M. Weinberg – so I’m beginning to walk the talk!

Happy learning!

An Expression of Thought – Testing Ideas

Having no way as way, having no limitation as limitation

I have become a full-time trainer working for Software Education in New Zealand ( www.softed.com ), delivering software testing courses. As I’m now sitting more in the academic space as opposed to the practitioner space, I have been given the opportunity to meet many different people, with different backgrounds, looking to gather new ideas to use in their testing jobs.

Some people won’t do much, if anything, with this new knowledge (it’s human nature, after all, especially when the work pressure comes on), but some will. It is these testers who will hopefully feel inspired to share their thoughts and ideas with us all.

The internet has made us a very small, very connected global community, and each thought expressed or shared (particularly in testing) is a thought worth considering. Maybe you have discovered a new idea with regards to testing, or maybe you are reaffirming an existing idea (and adding your own wrapper around it).

To those testers I have met, or to anyone reading this who hasn’t yet considered creating a blog: please reconsider. Your thoughts are valuable and your ideas are at least worthy of expression and/or comment.

I would love to *hear* them – please let me know if you do!

Happy blogging!