Whom shall I serve?

(Embedded tweet from Mike Talks – KWST#2 – June 2012)

Whom shall I serve?

A song, a hymn, or a reminder as to who our customers are? Whom do we serve, and why is that important?

During KWST#2 (June 2012, Wellington, New Zealand), the discussion about whom we serve came up.

Mostly, the answers tended to support the obvious conclusion (to me at least) that those we serve could be:-

Our employer(s)
The project manager
The developers
The business
The test manager
The test team
The project team
Our family

And these are all valid customers/people/organisations/groups that we give service to in some way. But there is one other element that sometimes we don’t consider…

Ourselves

Whom shall I serve? I think first and foremost it is ourselves. We are responsible for our own work, our own ethics, our output, our own learning, and our own interactions with others, with other testers and with the software testing community.

Sometimes we take a high degree of responsibility for some of these things and sometimes we don’t. What may be important is that we come to understand that we also serve ourselves: by seeing ourselves as a customer (if you will), we can appreciate who we are as testers, what we can deliver, what skills we have and what we stand for.

Too often I have seen testers wilt in the face of criticism (and scrutiny, for that matter) from management while attempting to justify testing, test artifacts or activities. Knowing what we stand for gives us moral ground to argue from. Being conscious of our position doesn’t mean that everything will be *perfect*, but at least we know our tipping point.

So how do you deal with reaching your tipping point?

Well, that does depend, but some of the ways I have used are:-

  • Educate those that may be pushing you towards your tipping point – (in my experience, it is typically a manager)
  • Listen to those pushing you to your tipping point – (it is possible that we don’t understand their context)
  • Use your influence and credibility to help educate
  • Employ a stealth approach – (on one project, the client wanted test cases (with expected and actual results) and what they saw as structured testing. While we spent time giving them what they wanted, the majority of the issues found during test execution came not from the test cases but from an undeclared exploratory approach. Our plan of attack became: give the customer what they wanted, educate them along the way, and use good exploratory testing to find valuable information *quickly*. The test cases in this instance were our checks; the exploratory test charters, our tests. The stealth lay in discerning the client’s context, employing what became a blended approach, and not necessarily letting management know that this was what was happening.)
  • Leave – (this is the most extreme option, but sometimes it is more beneficial for you to leave a project/employer/organisation than to keep adhering to rules that may not make sense. I have done this; it was a challenge, but I’m glad I did it.)

So, whom do we serve? Ourselves first (it’s not as selfish as it may seem) and then those mentioned above. Putting ourselves first means that we are taking responsibility for the quality of our own work, which in turn means we are better placed to serve our customers.

The circle is now complete…

As of about two weeks ago, I went out on my own…

An independent

A boutique tester (to borrow from Matt Heusser)

A trainer

A consultant

And thus, no longer tied to the policies of another organisation or to what someone else may view as important/relevant/worthwhile, we have started our own venture – www.osmanit.com (currently under construction). This is what we do…

  • Training – I deliver a two-day course that focuses on testing approaches and thinking about what/how/when to test/check a product. I am available to deliver training globally. Contact me on brian(dot)osman(at)osmanit(dot)com for details.
  • Consultancy – I will consult with you at the tester, management and strategy levels
  • Writing – now that I am independent, there should be more time for blog posts, Twitter, etc. In other words, staying in touch and sharing with the testing community

New path, new adventure and I’m excited to see where this road will lead!


Aggressive and Passive Testing

I’ve been thinking about how I *bucket* testing. Here is what I mean: I see testing as either aggressive or passive.

Aggressive testing, to me, is the art of asking the product questions, thinking outside of the box (and the textbook) and trying different ways to test the application, looking for interesting information (whether bugs, issues or curios). I prefer to be an aggressive tester. My mindset is to look for ways in which a product could fail. I see this as our value-add as testers. When we find a bug, the bug is reported and resolved in *some way*, thereby helping increase the quality of the product.

I believe that, while there is an element of passive testing (and what I mean here is checking), a tester is more beneficial to a project IF they are being aggressive and proactive and looking for potential failures or issues.

Detailed test scripting can be *aggressive* in some ways, but I’ve found that by having a pre-determined course of action, I am more likely to allow confirmation bias to influence the way I work. By exploring (and I mean with some structure – whether through session-based test management or high-level test conditions/risks/ideas), I have found that I am more likely to be aggressive in nature and pursue lurking bugs in the code, as I am not constrained by following detailed test steps.
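As a concrete example of the structure I mean, a session-based test management charter bounds the exploration without scripting the steps. The charter below is invented purely for illustration (the product and areas are hypothetical, not from any real project):

    CHARTER:  Explore the invoice-entry screen with oversized, empty and
              malformed inputs, looking for data-validation failures.
    AREAS:    invoicing | input validation
    DURATION: one 90-minute session
    NOTES:    log bugs, issues and curios; spin surprises off into new charters

The charter says where to look and what kind of information to hunt for, but never prescribes the steps – which is exactly what keeps the confirmation bias of a pre-determined course of action at bay.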

I believe that to be an effective tester ultimately means that we are aware of the context of the project, application and environment, that we are in pursuit of information (bugs, issues or curios – the *curio* term taken from a Twitter discussion between James Bach and Michael Bolton) and that we are feeding this information back into the project, thereby helping management make a more informed go/no-go decision.

In one project I worked on there was a significant element of what I call passive testing, which came in the form of *running* a regression suite. This involved executing a test script, which quickly devolved into an almost meaningless tick-off-and-check exercise (check the test step – is it correct? If so, check it off; if not, do a superficial investigation). I can state that no regression bugs were *found* as a result of this *testing*!

This is bad passive testing, which unfortunately is common in my part of the testing world. How many times have I seen disengaged *testers* running scripts that are supposedly meant to add value to the project? To my sceptical mind, all they add is paper.

Now, not all passive testing is as I’ve described. Even when we are aggressive in our approach there may still be elements of passive testing (checking against rules, contracts, laws, configurations, environments or anything else that may require checks that support what we do as engaged testers).

Aggressive and passive testing are NOT mutually exclusive – they are interdependent and intertwined. The issue I have is when passive (non-engaged) testing is more prevalent than engaged testing (by engaged I mean brain-switched-on testing).

Unfortunately this is common in the bigger organisations where I live. Fortunately, there are pockets of very engaged, context-driven testers around who add much more value than the stock-standard factory schoolers. I am thankful for that, for it means that I’m not a lone voice in this part of the world!

New terminology?

Am I introducing a new set of terminology? No. Will others view testing as aggressive or passive? Maybe not. What I am doing is highlighting how I see testing. There is nothing wrong with that. I am not constrained by a glossary (though they can be useful); rather, I am attempting to demonstrate what I mean by testing and how I view the craft that I work in.

Good Exploratory Testing Practices webinar

Today (14th February 2011 @ 1200 hours – 12pm – New Zealand time) I will be presenting a webinar on Exploratory Testing practices that I use to help put guidance around my testing.

To register for the webinar click here.

Also check here to see how New Zealand time compares with the time in your part of the world.

Look forward to having you tune in!

Weeknight Testing #04 – an experience report

I had the privilege of joining Weeknight Testing (Twitter #WNTesting). This was my first session, as I am generally not available for weekend testing sessions. (By the way, WTANZ session #12 is on this weekend.)

Ok – so what happened during the Weeknight testing session?

I was about 5-10 minutes late, waiting for my laptop to boot up, and when I did log in, there was a flurry of chatter (the testing session is held via IM over Skype).

Darren McMillan was the facilitator, who had the challenging task of keeping up with the threads and multiple chats while at the same time guiding direction in a subtle way (mainly by quoting interesting comments).

I found the *noise* so challenging that I went *dark* (to steal a Tony Bruce phrase :)) for a while; in other words, I didn’t contribute to the discussion(s) until I had read the mission and the requirements document and got used to the rhythm of the session. I found that while the first two are important, the rhythm is vital, as it meant that I was able to respond to questions or threads in *real-time* once I had the rhythm of the conversation(s).

So – what was it all about?

The mission was to *test* a set of requirements for a fictional company called CRM-R-US by “…reviewing and feeding back on the initial requirements to help identify any gaps, risks or potential issues in them.” The document was at an early stage of requirements gathering and was a first draft. The product is a marketing tool centred around Twitter.

Some of the participants mentioned they were off mind mapping, so I followed suit – except I hand-drew mine. I identified four major sections in the document but focused initially on one – the section on the Campaign Engine.

My reasons were threefold:

  1. The lack of *detail*
  2. The section was based on a vision and
  3. A comment stating: ‘Our CEO Patricia Elmer’s liked Brian’s idea so much she’s now seeing this as the key selling point of this feature.’ The CEO is someone who matters and has major influence and power; almost by default, the section had high risk to me.

So, I began to ask some questions – a few at first and then, once I got the rhythm, a lot more. By that time there were 40 minutes to go and questions and comments were coming thick and fast. There was a great question from Sharath B: What’s in it for me if I follow? This made me pause, as I had been thinking from a business user/call centre point of view, whereas Sharath’s question made me think along the lines of the target audience and why they would want to follow our fictional company on Twitter. For me, Sharath’s question made me look at the broader picture and defocus my thinking. From a testing point of view, using a defocusing strategy helps you look at the problem from a broader point of view. This was one of many fantastic ideas, thoughts and questions – the transcript will be posted soon (http://weekendtesting.com/archives/tag/weeknight-testing) – from which you can see some of the great thoughts and ideas that went on during the session.

Lessons Learned for me…

  • Sometimes pairing *may not* be the best option – some great pairs of testers working on a mind-map tool weren’t able to pair as effectively as they might have liked.
  • Tour the product
  • Ask ‘What is NOT being said’
  • Alert – if somebody who matters (e.g. the CEO) is mentioned throughout the document, flag it as a potential risk, as they have influence/power/authority
  • Mind mapping is a good idea generator and framing tool – see the mind map from Lisa Crispin and Mohinder Khosla and the mind map from Rakesh Reddy, all of whom were involved in this session.
  • Focusing AND defocusing strategies work well together (focusing on a section to get specific, defocusing by looking at the bigger picture.)

These are some of the thoughts running through my head – I was able to connect with some really good thinking testers, which in turn has helped me a lot – all in the space of an hour or so!

If you haven’t tried weekend or weeknight testing, give it a go – it is a worthwhile investment!

Mr T and the Art of Box Painting

It’s funny how one can take different media and apply them to whatever you want to… in this case, software testing. I recently watched a World of Warcraft ad featuring Mr T of A-Team fame (http://www.youtube.com/watch?v=bqJE5TH5jhc).

Mr T created a new character, a Night Elf Mohawk – the ‘directors’ of the ad said that he couldn’t do that. In Mr T’s own way, he boldly announced that he was ‘handy with computers’ and ‘hacked his own Night Elf Mohawk.’

Like most things software, the developer is looking for a solution to a problem. A tester (in this analogy, Mr T) is looking for a problem in the solution – or, in other words, looking outside of the box.

Being *bound* by specifications and scripts is what I mean by the box. Now, I don’t mean that I am anti-specification and anti-script (they may be valuable resources, oracles if you will, in the right context), but relying on these solely leads to the box being painted (see http://viscog.beckman.illinois.edu/flashmovie/20.php for an example of *box painting*. INSTRUCTIONS FOR THE CLIP: Count the number of passes made by the team in white. Record the number of passes and continue reading… (at the end of this post is the next set of instructions, but don’t go there yet!))

In the ad, Mr T is looking outside of the box. He is thinking outside of the bounds of the requirements.

Why?

If the software delivers as per the requirements, has it not passed?

No.

Outside of the *bounds* are the areas testers love to tread because we then are looking at potential bugs. When we find bugs and report them, they are resolved in some way. As they are resolved, then potentially the quality of the product is increased.

I once worked on an application whereby the requirement of an input field (stated in the specification) said “truncate 32 chars”.

This was a Java-based, browser-hosted financial application.

A colleague and I started testing. We typed into the input field and as much as we tried, we couldn’t type past 32 chars.

So we created a very large string (thousands of characters) and copied and pasted it into the same input field.

BANG!

CRASH!

DEAD!

The application fell over completely!

The developer had followed the spec and coded to it, but he did not cater for a copy and paste (let alone a large string!)

It took the developers about an hour or two to resolve it.
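To illustrate the gap, here is a minimal sketch of the failure pattern – my own reconstruction, in Python with invented names, not the actual Java application code. The length limit is enforced on the typing path, so typing can never exceed 32 characters, but the paste path skips the check entirely:

    # Minimal sketch of the failure pattern (invented names, not the real app).
    MAX_LEN = 32

    class InputField:
        def __init__(self):
            self.value = ""

        def on_key_press(self, char):
            # Typing path: the 33rd character is silently dropped.
            if len(self.value) < MAX_LEN:
                self.value += char

        def on_paste(self, text):
            # Paste path: nobody thought to truncate here.
            self.value += text

    field = InputField()
    for c in "x" * 100:
        field.on_key_press(c)
    print(len(field.value))    # 32 - typing cannot exceed the limit

    field.on_paste("y" * 10000)
    print(len(field.value))    # 10032 - anything downstream assuming <= 32 chars breaks

The likely fix is the same in any language: enforce the limit in one place that every input path passes through, rather than guarding only the path someone happened to think of.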

In this case, we thought outside of the box – we dared to push beyond the realms of the spec. We tested for something that hadn’t been considered, and this is an important consideration for testers – to question and challenge what is in front of us. Challenge what we have been given, and the value that we add as testers will be made manifest (i.e. bugs!!)

Happy hunting!

**INSTRUCTIONS from the video clip, continued – what did you notice? Was there anything interesting going on? If you haven’t found anything, review the clip and defocus your vision; in other words, look outside of the box.

Collaborating with thinking testers in India

Something is happening to testing!

A number of forward-thinking testers in India have gotten together and formed Weekend Testers. Already there have been a number of blog posts about what an innovative idea this is – and these blog posts, referrals and conference talks are from industry leaders such as James Bach and Michael Bolton, which is high praise indeed.

I’ve been communicating with Parimala Shankaraiah, one of the founders of Weekend Testers, about Exploratory Testing (she has even taken the time to post some great comments on the Google group Software Testers New Zealand). If Parimala is an example of the thinking and passion towards testing in the Weekend Testers community, then the Indian testing discipline is in good hands!

It does seem to me that there are great, inquisitive testers coming through every single day, and the world-wide web is one way to keep track of and collaborate with these powerful thinkers!

The Joy of Being Amongst Fellow Testers

I recently delivered a presentation on Session Based Test Management to the Auckland Test Professionals Network. It was my first presentation. It was fun and I really enjoyed being there.

For me though, the enjoyment factor came afterwards in talking and discussing software testing with other testers.

I noticed something.

There were some testers who had come to learn something. Not everyone had, but I’m sure most took away at least one idea or thought. And my thought is this – why don’t we (software testers in New Zealand) actually share our knowledge a lot more?
Some of us blog, a number attend SIGIST meetings, conferences etc., but we then either sit on that knowledge or we’re not sure how to share it. IF we grow our community and our discipline, then we all benefit!

I was talking to Farid Vaswani and John Lockhart amongst other wonderful testers there. They were very willing to share their own thoughts and ideas on testing and we had a great discussion and explored multiple testing ideas.

Which created a second thought – since we are geographically limited and not able to mentor, share and discuss ideas easily in a physical sense, there are a myriad of ways to achieve this online. So I created a Google group called Software Testers New Zealand. And while it aims for a New Zealand flavour, it is in no way limited by country. So if you are outside of New Zealand and wish to become part of this growing community, feel free to join and share your ideas and thoughts!

In doing so, we can mentor each other and take the best from each other.

Happy testing!

Teamwork – The value of a good team

How a good test team can help you become a better tester!


I’ve been watching New Zealand’s Junior Tall Blacks play at the U19 FIBA World Championships (Auckland, New Zealand), and what struck me most was the level of teamwork shown by the team. This was one of the contributing factors behind the team doing so well – I mean undersized, under-gunned, but plenty of heart, a good coach, sound systems AND generally good teamwork. What it did lack was experience. Even though this was the U19s, a number of teams had professional basketballers on their rosters, and that experience helped decide close games.

When I think back to the software testing teams I have been on, I immediately think about the varying degrees of teamwork. I’ve worked on a team that was very hierarchical; there was a definitive pecking order, and if you upset the head honcho (or in this case, honcho-ess), you quickly became ostracised. This was regardless of skill, knowledge or enthusiasm, and when you were out, you were out. It meant that the peripheral testing activities became harder to accomplish until you got back “in”. You had little or no peer support, and pleas (subtle or otherwise) to management were fruitless. It didn’t bother me too much (either I was naive or ignorant), but one tester I saw felt this ‘pressure’ and it affected her ability to test. Why? Because she was so busy dealing with and thinking about her social status that she couldn’t concentrate on testing (AND I mean thoughtful, critical testing).

I’ve also worked as a sole tester, in which case, generally speaking, I never had to contend with team politics. I guess I was seen more as a project peer, an individual, and not some anonymous member of an anonymous team. I was real and approachable, and I guess this made it easier to build rapport. This is my experience, but obviously it may not be typical. We have ‘control’ over ourselves but not so much over our environments.

I have also been part of a team that was supportive and encouraging and in essence allowed individuals to experiment, to try different things, to expand and explore. And because these positive team attributes were in place, the opportunities to collaborate, share and test greatly increased. Whereas in the hierarchical team, knowledge was gold and he/she who had the most gold won, the supportive team wasn’t worried about which individual had the most gold but how much gold the team had collectively. Testing thrived because it was allowed to!

I have felt the value of good teamwork. It goes a long way to helping you get up in the morning and enjoy your day rather than dread it. Testing is a human activity, and it is not just our interaction with the software but also with those we work with that helps us become better testers!

The one minute speed dribble syndrome

Rob Rangi is a very good friend of mine who happens to coach the St Mary’s Senior Girls Basketball team based in Wellington, New Zealand. He is blogging about his coaching experiences here.

He recently blogged about a session entitled Taking the Positives from the Failures of Drills. Coach Rangi is installing the Read and React offense, an offense based around principles rather than set plays.

Unlike a set play where, for example, player O1 passes to player O2 after player O2 was screened by player O3 (i.e. a structured offensive set), the Read and React is based on a group of principles in which the offensive players move depending on what is happening. This leads to an infinite number of possibilities in which the offense can move, react and score. There is no blinkered approach whereby player O1 must do this in order to satisfy the pattern, potentially missing a scoring opportunity.

To quote Coach Rick Torbett (the Read and React creator): “…And that’s exactly what the Read and React Offense does: it provides a framework that can be used as an offensive system to develop players, teams, and programs. Or, it can be an offense for one team, an offense that builds upon itself, with a counter for anything any defense can throw at it.” Notice that Coach Torbett talks about a framework. There is no mention of the words structured, pattern or set. In essence, the framework provides the heuristics (and the principles are collectively the oracle); the players apply these heuristics and adapt them during game time.

Coach Torbett also went on to analyse his past season’s stats and found that 80% of his team’s points came from principled basketball. Only 20% came from set plays, and yet in practice his team spent 80% of the time on what produced only 20% of the total point production!

Exploratory Testing is like the Read and React offense. It allows a creative (heuristics-based), flexible (adaptable), principled approach to software testing that enables a tester to test a product with a broader mindset.

On the other side of the coin, writing test scripts (or, if you like, using set plays) is a very common testing practice which lets the tester set out, in advance, the steps he or she will follow.

One of the dangers of following a script is that the tester becomes a verifier of the steps as opposed to finding bugs or flaws or issues within the product.

And yet isn’t finding bugs the goal of testing?

Finding bugs is the value add testers bring to a project because by finding bugs and getting them fixed, the project team begin to increase the reliability of the system and potentially the quality as well.
This is nothing new. Glenford Myers, in his 1979 book ‘The Art of Software Testing’, talks about his definition of testing:

 “Testing is the process of executing a program with the intent of finding errors.”

It is not saying that testing should ensure that the product performs as specified or some such similar activity.

This is an important distinction – having the relevant mindset will steer us in the relevant direction. If we are looking to confirm that the product meets the specifications then it is likely that we can do this but will miss bugs. If, however, we are looking for bugs then we will find them (and along the way we will have false alarms or ‘non-bugs’ but isn’t that potentially better than missing some important bugs?).

Professor Cem Kaner (Florida Institute of Technology) talks about this in the course Bug Advocacy and also in the slide set that extends his book Testing Computer Software. Prof. Kaner refers to what is called Signal Detection Theory. SDT quantifies the ability to discern between signal and noise and is a way in which psychologists measure how decisions are made under conditions of uncertainty. When we are testing, there is nothing more uncertain than software we have just been given!

This, of course, can be influenced by the rules, limits or biases we set on ourselves or on the group of testers we look after. Wikipedia has an excellent example of this bias:

“Bias is the extent to which one response is more probable than another. That is, a receiver may be more likely to respond that a stimulus is present or more likely to respond that a stimulus is not present. Bias is independent of sensitivity. For example, if there is a penalty for either false alarms or misses, this may influence bias. If the stimulus is a bomber, then a miss (failing to detect the plane) may increase deaths, so a liberal bias is likely. In contrast, crying wolf (a false alarm) too often may make people less likely to respond, grounds for a conservative bias.”

In testing, if we influence testers to make sure the product conforms to requirements, then we steer the bias in that direction. If we influence the bias towards finding bugs, then that is what will happen, and as Glenford Myers has already pointed out, we begin to add value (potentially greater value than if we are merely looking to confirm that the product meets requirements).
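To make the signal/noise idea concrete, here is a rough numerical sketch – my own illustration, not taken from Kaner’s materials – using the standard equal-variance SDT formulas: sensitivity d' = z(hit rate) − z(false-alarm rate), and bias (the criterion) c = −(z(hit rate) + z(false-alarm rate)) / 2, where z is the inverse normal CDF. The hit and false-alarm rates below are invented:

    # Rough Signal Detection Theory illustration (standard equal-variance model).
    # Hit rate: proportion of real bugs reported; false-alarm rate: proportion
    # of non-bugs reported as bugs. The example numbers are invented.
    from statistics import NormalDist

    def z(p):
        # Inverse of the standard normal CDF.
        return NormalDist().inv_cdf(p)

    def sensitivity_and_bias(hit_rate, false_alarm_rate):
        d_prime = z(hit_rate) - z(false_alarm_rate)           # skill at telling bug from non-bug
        criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2  # > 0 conservative, < 0 liberal
        return d_prime, criterion

    # A "confirm the requirements" tester: rarely reports anything (conservative bias).
    print(sensitivity_and_bias(hit_rate=0.40, false_alarm_rate=0.02))
    # A "hunt for the bugs" tester: reports more, accepting some false alarms (liberal bias).
    print(sensitivity_and_bias(hit_rate=0.90, false_alarm_rate=0.20))

Two testers with the same underlying skill (d') can report very different things depending on where their criterion sits – and that criterion is exactly what a manager’s instructions shift.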

Coach Rangi struck an interesting dilemma at one practice. He asked his team to run a full-court drill involving the speed dribble and Read and React principles. This is what happened…

Coach : “OK Ladies we’re going to do a minute using the Speed dribble. Read the ball and react accordingly”

Players : “Yes Coach!”

Point Guard brings the ball to the top from our 2-man fast break. Our Wing, running the outside lane, gets her wing position and almost without hesitation cuts to the basket. So I stop the drill and pull her up.

Coach : “OK, What was your Read?”
Player : “Ah that was the speed dribble coach”
Coach : “OK So you made the cut although X actually hadn’t started the speed dribble towards you”
Player : “Yeah, I was anticipating her doing the speed dribble at me”
Coach : “Why would you be anticipating it? You should be reacting to what she does? What would happen if she drove or wanted to make a pass?”
Player : “But she wouldn’t do that Coach”
Coach : “And why is that?”
Player : “Cause you said we were running Speed Dribbles for a minute”

What an interesting sequence! Look at how Coach sets, or influences, the drill’s bias (just like following a script). Then the team interprets his instructions and follows the “script” to achieve the aim of the drill (“OK Ladies we’re going to do a minute using the Speed dribble. Read the ball and react accordingly”). The player then interprets the instruction without question, becomes inflexible and doesn’t adapt to what the point guard was doing.

Coach Rangi then went on to say…
“…So after practice, I reviewed our training and was able to determine that the drills suffered from having a pre-conceived outcome based on a known condition eg we’re doing pass and cut for a minute then speed dribble for a minute then natural pitch etc. We needed to remove the pre-conception and make it random forcing the Wing to work.”

Fantastic! Much as in software testing, where we have an expected result based on a known condition, our ability to analyse, think critically and discover bugs is reduced by the bias surrounding our testing (test scripts, or in basketball, set plays). We can become almost paralysed by following and completing each step in the script (been there, done that) and lose potential ideas, thoughts and creative ways in which to discover bugs (I have personally experienced both mindsets, as most testers probably have at one stage or another).

How then did Coach Rangi fix this…

“We now have a new drill called “You make it up 2-man break”. We run 2 minutes using Circle movement options only – Dribbler drives, Dribbler drives and pitches, Dribbler drives and pivot pass to Safety valve. Then we run another 2 minutes using the other options – Pass and Cut, the Speed and Power Dribble. We also instigated a rule that says the next pair to go cannot do the same move as the pair in front has just done ensuring a different option each time down the court.”
Coach Rangi then finishes his blog by saying…

“In hindsight I should’ve seen this coming but there is nothing like getting it on the floor and letting players find the flaws for you. And honestly, I’m glad they did because it just made us a better basketball team!”

In software testing, likewise, an exploratory approach can help us become a lot more flexible and help us avoid the “Cause you said we were running Speed Dribbles for a minute” syndrome!