My experience of the magic of StarWest 2012 – Day 1

October 3 and 4 – StarWest 2012 – Disneyland Resort, Anaheim, California. What better place to be than at Disneyland to experience the magic of sharing, learning and collaborating with fellow testers?

StarWest 2012 was my first Star conference and it definitely surpassed my expectations. Yes, I suppose in some ways it could be termed a *trade show* for testers, but it is also a very good opportunity to hear (and be heard) and see (and be seen – networking) fellow testers who are typically on the other side of the world from me.

The following is a summary of what I discovered at this year’s StarWest…

Day One – October 3rd
The day started with me getting *lost* by following my iPad map and taking the wrong turn. This was a case of error between tablet and user.
Upon arriving and registering, we (delegates) were presented with a free breakfast – free? Food? Yup, I was there and didn’t turn it down.

As I ventured into the conference room, my immediate thought was that this place was big…and there were a lot of testers here (relative to the Australasian conferences I’ve attended). Lee Copeland opened the conference and, as a nice gesture, acknowledged all of the overseas testers in the room.
I parked myself down to the side of the main stage – I thought I spied Scott Barber, Griffin Jones and Michael Bolton, and made a mental note that I must catch up with them.

So I opened the iPad, fired up Twitter (and noticed how quick the internet access was) and settled in for two excellent keynotes.

The first talk was by Jonathan Kohl on Tapping into Testing Mobile Applications. This talk got my interest: I don’t test in that space, but I do own a phone, and Jonathan made his talk so interesting that I started thinking I may be testing devices and apps in future – so why not pay attention!? These were some of the ideas that I took away (as I captured them on Twitter):

#starwest.. Move beyond the black box…the black box is dead in mobile testing @jonathan_kohl #testing
#starwest…gamification…reward the best bug video of the week.. @jonathan_kohl … Me: great idea!
#starwest…gamification of work @jonathan_kohl .. Me: send testers on a quest, level up to test paladin level 1…sounds cool!
#starwest … Gamification of work movement… @jonathan_kohl me: Great idea!
#starwest … Social features..now do we harness the time spent on social media…use social media interruptions in your app testing
#starwest … Testing mobile apps..get out in the real world..at home, on the move and away from home (mall,weather etc)
#starwest …build something amazing as opposed to strictly following a certain methodology @jonathan_kohl
#starwest…usability testing and how a user uses an app is paramount! @jonathan_kohl ..research usability testing approaches
#starwest …one type of project challenge with web apps…extreme time pressure @jonathan_kohl
#starwest …airline apps are developed on assumption of a strong network signal…at an airport not always true @jonathan_kohl
#starwest @jonathan_kohl …think about whats outside of the device…
#starwest think about testing apps in the grey box space. Black box may not be enough
#starwest..understand users motivation and emotions
#starwest …testers don’t use apps in ideal environments
#starwest deleting apps is an emotional thing. Think about emotions when testing mobile apps
#starwest apps…don’t give people reasons to delete your app #testing

The key points for me were about understanding a user’s emotions and motivations when testing a mobile device, and that testing is not confined to just the *black box*. There is more to testing than the front end.

What also struck a chord with me when testing an app is the need to keep in mind not giving a user a reason to delete the app. It doesn’t take long to install and even less time to delete. The reasons to delete an app may also be magnified if a gripe about the app spreads on social media. If that happens, it may take a Hail Mary pass to come back from it!

Great talk – crowd now *fired* up!

The second keynote of the day was by Johanna Rothman on Becoming a Kick#$% Test Manager. To start with, Kick#$% here actually means awesome!

Here’s what I captured via Twitter…

#starwest…if there is no problem to solve then you don’t need a meeting @johannarothman
#starwest…if the meeting doesn’t have an agenda you don’t have to go…ask for 24hours notice for an agenda @johannarothman
#starwest…How to say no to multi tasking amongst projects @johannarothman http://t.co/JDLEib6l
#starwest…Build communities of practice…”lunch and learns” great for learning what others are doing/sharing @johannarothman
#starwest.. Forget about solo experts and multitasking…testers assigned to multiple projects..nothing gets done @johannarothman
#starwest…When people start caring about bugs depends on where they are in the products lifecycle @johannarothman
#starwest…An awesome manager has regular one on ones…they are more AWARE of what is going on @johannarothman
#starwest…testers are generalists …help them move into other positions with they want to.. @johannarothman
#starwest…coaching is when your offer options with support..when its needed @johannarothman
#starwest…as a test manager, how do we determine that our people are high worth to you? @johannarothman
#starwest…you need testers that can also understand the solution space. @johannarothman
#starwest….hire smart, high value people. When you start talking about price, you’re not talking about value @johannarothman
#starwest…@johannarothman … We test from the perspective of curiosity not victims….
#starwest…@johannarothman … Next StarWest keynote on becoming a kick@$$ test manager. Kick@$$ = awesome!

I dislike meetings – or, to be clear, meetings that do nothing and have no direction. They are time wasters and give the illusion of activity, so when Johanna reiterated the need for an agenda – or for not having a meeting at all if there is nothing to solve – that got a big green tick in my mind. Like a lot of us, I have been in far too many non-productive meetings that generate…nothing.
Great talk – first time I’ve heard Johanna speak and the crowd was fired up again (I thought I heard chanting like an English Premier League soccer match in the far corner acknowledging Johanna’s keynote – but I digress).

After the keynote, the trade show opened and a million tool vendors (of very similar tools) and consultants dazzled everyone with their wares (and swag – I was too slow for the Atlassian t-shirt but picked up a couple of USB sticks, a mouse pad and a whizzy pen that lit up). Interestingly enough, the big focus appeared to be tools for testing mobile applications.

So, being the tester that I am, I decided to rock up to a few vendors and ask if they had any tools that worked on iSeries.

Mostly I got blank looks.

One vendor attempted to find out more but admitted that they didn’t really know if their tool could. If it did, and if I were looking at a tool for iSeries, I would sound them out. Why? They were honest enough to admit that they didn’t know (though they did try to find out), and that *integrity* counts for something (some other vendors gave me a rehearsed sales pitch without listening to what I was saying – at that point, I just wanted to leave – but pass me the swag first! :))

I quite enjoyed walking the tool vendor hall and it was good to be amongst the *buzz*.

Spotted Michael Bolton, introduced myself and chatted with him. Bumped into Scott Barber and we had a good talk – 2 out of 2 so far…

From 11:00am till 12:00pm I was involved in giving free consulting sessions. Anyone who wanted to talk and ask questions could (and did). I had some good discussions, particularly on the state of test automation in New Zealand (which, unsurprisingly, was similar to the US – just on a different scale). I also managed to get a *free* consult with Doug Hoffman, a real gentleman amongst testers, and I came away with answers to some questions.

After lunch, track sessions began in earnest. There were six streams to choose from, which I won’t go into in detail here. Suffice to say that I took quite a few notes – I was like a kid in a candy store!

Leaving the conference after day one was a buzz. Ideas floating around my head – walking through downtown Disney with the Disney tunes playing non-stop –  spying Goofy and the Mad Hatter – it all added to the magic of StarWest 2012 (how can you NOT be entranced by having a conference here)!

I’ll talk about day 2 in my next post…

Whom shall I serve?

Tweet – Mike Talks – KWST#2 – June 2012

Whom shall I serve?

A song, a hymn, or a reminder as to who our customers are? Who do we serve and why is that important?

During KWST#2 (June 2012, Wellington, New Zealand), the discussion about whom we serve came up.

Mostly, the answers tended to support the obvious conclusion (to me at least) that whom we serve could be:-

Our employer(s)
The project manager
The developers
The business
The test manager
The test team
The project team
Our family

And these are all valid customers/people/organisations/groups that we give service to in some way. But there is one other element that sometimes we don’t consider…

Ourselves

Whom shall I serve? I think first and foremost it is ourselves. We are responsible for our own work, our own ethics, our output, our own learning, and our own interactions with others – with other testers and with the wider software testing community.

Sometimes we take a high degree of responsibility for some of these things and sometimes we don’t. What may be important is that we come to understand that we also serve ourselves; seeing ourselves as a customer (if you will) allows us to appreciate who we are as testers, what we can deliver, what skills we have and what we stand for.

Too often I have seen testers wilt in the face of criticism (and scrutiny, for that matter) from management while attempting to justify testing, test artifacts or activities. Knowing what we stand for gives us moral ground to argue from. Unfortunately, being conscious of our position doesn’t mean that everything will be *perfect*, but at least we know our tipping point.

So how do you deal with reaching your tipping point?

Well, that does depend but some of the ways that I have used have been:-

  • Educate those that may be pushing you towards your tipping point – (in my experience, it is typically a manager)
  • Listen to those pushing you to your tipping point – (it is possible that we don’t understand their context)
  • Use your influence and credibility to help educate
  • Employ a stealth approach – (on one project, the client wanted test cases (with expected and actual results) and what they saw as structured testing. While we spent time giving them what they wanted, the majority of the issues during test execution came not from the test cases but from an undeclared exploratory approach. Our plan of attack became: give the customer what they wanted, educate them along the way, and use good exploratory testing to find valuable information *quickly*. The test cases in this instance were our checks; the exploratory test charters, our tests. The stealth here came from discerning the client’s context, employing what became a blended approach, and not necessarily letting management know that this was what was happening.)
  • Leave – (this is most likely the extreme option, but sometimes it is more beneficial for you to leave a project/employer/organisation than to adhere to rules that may not make sense. I have done this; it was a challenge, but I’m glad I did it.)

So, whom do we serve? Ourselves first (it’s not as selfish as it may seem) and then those mentioned above. Putting ourselves first means that we are taking responsibility for the quality of our own work, which in turn means we are better placed to serve our customers.

TWiST #11 podcast

My testing buddy Jared Quinert (Melbourne) and I were recently interviewed by Matt Heusser on behalf of Software Test Professionals. Here is the link to that podcast: http://www.softwaretestpro.com/Item/4913/Twist-11—Twist-down-under%21/

Thanks to Farid Vaswani, Test Manager from Auckland University (and associate producer for STP) for arranging the interview!

Why testing is like curling!

I was watching the Winter Olympics with curiosity – there are some amazing athletes and some amazing sports.

One among them was curling.

I didn’t get it.

The game reminded me of lawn bowls on ice – except that the crowd went nuts!

With this in mind, an analogy with test management came to mind. I raised it during a test management class and we came to similar conclusions.

Curling is a team game of four players, two of whom act as “sweepers”, clearing the path so that the “stone” has a smooth journey to the “house” (the bullseye).

My question then is, how much of testing is like this?

How much time as testers/leads/managers do we spend on smoothing the path for testing?

The Pursuit of the Unreplicable Bug

I’ve recently been testing a web-based application that produced a very interesting defect. It seemed that on one particular screen, a user (with the right combination of keystrokes and mouse clicks) could actually enter a supposedly uneditable error message field and enter text! At first I wasn’t able to repeat this behaviour but, with the words from a James Bach article ringing in my ears about “…ignoring unreproducible bugs at your peril”, I logged it and waited for the right opportunity to attack it.

I had already spent time looking for this ‘bug’ but figured that I would put it to one side and come back to it with fresh eyes and clearer thoughts. Interestingly enough, the developers caught hold of this bug and attempted to replicate it in their dev environments – I was even ‘challenged’, in a joking way, that if I couldn’t reproduce the bug within 5 attempts then it didn’t exist!! Oh, did the competitive urges come out then! (This was done in good spirits – we have a tremendous rapport between developers, testers and BAs.) However, it was another developer who found the key/mouse strokes that generated the bug, and we discovered that it was a validation error on that web page!

So what were the lessons learnt?

  1. Exploratory testing found this bug – some may say the discovery was a ‘fluke’, but scripted testing would never have picked this bug up.
  2. Fresh eyes and a clearer head can aid tremendously in trying to replicate a bug (especially one discovered late in the day!)
  3. Having a rapport with developers helps in solving bugs – personal agendas and politics are put to one side for the greater good of the ‘team’
  4. Working alongside developers generally breaks down communication barriers (perceived and physical)
  5. Unreproducible bugs ARE ignored at one’s own peril – in this case, finding this bug led to a tightening of field validation for the application
  6. Bugs are bugs are bugs…testers find them, developers fix them, the business decides what they want done with them – never give up on trying to replicate bugs that are difficult to reproduce!
  7. Teamwork – I honestly believe the power of many can be greater than the power of one
  8. It’s tremendously satisfying finding a bug that is difficult to find and reproduce – the testing equivalent of a three-pointer!
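
The field-validation tightening in lesson 5 can be sketched in a few lines. This is purely hypothetical code – the field names and the `validate_submission` function are mine, not the application’s – but it shows the point: a field marked “read-only” in the browser is only advisory, so the server has to re-check what comes back, because the right keystrokes (or a doctored request) can change it anyway.

```python
# Hypothetical server-side re-validation of a form submission.
# Client-side "read-only" is advisory only; the server compares what
# came back against what it originally rendered.
READ_ONLY_FIELDS = {"error_message", "record_id"}

def validate_submission(rendered: dict, submitted: dict) -> list:
    """Return validation errors; an empty list means the submission is clean."""
    errors = []
    for field in sorted(READ_ONLY_FIELDS):
        if submitted.get(field) != rendered.get(field):
            errors.append(f"read-only field '{field}' was modified")
    return errors

# A tampered submission is caught even though the browser "prevented" edits:
rendered = {"error_message": "", "record_id": "42", "name": "Brian"}
tampered = dict(rendered, error_message="surprise!")
print(validate_submission(rendered, tampered))
# ["read-only field 'error_message' was modified"]
```

Had a check like this existed, my keystroke trick would have produced a clean server-side rejection instead of silently accepted text.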

AST and the BBST Foundations Course

It has been a while since my last post, and it’s because I (along with 19 other esteemed test colleagues from around the world) have been ‘attending’ the Association for Software Testing online course – BBST Foundations – see http://www.associationforsoftwaretesting.org/drupal/courses

(as well as doing work of course!)

I have ‘met’ testers from Australia, New Zealand, India and the United States, and to share in their knowledge has been superb! I have learnt a lot and I have been challenged mentally with regard to my views on testing.

The instructors were Scott Barber http://www.perftestplus.com/ and Cem Kaner http://www.kaner.com/ and their knowledge and willingness to help everyone learn was outstanding. I highly recommend this course (actually a series of courses). The following is an email that I wrote to Scott…

Hi Scott,

Thank you very much…it was a privilege to have learnt from the ‘best’ – from the participants and of course our esteemed instructors! Yes, it would be fine to post my name on the website. Again, as I’ve explained in my course evaluation – I have sat ISTQB and passed well BUT this means more to me – it was more challenging, stimulating and has me rethinking the way I approach things (either as good reminders or changes to my testing habits). Thank you once again and I hope we all can stay in touch.
Kind Regards
Brian

Exhaustive Testing


The following is a response I sent to Kit, who commented on my blog post ‘Insufficient Testing’…

Thanks for your comment. It’s almost a catch-22 situation. One of the principles of testing (according to ISTQB) is that exhaustive testing is impossible – I agree, but the question is how much do you test and when do you know that enough is enough?
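
To put a rough number on that impossibility, here is a back-of-the-envelope sketch (my own illustration, not from ISTQB – the field size and the test rate are hypothetical): even one free-text field of just 10 lowercase letters has more distinct inputs than any team could ever execute.

```python
# Back-of-the-envelope: why exhaustive testing is impossible.
# One 10-character field restricted to lowercase a-z:
alphabet = 26
length = 10
combinations = alphabet ** length   # 26^10 distinct inputs
print(combinations)                 # 141167095653376 (about 1.4e14)

# Even at an optimistic 1,000 automated checks per second:
seconds = combinations / 1_000
years = seconds / (60 * 60 * 24 * 365)
print(round(years))                 # 4476 years, for a single field
```

And that is one field, ignoring invalid characters, timing, state and everything else – which is exactly why risk and priority have to drive how much we test.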

For a complex system, my thoughts would centre around risk and priorities as your starting point. The approach or method used would ultimately rest on what level of auditability you must provide to the business (they ultimately make the go/no-go decision). Personally, I would still use exploratory testing (if I were ‘allowed’ to) because, in my experience, I would be more likely to find something of value that way than through scripts.

However, in saying that, if the test team is involved right at the beginning of the project through walkthroughs, inspections or any other type of review, then the testing team’s clarity and understanding of the system will no doubt increase.

After doing a Wikipedia search on Dr. Deming, I found one of his quotes quite applicable to software testing: “Acceptable Defects: Rather than waste efforts on zero-defect goals, Dr. Deming stressed the importance of establishing a level of variation, or anomalies, acceptable to the recipient (or customer) in the next phase of a process. Often, some defects are quite acceptable, and efforts to remove all defects would be an excessive waste of time and money.” It is known that major commercial software often ships with known (and unknown) defects – MS Windows, Firefox v2.0 etc. – so it is reasonable for the business to decide how much of the ‘risk’ they wish to carry. Testers should provide the necessary information to enable the business to make that decision (good or bad).

At one New Zealand bank that I worked in, the test team I became involved with tried hard to exhaustively test everything in a very complex application. The upshot was that one release took almost 12 months to ‘complete’ testing (there were other factors involved – personnel, political and management), BUT I guarantee that they could not say that the application was bug-free. So I guess that leads to the second question – how much is enough?

James Bach says “When I exhausted the concerns of my internal critic (and external critics I asked to review my work), I decided it was good enough” (refer http://www.satisfice.com/articles/how_much.shtml).

NASA’s software safety standard (http://satc.gsfc.nasa.gov/assure/nss8719_13.html) NASA-STD-8719.13A, September 15, 1997 – Section 3.4.5 – says: “The test results shall be analyzed to verify that all safety requirements have been satisfied. The analysis shall also verify that all identified hazards have been eliminated or controlled to an acceptable level of risk. The results of the test safety analysis shall be provided to the ongoing system safety analysis activity.” What, then, is an acceptable level of risk – and acceptable to whom?

Risk is defined in this document as “…As it applies to safety, exposure to the chance of injury or loss. It is a function of the possible frequency of occurrence of the undesired event, of the potential severity of resulting consequences, and of the uncertainties associated with the frequency and severity.” Also, under section 1.4, Tailoring, it says: “….The tailoring effort shall include definition of the acceptable level of risk, which software is to be considered safety-critical, and whether the level of safety risk associated with the software requires formal safety certification.”

Therefore, at the end of the day, it’s a business decision taken within the context of the business. As testers, we can test complexity within the context of the project and report back our findings – it is then up to those charged with making the ‘big’ decisions to make them – or not!

Software Testing and the Intercepting Fist

Welcome to my first blog post on software testing – hopefully one of many!
Testing is an exciting, intellectually stimulating discipline to be involved in. I have been in the ‘game’ now for over 8 years in Wellington, New Zealand. I have worked in both the private and public sectors and, over that time, my knowledge and understanding have been shaped by the experiences I have had and the ‘teachings’ I have absorbed from other practitioners, organisations and learned colleagues around the world (Cem Kaner, James Bach, Brian Marick, Bret Pettichord among others). The combination of these things has shaped my view and understanding of testing as I know it today.

One of the greatest, if not the greatest martial artist of the 20th Century – Bruce Lee – espoused his philosophies on martial arts through a framework called ‘Jeet Kune Do’ (JKD) or Way of the Intercepting Fist.

One of the theories of JKD is that a fighter should do whatever is necessary to defend himself, regardless of where the techniques used come from. This is like testing – should we subscribe to only ‘one way’ or one technique to test software when there is a multitude of methods that may suffice? Should we follow the ‘classical mess’ (scripted testing) or become ‘formless like water’ (exploratory testing)?

Don’t get me wrong – scripted, pre-arranged testing has its place and it works – I have no qualms about that – I’ve been there, lived that. My only voice of dissent is when so-called test practitioners disregard any other way of doing it. I once worked for a bank where I prepared a paper on the benefits of exploratory testing – especially when used in a complementary way with the expected scripted mode of testing. It never gained traction no matter how hard I tried – the Test Manager went as far as to say “…it doesn’t work and you’ll never find anything with that.”

I beg to differ.

As you can tell, I am an exponent of the context-driven school and exploratory testing. However, I know and recognise the need for scripting and I am comfortable with that. The trick is seeing what you test in the context of what you are doing and what you are trying to achieve.

You may be familiar with the following – it applies whether you do exploratory testing or not…

THE SEVEN BASIC PRINCIPLES OF THE CONTEXT-DRIVEN SCHOOL

1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project’s context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Have a great week testing!