My experience of the magic of StarWest 2012 – Day 1

October 3 and 4 – StarWest 2012 – Disneyland Resort, Anaheim, California. What better place than Disneyland to experience the magic of sharing, learning and collaborating with fellow testers?

StarWest 2012 was my first experience of a Star conference and it definitely surpassed my expectations. Yes, I suppose in some ways it could be termed a *trade show* for testers, but it is also a very good opportunity to hear (and be heard) and see (and be seen – networking) fellow testers who are typically on the other side of the world from me.

The following is a summary of what I discovered at this year’s StarWest…

Day One – October 3rd
The day started with me getting *lost* by following my iPad map and taking the wrong turn. This was a case of error between tablet and user.
Upon arriving and registering, we (delegates) were presented with a free breakfast – free? food? Yup, I was there and didn’t turn it down.

As I ventured into the conference room, my immediate thought was that this place was big…and there were a lot of testers here (relative to Australasian conferences I’ve attended). Lee Copeland opened the conference and, as a nice gesture, acknowledged all of the overseas testers in the room.
I parked myself down to the side of the main stage – I thought I spied Scott Barber, Griffin Jones and Michael Bolton and made a mental note that I must catch up with them.

So I opened the iPad, fired up Twitter (and noticed how quick the internet access was) and settled down to two excellent keynotes.

The first talk was by Jonathan Kohl on Tapping into Testing Mobile Applications. This talk got my interest: I don’t test in that space, but I own a phone, and the fact that Jonathan made his talk very interesting got me thinking that I may in future be testing devices and apps – so why not pay attention!? These were some of the ideas that I took away (as I captured and recorded them on Twitter):

#starwest.. Move beyond the black box…the black box is dead in mobile testing @jonathan_kohl #testing
#starwest…gamification…reward the best bug video of the week.. @jonathan_kohl … Me: great idea!
#starwest…gamification of work @jonathan_kohl .. Me: send testers on a quest, level up to test paladin level 1…sounds cool!
#starwest … Gamification of work movement… @jonathan_kohl me: Great idea!
#starwest … Social features..how do we harness the time spent on social media…use social media interruptions in your app testing
#starwest … Testing mobile apps..get out in the real world..at home, on the move and away from home (mall, weather etc)
#starwest …build something amazing as opposed to strictly following a certain methodology @jonathan_kohl
#starwest…usability testing and how a user uses an app is paramount! @jonathan_kohl ..research usability testing approaches
#starwest …one type of project challenge with web apps…extreme time pressure @jonathan_kohl
#starwest …airline apps are developed on assumption of a strong network signal…at an airport not always true @jonathan_kohl
#starwest @jonathan_kohl …think about whats outside of the device…
#starwest think about testing apps in the grey box space. Black box may not be enough
#starwest..understand users motivation and emotions
#starwest …testers don’t use apps in ideal environments
#starwest deleting apps is an emotional thing. Think about emotions when testing mobile apps
#starwest apps…don’t give people reasons to delete your app #testing

The key points for me were about understanding a user’s emotions and motivations when testing a mobile device, and that testing is not confined to just the *black box*. There is more to testing than the front end.

What also struck a chord with me is the need, when testing an app, to keep in mind not giving a user a reason to delete it. It doesn’t take long to install and even less time to delete. The reasons to delete an app may also be magnified if a gripe about it is spread on social media – it could take a Hail Mary pass to come back from that!

Great talk – crowd now *fired* up!

The second keynote of the day was by Johanna Rothman on Becoming a Kick#$% Test Manager. To start with, Kick#$% here actually means awesome!

Here’s what I captured via Twitter…

#starwest…if there is no problem to solve then you don’t need a meeting @johannarothman
#starwest…if the meeting doesn’t have an agenda you don’t have to go…ask for 24hours notice for an agenda @johannarothman
#starwest…How to say no to multi tasking amongst projects @johannarothman http://t.co/JDLEib6l
#starwest…Build communities of practice…”lunch and learns” great for learning what others are doing/sharing @johannarothman
#starwest.. Forget about solo experts and multitasking…testers assigned to multiple projects..nothing gets done @johannarothman
#starwest…When people start caring about bugs depends on where they are in the products lifecycle @johannarothman
#starwest…An awesome manager has regular one on ones…they are more AWARE of what is going on @johannarothman
#starwest…testers are generalists …help them move into other positions if they want to.. @johannarothman
#starwest…coaching is when you offer options with support..when it’s needed @johannarothman
#starwest…as a test manager, how do we determine that our people are high worth to you? @johannarothman
#starwest…you need testers that can also understand the solution space. @johannarothman
#starwest….hire smart, high value people. When you start talking about price, you’re not talking about value @johannarothman
#starwest…@johannarothman … We test from the perspective of curiosity not victims….
#starwest…@johannarothman … Next StarWest keynote on becoming a kick@$$ test manager. Kick@$$ = awesome!

I dislike meetings – or, to be clear, meetings that achieve nothing and have no direction. They are time wasters and give the illusion of activity, so when Johanna reiterated the need for an agenda, or for not having a meeting at all if there is nothing to solve, that got a big green tick in my mind. Like a lot of us, I have been in far too many non-productive meetings that generate…nothing.
Great talk – first time I’ve heard Johanna speak and the crowd was fired up again (I thought I heard chanting like an English Premier League soccer match in the far corner acknowledging Johanna’s keynote – but I digress).

After the keynote, the trade show opened and a million tool vendors (of very similar tools) and consultants dazzled everyone with their wares (and swag – I was too slow for the Atlassian t-shirt but picked up a couple of USB sticks, a mouse pad and a whizzy pen that lit up). Interestingly enough, the big focus appeared to be tools that tested mobile applications.

So, being the tester that I am, I decided to rock up to a few vendors and ask if they had any tools that worked on iSeries.

Mostly I got blank looks.

One vendor attempted to find out more but admitted that they didn’t really know if their tool could. If it did, and if I were looking at their tool for iSeries, I would sound them out. Why? They were honest enough to admit that they didn’t know (though they did try to find out), and that *integrity* counts for something. (Some other vendors gave me a rehearsed sales pitch without listening to what I was saying – at that point, I just wanted to leave – but pass me the swag first! :))

I quite enjoyed walking the tool vendor hall and it was good to be amongst the *buzz*.

Spotted Michael Bolton and introduced myself and chatted to him. Bumped into Scott Barber and we had a good talk – 2 out of 2 so far…

From 11:00am till 12:00pm I was involved in giving free consulting sessions. Anyone who wanted to talk and ask questions could (and did). I had some good discussions, particularly on the state of test automation in New Zealand (which, unsurprisingly, was similar to the US – just a different order of magnitude). I also managed to get a *free* consult with Doug Hoffman, a real gentleman amongst testers, and I came away with answers to some questions.

After lunch, track sessions began in earnest and there were six streams to choose from, which I won’t go into in detail here. Suffice to say that I took quite a few notes – I was like a kid in a candy store!

Leaving the conference after day one was a buzz. Ideas floating around my head – walking through downtown Disney with the Disney tunes playing non-stop –  spying Goofy and the Mad Hatter – it all added to the magic of StarWest 2012 (how can you NOT be entranced by having a conference here)!

I’ll talk on day 2 next post…


Agile @ StarWest Software Testing Conference 2012

In my experience, what makes agile so powerful is the encouragement of rapid, effective communication to achieve, uncover and discover what is wanted, what is being built and what could be going wrong. Practices such as collaboration and co-location can be effective tools for any project, regardless of whether it is agile or not.

I will be at StarWest 2012, Anaheim, California next week (1 October 2012 – 5 October 2012) and will be speaking on Thursday about how I borrowed some agile practices for a non-agile project and the lessons that I learnt.

If you’re attending StarWest, come along and say hello; otherwise I will blog and tweet where I can!

See http://www.sqe.com/StarWest/Concurrent/Default.aspx?Date=10/4/2012#T19

Whom shall I serve?

Tweet – Mike Talks – KWST#2 – June 2012

Whom shall I serve?

A song, a hymn, or a reminder as to who our customers are? Who do we serve and why is that important?

During KWST#2 (June 2012, Wellington, New Zealand), the discussion about whom we serve came up.

Mostly, the answers tended to support the obvious conclusion (to me at least) that whom we serve could be:-

Our employer(s)
The project manager
The developers
The business
The test manager
The test team
The project team
Our family

And these are all valid customers/people/organisations/groups that we give service to in some way. But there is one other element that sometimes we don’t consider…

Ourselves

Whom shall I serve? I think first and foremost it is ourselves. We are responsible for our own work, for our own ethics, our output, our own learning, our own interactions with others, our own interactions with other testers and our own interactions with the software testing community.

Sometimes we take a high degree of responsibility for one or some of these things and sometimes we don’t. What may be important is that we come to understand that we also serve ourselves and by seeing ourselves as a customer (if you will) then it allows us to appreciate who we are as a tester, what we can deliver, what skills we have and what we stand for.

Too often I have seen testers wilt in the face of criticism (and scrutiny, for that matter) from management when attempting to justify testing, test artifacts or activities. Knowing what we stand for gives us moral ground to argue from. Unfortunately, being conscious of our position doesn’t mean that everything will be *perfect*, but at least we know our tipping point.

So how do you deal with reaching your tipping point?

Well, that does depend but some of the ways that I have used have been:-

  • Educate those that may be pushing you towards your tipping point – (in my experience, it is typically a manager)
  • Listen to those pushing you to your tipping point – (it is possible that we don’t understand their context)
  • Use your influence and credibility to help educate
  • Employ a stealth approach – (on one project, the client wanted test cases (with expected and actual results) and what they saw as structured testing. While we spent time giving them what they wanted, the majority of the issues found during test execution came not from the test cases but from an undeclared exploratory approach. Our plan of attack became: give the customer what they wanted, educate them along the way and use good exploratory testing to find valuable information *quickly*. The test cases in this instance were our checks; the exploratory test charters, our tests. The stealth came from discerning the client’s context, employing what became a blended approach and not necessarily letting management know that this is what was happening.)
  • Leave – (this is most likely the extreme option but sometimes it is more beneficial to/for you to leave a project/employer/organisation than having to adhere to rules that may not make sense. I have done this, it was a challenge but I’m glad I did it.)

So, whom do we serve? Ourselves first (it’s not as selfish as it may seem) and then those mentioned above. Putting ourselves first means that we are taking responsibility for the quality of our own work which means in turn, we are better placed to serve our customers.

Learning from the frustration of test case debates

What is a test case?

The reason I ask is that recently I have been following (and commenting) on a question in the LinkedIn group Software Testing & Quality Assurance  – ” hi guys do u think that creating Test Cases is must? According to me, creating Test Cases is just a waste of time rather we should utilize the time in doing testing. what is your opinion? ”

At first glance I thought it would be relatively *easy* to pick apart this question and the ensuing replies. However, after reading through the comments, I immediately felt frustrated. Why?

Upon reflection, I noticed a couple of things.

First, it helps to view the comments from the start. I had missed the fact that there were something like 100 comments before I jumped in. Realising this would’ve helped save the frustration, because Griffin Jones said it from comment one:

@Pradeep – I forecast that this conversation will become confusing because:

a. people will be using unstated assumptions about what is a “Test Case”. Some people’s working definition will exclude other people’s working definition of the term. This “shallow agreement” problem will not become obvious until comment # 78.

And Griffin’s prophecy came to pass.

Which led to *the* problem:

Comments were roughly divided into two groups. The first treated *a test case as a set of inputs with expected results* and talked of the test case as a tangible artifact. The second tended towards seeing the test case as an instance of a test idea and, generally speaking, this second group were the ones who constantly challenged the assertions of the first.

And then it dawned on me.

The second group appeared to be aligned with the context-driven school of testing and, as such, realised that there were *a lot* of dangerous assertions in the comments made by the first group. For example:

Testcases ensures the tester does not over do his testing and makes sure when and at what stage of his testing he could exit and say that the testing is complete.

If we were to look at the above statement, a number of questions spring to mind. First of all, how does a test case ensure that a tester does not overdo his testing? What does it mean to overdo testing, and if testing is *overdone*, what is it compared to, to be deemed overdone? If the commenter means ignoring the risk of testing something else, or finding information outside the scope of the test case, then overdoing it has potentially risky consequences for him or her: they have now jumped outside of the test case box and may find interesting information…tsk, tsk, as now they may not meet their test case execution target, because now they are THINKING about what they are doing as opposed to just doing *something*.

An engaged tester, by contrast, would be aware of their coverage and risk model and seek out information that may challenge that model. Notice that the engaged tester does not complete a test just because they have ticked off all of the steps to execute; otherwise, we end up blindly following a script and we’re checking, not testing. This highlights an issue of the commenter viewing a test case as a tangible item when in reality it is an abstraction. It is an idea (or collection of ideas), and *passing* a test case does not guarantee that the idea is finished with. Rather, a good tester will most likely extend that idea into many ideas.

Of course, we could critically pull apart the rest of the comment and show the fallacy in the statement (such as: how does finishing your test cases mean that your testing is complete? It could in some circumstances, but I suspect that the commenter meant completing testing – full stop). There are a number of comments like this and they all follow the same theme: we write test cases so that we can cover the requirements, and we have repeatable tests so that we can teach others, and because the V-model aligns with Saturn and Mercury in the house of Leo – so it must be good!

But I digress…

AND this was frustrating for me. It seemed that no matter how many times (and in how many different ways) the second group (let’s call them Team CDT) highlighted flaws in the first group’s arguments (let’s call them Team Factory), another equally inane comment would appear. It made me realise that (to paraphrase James Bach)…

If you are frustrated then it means that something is frustrating!

Realising this then made the rest of the journey…well…more fun! I realised that I could not wilfully change anyone’s mind except my own. I realised that, regardless of what I shared, others are free to disagree. I realised that no matter how many times I pointed out a fallacy in someone’s argument, it’s up to them whether they take heed or not.

AND I realised that I could actually benefit from this and not let the emotion of frustration take hold.

How you say?

By looking for like-minded individuals and engaging with them, knowing that I’m most likely to get a meaningful discussion coming back. By practising pulling apart a comment and challenging someone’s assertions. By applying James Bach’s Huh? Really? So? heuristic. And so what was initially a frustration quickly became a learning experience.

While it’s galling to see many testers fall into Team Factory, I am heartened to see a number of testers critical of the *status quo* and willing to challenge it (as demonstrated by their replies to Team Factory comments). It is through challenging that we grow the craft into something that is stronger, more assertive and more critical overall.

KWST#2 – Day 2 – some thoughts

Day 2 had a different (positive) vibe, and in part I think it was due to the solid first-time experience reports by Katrina Edgar, Mike Ward and Mike Talks. All three gave reports that really highlighted some of the ethical challenges we face as testers in our day-to-day world.

Some of the thoughts as tweeted on day 2 were…

  • Testing based on rituals is NOT testing – [the blind adherence to a test tool or process or syllabus is NOT testing. Testing is a brain engaged activity]
  • Testers are there to report the truth, not the convenient truth – [Testing is about presenting the facts as they stand and not manipulating them to suit an agenda (for an example of the *truth* being misused, see http://www.theaustralian.com.au/australian-it/states-health-payroll-change-was-adopted-untested/story-e6frgakx-1225888223958)]
  • If challenging, is your reputation strong enough to withstand any ethical fall out?
  • As testers we have to look through the eyes of the people who matter – those with whom we have a contract.

During discussion, the topic of agency came up, and with it the question of whom we serve as testers (see http://en.wikipedia.org/wiki/Law_of_agency ). This discussion brought a renewed energy into the room, with Geoff Horne leading it. The question that was asked was…

Who do we serve?

Geoff stated…

We are engaged by an individual, and by the organisation behind the individual

Which is true to a point, from the perspective of the engagement between tester and *client*. However, I see the question from the perspective of the tester, which is…

I serve myself first before any institution, as I’m responsible for my own ethics

In other words, as a tester, I am responsible for the ethics I hold and carry. I am responsible for making sure that my house is in order before the needs of the organisation are considered. From there, what I consider ethical extends to those I serve literally (or, in Geoff’s case, from the perspective of the engagement): the people I work for and with, and the organisation at large. The discussion of agency and other ethics topics can be summed up quite nicely by a tweet from @NZTestSheep (aka Mike Talks)…

The great thing about an event like #KWST2 is how it challenges our models and maps, and we’re still processing it days afterwards

Learnings:

KWST takes a lot of organising and it is the details that count, such as…

  • A good venue (space, lighting etc)
  • Internet connection (VERY helpful)
  • Appropriate twitter tag
  • The RIGHT people to invite [This year revealed some really good thinkers and it will be exciting to be working with them at future KWST conferences]
  • It can spawn off-shoots (like David Greenlee’s OZWST)
  • Facilitation is king – it takes practice, a firm hand and the ability to know when to let the conversation flow
  • Preparation beforehand FROM everyone (and reminding everyone that they are potential *speakers*)

Thank you for all those that attended KWST (see http://hellotestworld.com/ and http://martialtester.wordpress.com/2012/06/18/kwst2-what-a-ride/ and http://martialtester.wordpress.com/2012/06/19/kwst2-happy-snaps/ )

Thank you, James Bach, for your time in helping build a credible, professional, thinking community of testers down under, and thank you, Software Education, for your support in hosting KWST#2!

***EDIT: Much thanks must also go to Oliver Erlewein, Richard Robinson and Aaron Hodder for their drive and passion in prompting thinking, engaged testing especially here in Wellington, New Zealand. ***

Aggressive and Passive Testing

I’ve been thinking about how I *bucket* testing. Here is what I mean. I see testing as aggressive and passive.

Aggressive testing, to me, is the art of asking the product questions, thinking outside of the box (and the textbook) and trying different ways to test the application, looking for interesting information (whether bugs, issues or curios). I prefer to be an aggressive tester. My mindset is to look for ways in which a product could fail. I see this as our value-add as testers. When we find a bug, the bug is reported and resolved in *some way*, thereby helping increase the quality of the product.

I believe that, while there is an element of passive testing (and what I mean here is checking), a tester is more beneficial to a project IF they are being aggressive and proactive and looking for potential failures or issues.

Detailed test scripting can be *aggressive* in some ways, but I’ve found that by having a pre-determined course of action, I am more likely to allow confirmation bias to influence the way I work. By exploring (and I mean with some structure – whether by session-based test management or using high-level test conditions/risks/ideas), I have found that I am more likely to be aggressive in nature and pursue lurking bugs in the code, as I am not constrained by following detailed test steps.

I believe that to be an effective tester ultimately means that we are aware of the context of the project, application and environment, we are in pursuit of information (bugs, issues or curios – curio *term* taken from a discussion on twitter from James Bach and Michael Bolton) and we are feeding this information back into the project thereby helping management make more of an informed go/no go decision.

In one project I worked on there was a significant element of what I call passive testing, which came in the form of *running* a regression suite. This involved executing a test script which quickly devolved into an almost meaningless tick-off-and-check exercise (check the test step – is it correct? If so, check it off; if not, do a superficial investigation – though I can state that no regression bugs were *found* as a result of this *testing*!)

This is bad passive testing which unfortunately is common in my part of the testing world. How many times have I seen disengaged *testers* running scripts that supposedly are meant to add value to the project – to my sceptical mind, all they add is paper.

Now, not all passive testing is as I’ve described. Even when we are aggressive in our approach there may still be elements of passive testing (checking against rules, contracts, laws, configurations, environments or anything that may require checks that help support what we do as engaged testers).
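The distinction between a check and an engaged test can be sketched in code. The following is a minimal illustration only (Python; the function, inputs and values are hypothetical, invented purely for this sketch, not taken from any real project): a passive check compares one fixed input against one fixed expected result, while an engaged tester treats that check as a springboard for further questions.

```python
def convert_currency(amount_nzd, rate):
    """Toy function under test (assumed behaviour: simple multiplication)."""
    return round(amount_nzd * rate, 2)

def passive_check():
    # The scripted step: one input, one expected result, tick it off.
    return convert_currency(100.00, 0.82) == 82.00

# An engaged tester treats the check as a starting idea, not an end point:
# what about zero? negative amounts? a rate of 0?
exploratory_probes = [
    convert_currency(0, 0.82),        # boundary: nothing to convert
    convert_currency(-100.00, 0.82),  # is a negative amount even valid?
    convert_currency(100.00, 0),      # a zero rate silently wipes the amount
]

print(passive_check())      # the check passes...
print(exploratory_probes)   # ...but these results still need human judgement
```

The point of the sketch: the passive check passes, yet the probes beneath it surface questions (is a negative amount valid? should a zero rate be allowed?) that no pass/fail tick can answer – that judgement is the engaged tester’s job.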

Aggressive and passive testing are NOT mutually exclusive – they are interdependent and intertwined – the issue I have is when the passive (non-engaged) testing is more prevalent than the engaged (by being engaged I mean brain switched on testing).

Unfortunately this is common in bigger organisations where I live. Fortunately, there are pockets of very engaged, context driven testers around that add much more value than the stock standard factory schoolers. I am thankful for that for it means that I’m not a lone voice in this part of the world!

New terminology?

Am I introducing a new set of terminology? No. Will others view testing as aggressive or passive? Maybe not. What I am doing is highlighting how I see testing. There is nothing wrong with that. I am not constrained by a glossary (though they could be useful); rather, I am attempting to demonstrate what I mean by testing and how I view the craft that I work in.

Good Exploratory Testing Practices webinar

Today (14th February 2011 @ 1200 hours – 12pm – New Zealand time) I will be presenting a webinar on Exploratory Testing practices that I use to help put guidance around my testing.

To register for the webinar click here.

Also check here to see how New Zealand time compares with the time in your part of the world.

Look forward to having you tune in!

Weeknight Testing #04 – an experience report

I had the privilege of joining Weeknight Testing (Twitter #WNTesting). This was my first session, as I am generally not available for weekend testing sessions. (By the way, WTANZ session #12 is on this weekend.)

Ok – so what happened during the Weeknight testing session?

I was about 5-10 minutes late, waiting for my laptop to boot up, and when I did log in, there was a flurry of chatter (the testing session is held via IM over Skype).

Darren McMillan was the facilitator, who had the challenge of keeping up with the threads and multiple chats while at the same time guiding direction in a subtle way (mainly by quoting interesting comments).

I found the *noise* so challenging that I went *dark* (to steal a Tony Bruce phrase :)) for a while – in other words, I didn’t contribute to the discussion(s) until I had read the mission and requirements document and got used to the rhythm of the session. I found that while the first two are important, the rhythm is vital: it meant that I was able to respond to questions or threads in *real-time* once I had the rhythm of the conversation(s).

So – what was it all about?

The mission was to *test* a set of requirements for a fictional company called CRM-R-US by “…reviewing and feeding back on the initial requirements to help identify any gaps, risks or potential issues in them.” The document was at an early stage of requirements gathering and was a first draft. The product is a marketing tool centered around Twitter.

Some of the participants mentioned they were off mind mapping so I followed suit – except I hand drew mine. I identified four major sections in the document but focused initially on one – the section on the Campaign Engine.

The reasons were threefold:

  1. The lack of *detail*
  2. The section was based on a vision and
  3. A comment stating ‘Our CEO Patricia Elmer’s liked Brian’s idea so much she’s now seeing this as the key selling point of this feature.’ The CEO is someone who matters and has major influence and power, so almost by default the section, to me, had high risk.

So, I began to ask some questions – a few at first and then, once I got the rhythm, a lot more. By that time there were 40 minutes to go and questions and comments were coming thick and fast. There was a great question from Sharath B – What’s in it for me if I follow? This made me pause: I was thinking from a business user/call centre point of view, whereas Sharath’s question made me think along the lines of the target audience and why they would want to follow our fictional company on Twitter. For me, Sharath’s question made me look at the broader picture and defocus my thinking. From a testing point of view, using a defocusing strategy helps you look at a problem from a broader point of view. This was one of many fantastic ideas, thoughts and questions – the transcript will be posted soon (http://weekendtesting.com/archives/tag/weeknight-testing), from which you can see some of the great thoughts and ideas that went on during the session.

Lessons Learned for me…

  • Sometimes pairing *may not* be the best option – some great pairs of testers working on a mind map tool weren’t able to pair as effectively as they might well have liked.
  • Tour the product
  • Ask ‘What is NOT being said’
  • Alert – if people who matter (e.g. the CEO) are mentioned throughout the document, flag it as a potential risk, as they have influence/power/authority
  • Mind mapping is a good idea generator and framing tool – see the mind map from Lisa Crispin and Mohinder Khosla and the mind map from Rakesh Reddy, all of whom were involved in this session.
  • Focusing AND defocusing strategies work well together (focusing on a section to get specific, defocusing by looking at the bigger picture.)

These are some of the thoughts running through my head – I was able to connect with some really good thinking testers, which in turn has helped me a lot – all in the space of an hour or so!

If you haven’t tried weekend or weeknight testing, give it a go – it is a worthwhile investment!

Software test leadership is alive in New Zealand!

I’ve been lamenting the state of testing in New Zealand, or more specifically test leadership. Now, I’m not talking about the number of test leads or managers – I’m talking about leaders in our community.

I felt there weren’t very many leaders with most testers here settling for a *just do my job* mentality.

Until last night.

Last night, Software Education held a customer evening, inviting customers to view its new premises. I met some interesting people and had some great discussions, and then it dawned on me: I’ve met some strong testing community leaders already, but I had thought of them individually, not collectively, and I’ve discovered that there are more test leaders than I’d realised. Now, when I talk about community leadership, I mean context-driven, let’s-discuss-and-debate-and-better-our-craft type leaders (irrespective of whether they are part of the ISTQB certification program or what have you).

And so what I would like to do is highlight these leaders as testers to watch because in their own way, they are helping the craft grow in New Zealand. 

Farid Vaswani – Test manager at Auckland University, associate editor for Software Test Professional and implementor of SBTM at Auckland University.

Oliver Erlewein – Performance tester/test Manager at Datacom Wellington, context driven space, will debate or challenge the status quo. Weekend Testers Australia New Zealand facilitator.

Trevor Cuttris – Team Leader IAG – involved in mentoring and upskilling testers in many different ways (at work SIGiSTs groups etc). We had a good discussion around ET and SBTM.

Rob O’Connell – Assurity Consulting – very similar to Trevor. Lots of passion. Not willingly to accept the status quo if it provides no value. Mentoring, upskilling, uplifting and highlight the craft.

Katrina McNicholl – AMI Insurance – Christchurch based – passionate about the craft, about learning and about sharing ideas and thoughts on testing at the local level.

Tessa Benzie – AMI Insurance – Christchurch based – the same as Katrina – involved wanting to better the testing craft at a local level.

John Lockhart – Webtest Auckland – context-driven test automation – *guru* with FitNesse – first met John through the AST BBST series of courses.

Matt Mansell – DIA – is involved in many different areas that result in testing being given a higher profile particularly in the Wellington market.

Honorable mention: Aaron Hodder, Shawn Hill (what an awesome presentation at STANZ 2010!), Christo Bence, Andrew Black, Sophia Hobman, Richard Robinson, Jonathon Wright.

Is this an exhaustive list? No.

Are these the only community leaders in New Zealand? No – but these are testers that I’m tagging as ones who will have an impact on the testing community, whether locally or nationally, and who will help improve the state of our craft here in New Zealand.

Have I missed some testing leaders? Most likely – BUT I hope you come forward, I hope you stand up and I hope you begin to share your passion for testing with us all (conferences, SIGiST groups, STANZ, blogs, twitter – the list is endless).

To those whom I’ve *outed* – it’s time to highlight the incredible talent we have here in testing, and it’s time to share the passion that you have with everyone and become…leaders.