Learning from the frustration of test case debates


What is a test case?

The reason I ask is that I have recently been following (and commenting on) a question in the LinkedIn group Software Testing & Quality Assurance – ”hi guys do u think that creating Test Cases is must? According to me, creating Test Cases is just a waste of time rather we should utilize the time in doing testing. what is your opinion?”

At first glance I thought it would be relatively *easy* to pick apart this question and the ensuing replies. However, after reading through the comments, I immediately felt frustrated. Why?

Upon reflection, I noticed a couple of things.

First, it helps to read the comments from the start. I had missed the fact that there were something like 100 comments before I jumped in. Realising this would’ve helped save the frustration, because Griffin Jones said it from comment one:

@Pradeep – I forecast that this conversation will become confusing because:

a. people will be using unstated assumptions about what is a “Test Case”. Some people’s working definition will exclude other people’s working definition of the term. This “shallow agreement” problem will not become obvious until comment # 78.

And Griffin’s prophecy came to pass.

Which led to *the* problem:

Comments were roughly divided into two groups. The first held that *a test case is a set of inputs with expected results* and talked of the test case as a tangible artifact. The second group tended towards seeing the test case as an instance of a test idea and, generally speaking, this second group were the ones that constantly challenged the assertions of the first.

And then it dawned on me.

The second group appeared to be aligned with the context-driven school of testing and, as such, realised that there were *a lot* of dangerous assertions in the comments made by the first group. For example:

Testcases ensures the tester does not over do his testing and makes sure when and at what stage of his testing he could exit and say that the testing is complete.

If we look at the above statement, a number of questions spring to mind. First of all, how does a test case ensure that a tester does not overdo his testing? What does it mean to overdo testing, and if testing is *overdone*, what is it compared against to be deemed overdone? If the commenter means ignoring the risk of testing something else, or finding information outside the scope of the test case, then *overdone* has potentially risky consequences for him or her: they have now jumped outside of the test case box and may find interesting information… tsk, tsk, as now they may not meet their test case execution target because they are THINKING about what they are doing as opposed to just doing *something*.

If the tester became engaged, they would be aware of their coverage and risk model and seek out information that may challenge that model. Notice that the engaged tester does not complete a test just because they have ticked off all of the steps; otherwise, we end up blindly following a script and we’re checking, not testing. This highlights the issue of the commenter viewing a test case as a tangible item when in reality it is an abstraction. It is an idea (or collection of ideas), and *passing* a test case does not guarantee that the idea is finished with. Rather, a good tester will most likely extend that idea into many ideas.

Of course, we could critically pull apart the rest of the comment and show the fallacies in the statement (such as: how does finishing your test cases mean that your testing is complete? It could in some circumstances, but I suspect that the commenter meant completing testing – full stop). There are a number of comments like this, and they all follow the same theme: we write test cases so that we can cover the requirements, and we have repeatable tests so that we can teach others, and because the v-model aligns with Saturn and Mercury in the house of Leo – so it must be good!

But I digress…

AND this was frustrating for me. It seemed that no matter how many times (and in how many different ways) the second group (let’s call them Team CDT) highlighted flaws in the arguments of the first group (let’s call them Team Factory), another equally inane comment would appear. It made me realise that (to paraphrase James Bach)…

If you are frustrated then it means that something is frustrating!

Realising this made the rest of the journey… well… more fun! I realised that I could not wilfully change anyone’s mind except my own. I realised that, regardless of what I shared, others are free to disagree. I realised that no matter how many times I pointed out a fallacy in someone’s argument, it’s up to them whether they take heed or not.

AND I realised that I could actually benefit from this and not let the emotion of frustration take hold.

How, you say?

By looking for like-minded individuals and engaging with them, knowing that I’m most likely to get a meaningful discussion in return. By practising pulling apart a comment and challenging someone’s assertions. By applying James Bach’s Huh? Really? So? heuristic. What was initially a frustration quickly became a learning experience.

While it’s galling to see many testers fall into Team Factory, I am heartened to see a number of testers critical of the *status quo* and willing to challenge it (as demonstrated by their replies to Team Factory comments). It is through challenging that we grow the craft into something stronger, more assertive and more critical overall.

Author: bjosman

Principal Consultant at OsmanIT brian.osman@osmanit.com

9 thoughts on “Learning from the frustration of test case debates”

  1. Well said. It is surprisingly difficult to get software testers from different backgrounds “on the same page” about what is meant by a “test case.”

    Griffin is more brave than I am; I generally avoid the LinkedIn discussion forums because I often find that participants (and of course I’m overgeneralizing here) (a) tend to be more interested in “talking at” others as opposed to “listening to” others, and (b) the level of thought a lot of participants put forth is not very impressive.

    – Justin

    1. Hi Justin,

      I appreciate your comment and I agree – it does feel difficult to get people to agree, because of points a) and b) that you mention (especially a))!

  2. Kudos to you for managing to get something positive from the LinkedIn forums – as Justin says above, I now tend to avoid them as it is too frustrating.

    So your post has now raised some thoughts: should I carry on avoiding them, or should I try to engage the people there? At least now, after reading this, I can think a bit more about why I find it so frustrating.

    thanks for the post

    1. Hi Phil,

      Thanks for commenting, and I’m glad it prompted a potential rethink of contributing to LinkedIn groups. Hopefully, your credibility and standing in the community may influence testers in a positive way 🙂

  3. Well done for sticking with the discussion. I would have left before now. 🙂

    One thing to consider though is this:

    Does it matter if some people think a test case is X and others think it is Y?

    If they never need to communicate under normal circumstances (i.e. outside of LinkedIn) then is it a problem? Or should we try to challenge things we *think* are wrong when we see them?

    Great post and thanks for sticking with that thread and then posting your learnings. You lasted a lot longer than many people do 🙂

    Rob..

    1. Hi Rob,

      Thank you for commenting – I appreciate that!

      “Does it matter if some people think a test case is X and others think it is Y?”

      Good question and I guess it depends….
      *If mismatched assumptions exist between testers and anyone else on the project, that could be a problem.
      *If the contract asks for one thing but the testers assume another and deliver what they consider to be a test case (and not what the client expected – which could also be an educational matter, i.e. dear client, this is what we do, this is how we do it and this is what we deliver, etc.)
      *If a supposed body of authority proclaims that test cases are defined as X, we may disagree and challenge that for the *better of the craft*
      *If a person (client/TM/TA/PM etc.) puts whatever they define a test case to be to an unethical use
      *A challenge to debate, so that hopefully the concept/definition of a test case may be refined (if not for everyone, then at least for someone)
      * And so on…

      Thanks again for your comments and questions, Rob

    2. “Does it matter if some people think a test case is X and others think it is Y?”

      There are two contexts where this has mattered to me:

      – Our company is selling our enterprise-level product to a customer that wants to do some kind of acceptance testing and asks us for some insights (which seems literally corrupt to me, but I digress).

      – We are cooperatively putting together an enterprise-level solution with a partner/OEM and we have to cooperate on testing, or communicate what kinds of testing we did before giving them the product to then customize.

      In these contexts it is just as important to communicate to these other people, who are often from a totally different testing culture, what we do, what we recommend they do, and so on.

      And I have to say, one thing about those people who are born, raised, and deeply steeped in Factory School is that, even if their testing doesn’t really *do* much, they can sure talk convincingly about it. In contrast to which, I know that when I’m on my game, I find a lot of stuff, much more than the factory-schooler with his chapter-and-verse test plans does. But the description of what I’m doing tends to devolve into “here’s a one page explanation of the overall process and please personally trust me after that”, which, as I get older and a little outside my domain-knowledge comfort zones, people don’t.

      I was able to forge a compromise with one Factory School external test group by giving them a couple-hundred-line spreadsheet sprinkled liberally with “this is the reason for this section; do these steps to teach yourself what the expected functionality is and then use your judgment for other things to do”. I didn’t get any objections from their group, and it seemed to go over OK, but I never really got a phone call or e-mail after the whole project to find out whether it really bridged the gap or not.

      –JMike

  4. Just read the blog, and the LinkedIn diatribe (which looks to have travelled further in the days since you posted this). It floors me that this sort of debate can miss not just the qualification of what is being argued for or against, and not just that gaping assumptions are left unqualified and untested, but also the core issue of *why* people within the testing community use test cases in their testing and measure their use unquestioningly. The issues are never addressed (or just ignored) and are replaced by an “either you are for us or against us” stance, with no idea whether what you are arguing for or against is in fact the same thing under a different name, fundamentally different, or better.

    Having worked in testing for a while now, I know that test cases differ in name, form, and often in what is delivered, almost as much by who produces them as by industry or organisation. But the end point has to be information for me – the business, the internal and external customers and the decision makers need information to make the best informed decisions. So what does testing do to help this? It looks to cover and mitigate risk by exercising the applications, hardware and interfaces, and to call out where this is not possible, so the business is able to proactively manage its solutions. How does testing achieve this? IT DEPENDS… on the time available, the skills in the team, the domain and technical make-up of the team, the need for reuse or repeatability of the testing, whether the testing is unit, technical or business-process focussed, and whether the solution is highly complex, across multiple sites, countries, time zones and teams, or within a co-located team.

    Great points above Brian, but people need to debate testing from a position of professional curiosity, of informed, constructive and inquisitive analytical questioning, and of listening for what the core messages are. Too often we fall into the “I am right – you must be wrong” trap without truly questioning what we are arguing for – or against!
