TL;DR: Variation in testing often leads us to vary test data, flows and environments. We can increase the chance that we learn something relevant by reading more blogs, watching YouTube and attending conferences. We can also trigger insights of our own by reading meaning into unrelated text, content or koans.
A long time ago, I wrote The Evil Tester Sloganizer.
There are multiple versions but the default listed above is the main live example.
This uses a Generative Grammar to randomly generate slogans or phrases related to Software Testing.
For most people this is a novelty or a toy. But I’m going to try to convince you, in this post, that it is a serious tool for adding more variation into your Software Testing thinking.
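To illustrate the idea, a generative grammar is just a set of rules expanded recursively, with random choices made at each branch point. The grammar below is a minimal, hypothetical sketch - it is not the Sloganizer's actual rule set:

```python
import random

# A tiny, hypothetical grammar - the real Sloganizer's rules differ.
# Each symbol maps to a list of possible productions; a production is
# a sequence of symbols and/or literal words.
GRAMMAR = {
    "slogan": [["command"], ["question"]],
    "command": [["verb", "the", "noun", "now!"]],
    "question": [["Are", "you", "an", "adjective", "noun?"]],
    "verb": [["Test"], ["Break"], ["Question"]],
    "adjective": [["unduly random"], ["evil"], ["uncritical"]],
    "noun": [["tester"], ["requirement"], ["assumption"]],
}

def expand(symbol):
    """Recursively expand a symbol; anything not in the grammar
    is treated as a literal word and emitted as-is."""
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(s) for s in production)

print(expand("slogan"))  # prints one randomly generated slogan
```

Because each expansion makes independent random choices, repeated runs traverse different paths through the rules and emit different phrases - which is exactly where the variation comes from.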
Why Variation?
Variation is one of the key concepts that we harness during our testing. If we keep doing the same thing, in the same way, with the same data - would we ever expect it to trigger a different result?
Well, it might. If we didn’t expect the ‘thing’ to have changed, then doing the same activity might reveal to us that the ‘thing’ had changed. And it may have changed detrimentally - that seems to be the basic concept underpinning the phrase “Regression Testing”.
But… we know that we don’t have to do exactly the same thing, we could do an equivalent thing. i.e. some process and data combination that generates an equivalent result.
Words we might consider researching if we find this concept useful:
- Isomorphic
- Topological Property
If data is equivalent, we can vary it to add variation into our process with low risk of impacting the expected result. We incorporate this concept in Software Testing as Equivalence Partitioning.
We might be tempted to generalise Software Testing as a hunt for “A Difference that Makes a Difference”. Which variations are important because they provide relevant information to our decision-making processes?
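A minimal sketch of that equivalence partitioning idea: pick a random representative from a partition on each run, so the data varies while the expected behaviour stays the same. The partitions and boundaries below are illustrative assumptions, not from any particular specification:

```python
import random

# Hypothetical partitions for an "age" input field - the boundaries
# are illustrative, not taken from a real specification.
PARTITIONS = {
    "invalid_low": range(-10, 0),
    "valid": range(0, 120),
    "invalid_high": range(120, 200),
}

def representative(partition_name):
    """Pick a random value from a named partition: this varies the
    test data on each run while keeping the expected result the same."""
    return random.choice(list(PARTITIONS[partition_name]))

# Any value drawn from "valid" should produce an equivalent result,
# so each run adds variation at low risk.
age = representative("valid")
assert 0 <= age < 120
```

Running the same test with a fresh representative each time is a cheap way to fold variation into an otherwise repetitive check.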
Typical Variation
Typical Variation we harness when testing:
- Data
- Flow
- Environment
- Tooling
And all of that may help us explore the applications more thoroughly and increase our opportunities to identify information nuggets.
Adding Variation to our Thought Processes
As a tester we might use Heuristics or Models (from a different perspective) to help us think about the application in different ways.
How do we add variation into our thinking about Software Testing itself?
i.e. how do we find a difference that makes a difference to how we think about testing such that we gain new insights into Software Testing itself?
We might choose to:
- read the work of other people in blogs and books
- watch conference talks on YouTube
- go to conferences and meetup groups and confer
This can help identify nuances or new approaches from within the Software Development domain itself.
Sometimes a difference can come from studying related fields and incorporating insights. This can often expand our understanding of Software Testing into entirely new areas.
Using Variation to Trigger Our Own Ideas
I try to encourage people to ‘own’ their own definitions and approaches to Software Testing. That was the key concept I tried to get across in Dear Evil Tester.
And as an additional tool to support this move to ownership and responsibility, I created the Sloganizer.
The Sloganizer generates a random phrase. Some phrases will appear to be nonsense. Some may provoke. Some may be uncomfortable to read.
All of them need to be interpreted by the reader to identify meaning.
The Sloganizer is a piece of software that randomly traverses a set of rules. The text output by the Sloganizer has no purpose. The Sloganizer is not attempting to communicate. Any meaning is added by the reader.
It is designed to ‘trigger’ the reader into finding meaning where no meaning was communicated.
This may trigger us into thinking differently about Software Testing.
e.g. the Sloganizer generated the following:
- Act less critical now!
- Are you a unduly random little tester?
- I will try to create a log of what I explore
- Do I appear bad? It could be the weather! LOL!
Some may seem more serious than others. Some may appear to have no value whatsoever. But the exercise is to identify some value and information. Because the information comes from within, and can provide insight into how you view Software Testing.
Exercise:
- How did each item make you feel?
- What source led to that feeling? A bias? A belief? An experience?
- Do you want to hold on to that source, or is it worth exploring and expanding?
A Worked Example
“Do I appear bad? It could be the weather! LOL!”
We don’t really know why people react the way they do. We posit explanations when faced with aberrant behaviour - Did I say something they don’t like? Did I phrase an email badly? Did they read into my body posture or facial expression? The only way to really know is to ask. Otherwise we might be engaging in mind reading, which might lead us to the wrong conclusion, which might mean we make changes to our behaviour when no change is required. Because their reaction might be caused by something as simple as the bad weather making them a tad grumpy.
Lead By Example
This is an exercise I engage in. And I have expanded the Sloganizer over the years to support me in doing that.
And, I’m starting to make the results of that exercise public.
It has taken me some time to get around to automatically generating social-media-friendly images from the Sloganizer - it has been on my todo list for a while.
And I have now created a mechanism for generating images from the slogans which makes it easier for me to use the slogans as a public vehicle for exploring my biases, beliefs and attitudes towards Software Testing.
The obvious immediate destination for that seems to be Instagram or Facebook. I might add them to LinkedIn as well.
And I will probably collate them on the blog for those of you who don’t use Instagram or Facebook. So if you see the return of slogans, it is because I am using them to trigger micro-insights for myself, and to add variation into how I think about Software Testing.