I have had “A Mind At Play” by Jimmy Soni and Rob Goodman sitting on my shelf for a while now, waiting patiently with a bookmark on page 219 for me to revisit for reflection. This is the section of the book that describes Claude Shannon’s 6 Strategies for Creative Thinking.
I had a look online to see if I could find the material there to save me making too many notes.
The original paper can be found on Archive.org in Shannon’s Miscellaneous Writings, on page 528 if you want to leaf through to find it… or use the direct link below to the original 10-page talk transcription.
There is a transcription here, which is easier to read:
There is an extended analysis on Business Insider:
Strategies Named
The named strategies are:
- Simplification
- Supplementing - find existing answers to similar questions
- Restate - change the words, change the viewpoint
- Structural Analysis - split into smaller parts
- Invert - start from a conclusion and work backwards to find out how you arrived there
- Once solved, extrapolate and extend
The original paper is worth reading to help build your own model of how this might apply to testing.
Shannon does point out that these strategies on their own are not “magic”; there is an expectation that people have the training and experience in the domain to make best use of them, and that the person is motivated to learn and apply their knowledge and creativity to a problem.
Shannon also talks about one motivation being “constructive dissatisfaction”. I think this manifests for me in different ways. One is a dissatisfaction that people are overly confident in the software without corroborating evidence, and that dissatisfaction leads me to gather evidence… even if it shows the software in a poor light.
I think the modelling approaches are very useful and there are strong mappings to the way we can approach automating and testing.
Simplification
We use Simplification in multiple ways for testing.
- create graph models of complex systems
- to identify risk points and interfaces
- to help track coverage of our work
- to view it from a high level and think about it as a whole
- abstraction for automating
- write execution code at a level that changes very little and communicates intent
- communicating issues and coverage to other people
- remove variable data and focus on the invariants
- what is necessary to trigger a condition and what can we vary
- this is a key skill for creating data-driven automating (see the sketch at the end of this section)
- variation is often an equivalence class which, as we explore it, we discover wasn’t as equivalent as we expected
- etc.
Often, if I can’t simplify something, it is a clue that I don’t understand it well enough.
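As a sketch of the data-driven point above: the invariant is a single validation rule, and the variation is reduced to one representative value per assumed equivalence class. This is illustrative only; the UsernameValidator class and the 3 to 20 character rule are hypothetical stand-ins for a real system under test.

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class UsernameValidationTest {

    // the invariant: usernames must be 3 to 20 characters
    // the variation: one representative per assumed equivalence class
    @ParameterizedTest(name = "\"{0}\" expected valid={1}")
    @CsvSource({
            "ab,         false", // below the minimum boundary
            "bob,        true",  // at the minimum boundary
            "normaluser, true",  // mid-range 'happy path'
            "'   ',      false"  // whitespace only - is this really the same class as 'too short'?
    })
    void usernameLengthRules(String username, boolean expectedValid) {
        assertEquals(expectedValid, UsernameValidator.isValid(username));
    }
}

// trivial stand-in for the real system under test
class UsernameValidator {
    static boolean isValid(String username) {
        String trimmed = username == null ? "" : username.trim();
        return trimmed.length() >= 3 && trimmed.length() <= 20;
    }
}
```

The whitespace row is the kind of variation that, once explored, can reveal an ‘equivalence’ class wasn’t as equivalent as we expected.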
Supplementing
If we find a bug in one area, look for it in another.
If we learn about a bug in one system… could it be found in ours?
What external oracles can we use from ‘similar’ products or competing products?
Are there any analogous processes or metaphors that might give us new ideas and insights into this system?
e.g. is logging in a bit like opening a door?
- what if the door is locked?
- is there an alarm hooked to the door? Can we trigger it?
- locks can be picked
- SWAT teams break doors down
- what if the electronics on the passkey are offline?
- what if the door is open and unlocked?
- what if there is a door but the building behind it has fallen down?
- what if the door is locked and barred from the inside?
- what if the window next to the door is open?
- etc.
This is often where our study of ‘other’ topics helps us test because we bring in new ideas to help us.
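For example, here is that door metaphor translated into login test ideas as a small runnable sketch. The mappings are my own illustrative assumptions, not a definitive catalogue:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// a sketch: each 'door' question generates a login test idea
public class MetaphorTestIdeas {
    public static void main(String[] args) {
        Map<String, String> doorToLogin = new LinkedHashMap<>();
        doorToLogin.put("the door is locked",
                "the account is disabled or suspended - what does the user see?");
        doorToLogin.put("an alarm is hooked to the door",
                "do repeated failed logins trigger a lockout or an alert?");
        doorToLogin.put("locks can be picked",
                "can credentials be brute forced, or session tokens guessed?");
        doorToLogin.put("the passkey electronics are offline",
                "what happens when the authentication service is unreachable?");
        doorToLogin.put("the window next to the door is open",
                "can a deep link bypass the login page entirely?");

        doorToLogin.forEach((metaphor, idea) ->
                System.out.printf("what if %s? -> %s%n", metaphor, idea));
    }
}
```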
Restate
We often do this to think outside the requirements: to consider how people would actually use the system, not just how the designer thought it would be used.
What if we ignored all the constraints that were imposed by our rules and ’the user would never do that’?
Reframing the system, problem, use-case and harnessing our creativity to explore is one of the ’natural’ approaches to testing. Why do some people seem ’naturally’ better at finding issues than other people? Did they reframe or restate the functionality in a way that made sense to them, but not the creator or builder?
Shannon also describes using a process of generalisation. This can help identify cases where the generalisation does not apply: all the ’edge’ cases.
Generalisation can also lead on to creative uses of tooling.
A proxy isn’t just a ‘security testing tool’, it is a way:
- of observing and interrogating web traffic to help us understand the system,
- of manipulating web traffic to bypass client validation and trigger server conditions unavailable to normal humans,
- of tracking the inputs and coverage we achieve automatically, without us having to expend mental energy in tracking at a low level,
- of reviewing and inspecting the responses for information we might have missed first time out,
- etc.
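As a minimal sketch of the first of those uses: routing automated HTTP calls through an intercepting proxy (OWASP ZAP, BurpSuite, or similar) so every request and response is recorded without any extra logging code in the test itself. The proxy address and the locally running target app are assumptions; adjust both to your own setup.

```java
import java.net.InetSocketAddress;
import java.net.ProxySelector;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProxiedApiCall {
    public static void main(String[] args) throws Exception {
        // assumes an intercepting proxy is listening on localhost:8080
        HttpClient client = HttpClient.newBuilder()
                .proxy(ProxySelector.of(new InetSocketAddress("localhost", 8080)))
                .build();

        // hypothetical target: a locally running API under test
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:4567/todos"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // the proxy now holds a reviewable log of this traffic
        System.out.println(response.statusCode());
    }
}
```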
Structural Analysis
Creating more detailed models of the physical implementation and structure, not just a logical abstraction. The physical structure often sits behind a logical abstraction.
When I teach automating I talk about:
- Logical Abstractions - what we use it to do e.g. login
- Physical Abstractions - how we do it e.g. type into field, click button
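A minimal page object sketch showing the two levels together, assuming Selenium WebDriver and hypothetical element ids:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // logical abstraction: what we use the page to do
    public void loginAs(String username, String password) {
        enterUsername(username);
        enterPassword(password);
        clickLogin();
    }

    // physical abstractions: how we do it - typing and clicking
    // the element ids are assumptions about the application under test
    private void enterUsername(String username) {
        driver.findElement(By.id("username")).sendKeys(username);
    }

    private void enterPassword(String password) {
        driver.findElement(By.id("password")).sendKeys(password);
    }

    private void clickLogin() {
        driver.findElement(By.id("submit")).click();
    }
}
```

Tests written against loginAs change very little when the page layout changes; only the physical level needs to move.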
The structural analysis leads to a deeper understanding of the tech, which reveals new classes of risk and issue.
A structural analysis of the system can often help us isolate sub systems for more focussed testing and evaluate the risk in either not integrating them, or focussing too much on the integration.
Invert
If “this” problem did exist in the software, how would I trigger it?
When we have an uneasy feeling that something is wrong, but we just can’t make it happen yet, we can work backwards from the suspected problem to find a trigger.
I often approach automating like this. I write the code I want to see, then I implement it. I do not always refactor to abstraction layers. I model my automated execution as a final solution, then create the lower level implementations that make it happen. When I start, I may not actually know how I’m going to be able to automate it, but I know what I want the end result of automating it to look like.
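As a sketch of that inverted workflow, with all names hypothetical: the test method below was written first, as the end result I wanted to read, and the TodoList class was implemented afterwards to make it compile and pass.

```java
import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class TodoCreationTest {

    @Test
    void canCreateATodo() {
        // written first, before TodoList existed, as the code I wanted to see
        TodoList todos = new TodoList();
        todos.create("buy milk");
        assertTrue(todos.titles().contains("buy milk"));
    }
}

// the lower-level implementation, created afterwards to satisfy the test
class TodoList {
    private final List<String> titles = new ArrayList<>();

    void create(String title) {
        titles.add(title);
    }

    List<String> titles() {
        return titles;
    }
}
```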
Extrapolate and Extend
Your first solution is unlikely to be optimal.
This also covers the Tester’s Dilemma: “When is enough testing enough?”, “How do we know when to stop?”
But it also leads on to defect investigation. Is this the highest impact the problem can have? Is there some data that can make it more severe? Have we identified the person the problem matters most to? What would be the impact if this was shared on social media?
Put it into Practice
The transcription ends with Shannon putting these ideas into practice.
“I’d like now to show you this machine which I brought along and go into one or two of the problems which were connected with the design of that because I think they illustrate some of these things I’ve been talking about.”
Put it into practice in your daily testing.
Put it into practice by using applications designed to support testing.
- Butch Mayhew has a list here:
- I have also created a lot of apps to support practicing:
- API Challenges - https://www.eviltester.com/page/tools/apichallenges/
- Buggy Apps - https://www.eviltester.com/page/tools/buggyapps/
- Buggy Games - https://www.eviltester.com/page/tools/buggygames/
- Todo List - https://www.eviltester.com/page/tools/thetodoapp/
- Compendium of apps - https://www.eviltester.com/page/tools/compendiumtesting/
- The Pulper - https://www.eviltester.com/page/tools/thepulper/
- The REST Listicator - https://www.eviltester.com/page/tools/restlisticator/
- The Test Pages - https://www.eviltester.com/page/tools/testpages/
What I like most about the paper is that, like the process of improving and learning, it doesn’t actually end:
“In order to see this, you’ll have to come up around it; so, I wonder whether you will all come up around the table now.”
…
I post content on a daily basis to Patreon. For only $1 a month you can receive my weekday blog posts, ad-free access to my YouTube videos, and short online training courses. patreon.com/eviltester