
Episode 021 - Context In Context Driven Testing - The Evil Tester Show

What is Context Driven Testing? And what does context really mean?


Show Notes

This episode explores how to navigate context in testing environments, adapt our approaches, and effectively challenge and evolve systems. Discover the importance of context-driven testing in software development, exploring models, adaptability, and useful practices.

This was released to Patreon supporters early, with full transcript and no ads.

Notes

These were the notes I used to create the podcast from. They are not a transcript.

What is Context?

There is no such thing as Context Driven Testing, but it is a useful phrase when thinking about our approach to testing, so in that sense you could call your approach Contextually Driven.

There is no such ’thing’ as context, it is a model.

Context is a relationship between our model of the real world and our mental model of what is going on.

Context changes over time.

Our being part of the context changes the context.

What is context driven testing?

Context-Driven-Testing.com

In the olden days, the world of testing was trying to show that not all testing is the same, not all testing can be standardised, and standardised testing holds people and the craft of testing back.

https://context-driven-testing.com/

The website credits the initial development of the concept to James Bach, Brian Marick, Bret Pettichord, and Cem Kaner.

I used to think about this in terms of Systems within Systems - System of Business, inside is a System of Development, inside a System of Programming and Testing and Analysis and Design, and then projects and programs and individuals have their own System.

Context Driven Testing seemed like a fine set of words to describe this.

I didn’t sign up to the school or manifesto: I didn’t like the wording of the 7 basic principles, I didn’t think it needed principles, and why are there only 7?

This is an old site and an old list and everyone who was involved initially has moved on and created their own ways of talking about it.

But the reason for mentioning it is… which site currently ranks top for the phrase “Context Driven Testing”? Yes, context-driven-testing.com.

  • What are we doing?

  • Is it helping more than hindering?

  • What does better look like?

  • How can we do it better?

  • Span systems - People systems, Role systems, Project Systems, computer systems etc.

  • I do not impose methodologies or processes and call it testing.

Context Driven Testing takes into account: stakeholders, process, communication, individuals, aims, beliefs, mandates, etc.

Ideology

  • focus on the context, avoiding ideology

  • explore options: everything can work

If your beliefs about something can be predicted then you might be in the grip of an ideology.

Context requires beliefs that open up options rather than shutting them down.

As an exercise, you can use ChatGPT to provide answers about a topic. If you find yourself agreeing with it all, then you’re really agreeing with an average position, so think of a situation that breaks the advice or description ChatGPT gives. Identify a context which exposes the ‘wisdom of the crowds’ average as an ideology.

We want every individual to be unique.

Context is the same, applied to environments, projects, companies. Context is about allowing environments to be unique and to change and to grow.

Introducing ideology into an environment can restrict the context.

Ideologies:

  • tool X is better than tool Y

  • we need more automated execution at the Unit level than the UI level

  • Unit tests don’t interact with anything else

  • In a Unit test every interaction is mocked

  • Page Objects are bad, use Screenplay

  • Screenplay is bad, use Page Objects

  • etc.

It is important to note that ‘everything can work’.
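As a hedged sketch of the ‘everything can work’ point, the following (entirely hypothetical names: `PriceService`, `checkout_total`) shows the same check written in two of the supposedly opposed styles from the ideology list above: one where every interaction is mocked, and one using a real in-memory collaborator. Neither style is universally better; the context decides.

```python
# Hypothetical example: two "opposed" unit-test styles, both of which work.
from unittest.mock import Mock


class PriceService:
    """Hypothetical collaborator that a checkout depends on."""
    def price_for(self, sku):
        return {"APPLE": 30, "PEAR": 40}.get(sku, 0)


def checkout_total(skus, price_service):
    """Sum the prices of the given SKUs via the collaborator."""
    return sum(price_service.price_for(sku) for sku in skus)


# Style 1: every interaction mocked - isolates the unit completely.
def test_total_with_mock():
    prices = Mock()
    prices.price_for.return_value = 10
    assert checkout_total(["A", "B"], prices) == 20


# Style 2: a real (in-memory) collaborator - less isolation, more realism.
def test_total_with_real_collaborator():
    assert checkout_total(["APPLE", "PEAR"], PriceService()) == 70


test_total_with_mock()
test_total_with_real_collaborator()
```

Both tests pass; arguing that only one of these styles is ‘correct’ is an ideology, whereas choosing between them based on the risks and constraints of the project is contextual.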

Agile Testing

There is no such thing as Agile Testing, there is testing within an Agile Context where the Agile Context might be different for each team and project within the company.

Could, Should, Would

Really what we can look at is:

  • capability

  • context

  • contextual fit

Capability comes down to “could” you do it, or “can” you do it.

Context covers “should” you do it. What’s the impact?

Contextual fit covers “would” you do it. Will you commit to doing it? How will you do it? What changes do you have to make?

e.g. Forcing/Teaching testers to code so that automating our systems speeds up our release process

I think people read the statement in terms of capability, i.e. is it possible for people to learn to program if they are testers? And if they do learn to program, can they become good at it? The statement suggests… probably not. But in my experience the answer to both questions is Yes.

But it depends on the context and contextual fit.

If they are being ‘forced’ then you need to have really good training, really good experiential training, really safe ways to learn, and really good ’this is what good looks like’.

People need to be motivated to learn, and if they are being ‘forced’ then they might not.

If the context that is forcing people doesn’t know what ‘good’ is then they might stop at the point where they have given people the most basic skills and that can translate into ‘bad’.

Sometimes the ‘forced’ might be - If I don’t learn this then I won’t get a job. So the context is survival. This can be quite motivating. But… if you don’t have the right training, and don’t have a safe place or time to learn then you might not get over the initial learning hump and might not take the time to get good.

Contextual fit also covers is the change happening for the right reasons? Because people might be trying to achieve unrealistic goals with the wrong approach.

Adaptation

I’d rather adopt a more scientific approach of:

  • Observing what we do,

  • Modelling (rather than standardising) to understand,

  • Investigating our inefficiencies (often caused by lack of understanding, or forgetting because we don’t do the same thing the same way multiple times every day)

  • Experimenting (to improve against our existing model and observed approaches)

  • Letting an experiment settle so we are evaluating it after developing competency in it, and not trying to perform too many experiments at once

  • Repeating to evolve based on the needs of the people and the environment

We have to be prepared to challenge the approaches and even interpretation of results.

If someone claims something is a success, then that can often prevent learning from all the mistakes that happened to create that success.

Contextual error patterns

Contextually, looking at errors that have happened in applications in your own environment, generalising the error, and then looking for that error in other applications, is often an easier way to find the issues that are likely in your environment.

From an observation, work back through the application and discover the design issue that triggered the problem.

This now becomes a higher likelihood error pattern in my model of the legacy apps and is something I look for more, or treat as a higher risk.

Follow on

Have a look at James Bach’s site for the Context Driven Methodology post:

https://www.satisfice.com/blog/archives/74

There is no such ’thing’ as context, it is a model.

Context is a relationship between our model of the real world and our mental model of what is going on.

Context changes over time.

Our being part of the context changes the context.

There is no one true way. We make decisions, and we need to be able to justify those decisions.


Episode Summary

In Episode 21, we explore the boundaries of “context-driven testing,” considering it less as a fixed methodology and more as a model to understand dynamic testing environments. Our journey begins with a look at the historical context of the term and how it was coined. We unravel the layered systems approach, examining how interlocking systems within a business shape the context and how testers must adapt to these systems.

Moving into the heart of testing, the discussion opens up on changing contexts and the resulting dynamic processes. As contexts within testing environments are ever-evolving, testers are encouraged to think adaptively and engage with the environment in a way that embraces its complexity and unpredictability.

We also consider the importance of capabilities—what can be done, what should be done, and how context influences these decisions. We stress the importance of the ability to adapt, how we challenge assumptions and interactions within the project. Additionally the importance of deliberate error detection and close with thoughts on models and reality, stressing the necessity of adaptability and openness to change.


Key Takeaways

  • Context-driven testing isn’t fixed; it’s an adaptable approach to evolving testing environments.
  • Understanding the interplay between systems and context is essential for effective testing.
  • Testing isn’t about following best practices but evaluating what’s right for the current context.
  • Constant adaptation to change—both personal and environmental—is crucial.
  • Identifying and learning from error patterns in context-specific situations enhances problem detection.
  • Testing is about decision making and personal responsibility.
  • We are part of the context.
  • Our actions shape the context.
  • The context can push back.

Quotes and Examples

“There’s no such thing as context; we create models that interact with real-world situations.” [01:10]

“Projects unfold unpredictably, so our testing approach must comfortably handle these variations.” [17:11]

“It’s not just the practice; it’s how people view the practice and how the results are perceived.” [07:04]

“Our beliefs should expand options and facilitate experimentation, not limit them.” [14:39]

“Deliberate error detection in context can uncover unique issues on your project.” [22:48]

  • “Testing isn’t about best practices; it’s about what’s suitable for the context.”
  • “Adaptation is key as the environment and our understanding continuously evolve.”
  • “Context is an ongoing relationship between personal and external models.”
  • “Error patterns particular to a context can illuminate unseen issues.”
  • “Never let ideologies limit your testing explorations and experiments.”

Discussion Questions and Exercises

  • Defining Context: How do you define ‘context’ in relation to testing?
  • Context Changes: How does being part of a project team influence the context? Can you provide an example from your own experience where team dynamics changed the context of a project?
  • How have you deliberately tried to change the context?
  • How has your presence on a project changed the context?
  • Context-Driven Testing Principles: Have you read the context-driven principles? Do you agree with them? Do you agree with the wording of them? How would you state them in your own words?
  • Systems within Systems: The podcast considers testing in terms of “systems within systems.” Do you view systems like this? How could this perspective influence your approach to testing?
  • Adaptive Approaches: What does an “adaptive and evolving approach” mean with respect to testing? How does this differ from traditional or process-driven methods?
  • Avoiding Ideology: Is avoiding rigid ideology in testing important? How can one ensure their beliefs about testing remain flexible and context-aware?
  • Role of Experimentation: The podcast describes the necessity of experimentation in context-driven testing. Can you think of a situation where experimenting led to a better understanding or improvement in your testing approach?
  • Agile and Context Driven: How does context-driven testing align with or differ from Agile testing? Have you encountered any challenges in melding these approaches?
  • Capability, Context, and Fit: What are the differences between capability, context, and contextual fit as described by Alan? Have you analysed your approach in these terms?
  • Contextual Error Patterns: How can identifying contextual error patterns improve your testing process? Can you identify any recurring errors in your current or past projects that might benefit from this approach?
  • Have you tried the ChatGPT exercise? Ask ChatGPT to describe part of your testing approach or testing definitions. Do you agree with it? Find the differences and gaps between your description and ChatGPT’s description.