A great way to start introducing automated execution and tooling into your test process is to conduct a proof of concept (POC).
A POC can often provide, in a few days, far more information than an extended RFP and trial evaluation process, assuming the right people are involved and the aims are clear.
Phrase Main Aim as a Question
Here are some Proof of Concept projects that I have been involved in:
- will WebDriver work with a specific browser and front-end technology combination?
- can a GUI client tool be replaced with Java abstraction layers and an HTTP library?
- does a specific commercial tool provide flexibility for scripting and customisation?
- can we create unit tests for in-tool custom scripts?
Note that each of the above was phrased as an experimental question, which acted as the main aim guiding the work; a sketch of how one of these questions might start to be answered in code follows below.
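For example, the second question above would typically be answered by writing a small spike rather than a document. The sketch below is a minimal example only, assuming a hypothetical JSON endpoint behind the GUI client tool (the URL and expected data are placeholders, not part of any real system); it uses Java’s built-in HttpClient to check whether the data the GUI shows can be retrieved directly.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpSpike {
    public static void main(String[] args) throws Exception {
        // hypothetical endpoint - a real POC would target whatever system the GUI client talks to
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/orders/123"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // if we can retrieve and assert on the same data the GUI client shows,
        // the abstraction-layer approach is probably feasible
        System.out.println("status: " + response.statusCode());
        System.out.println("body:   " + response.body());
    }
}
```

If this round trip works, and the response contains the data we need, then the ‘abstraction layers plus HTTP library’ approach is worth pursuing further; if not, the POC has still answered its question cheaply.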
Ideally, a POC will be carried out by someone experienced in the tooling and the technology you plan to use. This cuts down the time it takes to gain value from the POC, and you can be more confident that the POC will provide the information you expect.
Without experience, it can be hard to estimate the time required for the POC, and if the POC fails, it can be hard to tell whether the failure was due to lack of experience or because the concept cannot be implemented.
Whenever you engage in a POC, try to use the most experienced people possible; do not pass this off to junior staff because “they have the time”.
Secondary Aims
Secondary aims for a proof of concept are often phrased in terms of identified risks. You may not be able to identify all the risks that exist since you are not familiar with the tool or technology. Working closely with an experienced team conducting the POC can help identify the secondary aims; their previous experience will guide you in identifying risks.
e.g. for a WebDriver project, some secondary aims might be:
- identify GUI controls that are hard to synchronise against
- identify any parts of the application that we should not try to automate
- identify any development practices that prevent effective automating, e.g. a lack of attributes to use as locators
The above were derived from experience and from knowledge of the risks associated with automating GUI applications, e.g. there is a risk that:
- custom controls might be hard to synchronise against and require custom waiting strategies (a sketch of one such strategy follows this list)
- parts of the application might be so complicated to automate that they are not worth the effort
- the application might not support reliable and understandable locators
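To make the first and third risks concrete: a custom waiting strategy in WebDriver usually means an explicit wait against a reliable locator. The sketch below is a minimal example only, assuming Selenium WebDriver 4 and a hypothetical custom dropdown exposed via a data-test attribute; part of the POC is finding out whether the application offers locators this dependable.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class CustomControlSync {

    // wait for a custom dropdown to become interactable before using it,
    // rather than relying on fixed sleeps
    public static WebElement waitForDropdown(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));

        // the data-test attribute is an assumption - the POC will reveal
        // whether the application actually provides attributes like this
        By dropdown = By.cssSelector("[data-test='country-dropdown']");

        return wait.until(ExpectedConditions.elementToBeClickable(dropdown));
    }
}
```

If controls like this cannot be waited on with standard conditions, that finding feeds directly into the final report as a risk and a cost.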
Expectations
Proofs of Concept require expectations to be clarified prior to starting any work:
- maximum time available
- priority of scope
- reporting frequency
- output from the POC
Why? Because it is too easy for the client to assume that certain things will be done, when the implementor knows they are not within the estimated scope or not feasible within the time.
e.g. if I estimate three days for a POC using WebDriver with Java and JUnit on a React application, what do you expect as the output?
- a single Test class that uses the application and passes successfully? (something like the sketch after this list)
- a set of tests organised in packages for the high priority areas of the React application?
- abstraction layers that represent the application that can be used as a basis for future work?
- a report describing risk areas in the application and how the application needs to change to support automating?
- effective synchronisation so that the execution is robust, or do you expect intermittent failures when run?
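These outputs differ enormously in effort. To illustrate the first option, a ‘single Test class’ might be no more than the sketch below, assuming JUnit 5, Selenium WebDriver with ChromeDriver on the path, and a placeholder URL and page title for the React application.

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ReactAppSmokeTest {

    private WebDriver driver;

    @BeforeEach
    public void startBrowser() {
        // ChromeDriver is an assumption - the POC question may cover other browsers
        driver = new ChromeDriver();
    }

    @Test
    public void applicationLoads() {
        // placeholder URL and title - replace with the application under test
        driver.get("https://the-app-under-test.example.com");
        Assertions.assertTrue(driver.getTitle().contains("My React App"));
    }

    @AfterEach
    public void stopBrowser() {
        driver.quit();
    }
}
```

Contrast that with abstraction layers, packages of tests and robust synchronisation: each later option builds on the previous one and multiplies the time required, which is exactly why the expected output needs agreeing up front.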
General advice for expectations:
- any stated aim is a best effort because a POC may discover issues that impact the work
- aims are what you will investigate for feasibility, rather than what you will actually achieve
Expectations should not be set in isolation; both client and implementor need to discuss the expectations. This way:
- any mandatory outputs from the client can be identified and factored into the time and cost
- any prerequisites for success can be identified and agreed, e.g. access to environments, support contacts
- a shared understanding of the work can be built prior to starting
Outputs
Here are the types of things that I expect from a Proof of Concept, and which I provide when I’m working on a POC:
- ongoing communication, i.e. not just a ‘report at the end’: an ongoing feed of progress, issues, risks and scope changes
- full access to all materials, i.e. all source code, all notes, all data. The work is proprietary to the client, not the implementor.
- a final report describing: how the aims were met, lessons learned, what was ‘hard’ and what can be done to make it easier, what was removed from scope and why, risks identified during the POC, any unaddressed risks, alternative strategies that were not tried during the POC but might be useful.
The final report should be used not only to understand the POC but also to help guide any future work. Ideally the final report should also describe how the experienced practitioner would have approached the implementation if it had been a strategic and ongoing piece of work, rather than a POC.
Clearly the reporting takes additional time and needs to be factored into the cost. But without it, any decisions about the POC have to be taken immediately and carried forward by the same team; otherwise the knowledge will be lost.
General Advice
- try to keep proofs of concept simple, with a few high-priority questions to be answered.
- keep the timescale of the proof of concept short. I would rather extend a successful short POC than find out after a month that the POC was failing.
- use experienced staff and involve them in the scoping
- communicate frequently during the POC
- create a ’legacy’ description of the POC work which also includes recommendations
- build time into the POC for handover; if the final report is clear then you might not need to use it, but make sure a handover period is available if necessary
- unless the POC includes the aim “create work we can build on”, assume that the next tranche of work will start afresh using the lessons learned from the POC
Sometimes a consultant is brought in to perform the POC, because that is the fastest way to bring the experience of a new tool or approach into your environment. When a POC is successful, hopefully the implementation is then performed in-house. It can be useful to engage the consultant or other experienced practitioners to periodically review the ongoing work to help ensure that the implementation is sustainable and not building up risk and future technical debt.
A Proof of Concept exercise can provide a lot of information very quickly if you set out the right questions and work with experienced practitioners.
Alan Richardson works as a Software Development consultant helping teams improve their development approach. Alan has performed proof of concept work for clients on automated execution, and has worked with teams to help them implement fast and effective experiments to improve their process. Alan consults, performs keynotes and tutorials at conferences, and blogs at EvilTester.com. Alan is the author of four books, and trains people worldwide in Software Testing and Programming. You can contact Alan via LinkedIn or via his consultancy web site compendiumdev.co.uk.