Requirements are a tricky business.
As testers, we know that a lot of the ‘finished’ requirements we see require investigation on our part to find out ‘how’ we can actually test them and to get to the core of what the requirement actually means.
We also know that development projects are notorious for claiming that users changed their requirements, and kept changing them as development continued. Structured processes therefore prefer to avoid change, while extreme processes embrace it. In some cases, and this has been my experience, there was really no change at all: we simply didn’t capture the requirement, and never figured out what the stakeholder really wanted.
All of this, and more, was the subject of some Tom & Kai Gilb courses that I attended in London during September 2003. This particular article is, in one sense, a review of the courses (and I’ll tell you now, they were worth going on) and in another sense, is some of my notes from the course and the directions that it got my mind moving.
So to start, I guess I have to answer the question, “What courses?”:
- Competitive Engineering for Systems and Software Engineers
- Evolutionary Project Management
- Advanced Requirements Engineering
- Priority Management for Strategic & Organisational Planners
- Specification Quality Control: SQC The New Agile Inspection and Review Process
Each of the courses explored its topic using the Gilbs’ recent work on Planguage, which is documented in the forthcoming ‘Competitive Engineering’ book and in other books available for free on the gilb.com website.
I was unable to attend the Evolutionary Project Management course, which, according to one participant, was ‘the best one of all’.
The slides for most of the courses are also available on gilb.com.
Tom also referred to two of his in-print texts:
- Principles of Software Engineering Management
- Software Inspection
I’ll lump all the courses together to discuss the concepts in general; then, if you want, you can read on to see each course described independently.
In general, a number of concepts stood out for me from the courses:
- All requirements should be quantified
- Requirement satisfaction targets should be documented for all the stakeholders
- Fixed weight prioritisation is not the most effective prioritisation or planning technique, qualified targets are better
- Planguage, a formal natural-language modelling notation, adds rigour to requirement documentation
- Competitive Engineering involves analysing the relationships in the project
- Inspections can be agile and should lead to process improvement rather than artifact correction
- Feedback should be fast, often, measurable and meaningful
Remember, all of the above is my wording and my interpretation. Tom has carefully documented everything he covers on the courses and in the Competitive Engineering book in a highly detailed glossary.
From my point of view, there was a lot of useful information presented and the courses are of immense value to testers.
I see a strong parallel between the notion of a test condition and requirements documented using Planguage. Although I have never seen the concept of a test condition formally defined, I use it to mean a testable requirement, and the requirements documented in Planguage are certainly testable.
I would be happy to see people documenting requirements in the formats described in Competitive Engineering; this would aid and abet the testing process enormously. And, of course, as a side effect it would help the development process enormously, but as a tester I’m really only thinking of my own workload.
But should all requirements be quantified? Well, probably not; words like ‘all’ are a red flag that something has been defined too generally (this is always the case, by the way). That said, I can’t think of a single long-term, structured development project I have been on that would not have benefited from approaching its requirements in a more quantifiable and stakeholder-focused manner. I’m not yet convinced that all requirements have to be quantified, although I am convinced that they all could be. I now keep this point firmly in mind when approaching requirements: when I ask questions like “How will you know?”, what I’m really often asking for is… quantification.
I also found parallels between the language used in psychotherapy and the processes documented in Competitive Engineering for effective requirements analysis.
Consequently, to my mind, I can summarise some of the Competitive Engineering processes in the following questions (I noted all these down at various points in the seminars):
- How will you know that you have achieved this?
- How do you know you haven’t got this already?
- How is this different from what you have?
- And what is the best that it could be?
- Hmm, and this is according to whom?
- How specifically?
- When you have this, what will be different?
- What will this gain you?
The questions may not be what Tom uses when he is consulting, or even what he intended to have people ask during the process, but they are what I asked myself during the course exercises.
It is entirely possible that I have misrepresented the information presented by Tom and Kai, or unduly simplified it. I certainly haven’t come close to covering everything we explored, just the parts that jumped out at me. And I am still thinking about what we covered on the courses, which tells me I found them to be of immense value.
Competitive Engineering is a dense body of work. I will be reading the books in more detail, and I know I’ll learn more by doing that; I hope that you will too.
Visit [www.gilb.com] and see if there are any courses in your neighbourhood any time soon.
The Courses
By the way, all the slides for these courses are available on the Gilb website [www.gilb.com], so if you want to see what was covered, check them out. I’ll point out the obvious by noting that just reading the slides isn’t as satisfying as having one or other of the Gilbs talk through them, so if you get the chance to do one of the courses, do go along.
Tom and Kai are extremely generous in providing supplementary information on CD for course participants. I now have over 0.8 GB of material to refer to.
Competitive Engineering for Systems and Software Engineers
I’ve recently written the paper and presentation that I will be presenting at EuroStar 2003, on my experiences with beta testing, which I use in order to test software written for a competitive environment, rather than only testing software in the industrial environment that is my normal bread-and-butter testing regime.
This particular seminar explored Tom Gilb’s book Competitive Engineering and looked at effective requirements elicitation, management and analysis as a means of competing effectively.
A lot of companies obviously have competition in mind when they are writing their products. I have worked for companies where competition was firmly in mind: the software they were producing, although not itself being sold, would allow them to offer a service that would have to compete in the open market. Yet the requirements were never analysed with that frame of mind. At the very least, Tom’s book can help make this point clear, and it is a useful lesson to apply to all software development projects.
The seminar also gave an overview of Planguage, the requirement modelling language that Tom has been using and refining throughout his career. Early instantiations of it can be found in his published books, but Competitive Engineering is the first book to really make clear what Planguage is.
This particular seminar was a great overview of the techniques and principles in the Competitive Engineering book, and focussed my attention on the stakeholders of a project and on the use of quantified and qualified requirements as a prioritisation approach.
Evolutionary Project Management
Sadly, you’ll just have to download the book and read it as I didn’t attend.
Advanced Requirements Engineering
In this particular seminar we explored what it means to write an effective requirement and how to analyse who the stakeholders are: a stakeholder being anyone who could cause your programme to fail, for whatever reason.
I was interested to note that for the system I analysed, I had written down beta testers, reviewers, evaluators, technical authors and marketing staff, in addition to the stakeholders I might normally have considered, i.e. the development team and users.
By working through each of these additional stakeholders (I actually had 20 on my list; had I drawn up the list before the course I would have had 3) and looking at the requirements they could have, I get a much better picture of what tasks have to be conducted on the project in order to meet those requirements, and consequently a better estimate of the project time.
I can’t go into the details of the project I looked at here, but its time estimate was about half what it should have been.
The course also explored Planguage in more detail as a method for documenting the requirements. The basic premise behind requirements in Planguage is that they should all be quantifiable using some scale that allows you to measure the effectiveness of having met the requirement.
Planguage itself has a lot of different concepts for adding extra dimensions to the requirements in terms of weightings, and introducing different qualifiers on the scale to document different goal levels to aid prioritisation.
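To make the idea of a quantified, qualified requirement concrete, here is a minimal sketch of what one might look like in Planguage, reconstructed from my course notes. The requirement itself, the qualifiers and all the figures are my own invention, so treat the Competitive Engineering book as the authoritative reference for the notation:

```
Learnability:
  Ambition: New users should be able to learn the product quickly.
  Scale: Minutes for a defined [User Type] to complete a defined [Task]
         without assistance.
  Meter: Timed trials in the usability lab with 10 novice users.
  Past [Release 1.0, Novice]: 35 minutes.
  Goal [Release 2.0, Novice]: 10 minutes.
  Stretch [Release 2.0, Novice]: 5 minutes.
```

Notice that the Scale makes the requirement measurable, the Meter says how you would actually test it, and the qualified levels (Past, Goal, Stretch) give you something to prioritise against rather than a bare High/Medium/Low label.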
Priority Management for Strategic & Organisational Planners
Prioritisation was the focus of this particular seminar. Tom doesn’t like the notion of fixed weight prioritisations, e.g. High, Medium, Low, and made a strong case for only using quantified scales with qualified levels of achievement that we can measure against.
Different stakeholders have different levels of prioritisation for each requirement and knowing what these are, and the circumstances which surround them, allows us to prioritise more effectively to meet the needs of the most important stakeholders.
I have been on the receiving end of too many fixed weight prioritisation schemes that haven’t worked not to feel an affinity with this approach. In case that statement is confusing: I will be experimenting with this prioritisation approach and seeing how I get on.
Specification Quality Control
Testers are far too used to receiving signed-off specifications that are riddled with ambiguity, and there is often no agreed basis for handing the documentation back to its producer to have the document tightened up.
Some development cultures have an Inspection procedure for examining the documentation and then fixing the document. This is both labour- and time-intensive.
This particular course took us through the agile inspection process documented in the Competitive Engineering book.
The agile process involves inspecting a sample of the document against a defined set of rules, counting how many deviations occur and, on that basis, extrapolating how many more might exist in the rest of the document. This knowledge is then used to improve the process so that these types of defect are not injected into the document, and the offending portions of the document are rewritten using the improved process.
Because the sampling process is quick and cost effective it can be applied early and often during the specification process.
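The extrapolation step above is essentially simple arithmetic. Here is a rough sketch in Python of how the sums might work; the proportional model, the function name and the detection-rate figure are my own assumptions for illustration, not a formula taken from the course:

```python
# A rough sketch of the extrapolation arithmetic: a simple
# proportional model, not necessarily Gilb's exact method.

def estimate_total_defects(defects_in_sample, pages_sampled, total_pages,
                           detection_rate=1.0):
    """Extrapolate rule violations found in a sample to the whole document.

    detection_rate allows for the common inspection assumption that
    checkers find only a fraction of the defects actually present
    (the exact fraction used below is an assumption, not Gilb's figure).
    """
    defects_per_page = defects_in_sample / pages_sampled
    return defects_per_page / detection_rate * total_pages

# e.g. 12 rule violations found in a 2-page sample of a 40-page spec,
# assuming checkers catch roughly a third of what is really there:
print(round(estimate_total_defects(12, 2, 40, detection_rate=1/3)))
```

The point is that you only need to check a couple of pages to get a usable estimate of the state of the whole document, which is what makes the technique cheap enough to apply early and often.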
This is something that, I think, most testers would like to see applied to structured and long term development processes as it introduces the notion of testing early and often as a method of feedback to help improve the process.
The main benefit of having the book explain the technique is that it becomes far easier to introduce into corporate development processes: the book provides a set of review criteria and processes, so the technique can be adopted very quickly.
More info on [www.gilb.com]