Where Bugs Come From

[Editor: This is a repost (edited and updated, including links to other relevant content) of the 2005-Nov-26 article Requirements and software development process and where bugs come from. According to my server logs, the old post didn’t survive the migration from our old domain to the new one and is regularly generating 404 errors. Since the content is worth reading, I’m reposting it today and updating the old links to point to the new post.]


In the Foundation series article on software processes we introduce a definition of software process as three steps – (decide, develop, deliver). That article provides some context for this discussion, which dives more deeply into those three steps.

Rewind three years into the past…

Three years ago a co-worker loaned me a copy of The Goal: A Process of Ongoing Improvement, by Eliyahu M. Goldratt. I enjoyed the book quite a bit, and it led us down an interesting path of thinking about the software development process as an analogue to manufacturing processes.

At that time, my co-worker and I explored redesigning the personal and team development processes for a large software development team. We were able to leverage much of the research done in analyzing the sources of defects in manufacturing processes – there is a huge body of work that makes this type of analysis straightforward. By describing the software development process as a set of inputs and processing steps (much like material inputs and fabrication steps, with code, docs, and tests as the outputs), we were able to develop some insights into the process and communicate clearly with some of the less technical stakeholders at our client (a major manufacturer with a large internal software development team).

Fast forward three years to the present…

I heard a radio ad for The Goal… and it occurred to me that I could apply that idea again (thinking about software development as a process) to help my current client. Our goal is to develop a good strategy for augmenting their approach to quality. This client is a major manufacturer with a small internal software development and test team.

Here’s a diagram similar to one we discussed, but in a more general context. It shows the gathering of requirements, the development of software, and deployment to the field as a process. This simplified diagram is designed for managers of technical teams who don’t have a detailed background in software development or requirements management.

Simple process view of software development


The process starts with stakeholders (all beneficiaries of the software system to be deployed, including users) identifying their objectives.

A requirements manager documents the requirements that fulfill the stakeholders’ needs.

On the left side of the diagram, QA folks will define the validation tests required to assure that a particular requirement has been implemented. These are functional tests.
On the right side of the diagram, developers will design and then implement the solution, and also define the whitebox and blackbox tests of their implementation. These tests confirm that the code is working “as designed” (a code sketch contrasting the two kinds of tests follows this walkthrough).

Once the software is developed and the tests are defined and passed successfully, the software is deployed.

Users then interact with the software after it has been deployed in the field.
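To make the distinction between the two kinds of tests concrete, here is a minimal sketch in Python. The requirement, the apply_discount function, and the dollar amounts are all hypothetical – invented for illustration, not taken from the diagram:

```python
# Hypothetical requirement: "Orders over $100 receive a 10% discount."

def apply_discount(order_total):
    """Developer's implementation of the (hypothetical) discount rule."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

# Validation (functional) test: written by QA from the requirement
# alone, without looking at the implementation.
def test_orders_over_100_are_discounted():
    assert apply_discount(200.00) == 180.00

# Whitebox test: written by the developer against the implementation,
# probing the boundary the code actually uses. It confirms the code
# works "as designed" -- not that the design matches the requirement.
def test_no_discount_at_exactly_100():
    assert apply_discount(100.00) == 100.00
```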

Overlaying the diagram with six sources of introduced errors.

E1 – The wrong requirements. The first source of errors is stakeholders who don’t describe (or don’t know) what they really want, or don’t know why they want it. Everyone who’s gathered requirements has heard something like “Now that I see it working, I realize that what I really want is…” We’ve had the most success in minimizing this through rapid-development techniques (repeated iterations of deployment), development of prototypes, and interaction with stakeholders throughout the design process – helping them envision what you are creating before you create it. We won’t succeed by using the spec as a defense (“But we implemented what the spec says”) – your clients should not be expected to visualize what a software solution will look like just by reading a spec; that’s your job.

E2 – Incorrect requirements documentation. The second source is incorrectly documented requirements. The customer knows what they want, but that’s not what you document. It could be a case of not formally writing up what you jotted down during an interactive session, or of misunderstanding what the client wants and documenting your (incorrect) understanding of their needs. Either way, the end result is a specification that documents the wrong thing. The best technique for preventing this is validating the requirements with the stakeholders. After you’ve documented the requirements, don’t just email your giant spec around asking for signoff. Use active listening and other techniques to satisfy yourself (as much as possible) that your document accurately represents what the customer needs.

E3 – Misinterpreting the requirements. The developers can implement something that doesn’t match the requirements. This can happen through either a faulty design or a faulty implementation. It may be that the developers didn’t understand the requirements (perhaps they were too vague or ambiguous), or the requirements may have been incomplete and didn’t account for all of the possibilities. Validation of the requirements with the developers is critical to making sure that your spec is unambiguous and complete. Developers bring a level of rigor and analysis that can help you make a spec bullet-proof. Use their skills to fix a bad spec before it’s been signed off as “correct”. Even after you do all that, the implementation may not match the spec. That’s one reason why we test.
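As an illustration (again hypothetical, reusing the invented discount rule from the sketch above), here is how a single ambiguous requirement can produce two faithful-looking but conflicting implementations:

```python
# Ambiguous requirement: "orders over $100 get a 10% discount."
# Does an order of exactly $100 qualify? Two developers can
# reasonably read it two different ways:

def apply_discount_inclusive(order_total):
    # Reading 1: "$100 and up" qualifies.
    return round(order_total * 0.90, 2) if order_total >= 100 else order_total

def apply_discount_exclusive(order_total):
    # Reading 2: the order must strictly exceed $100.
    return round(order_total * 0.90, 2) if order_total > 100 else order_total

# The two readings agree on a typical order...
assert apply_discount_inclusive(200.00) == apply_discount_exclusive(200.00)
# ...and silently disagree at the boundary.
assert apply_discount_inclusive(100.00) != apply_discount_exclusive(100.00)
```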
E4 – Testing for the wrong implementation. Developers will create tests of their implementation – unit tests are the most common example. A developer could test their implementation incorrectly (incomplete coverage, incorrect analysis). But even good implementation tests can only confirm that what the developer intended was achieved. If the developer misunderstood the requirements, the tests won’t assure that the desired outcomes are achieved.
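Continuing the hypothetical example, a unit test written by a developer who misread the requirement will pass happily, because it encodes the same misunderstanding as the code it tests:

```python
def apply_discount(order_total):
    # The developer believed an order of exactly $100 qualifies
    # (the inclusive reading of the hypothetical requirement).
    return round(order_total * 0.90, 2) if order_total >= 100 else order_total

# The developer's unit test encodes the same belief, so it passes:
# the implementation matches the developer's intent...
def test_discount_applies_at_exactly_100():
    assert apply_discount(100.00) == 90.00

# ...even though the stakeholders' actual rule ("strictly over $100")
# is violated. A passing unit test verifies intent, not requirements.
```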

E5 – Testing for the wrong requirements. Requirements validation tests exist to catch implementations that don’t match the requirements – but the validation tests themselves can misinterpret the requirements (testing for the wrong thing). We have more details on this error in our post, Passing the wrong whitebox tests.
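Staying with the same invented example, if QA derives its validation test from the same misreading of the spec, the wrong test and the wrong implementation agree – and the defect escapes:

```python
def apply_discount(order_total):
    # Same wrong (inclusive) implementation as the previous sketch.
    return round(order_total * 0.90, 2) if order_total >= 100 else order_total

# QA misreads the ambiguous spec the same way the developer did, and
# derives its validation test from that misreading:
def test_validation_discount_at_100():
    assert apply_discount(100.00) == 90.00

# The wrong test passes against the wrong implementation, so nothing
# in the pipeline ever checks the real rule ("strictly over $100"),
# and the defect ships to the field.
```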
E6 – False positives in user acceptance tests. When the deployed system is tested by the users (and reviewed by other stakeholders), errors can be introduced in the form of “false positive” bug reports – when someone reports a bug that isn’t really a bug, it still takes time and effort to validate that the software is working as designed. Technically, use of the system doesn’t create a bug, but it is a source of testing expense worth noting. Maybe it doesn’t belong on this diagram, but we felt it helped in communicating the “cost of quality” to some non-technical folks.

If we follow the manufacturing analogy, we can incorporate the steps we’ve described above (like active listening) as feedback loops in the software process. There are several more feedback loops; I’ve only drawn the “requirements management” loops.

Adding feedback loops to the software process

We’ve had success using this presentation framework to get our clients to improve their testing. We’ve treated this as a “first step” safety net to put in place before tackling the tougher problem of introducing the requirements-validation feedback loop. That can be more difficult when responsibilities cross organizational boundaries, as office politics play a greater role in getting agreement that there is in fact a problem, much less agreement on an approach to solving it.

What are some other techniques that you’ve used to improve the software development process?

Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.
