Monthly Archives: January 2006

From MRD to PRD: The key to defining a spec


The key to writing a great spec is knowing how to specify software that meets our customers’ needs.

It can be a daunting task. First, we have to define what our customer needs. High-level requirements are requirements that are too vague to be directly actionable. “We must reduce our cost of fulfilling orders by 20%” is a high-level requirement. We can’t start writing code with only that information. In an earlier post, we talked about functional requirements being written at the right level – don’t confuse the level of clarity required for writing a functional spec with that required to define goals.

A market requirements document (MRD), as we discussed earlier, describes the problems (to be solved) or the needs of the market. When working with a customer, that customer will identify one or more strategic objectives.

As an aside – this case study demonstrates use of the OST (objectives, strategy and tactics) approach to initiating and managing projects. Check it out for context. You can just skim the bold parts in the OST sections if you want to stay on topic with this post.


The question is – How do we get from an MRD to a great PRD?

A product requirements document (PRD) captures the capabilities of the software in order to address the market needs. From these key capabilities comes the design of the software. How do we get from needs to ideas?

This is an ideation task. A product manager must apply high-level design skills when writing the specification. Haven’t we said repeatedly that requirements should not describe the implementation or design? Yes. Previously, we talked about the importance of asking why; this is the same issue, approached from the other direction – starting with the why and asking how.

We’re not talking about specifying implementation details – just articulating capabilities. Here is a list of “market need : product capability” pairs that demonstrates the transition from MRD to PRD.

  1. Customers are cancelling 5% of their orders because of shipping delays : Software will enable shipping within 24 hours of order
  2. Our competitor is gaining market share by offering free plugins : Software will support a plug-in architecture
  3. 80% of potential customers (visitors) leave our website without ordering : Navigation must be simple on the website
  4. Our software keeps crashing and we don’t know why : Software will send error information from client to server

One of the things that makes this requirements design activity so difficult is that we have to have good ideas about how to solve the problems we are tasked with solving. Look at example number three – there are many different ways to attempt to solve the problem – the challenge is in picking the right one. Requirements elicitation will unearth the required capabilities.

Organizing, validating, and prioritizing these capabilities is the hard part. The output of this effort is a PRD. A product roadmap (a vision of what a product will be capable of doing, over time) is another potential output.

Top five presentation tips


From Start to End has a great post, Some tips on presentations. Very little we can add here – check it out.

Our top five presentation tips (our first four picks are from the list behind the link):

  1. Know your audience. A key preparation – you have to have a goal for a presentation. Are you convincing, educating or inspiring people? What do those people care about (and what do they already know)? Also – do you actually know the people in the audience?
  2. Revise and rewrite. Editing is the best thing ever. When we first put ideas down, it’s generally from our point of view. Validate that the content is targeted at the audience.
  3. Minimize the text on the slide. Eyecharts distract from the presenter. People read ahead – the slide content should provide cues for you to speak, and for your audience to remember. If we need a bunch of text to support our point, we include it in a handout.
  4. One idea per slide. Focus!
  5. Include supporting slides. We’re already simplifying the content we present to maximize the impact of the ideas, which means that there is more content somewhere, but we haven’t shown it. Often someone in the audience (genuinely interested person, micro-manager, dude-trying-to-look-smart) will ask drill-down questions – “Where did you get that data?” “Isn’t that diagram overly simplified?” Add those supporting slides (created in previous presentations, or prior to revision) to the deck after a blank slide (with the title “End of presentation”). Don’t plan on showing these slides, just have them at the ready.

The best advice I know about preparing content for a presentation: Plan the formal part of the presentation to share 2/3 of what you want to tell the audience. Draw that last third out through engaging conversation and informal asides during the formal presentation.

Where Bugs Come From

[Editor: This is a repost (but edited and updated, including links to other relevant content) of 2005-Nov 26th’s Requirements and software development process and where bugs come from. According to my server logs – the old post didn’t survive the migration from our old domain to the new domain – and is generating 404 errors regularly. Since the content is worth reading, I’m reposting to a new post today and updating the old links to point to the new post.]



In the Foundation series article on software processes we introduce a definition of the software process as three steps – (decide, develop, deliver). That article provides some context for this discussion, which dives more deeply into those three steps.

Rewind three years into the past…

Three years ago a co-worker loaned me a copy of The Goal: A Process of Ongoing Improvement, by Eliyahu M. Goldratt. I enjoyed the book quite a bit, and it led us down an interesting path of thinking about the software development process as an analogue to manufacturing processes.

At that time, my co-worker and I explored redesigning the personal and team development processes for a large software development team. We were able to leverage much of the research done in analyzing the sources of defects in manufacturing processes. There is a huge body of work that makes this type of analysis straightforward. By describing the software development process as a set of inputs and processing steps (much like material inputs and processing steps in manufacturing, but producing code/docs/tests), we were able to develop some insights into the process and communicate clearly to some of the less technical stakeholders at our client (a major manufacturer with a large internal software development team).

Fast forward three years to the present…

I heard a blurb on a radio ad for The Goal… and it occurred to me that I could take that idea again (thinking about software as a process) and use it to help my current client. Our goal is to develop a good strategy for augmenting their approach to quality. This client is a major manufacturer with a small internal software development and test team.

Here’s a diagram similar to one we discussed, but in a more general context. It shows the gathering of requirements, development of software, and deployment to the field as a process. This is a simplified diagram, designed for managers of technical teams, who don’t have a detailed background in software development or requirements management.

Simple process view of software development


The process starts with stakeholders (all beneficiaries of the software system to be deployed, including users) identifying their objectives.

A requirements manager documents the requirements needed to fulfill the needs of the stakeholders.

On the left side of the diagram, QA folks will define the validation tests required to assure that a particular requirement has been implemented. These are functional tests.

On the right side of the diagram, developers will design and then implement the solution, and also define the whitebox and blackbox tests of their implementation. These tests confirm that the code is working “as designed”.

Once the software is developed and the tests are defined and passed successfully, the software is deployed.

Users then interact with the software, after it has been deployed in the field.

Overlaying the diagram with six sources of introduced errors.

E1 – The wrong requirements. The first source of errors is stakeholders who don’t describe (or don’t know) what they really want. Or they don’t know why they want it. Everyone who’s gathered requirements has heard things like “Now that I see it working, I realize that what I really want is…” We’ve had the most success in minimizing this situation through rapid-development techniques (repeated iterations of deployment), development of prototypes, and interaction with stakeholders throughout the design process – helping them envision what we are creating before we create it. We won’t succeed by using the spec as a defense (“But we implemented what the spec says”) – our clients should not be expected to visualize what a software solution would look like just by reading a spec – that’s our job.

E2 – Incorrect requirements documentation. The second source is incorrectly documented requirements. The customer knows what they want, but that’s not what you document. It could be a case of not writing (formally) what you jotted down during an interactive session. It could be misunderstanding what the client wants, and documenting your (incorrect) understanding of the needs. Regardless, the end result is a specification that documents the wrong thing. The best technique for preventing this is validating the requirements with the stakeholders. After you’ve documented the requirements, don’t just send your giant spec around in an email asking for signoff. Use active listening and other techniques to satisfy yourself (as much as possible) that your doc accurately represents what the customer needs.

E3 – Misinterpreting the requirements. The developers can implement something that doesn’t match the requirements. This can be either a faulty design or a faulty implementation. It may be that the developers didn’t understand the requirements (perhaps they were too vague or ambiguous), or it could be that the requirements were incomplete and didn’t account for all of the possibilities. Validation of the requirements with the developers is critical to making sure that your spec is unambiguous and complete. Developers bring a level of rigor and analysis that can help you make a spec bullet-proof. Use their skills to fix a bad spec before it’s been signed off as “correct”. Even after you do all that, the implementation may not match the spec. That’s one reason why we test.

E4 – Testing for the wrong implementation. Developers will create tests of their implementation. Unit tests are the most common example. A developer could incorrectly test their implementation (incomplete coverage, incorrect analysis). However, even good implementation tests can only make sure that what was intended (by the developer) was achieved. If the developer misunderstood the requirements, the test won’t assure that the desired outcomes are achieved.
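
To make E4 concrete, here is a small, invented illustration (ours, not from the original post): both unit tests below pass, because they encode the developer’s mistaken reading of the requirement (“more than $100”) rather than the requirement itself (“$100 or more”).

```python
import unittest

# Requirement (as written): orders of $100 or more get a 10% discount.
# The developer read it as "more than $100" and tested that reading.
def discounted_total(order_total):
    if order_total > 100:  # bug: the requirement says >= 100
        return round(order_total * 0.9, 2)
    return order_total

class TestDiscount(unittest.TestCase):
    def test_discount_applied_above_100(self):
        self.assertEqual(discounted_total(150), 135.0)

    def test_no_discount_below_100(self):
        self.assertEqual(discounted_total(99), 99)

    # The misunderstanding is never tested: discounted_total(100) should be
    # 90.0 per the requirement, but no test pins that down, so this suite
    # passes while E4 slips through to the field.

if __name__ == "__main__":
    unittest.main()
```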

E5 – Testing for the wrong requirements. Requirements validation tests account for the possibility that the implementation does not match the requirements. This is another source of possible misinterpretation of the requirements (testing the wrong thing). We have more details on this error in our post, Passing the wrong whitebox tests.

E6 – False positives in user acceptance tests. When the deployed system is tested by the users (and reviewed by other stakeholders), we can introduce errors in terms of “false positive” bug reports – when someone reports a bug that isn’t really a bug, it still takes time and effort to validate that the software is working as designed. Technically, use of the system isn’t the creation of a bug, but it is worth noting that it is a source of testing expense. Maybe it doesn’t belong on this diagram, but we felt that it helped in communicating with some non-technical folks about the “cost of quality”.

If we follow the manufacturing analogy, we can incorporate the steps we’ve described above (like active listening) as feedback loops in the software process. There are several more feedback loops; I’ve only drawn the “requirements management” loops.

Adding feedback loops to the software process

We’ve had success in using this presentation framework to get our clients to improve their testing. We’ve treated this as a “first step” safety net to get in place before tackling the tougher problem of introducing the requirements validation feedback loop. This can be more difficult when the responsibilities cross organizational boundaries, as office politics play a greater role in getting agreement that there is in fact a problem, much less agreement about an approach to solving it.

What are some other techniques that you’ve used to improve the software development process?

Software testing series: A case study


This post is a test automation case study, presented at a high level.

We’ll talk about it in terms of defining the problem, and then discuss the objective (what we proposed to do to solve the problem), the strategy (how we went about doing it) and the tactics (how we executed the strategy). Since this happened in the real world, we’ll also identify the constraints within which we had to operate.

Our hope is that it will spur questions that allow us to dive deeper, in conversation, into the topics it mentions, implies, and inspires.

Why we needed something (aka The Problem)

Some time ago I was working with a client manager who had a “quality problem.” This manager was getting pressure from his VP, who was getting negative feedback from users about the quality of one of the manager’s software products. Bugs in this software regularly led to tens or hundreds of hours (per bug) in cost when they reached the field. These bugs would also introduce a risk of lost sales or profits. This manager was responsible for development and testing, but not requirements.

This existing enterprise application was written about ten years ago, had significant changes in every monthly release, and had a small development team averaging about five people, with regular rotations onto and off the project. There were over a quarter of a million lines of code in this application. The application had a very large user interface, and complicated integration with other systems. The team had an existing process of manual testing, both by developers and dedicated testers, and a large suite of automated blackbox system tests. The developers did not have practical experience in creating unit tests or applying unit test automation.

An analysis of the bugs revealed that a majority of them were introduced in the development cycle, with requirements bugs in second place.

The Objective

  1. Immediately improve the perception of quality of the software by outside organizations.
  2. Improve quality measurably for the long term.
  3. Reduce the cost of quality for the software from existing levels.

The Constraints

  1. No personnel changes – any changes must be supported by the current team (no permanent additions or replacements).
  2. No changes in existing commitments to stakeholders – commitments are in place for 6 months (at the full capacity of the team).
  3. Small budget for the project – a one-time cost of less than 5% of the operating budget (for the current team), with long term costs offset by other gains in productivity.

The Strategy

  1. Expand existing automated regression testing to improve quality for the long term.
  2. Change the development process to include regression testing as part of code-promotion (versus the current practice of regression testing release candidates).

The Tactics

  1. Use unit testing (specifically whitebox testing) to augment the existing test framework – overall, a gray box testing process. To minimize the maintenance effort over time, the testing framework was developed to use data-driven scripts that represent user sessions with the software. This allowed the team to easily create (and delegate creation of) scripts that represented user sessions. These scripts were combined with a set of inspections that tested the application for particular parameters, outputs, and behaviors. The intersections of scripts and inspections result in unit tests (a rough sketch of this idea appears after this list).
  2. Immediately start writing tests for all new code. We flipped a switch and required developers “from this day forward” to replace their manual feature testing of ongoing development with the creation of automated unit tests. Kent Beck first suggested this technique to me about five years ago as a way to “add testing” to an existing application. His theory is that the areas of the code that are being modified are the areas of the code most likely to be broken – existing code is less likely to spontaneously break, and is not the top priority for testing. Over time, if all of the code gets modified, then all of the code gets tested.
  3. Jump start a small initial test suite. We timeboxed a small initial effort to identify the “high risk” areas of the application usage by defining the most common usage patterns. These patterns were then embodied in a set of scripts that were used in the testing framework. We also set aside time for creating a set of initial inspections designed to provide valuable insight into the guts of the application. The developers identified those things that they “commonly looked at” when making changes to the application. These inspections instrumented elements of the application (like a temperature gauge in your car – it tells you if the coolant is too hot, even if it doesn’t tell you why).
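
Here is that sketch: a minimal, hypothetical rendering of the scripts-and-inspections idea in Python (the names and framework are invented for illustration, not the client’s actual code). Session scripts are plain data, inspections are small checks of observable application state, and each script/inspection intersection is generated as a unit test.

```python
import itertools
import unittest

# Session scripts: ordered user actions captured as data (easy to delegate).
SCRIPTS = {
    "quote_to_order": ["login", "create_quote", "convert_to_order", "logout"],
    "edit_order": ["login", "open_order", "change_quantity", "save", "logout"],
}

# Inspections: each one instruments a single aspect of the application,
# like the temperature gauge mentioned above.
INSPECTIONS = {
    "no_errors_logged": lambda state: state["errors"] == 0,
    "session_closed": lambda state: not state["logged_in"],
}

def run_script(steps):
    """Stand-in for driving the real application; returns observable state."""
    state = {"errors": 0, "logged_in": False}
    for step in steps:
        state["logged_in"] = step != "logout"
    return state

class GrayBoxTests(unittest.TestCase):
    pass

# Generate one unit test per script/inspection intersection.
for (script, steps), (check, fn) in itertools.product(
        SCRIPTS.items(), INSPECTIONS.items()):
    def make_test(steps=steps, fn=fn):
        return lambda self: self.assertTrue(fn(run_script(steps)))
    setattr(GrayBoxTests, f"test_{script}_{check}", make_test())

if __name__ == "__main__":
    unittest.main()
```

The point of the data-driven structure is maintenance: when the application changes, the team mostly updates scripts (data) and inspections, rather than rewriting test code.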

Unfortunately, we can’t share the results beyond the client’s internal team. Anecdotally, a very similar approach for a different client, team, and application netted a 10% reduction in development effort and had a dramatically positive effect on both quality and perceived quality. At Tyner Blain, we strongly encourage using this approach.

Top seven tips for rolling out this plan effectively

  • Set expectations. A key constraint for this approach was “don’t spend a bunch of money.” The process is designed to improve quality over time, with little (and dwindling) incremental cost. Every month, as more tests are added along with new code, the opportunity for bugs to be released to the test team (much less to the field) goes down. The rate of quality improvement will be proportional to the rate of change in the code base. Also point out that only those bugs introduced in the development cycle will be caught – requirements bugs will not be caught.
  • Educate the development team. When asking developers to change the way they’ve been writing and releasing code for a decade, it can be tricky to get acceptance. If this isn’t done well, responses can be as bad as “We don’t have a problem with quality” or “I know how to do my job – you’re telling me that I don’t?” Start with education about the techniques and highlight the tangible benefits to developers. There will be fewer complaints about the quality of the code – most developers are proud of their work, and will gladly adopt any technique that helps them improve it – as long as they don’t feel defensive about it.
  • Educate the managers. Help managers understand that unit testing isn’t a silver bullet – it can’t catch every bug, but done correctly, unit testing will catch the most bugs per dollar invested.
  • Educate the test team. No, we’re not automating you out of a job. A gray box testing strategy is comprehensive. Automating regression testing effectively allows manual testers to focus on system level testing and overall quality assurance. The time saved can be applied to testing that should be, but isn’t being done today.
  • Establish ownership. The developers are being asked to take ownership explicitly for something they already own implicitly. Before incorporating regression testing as part of the development cycle, the contract with the test team was “The new stuff works. Let me know if I broke any of the old stuff.” With this process in place, the contract between the development team and the test team becomes “The new stuff works, some of the old stuff still works, and the new stuff will continue to work forever.”
  • Provide feedback. Track the metrics, such as bugs versus lines of code (existing and modified) or bugs versus user sessions. Absolute numbers (bugs in the field, bugs found in test, numbers of inspections and scripts and unit tests) are also good. Communicate these and other metrics to everyone on the team – managers, developers, testers. Provide the feedback regularly (at least with every release). This will help the project gain momentum and visibility. That will validate the ideas, and help propagate the approach to other software development cycles.
  • Leverage the feedback cycle to empower the team members to make it even better.

[Update: The series of posts, Organizing a test suite with tags recounts a real-world followup to the solution implemented as described in this post. We explore a design concept associated with refactoring the solution from above. The first of those posts is here, or you can follow the links below]
– – –

Check out the index of software testing series posts for more articles.

Requirements Document Proliferation


Too many companies don’t document their requirements.

Worse still, too many companies over-document their requirements.

Roger Cauvin and Cote’ have started a great conversation about the proliferation of requirements documents. To follow the thread, start with Roger’s post (make sure you read the comments there as well), then check out the post by Cote’.

The main point that they are making is that having four levels of requirements documents is ridiculous. The four levels (shown in more detail in their posts) are:

  1. MRD – Market requirements document – used by marketing people. Describes the needs of the market, like “Driving downtown takes too long. There’s a need for a better solution”
  2. PRD – Product requirements document – used by product managers. Describes what a product must be capable of doing, in order to address the needs of the market, such as “Transporter must move people from rural areas to downtown in less than half the time of driving.” From the wikipedia definition, we see that the PRD has much of the same content as an FRS.
  3. FRS – Functional requirements specification – used by program managers*. Describes the same thing that a PRD does. Personally, I’ve never seen both used on the same project. Here’s a good definition of an FRS at mojofat.
  4. SRS – Software requirements specification – used by software developers. Describes the same thing that an FRS does. Here’s a good explanation of what’s in one from MicroTools, Inc.

Many people get so frustrated with all of these different ways to document requirements that they either look for a novel approach (or another here), or declare that requirements are counter-productive. The problem gets exacerbated when a bunch of former technologists attend a training class and start preaching the importance of (pick one of the docs above) without an understanding of the big picture. The current software-development outsourcing trend in the US has forced a lot of people to scramble to find new homes in the org chart. Cote’ is spot-on with his application of Conway’s law to this problem.

Cote’ suggests that we need a single person/team that “does it all”, flattening the hierarchy. Several people commented on a post here about CRUD requirements, and the discussion touches on a similar issue – drawing the line between requirements and design. Some of those folks came to the same conclusion. And I agree: when we can find supermen who can write code that solves valuable problems (which they identify), we can have great software. When we have to collaborate as a team of specialists, we need to include requirements documentation to get the best return on our investments.

I do disagree with Cote’ that some of these layers of documents exist to enable people who aren’t “technical enough” to participate. Different people play different roles, and care about different information. Communicating with these people, in their language, is critical.

What the heck should we do?

  1. Understanding the needs / problems in the market is critical to succeeding. The build it and they will come illusion of the late 90’s has been broken. Only the companies and products that provided real value survived that shakeup. Should we document those market needs? Yes. Is an MRD the right document? Probably. Roger knows more about this than I do, and he and other folks I respect believe in the MRD. If you don’t, at least codify your understanding of the market somehow – maybe this way.
  2. Building a vision for software that addresses those needs is critical to success. I left a previous employer with a philosophy tattoo that is stuck in my head like a bad song from the 80’s (Oh Mickey, you’re so fine…). That phrase is filler versus killer – and it was applied to every proposed feature for new software. Are those filler features that just take up space and time, or are they killer features that solve real problems and provide real value? Creating a software vision designed to address market needs is an ideation process. And it should be documented. PRD, FRS, SRS – a rose by any other name. When forced to choose, I would call it a PRD, because in practice it is harder to avoid writing about implementation details than it is to avoid overlapping with market needs.
  3. Designing software that achieves that vision is critical to success. We can’t leave out design. Agile approaches work well because (among other things) they do even more design – it just isn’t all “up front”. Regardless of what process you choose, build and document a design based upon the requirements.

Executive summary: Document market needs in an MRD. Document requirements for your software in a PRD. Document your designs.

Foundation Series: Unit Testing of Software


What are unit tests?


Testing software is more than just manually banging around (also called monkey testing) and trying to break different parts of the software application. Unit testing is testing a subset of the functionality of a piece of software. A unit test is different from a system test in that it provides information only about a particular subset of the software. In our previous Foundation series post on black box and white box testing, we used the inspections that come bundled with an oil change as examples of unit tests.

Unit tests don’t show us the whole picture.

A unit test only tells us about a specific piece of information. When working with a client whose company makes telephone switches, and whose internal software development team did not use unit tests, we discussed the following analogy:

Unit tests let us see very specific information, but not all of the information. Unit tests might show us the following:


A bell that makes a nice sound when ringing.


A dial that lets us enter numbers.

A horn that lets us listen to information.

We learn a lot about the system from these “pictures” that the unit tests give us, but we don’t learn everything about the system.


We knew (ahead of time) that we were inspecting a phone, and with our “unit tests” we now know that we can dial a phone number, listen to the person on the other end of the line, and hear when the phone is ringing. Since we know about phones, we realize that we aren’t “testing” everything. We don’t know if the phone can process sounds originating at our end. We don’t know if the phone will transmit signals back and forth to other phones. We don’t know if it is attached to the wall in a sturdy fashion.

Unit testing doesn’t seem like such a good idea – there’s so much we need to know that these unit tests don’t tell us. There are two approaches we can take. The first is to combine our unit tests with system tests, which inspect the entire system – also called end-to-end tests. The second is to create enough unit tests to inspect all of the important aspects. With enough unit tests, we can characterize the system (and know that it is a working phone that meets all of our requirements).


Software developers can identify which parts of their software need to be tested. In fact, this is a key principle of test-driven development (TDD) – identify the tests, then write the code. When the tests pass, the code is done.
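
As a minimal illustration of that test-first rhythm (our example, using an invented shipping-cutoff rule and Python’s built-in unittest module): the tests are written first and define what “done” means; the code that follows is the simplest thing that makes them pass.

```python
import unittest

# Written first: these tests pin down what "done" means for the unit.
class TestShippingCutoff(unittest.TestCase):
    def test_order_at_9am_ships_next_morning(self):
        self.assertEqual(hours_until_shipment(order_hour=9), 23)

    def test_cutoff_never_exceeds_24_hours(self):
        for hour in range(24):
            self.assertLessEqual(hours_until_shipment(order_hour=hour), 24)

# Written second: the simplest code that makes the tests pass.
def hours_until_shipment(order_hour):
    """Hours until the daily 8:00 shipment following the order."""
    return (8 - order_hour) % 24 or 24

if __name__ == "__main__":
    unittest.main()
```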

Why not use system tests?

The system test inspects (or at least exercises) everything in the software. It gives us a big picture view. Ultimately, our stakeholders care about one thing – does the software work? And for them, that means everything has to work. The intuitive way to test, then, is to have tests that test everything. System testing is also known as functional testing.

These comprehensive tests tell us everything we want to know. Why don’t we use them?

There is a downside to system testing. In the long run, it’s more expensive than unit testing. But the right way to approach continuous integration is to do both kinds of testing.

In our Software testing series post on blackbox and whitebox testing we discuss several tradeoffs associated with the different types of testing. For most organizations, the best answer is to do both kinds of testing – do some of each. This is known as greybox testing, or grey box testing.

System tests are more expensive, because they are more brittle and require more maintenance effort to keep the tests running. The more your software changes, the faster these costs add up. Furthermore, with Agile practices, where portions of the system are built and tested incrementally, with changes along the way, system tests can be debilitatingly expensive to maintain.

Because unit tests only inspect a subset of the software, they only incur maintenance costs when that subset is modified. Unit testing is done by the developers, who write tests to assure that sections of the software behave as designed. This is different from functional testing, which assures that the overall software meets the requirements.

There are more articles on software testing in our software testing series.
– – –

Check out the index of the Foundation series posts for other introductory articles.

Prioritizing requirements – three techniques


Now that we’ve gathered all these requirements, how do we determine which ones to do first?

The less we know about our client’s business, the more the requirements appear to be equivalent. We’ll talk about three different approaches to prioritizing requirements.

  1. Classical. Let stakeholders assign priority to the requirements.
  2. Exhaustive. Explore every nuance of prioritization and its application to requirements.
  3. Value-based. Let ROI drive the decisions. (hint: this is the best one – scroll down if you’re in a real hurry)
  4. [bonus]. A look at how 37signals prioritizes features for their products.

1. Classical

We’ve all been in a discussion at one point where we ask people to prioritize a set of work. The classic example is during triage of outstanding bugs – high priority gets overused so much that people start inventing very high priority and ultra high priority.


It’s like when McDonalds got rid of the small size french fry – now they have medium, large, and super size fries. What happened to small? Wendy’s is even worse – large, biggie, and great biggie sizes. They even lost the medium size.

We can try to avoid this problem and force an even distribution of high, medium and low requirements. It is a simple approach, and we use it when we are facilitating a brainstorming session. This works great when we’re working in the abstract, or prioritizing brand new ideas as a starting point for future discussions. However, when truly prioritizing requirements, we run into people who think everything is critically important. Try to force these people into three equally sized buckets and they will revolt. Without careful prompting, I’ve never seen a session result in fewer than half of the “real” features (or bugs) in the high priority bucket.

We can learn something from the french fries here. While marketing may put names like “value sized” and “eat this and it’s free” on the different sizes, the bottom line is that there is one size which is the smallest, one size which is the largest, and one size which is in the middle. It’s all relative. The same is true with requirements, so…

Subdivide the high priority requirements.

More than half of our requirements are high priority, with the remainder in medium or low priorities. Ask the folks to break down the high priority pile into two or three piles (like biggie and great biggie fries). They still won’t split things evenly, but now the largest group of requirements is roughly a third of the total set of requirements (and represents the highest priority requirements). These are the new high priority. With the remaining three or four piles of prioritized requirements – combine the groups to create medium priority and low priority requirements.

We now have workable classifications of priority for our requirements.

The problem with this approach is that we don’t know how much more important the more important requirements truly are.

2. Exhaustive.

Here is a very extensive article about prioritization written by Donald Firesmith of the Software Engineering Institute. Early in the article, Don gives us three possible interpretations of the definition of prioritization. He talks about why we prioritize, the risks of not doing it right, and explains no fewer than 14 different axes along which we might prioritize. He references several good resources, details a process (and a sub-process), and tells us more than we would ever want to know about prioritization.

My suggestion – skim the article, read about the different axes (section 5) if you plan on using the classical technique above, and bookmark the reference for the future. This is pretty dry reading – and I even enjoy reading technical specs for telecomm switches. However, I do recognize the breadth and validity of the content he has pulled together.

Unfortunately, what Donald doesn’t do for us is prioritize his content. The organization is logical, but some of it is clearly fluff, and some of it is valuable, and we can’t tease those sections apart.

3. Value-based.

Our customers buy our software because it increases their profits. It’s an investment for them. The payback can be in cost-savings (bottom-line growth) or increased sales (top-line growth), or anywhere in between. We could be cutting overhead (and therefore cost of goods sold) by reducing their cost-to-quote. We could be optimizing their supply chain (reducing the dollars invested in work-in-process inventory), or we could be opening up a new sales channel (a portal website for resellers to directly submit orders to the factory). The bottom line is that it all comes down to ROI.

In a comment on a recent thread, Marcus asked how we would trace requirements to corporate strategy. One example he used is the tactic of becoming the low-cost provider in a market. We have to abstract that back up to get to the dollars. Follow the money. A low-cost provider can increase market share, and potentially lower costs through automation. The strategy was presumably accepted (by the business) based upon a projected impact on company profits. We need to understand that impact-projection, and use the resultant profitability forecast to value our requirements.

This is a good example of why document analysis is important to eliciting requirements.

Prioritize requirements based upon their explicit impact on profitability. With requirements traceability, we can break down the impact of supporting requirements as a percentage of the impact of those requirements that they support. This represents implicit value for a particular requirement. This is one of the benefits of using a structured requirements framework. Also, when using composition in requirements, we will distribute the impact assessment to the sub-requirements.
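
As a rough sketch of that distribution (the goals, traces, and dollar figures below are invented for illustration): each requirement’s implicit value is its share of the projected impact of the goal it supports, and that value drives the priority order.

```python
# Hypothetical traceability: each business goal has a projected profit impact,
# and supporting requirements receive a share of that impact.
GOALS = {
    "reduce_cost_to_quote": 500_000,   # projected annual impact, in dollars
    "open_reseller_channel": 300_000,
}

# requirement -> (goal it supports, share of that goal's impact it enables)
TRACES = {
    "auto_price_lookup":   ("reduce_cost_to_quote", 0.6),
    "quote_approval_flow": ("reduce_cost_to_quote", 0.4),
    "reseller_portal":     ("open_reseller_channel", 0.7),
    "order_status_api":    ("open_reseller_channel", 0.3),
}

def implicit_value(requirement):
    goal, share = TRACES[requirement]
    return GOALS[goal] * share

for req in sorted(TRACES, key=implicit_value, reverse=True):
    print(f"{req}: ${implicit_value(req):,.0f}")
# auto_price_lookup ($300,000) comes out on top; order_status_api ($90,000) waits
```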

If at all possible, implement the most valuable requirements first.

In the real world, there are two constraints that we have to live with when taking this approach. First, there are implementation dependencies. There are some parts of a work breakdown structure that must be done before others (due to entanglement in the design, or availability of resources, for example). When incorporating this reality into the schedule, still prioritize more value ahead of less value, as in the sketch below.
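
Here is a minimal sketch of that idea (requirements, values, and dependencies are invented): respect the prerequisites, but among the requirements that are currently ready to implement, always pick the most valuable one.

```python
# Greedy, dependency-aware ordering: respect prerequisites, but among the
# requirements that are ready, always implement the most valuable one first.
VALUES = {"A": 400, "B": 300, "C": 350, "D": 100}              # invented values ($K)
DEPENDS_ON = {"A": set(), "B": set(), "C": {"A"}, "D": set()}  # C needs A first

def schedule(values, depends_on):
    done, order = set(), []
    while len(order) < len(values):
        ready = [r for r in values if r not in done and depends_on[r] <= done]
        order.append(max(ready, key=values.get))
        done.add(order[-1])
    return order

print(schedule(VALUES, DEPENDS_ON))
# ['A', 'C', 'B', 'D'] -- C jumps ahead of B as soon as its prerequisite A is done
```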

The second real-world consideration is the executive whim – there are often personal agendas, political pressures, and perceptions held by the stakeholders that will create pressure to implement some features with low intrinsic value ahead of some features with higher value. While people may optimize by nature, it isn’t always the company’s bottom line for which they are optimizing. Try to work with these people to prioritize the high value features first. Be compelling. It may be that there are tactical considerations (the CEO may demand that the website match corporate look and feel standards before it allows for new order submission), and the funding for the project may be dependent upon addressing someone’s pet peeve in the first release. We just have to do it sometimes.

Prioritize based upon the value that a set of features will bring to the business.

When we’re writing the specs for multi-customer software, the business whose value we prioritize is ours. This abstraction can be harder to address. But a given capability will be expected to have an impact on our ability to sell the software (or raise the price of the software). And it will come with an inherent cost. Leverage strategic marketing expertise to pick the right capabilities (more importantly – solve the right problems), and properly value them.

By changing the customer from them to us, we can apply the same principles for value-based prioritization.

4. Bonus. We talk in another post about how 37signals approaches software requirements prioritization.

Brainstorming – Making Something Out of Everything

Previously, we talked about brainstorming as one of the best elicitation techniques for gathering requirements. Here are some details about how to facilitate a general brainstorming session with a group of people in 5 easy steps (and then another 5 easy steps).

Seven to ten people is a good number to pull together in a brainstorming session. With creative and vocal people, a smaller number can work.

Five Steps to making brainstorming effective

  1. Set the ground rules. Let people know that this is a brainstorming session, which means that all ideas are valuable. They may be bad ideas, but they can lead to good ideas. The most important thing to make sure people don’t do is criticize any ideas. People need to feel no fear – this is a creative release and they need to feel secure that any ideas they throw out are for the good of the cause. I have run brainstorming sessions where people have felt inhibited by their managers who might think an idea is stupid. Unless we know that this won’t be the case, we should exclude managers from the sessions, and try and fill the room with peers.
  2. Set a time limit. There have been studies that show that creative thought is more effective when there’s a time limit. Creating Passionate Users has a post titled Creativity on Speed, where this idea is pursued in interesting ways, including creativity deathmatches. Makes for a nice segue. Set a 20 minute time limit on the session – long enough to get some juices flowing, but short enough that people won’t feel like it’s a waste of time.
  3. Define a starting point. We don’t want people coming up with ideas for space elevators or edible plates – we need some focus. Since we’re eliciting requirements for a specific product, we have a context. Identify the high level goals of the business for this project, and write them on the whiteboard (or flip-chart paper taped to the wall) before the meeting. People will read this during the setup and subconsciously start thinking of ideas. Something like “Acme Bricks and Mortar needs a website to sell directly to customers online” works well.
  4. Shout out and write. This is the fun part. Everyone in the room shares ideas as they come to them. Write them all down. Don’t editorialize the ideas. Some ideas will be requirements like “Make the borders of the page look like bricks” and others will be ideas like “90% of our sales is to existing customers.” Write it all down. If the group is too raucous, get a second person writing down ideas. Sometimes we end up with a room of people who aren’t comfortable, or aren’t interested in getting started. Start throwing out some ideas – say them out loud and write them down at the same time. If we don’t have strong relationships with the participants, this could easily backfire – we could cool off the room we’re trying to heat up. An alternative is to start asking questions – “what do people care about when buying mortar online?” or “what else do people buy when they buy bricks?” Think about some starter questions before the meeting. Don’t try to cram all the ideas together into nice organized lists – just write them wherever is convenient, this isn’t the time for structure. Prioritize quantity over quality at this point.
  5. Pick the best requirements. The most important requirements are determined by the group, as described in the next 5 steps.

Five steps to picking the best requirements

  1. Flag the requirements that should be considered (all of the requirements, but none of the thought-fragments, goals or general ideas) with a star or a colored post it note. If we’ve done this right, we will have several sheets of flip-chart paper or whiteboards covered in requirements, ideas, words and fragments (maybe even pictures). Tape the flip-chart paper on the walls if that’s where the ideas are written.
  2. Count the requirements. We’re going to create three evenly sized priority buckets and place the requirements in the buckets (1,2,3). Each person will rate every requirement as a 1,2 or 3 (1 being most important). Give each person a stack of post it notes and a marker, and have them make out a fixed number of 1,2, and 3 post-its (evenly divided, with the remainder as 2s). It’s important that people be forced to divide the scoring evenly so that they don’t make every requirement a 1.
  3. Everyone prioritizes the requirements. Have everyone physically get up, mill about, and stick their post-it-note priorities on all of the requirements. The scoring is somewhat subjective and individual. Provide a guidance about how ideas should be rated (value, feasibility, alignment with strategy), but ultimately each person will make a judgement call, and that’s ok.
  4. Tally the scores. Add up all the scores, and pick the requirements with the highest priority (lowest totals). Throw all the scores in a spreadsheet – and look at a quick X-Y graph. It’s the weirdest thing – there always seems to be a cluster of scores for “important requirements” and then the rest of the requirements sort of taper off. There’s probably a mathematical reason for it, but someone smarter than I will have to explain it to us. The requirements in the top cluster (probably between 1/4 and 1/3 of the ones we just scored) are the “first cut” requirements. A quick sketch of this tally appears after this list.
  5. Make a list of the first cut requirements. We’re not done, however. There is usually at least one really good idea that didn’t fall in the top cluster because the “group” didn’t all agree that it was important. Give everyone in the room a chance to nominate one of the leftover requirements into the first cut group. Let them make a case for it. If we see something that we suspect is valuable, ask questions about it. Pull these ideas into the list.
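
Here is that sketch of the tally from steps 2–4, with invented requirements and votes: each participant spends an evenly divided budget of 1s, 2s, and 3s, and the lowest totals float to the top as the “first cut” cluster.

```python
from collections import Counter

# Invented example: four participants, six requirements, each person spending
# an evenly divided budget of 1s, 2s, and 3s (1 = most important).
votes = {
    "alice": {"R1": 1, "R2": 1, "R3": 2, "R4": 2, "R5": 3, "R6": 3},
    "bob":   {"R1": 1, "R2": 2, "R3": 1, "R4": 3, "R5": 2, "R6": 3},
    "carol": {"R1": 2, "R2": 1, "R3": 1, "R4": 3, "R5": 3, "R6": 2},
    "dave":  {"R1": 1, "R2": 3, "R3": 2, "R4": 1, "R5": 2, "R6": 3},
}

totals = Counter()
for scores in votes.values():
    totals.update(scores)   # Counter adds the numeric scores per requirement

# Lowest totals are the highest priority; look for the cluster at the top.
for req, total in sorted(totals.items(), key=lambda item: item[1]):
    print(req, total)
# R1 (5), R3 (6), and R2 (7) cluster at the top and become the first cut
```
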
We don’t have a spec. Yet.

Brainstorming isn’t the key to writing a requirements document. There’s a reason that design by committee and group think make us cringe – because nothing great comes (solely) of it. A brainstorming session gets us a starting point when we are faced with customers who “don’t know what they want.” Even people who don’t know what they want generally have a good idea about what they don’t want. It’s easy to be a critic. Use this first cut list of requirements as a starting point. Review the list in individual interviews. Understand the ROI of these ideas, and validate their strategic alignment with the stakeholders. Having a concrete set of requirements is the easiest way to get someone to say “We don’t need that, what we need is this!”

Use the results of the brainstorming session to seed the cloud of ideas in one-on-one interviews. Don’t just spell-check them and hand them over to the dev team for implementation.

– – –

20060131: I just found this great link to some other brainstorming techniques at Never Work Alone -> Brainstorming 101. Check them out.

Announcing Alkali Marketing – A little marketing for a big reaction


Lauren Arbittier Davis recently started Alkali Marketing, a boutique marketing outsourcing company here in Austin. I’ve personally worked with Lauren over much of the past decade, and couldn’t be more excited about her company! When Tyner Blain is ready to build awareness (beyond our current viral approach), Alkali is who we’ll call. In addition to a personal recommendation of Lauren for anything you need, I also know some of her clients, have seen the work Lauren and Alkali have done for them, and know how pleased they are with it.

Check out Alkali Marketing’s site, read their reviews, see what they can do, and schedule a meeting.
Contact Alkali Marketing here.  Seriously.

Scott

How To Interview When Gathering Requirements


We previously stressed the importance of understanding why something is a requirement. Unfortunately, we can’t just ask “why why why?!” until we reach the end of the chain. This won’t be any more effective for us now than it was when we were in kindergarten. Eventually, our listeners will get frustrated, or worse, defensive.

Understanding why is still our goal – but we have to be smart about our interviews to get this information. In our previous post, we identify interviewing as a key technique for eliciting requirements. Interviewing is the cornerstone of our elicitation techniques – even if we gather the bulk of our information in group meetings, we have to follow-up, clarify and validate with individuals. There’s truth behind the old saw that nothing good is designed by committee.

Before the interview

Failing to plan is planning to fail. OK, stop groaning, I know it’s a cliche. Regardless, the first thing we do when interviewing is identify who we need to speak to, and what we need to speak about. If we’re going to talk to a sales manager about our sales-support software, we will likely talk about user adoption, business processes, and the organization of the sales team. If we are talking to a sales person (a representative user), we will talk about how they do their job today, and how it could be different with the new software. In both cases, we plan our conversation before we have it.

During the interview

Amplifying your Effectiveness has an article on how to run interviews when gathering requirements. This is a great article, and one I’ve added to my links at del.icio.us (you should too).

Some key takeaways from their article:

  • Use “how” and “what” questions to get to “why” answers – avoiding the knee-jerk defensive reaction.
  • Use “tell me more” questions to drill down – “What happens next”, “Can you show me”, etc.
  • Use open-ended questions. Yes/no questions are good for validating what you’ve learned already – not for learning new information.
  • Don’t bias the results. It’s easy for the interviewer to ask leading questions. We need to realize if there is an implicit premise in the way we ask a question, and if that would bias the results. When we’re first starting out, this meta-perspective is almost impossible, but it becomes second nature over time. It helps to review our results after the interview (recording our questions helps too).

Cliche though it may be – a book I’ve read at least a dozen times is How to Win Friends and Influence People by Dale Carnegie.

In that book, he helps us understand that people enjoy talking about what they do. He provides tips and suggestions about how to gain contacts, gather information, maintain relationships, and speak publicly. If you haven’t already read it, it’s in my top two “books that make you better” list, so go read it. One thing that will make you laugh is seeing the dated dollar amounts in many of his anecdotes (the book was written in 1936). My copy was published in 1964, and the pages are starting to get that nice yellowing that comes from great writing.

Carnegie stresses, encourages, and provides techniques for talking to people about what’s important to them, which is directly related to gathering requirements from the beneficiaries of a new system (and the users, who should benefit, but might not if we don’t gather the right requirements).

After the interview

Here’s where many good interviewers drop the ball. This is the time to put a little extra effort into managing our relationships. We should always follow up with the interviewee, and let her know “how it went”. We should give her an update on status and let her know which of her great insights are being incorporated into the spec. Anecdotal data is fine, we don’t need to create a laundry list – just an affirmation that her needs are being addressed, and that the time she spent in our interview was valuable. If there’s something that is important to this stakeholder that didn’t make the cut – give her a heads-up.

If we get stuck for a good pretext for having the conversation, we can always use communication of the schedule as an excuse.

These follow-up conversations establish a long term relationship that is good for future releases, helps with change management of rolling out the solution, and establishes or firms up our credibility.