Monthly Archives: April 2006

Where Did You Get That Estimate?


How good are our estimates? We can use PERT to estimate the time it will take to implement each requirement. We can use timeboxes to schedule the requirements within each release. But if we don’t know how good our estimates are, all of that scheduling is an exercise in futility. Scheduling is about more than predicting the future; it’s about knowing how much faith to have in our predictions.

Context

We deliver software incrementally. Each release will include a set of requirements that enable one or more use cases, because half a use case is worthless. Each use case can be traced to a set of requirements, the designs that they drive, and the code and tests that implement them.

Use case traceability diagram

We prioritize our desired functionality in the order that we want it to be implemented. We schedule the functionality based on how long it takes to implement it. Within each release, we create a timebox and fill it up with functionality based upon the estimates for how long the work takes. And we use PERT estimates for our tasks.
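
For concreteness, here is a minimal sketch of the classic PERT three-point calculation in Python, with invented optimistic, most-likely, and pessimistic numbers. The (O + 4M + P) / 6 weighting and the (P - O) / 6 spread are the standard PERT formulas, not anything specific to this workflow.

    def pert_estimate(optimistic, most_likely, pessimistic):
        """Classic PERT weighted average and standard deviation for one task."""
        expected = (optimistic + 4 * most_likely + pessimistic) / 6.0
        std_dev = (pessimistic - optimistic) / 6.0
        return expected, std_dev

    # Hypothetical task: "implement keyword search", estimated in days.
    expected, std_dev = pert_estimate(optimistic=3, most_likely=5, pessimistic=12)
    print(f"Expected: {expected:.1f} days, +/- {std_dev:.1f} days")  # 5.8 days, +/- 1.5 days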

But where did those PERT numbers come from?

Types of Estimates

There are five common methods of creating estimates, according to the Software Estimation Technology Report. We don’t have to use the same type of estimate for all of our estimates within a project, and probably shouldn’t. Every estimate should use one of these approaches, and should document which approach is used.

    1. Estimate by Analogy
    2. Estimate from the Bottom Up
    3. Estimate from the Top Down
    4. Use Expert Judgement
    5. Estimate Algorithmically

Estimate by Analogy

Use knowledge of previous, similar tasks to create the estimates. We apply what we experienced in previous efforts to what we’re estimating today, using the actual values from past work as approximations for the new activities. This approach suffers when historical data does not closely match the activity we’re estimating.

Estimate from the Bottom Up

Perform detailed estimates of the smallest elements of each task, then roll them up. This type of estimate is very effective because smaller tasks are easier to estimate accurately. The problem with this approach is that we risk spending too much time creating estimates.
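
As a rough illustration of the roll-up (again with invented numbers), the expected values of the sub-tasks simply add; if we also treat the sub-tasks as independent, their variances add too, which is what a sketch like this captures:

    import math

    # Hypothetical sub-task estimates: (optimistic, most likely, pessimistic) in days.
    subtasks = {
        "parse input file": (1, 2, 4),
        "validation rules": (2, 3, 8),
        "persistence layer": (1, 2, 3),
    }

    total_expected = 0.0
    total_variance = 0.0
    for o, m, p in subtasks.values():
        total_expected += (o + 4 * m + p) / 6.0   # PERT expected value per sub-task
        total_variance += ((p - o) / 6.0) ** 2    # assumes the sub-tasks are independent

    print(f"Rolled-up estimate: {total_expected:.1f} days, "
          f"+/- {math.sqrt(total_variance):.1f} days")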

Estimate from the Top Down

This is an approach to estimating an entire project, rather than its constituent parts. We’ve listed it for completeness, but it doesn’t apply to creating estimates in this workflow. It is valuable for creating pre-planning ROM (rough order of magnitude) estimates. This approach looks at the big picture – integration points, system-level estimation, etc. It has the benefit of being very fast, and the detriment of being very coarse.

Use Expert Judgement

Because our guru said so. We apply what we learned in past projects to create estimates for this task. These estimates are only as good as our gurus.

Estimate Algorithmically

Use an estimate of lines of code, or function points, or objects anticipated with the design for the task to create an estimate. This has the benefit of appearing to be very precise. The danger is that it only appears to be precise. Here’s a Gedanken experiment for you: Which task will take longer – the one with 100 function points in project A, or the one with 100 function points in project B?

This is like estimating how long it will take to walk 20 miles, with little insight into the terrain.

Conclusion

There are many ways to approach estimation. Each task will have one or two comfortable approaches – use one of those. And document the estimation method, so that the project manager can better account for the risk of bad estimates.

Communicate Relevant Quality Metrics


Most teams think about testing in terms of code coverage – what % of the lines of code are covered? What matters to our stakeholders is how well the software works. More precisely, how well does the software let the users work? We should be targeting our quality message in relevant terms, because users care about what they can do with software, not how well we programmed it.

The problem is that we tend to think about our software from the inside out, and our customers think about it from the outside in. We need a way to communicate our understanding of the insides within the customer’s framework of understanding the outside.

Inside Out Quality Measurement

Inside-out measurement of quality is what most developers and testers think about. Users don’t. Executives don’t. Customers don’t. This section recaps this view in order to contrast it with the outside-in view.

We’ve talked about how to view software as a framework of structured requirements, designs, code, and tests. This is the right way to think about it when we’re thinking about the inside of our process. A diagram we’ve used before to show the structured requirements view of the world puts it pretty succinctly.

Wiegers' view of structured requirements

Interpreting the Inside-Out Diagram

  • The user has goals.
  • Each goal is achieved by enabling one or more use cases.
  • Each use case is enabled by implementing one or more functional requirements with a set of characteristics and subject to restrictions.
  • Each functional requirement drives a set of design decisions.
  • Each design element is implemented by writing software (code).
  • Each element of code is tested with one or more tests. These are generally unit tests, and by definition are whitebox tests.

Incorporating an interaction design process into this approach results in a more complex, blended view of the world. Wiegers’ view is simpler, so we’ll focus on how to communicate with our users in this framework. These ideas can be easily extended to other frameworks.

One Step Back

Taking one step back, we can see a slightly bigger picture. In the following diagram, we collapse all of the requirements elements into a single rectangle, and add testing.

inside view

This diagram shows a single use case, enabled through two requirements, each of which is driving a design element. Each design element is implemented with a section of code, and each section of code is also tested with one or more white box tests.

The Problem with Inside-Out

While inside-out is the way that we have to think about our software when developing it, it couldn’t be more wrong as a way to describe our quality to our stakeholders. We might be able to communicate the overly simplified diagram above to a client, but even adding one level of complexity will derail the message. The diagram below will make most stakeholders’ eyes water, even though it is still simplistic.
inside view

When we deliver a release, we need to communicate about the quality of the release. We can do this by providing the results of our test suite. The test suite is represented by the “T” boxes in these simplified diagrams. We can tell our stakeholders that we have 90% code coverage, or that 85% of our tests pass. Most measurements of quality are meaningless once you get outside of the box.

More gibberish for our customers.

Outside-In Quality Measurement

Our customers view software from the outside in.

Outside view of software

Interpreting the Customer-View Diagram

  • The user has one or more goals. (WHY?)
  • The user achieves those goals by enacting use cases. (WHAT?)
  • The use cases are enabled by buying software. (HOW?)

We engage with users during development in an agile or iterative process. During that engagement, the users will care about the next level of detail (requirements), but only because what they care about are use cases (or scenarios). We need to write requirements so that they can get value out of the software. The responsibility for the requirements is ours; they care about how they use the software.

Using Inside Knowledge for Outside Communication

We need to communicate some message about the quality of each release to our stakeholders, because keeping them in the loop keeps our project alive. It sets expectations, can create momentum, and prevents surprises. All of these are very good things(tm).

We can do this by providing the results of our test suite. What we want to tell them is “Use Case 1 has 100% passing tests, UC 2 has 100% passing tests, and UC 3 has 50% passing tests.” This lets our stakeholders know that Use Case 1 and UC 2 are both ready to start generating ROI for them. UC 3 is not ready yet, and needs more work. When we combine this “quality by use case” message with “release planning by use case,” we are providing a clean message for our customers, one that is targeted at them and makes sense from their perspective.

In the diagrams, we see how each level of the diagram is supported by the next level down. Conversely, each level is supporting the level above it. By following the arrows backwards, we can see which code is supported by a given test case. We can then determine which design element is supported by that code, and keep moving up until we find the use cases. In our example, the mappings would look like the following:

Mapped use cases

Interpreting the Use Case Mapping Diagram

  • The first three test cases all support Use Case 1.
  • The next two test cases support UC 2.
  • The last two test cases support both UC 2 and UC 3.

The last two test cases are doing double duty in our example, because both UC 2 and UC 3 depend upon the same requirement. This is a very common element of real-world diagrams like this one. The tests of a common code element will support multiple use cases.
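
A minimal sketch of what this roll-up could look like, with a made-up traceability mapping that mirrors the example diagram (the test and use case names are hypothetical): each use case reports the pass rate of the tests that ultimately support it, and shared tests simply count toward every use case they trace to.

    # Hypothetical traceability: each test maps to the use cases it ultimately supports.
    test_to_use_cases = {
        "test_1": ["UC 1"],
        "test_2": ["UC 1"],
        "test_3": ["UC 1"],
        "test_4": ["UC 2"],
        "test_5": ["UC 2"],
        "test_6": ["UC 2", "UC 3"],  # shared requirement, so the test does double duty
        "test_7": ["UC 2", "UC 3"],
    }

    # Hypothetical results from the latest test run.
    results = {"test_1": True, "test_2": True, "test_3": True, "test_4": True,
               "test_5": True, "test_6": False, "test_7": True}

    passed, total = {}, {}
    for test, use_cases in test_to_use_cases.items():
        for uc in use_cases:
            total[uc] = total.get(uc, 0) + 1
            passed[uc] = passed.get(uc, 0) + (1 if results[test] else 0)

    for uc in sorted(total):
        print(f"{uc}: {100 * passed[uc] / total[uc]:.0f}% of supporting tests pass")

The same failing test drags down every use case it supports, which is exactly the weighting effect described in the next section.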

Quality Measurement and Motivation

Suddenly, some test cases are more important than others. When there is a system of metrics in place, people tend to optimize on those metrics.

With inside-out quality measurements, all test cases are created equal. If 5 of 1000 tests fail, quality is really good. Maybe. What if those failed test cases are in the database connection that is critical to every use case? Five tests fail (half a percent!) and nothing works.

With outside-in quality measurements, critical test cases carry the most weight. The five failing test cases will cause all of our use cases to fail, and they will get the attention they deserve.

The same approach can be used for measuring code-coverage, cyclomatic complexity, or any other (normally) inside-out metric. Developers are smart. When they see that they can kill N birds with one stone, they jump at the chance. Fixing a critical bug, or adding a well-placed test case can have multiplied impact with this approach.

Use cases that are isolated will get the least attention. Unless we also prioritize them.

Conclusion

We write software for our customers. They buy it because it is valuable to them. Our customers think about that value in terms of what they can accomplish with the software.

When we communicate with our customers about quality, it should be on our customer’s terms, not ours.

Market Segmentation or Senseless Mistake?


A grass-roots campaign has been started by Peter Provost to get Microsoft to include unit testing support with all versions of Visual Studio 2005 (VS). Currently, Microsoft is only including it with Visual Studio Team System (VSTS) versions of Visual Studio. This looks to be a great example of a killer feature in a product providing so much surprise and delight that people are demanding that it be universally available. This is also a great example of market segmentation by Microsoft. The irony is that there is an open source alternative that makes the opportunity cost very low, and yet people are still clamoring. Let’s see why.

Background

Visual Studio 2005 is a development environment for building .NET applications. Microsoft offers several versions of the software – 8 in the 2005 packaging. Even for people familiar with the product, the market segmentation strategy can be pretty confusing. To oversimplify, each version offers more capability than the less-expensive version below it. Rob Caron provides the best explanation of the product definition strategy in his Hitchhiker’s guide to VSTS. He starts by explaining the Visual Studio 2003 packages, and then shows the evolution to the VS2005 approach, including the Team System versions.

Unit testing support is offered only in the four most expensive, most capable versions – the Team System versions. The petitioners argue that unit testing is critical to all developers, and should be included in every version of the product. Unit testing is a form of whitebox testing where developers create automated tests of their code.
Microsoft is implementing a classic market segmentation strategy with this approach.

Market Segments

Markets are not homogeneous – we don’t sell products to clones. Everyone has a different set of criteria for purchasing software. They make different tradeoffs in terms of price versus performance, or cost versus capability. Imagine a market roughly divided into two populations – price-sensitive people, and feature-driven people.

populations

The price-sensitive people like getting extra features, but will only pay marginally more for them. The feature-driven population is willing to pay a higher premium for added capabilities.

If we treat our potential customers as a homogeneous market, we will make one of three mistakes:

  1. Price the product so that everyone buys it. If we set the price based on the price-sensitive population, we are leaving money on the table. The feature-driven people would gladly pay more for the features.
  2. Price the product so that only feature-driven people will buy it. We lose out on sales to the price-sensitive population, who won’t pay for the extra capabilities.
  3. Try and compromise. Nobody wins. We won’t get enough of the price-sensitive customers, and we’ll leave money on the table with the feature-driven customers.

The Good, The Better, and The Best

One way to serve all of the customers is with multiple products. As an example, imagine three versions of a washing machine. They all basically do the same thing: wash clothes. The manufacturers can put a stronger motor, fancier control panel, or better sound insulation on some versions of the same product, and sell them to different people for different prices.

Good Better Best

Most of the engineering costs apply to all three versions of the same product. The same is even more applicable to software. An easy way to do it would be to write the “best” software, and then disable some features to create the “better” and “good” versions. Many small software companies do this today, offering free and paid versions of their software. The free versions usually are missing features of the paid versions.

Microsoft has presumably identified several user profiles, and tailored a specific version of the software for each profile. Each version has different capabilities, and a different price.

Product Differentiation

Unit testing support, within the Visual Studio development environment, is absolutely a valuable capability. The growing response to the petition proves it. This is a great example of a surprise and delight feature (in Kano terms). In fact, some users find it to be so compelling that they want all users to get it “for free” as part of purchasing any version of Visual Studio.

This is one way that Microsoft is providing differentiation of the Team System versions of Visual Studio. There are other tools that may provide even more compelling reasons to get the Team System version.

Opportunity Cost

The odd thing is that NUnit is an open-source unit testing tool that can be plugged in to all versions of Visual Studio. This means that there is a free tool for doing exactly what the petition is asking Microsoft to do. The cost of using NUnit is the time spent setting it up – I would imagine a few hours to figure it out and create an install document for the rest of the team. This is a surprisingly low-cost alternative. And it may even be the better alternative, as NUnit has a very active community, and there are many places to find free support and help. The opportunity-cost logic applies to this situation (but in reverse). There is a low-cost alternative, so why spend the money on the extra features?

The other capabilities available in Team System provide much better differentiation, as they don’t have low-cost alternatives like NUnit.

Conclusion

This is a great example of using market segmentation to sell more software for more profit. Feature-driven people who want unit testing will pay more for it. People who are more price sensitive will still buy the versions without unit testing baked in, and will hopefully know about NUnit and bolt it on.

Good job Microsoft marketing.

Targeted Communication – Status Reporting


We’ve posted tips about targeted communication – tailoring the message for the audience. Anthony Mersino has an excellent post from January of this year about how to write a good status report. He provides seven excellent guidelines for status reporting, and all of them are about providing the message our audience cares about, as effectively as possible.

Status Reports are important

Good teams can work around a project manager who doesn’t communicate effectively. They just bypass the problem and communicate with each other. But a project with poor outbound communication is doomed. Outbound communication is communication from the operational team to the client, stakeholders, management, other departments, etc.

As product managers, we spend most of our time focused on what things need to be done. We risk forgetting about the actual doing of those things. Sometimes we’re in an organization where we aren’t directly involved in the execution of the plan. In that situation, we are consumers of status reports from the program and project managers who are making our visions into reality.

If we aren’t providing status reports, we’re receiving them. Either way, we want them to be great. Specifically, we want them to be targeted, accurate, and easy to quickly absorb. We also want the reports to make it easy for the reader to get more information when she needs it, and know when she doesn’t need more information.

The reader of a status report is asking two questions. Our status report needs to answer them.

Will you meet your commitments?

If not, how can I help?

Our favorite tips

Three of Anthony’s tips hit the bullseye for us.

  1. Write for the reader, not the writer (#2)
  2. High signal to noise ratio (#6)
  3. Communicate status against the plan (#7)

Write for the reader, not the writer

Anthony tells us

Add value with a message that is clear, shows the way, and guides the reader on whether to take immediate action, begin to worry, or relax and let you do your job.

Photo courtesy of Katinka Kober.

A great way to do this is with a stoplight metaphor (at least in the US, where green = go, yellow = caution, red = stop). We can provide a little color in our reports to make the status details and rollup easy to scan. When someone is the audience of a status report, it’s because the reader needs to know what is going on, but isn’t involved – and likely is reading status reports from other teams. We need to present a document that gives a quick visual cue, guiding the reader to pay attention to the most critical elements.

  • Red. Immediate action (by the reader) is required to fix this.
  • Yellow. We’re at risk of failing to meet expectations. There’s a plan in place, but we thought you should know. Want to know more?
  • Green. Meeting or exceeding the plan. No need to spend cycles on this one.

[Update 28 Apr 2007: We have a much improved metaphor for tracking project status – weather forecasting.]

Be careful not to be a hero, a Chicken Little, or a Pollyanna.

No interesting project happens exactly to plan. When the team is dealing with chaos, change and uncertainty, we have to make a judgement call. Is this something we’re capable of resolving, or do we need to bring in the big guns? If we need help, we have to ask for it. That’s why status reports were created. Top-down organizations empower their teams to make stuff happen. They use status reports to mitigate the risk that something will go wrong. When something does go wrong, they can fix it, or at least minimize the pain.

Don’t flag everything as yellow (unless it is). These are the warning signals to our audience that something might go wrong, but hasn’t yet. Every warning in the status report requires our reader to get more details. These are problems that are “at our capacity” – we need to be able to deal with them. But we also need to communicate the risk that we can’t. Most readers will react to yellow items by making a note to follow up (so make sure we follow up!). They might look for more information (so provide it), to validate that they don’t need to be involved yet.

If everything is green even when it shouldn’t be, we destroy the power of our communication vehicle, and damage our credibility. The reader of the status report needs to know that green means “at least as good as expected/promised.” Our audience needs to be able to ignore the green stuff when she’s swamped with other urgent problems.

High signal to noise ratio

Report on only the most important stuff. Write brief, scannable information. Bulleted lists and hierarchical presentation of information are great techniques to use. Break the report up into sections. The sections could be functional breakouts, or product-area summaries. Start with a summary, end with the same summary. As for the ratio itself, think about it in terms of time.


  • 10 minutes. The reader of the status report should know what it says after no more than 10 minutes of looking at it.
  • 2 hours. A report should take no more than a couple of hours to pull together, when we are tapped in to what’s happening on the project. We already know the important information – the time is spent organizing and filtering. If each draft is shorter than the previous one, we’re on the right track.
  • 200 hours. We should spend about an hour of reporting for every 100 hours of project work. A weekly status report for a team of 5 (roughly 200 hours of work) represents about 1% overhead, and takes 10 minutes (under 1% of the reader’s time) to read. A period of more than two weeks per report loses the sense of immediacy and urgency that would be required to actually fix something that is broken. And that’s the whole point – to allow our audience to respond to problems, and plan for change. If the problem happened three weeks ago, it had better already be addressed by the time the report is written.

Communicate status against the plan

The red/yellow/green metaphor works best when it presents status in context. When the reader of the status report approved our product development plan, or our release schedule, that created the context. Will we meet our commitments? That’s the main message. Tracking against the plan can also make it easier to write the status report.

[Update 2006-10-01: Reporting progress against a set of deliverables can be accomplished very effectively with a burndown graph.]

Summary

The reader of a status report is asking two questions. Our status report needs to answer them.

Will you meet your commitments?

If not, how can I help?

Maine Mangles Medicaid – Charges CIO


Allan Holmes, writing for CIO Magazine, just posted a scathing and detailed autopsy of the disastrous Medicaid Claims System project run by CNSI and launched in January of 2005. Requirements elicitation failures combined with incompetent vendor selection and project mismanagement led to a $30,000,000 oops for the state of Maine, jeopardizing its credit rating. The system failed to process 300,000 claims in the first 3 months of operations, causing many health care providers to close their doors – and presumably causing citizens of Maine to go without needed services. Maine is the only state in the union (as of April 2005) not complying with federal HIPAA regulations.

Autopsy results

There were crucial failures in essentially every step of the project. We’ll look at each of the following areas:

  1. Defining requirements and creating an RFP (request for proposal)
  2. Vendor selection
  3. Requirements validation
  4. Risk management
  5. Execution (Project management and development)
  6. Testing
  7. Deployment / Change Management / Training

Defining requirements

April 2001. Maine issued an RFP for the new HIPAA-compliant system. By the end of the year, only two bids were placed – one for $15 million and one for $30 million. Holmes tells us that this is a sign of a bad RFP:

…says J. Davidson Frame, dean of the University of Management and Technology in Arlington, Va. “Only two bidders is a dangerous sign,” he says, adding that the low response rate indicated that potential bidders knew the requirements of the RFP were unreasonable.

Requirements elicitation done poorly is the major source of defects in any project.

Taking a step back, we see from Holmes that Maine decided to use a new (to them) technology and develop the software themselves instead of outsourcing. The justification was that it would be easier to adapt to changing requirements (this becomes ironic later – read on).

The development of the new system was assigned to the IT staff in the DHS, which decided it wanted a system built on a rules-based engine so that as Medicaid rules changed, the changes could be programmed easily into the system.

Vendor selection

Quoting Holmes:

In this case, the low bidder, CNSI, had no experience in building Medicaid claims processing systems. In contrast, Keane had some experience in developing Medicaid systems, and the company had worked on the Maine system for Medicaid eligibility.

OK, maybe not so bad, but wait – more irony. The final costs (to the State) of going with the low-cost vendor exceeded the bid from the high-cost vendor.

Requirements validation

To begin with, the 65-person team composed of DHS IT staffers and CNSI representatives assigned to the project had difficulty securing time with the dozen Medicaid experts in the Bureau of Medical Services to get detailed information about how to code for Medicaid rules. As a result, the contractors had to make their own decisions on how to meet Medicaid requirements. And then they had to reprogram the system after consulting with a Medicaid expert, further slowing development. [emphasis ours]

We wouldn’t use the same language as Holmes; we would say “… the contractors decided to make their own interpretations of how to meet Medicaid requirements.” They never had to do it – they chose to do it. In Where bugs come from we show the impact of having or not having a feedback loop for validating requirements. Not having that feedback loop was either a decision of incompetence or hubris.

No one is blameless for this mistake. Maine’s IT department is responsible for making sure the contractors are doing what they really want. The contractors are responsible for doing what Maine wants. At a minimum, the SMEs should have been interviewed, and the contractors should have at least used active listening techniques to validate their interpretations of the statutes. This goes all the way down to the developers, who should have insisted on understanding the context in which they were coding. They should have asked “why?” until they got answers.

Risk Management

New vendor. New technology. Maine knew that the requirements were not good.

Thompson decided that the six months that would have been needed to redo the RFP was too much. “We had a requirement to get something in place soon,” Thompson says.

No access to SMEs (subject matter experts). No system tests (more on that later). No backup system. No contingency plans if the system didn’t work.
If there was a risk management plan in place, it certainly didn’t change the course of events.

Execution

Starting with project management:

  • Oct 2001. CNSI is selected as vendor – project length: 12 months.
  • Fall 2002. Project timeline doubled to an Oct 2003 delivery.
  • Fall 2003. No delivery.
  • Fall 2004. No delivery.
  • Jan 2005. System goes live.
  • Apr 2006. System now (claimed to be) operating at same level as legacy system.

And with development (here’s the aforementioned irony):

The development of the new system was assigned to the IT staff in the DHS, which decided it wanted a system built on a rules-based engine so that as Medicaid rules changed, the changes could be programmed easily into the system.

Errors kept cropping up as programmers had to reprogram the system to accept Medicaid rule changes at the federal and state levels.

Wow.

Testing

Hey, testing is optional.

testing the system from end to end was dismissed as an option. The state did conduct a pilot with about 10 providers and claims clearinghouses, processing a small set of claims. But the claims were not run through much of the system because it was not ready for testing.

Conclusion

Holmes presents excellent conclusions about the HIPAA project. Our conclusion – we need more people in Maine to read the blog. If you know someone in Maine, send them a link. In some seriousness, there’s a T-shirt that says “If you can’t be a good example, be a horrible warning.”

Thanks for the warning, Maine!

Getting agile – should we?


Should we adopt an agile process for our team? Methods and Tools has posted a two-part article titled Adopting an Agile Method. In their article, they explore five areas of consideration. We provide our thoughts on each area.

Five areas to consider (from the article)

  • Our organization’s culture
  • Our customers and how they prefer to interact with us
  • The types of projects we do
  • The tools and processes that we currently use
  • The strengths and weaknesses of our software-related staff

They go on to make points in each area. Here are our thoughts on them.

Culture

The toughest hurdle for any organization going agile is adopting the notion of dynamic planning. With an Agile project, our expectation is that we do not know what the end state should be when we start the project. Some writers create an outline first, then fill it in. Others just start typing and “let the story tell itself.” Moby Dick is a classic example of a story telling itself. Melville starts the book with a lot of emphasis on Bulkington, intending him to be a major character. On the day the Pequod sets sail, Melville changes his mind and washes him overboard. Ahab takes the main role from there. [Analysis by Ansen Dibell in Elements of Fiction Writing].

In the debate between Cooper and Beck, Cooper charges that Agile processes are designed to make changes tolerable. Beck contends that they are inevitable.

If our culture is uncomfortable with the expectation that change will happen, then Agile will be a struggle.

Customer involvement

Agile processes require customer feedback and involvement throughout the process. Without feedback from customers, incremental delivery becomes incremental construction. We would still get the development-efficiency gains from iteration and introspection, but we would lose the much larger gains that come from redefining our objectives to focus on the right requirements.

Types of projects

The article promotes Agile as being more applicable to smaller projects subject to excess change. We think Agile, if adopted, can be applied to any project. We would ask, which projects are better off discovering that the wrong requirements were implemented after the project is complete, instead of in the middle? An understanding of sunk cost makes this all but self-evident. The less money spent on implementing the wrong thing, the more we stand to gain.

Current process and tools

The article points out that effective Agile processes are dependent both on automated testing and good source code control. Without either, the overhead associated with each iteration makes it too expensive to be Agile. We have to be able to automate testing (both technologically and culturally). We also need to be able to version and branch our source code. If we are in a code-freeze-test-debug cycle, we will get crushed by the burden of additional testing that comes with incremental delivery.

Staff skills

The article talks about the need for higher absolute capability from Agile team members. We think that this isn’t true. What is required is more rigorous attention to process. People who forget to run the tests before promoting their code hurt every project. The pain is more acute for Agile projects, which depend upon the presumption of a valid baseline as a starting point for enabling more aggressive refactoring.

Summary

We talked about the benefits of being agile yesterday. To make Agile processes work,

  • We have to have people capable of following process.
  • They have to be equipped with the tools to automate testing and reduce release-overhead.
  • We have to have stakeholders and customers that can be engaged throughout the development cycle for feedback.
  • We have to have managers who are willing to plan on changes to the plan.

We also need to watch out for the top ten mistakes of adopting agile processes.

Gartner research on Agile Requirements Definition and Management (RDM)


Gartner has a research report available for $95, titled Agile Requirements Definition and Management Will Benefit Application Development (report #G00126310 Apr 2005). The report is 7 pages long and makes an interesting read. Gartner makes a set of predictions for 2009 about requirements definition and management (RDM) systems, and the software created with RDM tools. Gartner misattributes several benefits of good process to RDM tools. We give them a 3.5/7 for their analysis – check out the details here.

Here’s the excerpt from their website (AD = application development):

The flexibility with which requirements are gathered and managed shows how disciplined an AD process is. AD organizations with automated requirements definition and management environments will better support change control, gain testing efficiencies and reduce future maintenance burdens.

Gartner Predictions

  • The cost of quality for developed systems will drop by 30%
  • Maintenance costs will drop by 10%
  • User satisfaction will go up from ‘fair’ to ‘good’ for medium and large applications

RDM systems have a meager impact on these predictions – other trends and processes are just as likely to affect them, hence our low rating of the Gartner report. Read on to see where they err.

Reduced cost of quality (RDM score 2/3)

Better requirements lead to fewer bugs. In Where bugs come from, we talk about the introduction of bugs from different sources in the process. The area where RDM can help is with misinterpretation of requirements. A structured system makes it easier to validate the alignment of proposed designs with requirements. RDM score = 1/1.

Incremental delivery processes will likely have a greater impact on cost of quality, as they help us correct mistakes in requirements documentation. This source of bugs (doing the wrong thing right) is higher up in the process, and thus has a larger impact on the bottom line. Gartner touches on this trend as well, but does not try to tease out the potential benefits and isolate them from the “benefits of RDM” analysis they present. RDM score = 0/1.

Traceability in RDM tools can provide large benefits in the cost of quality. To achieve these savings, RDM users must trace their testing to the requirements. This traceability helps uncover gaps in test coverage of the requirements, resulting in fewer bugs released to the field. RDM score 1/1.

Maintenance costs will drop (RDM score 1.5/3)

There are three trends that can drive lower maintenance costs for software. RDM can reasonably play a role in 1.5 of the 3.

  1. Improved design and implementation. RDM doesn’t make design better. Incremental construction creates opportunities for refactoring the code. RDM score = 0/1.
  2. Fewer gratuitous features. Most software today has too many features. Optimizing the features for a product will reduce the size of the code, and therefore reduce costs. RDM provides traceability of features to top level objectives (ROI), making it easier to identify and descope marginal features. RDM score = 0.5/1.
  3. Cheaper labor. Outsourcing can reduce the cost of development labor, but not as much as you might think. Joel Spolsky shows us that 80% of the costs aren’t programmers, so the upper bound on savings is small (see the back-of-the-envelope sketch after this list). Tarun Upadhyay provides some real-world data showing that the total cost savings can be between 8% and 38% when replacing 70% of US developers with Indian developers. The extra savings comes from moving some of the abstraction layer (supporting roles and overhead costs) to India as well. RDM makes this possible by providing a means for rigorous asynchronous communication between team members operating in different time zones. RDM score = 1/1.
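
A back-of-the-envelope illustration of why that upper bound is small (the percentages here are illustrative assumptions, not figures from the Gartner report or from Tarun’s data):

    total_cost = 100.0            # normalize total project cost to 100 units
    programmer_share = 0.20       # Spolsky's point: roughly 80% of the cost isn't programmers
    offshore_discount = 0.50      # assume offshore labor costs about half as much

    # Offshoring only the programmers caps the savings at ~10% of total cost.
    savings = total_cost * programmer_share * offshore_discount
    print(f"Programmers only: ~{savings:.0f}% savings")

    # Moving an assumed 30% slice of supporting roles offshore as well pushes it higher.
    support_share = 0.30
    savings += total_cost * support_share * offshore_discount
    print(f"Including supporting roles: ~{savings:.0f}% savings")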

User satisfaction will rise (RDM score 0/1)

User satisfaction is ultimately increased when software is focused on user task-accomplishment, or goal-achievement. Process approaches like Alan Cooper’s interaction design process make this more likely to happen. Existing RDM products use a structured-requirements representation, most aligned with Karl Wiegers’ framework. With Wiegers, the main tool for capturing user tasks is the formal use case.

While it is possible to combine interaction design and structured requirements processes, none of the current RDM products do this. Our prediction – none of them will make this change in time to impact Gartner’s predictions. As interaction design gains momentum in the product management space, customer satisfaction will definitely go up. But it will be in spite of RDM systems, not because of them. RDM score = 0/1.

Conclusion (RDM score 3.5/7)

The optimism about improved economics of software development and improved levels of user satisfaction is heartening. Unfortunately, Gartner has tied too much of this optimism to the proliferation of RDM systems (they also forecast strong growth in sales of RDM systems). We pointed out that requirements management software will not solve the problems with bad processes.

We are excited to see Gartner focus on RDM and on RAD processes. Executives are very swayed by Gartner opinions. If RDM retools to support interaction design, it could indeed live up to these predictions.

Two big benefits of incremental delivery

Tarun Upadhyay wrote a fair criticism of our previous post on why incremental delivery is good on his blog today. It is great that he is extending the conversation, and he makes a couple valid points. We definitely missed a big benefit of incremental delivery, and will cover it in this post.

Here are the main points from Tarun’s critique:

The analysis is rather simplistic and does not assume any additional gains from having all four pieces working together (typical in many but not all projects) and also does not take into many other benefits from agile iterative releases like:
a) creating a release structure forces good habits like: continous integration, automated build management and consistent configuration across development, QA and production
b) many releases forces customer to see the product early which reduces surprises and produce better alignment around what customers want vs. what the team is developing
c) earlier releases brings out many feature requests from the customers earlier in the cycle (causing fewer design changes and less rework) reducing the overall costs.
d) estimates are better when deliveries are iterative.

We like incremental delivery. We don’t promote it because of the correlated benefits that often happen when we do incremental delivery. We promote incremental delivery because of the two big benefits that are caused by incremental delivery.

Two Big benefits

  1. Achieve ROI faster from earlier deployment.
  2. Higher absolute ROI from deployment of more valuable software.

Achieve ROI faster

Our previous post on why incremental delivery is good focused on this single benefit. By delivering independent, atomic sets of functionality as early as possible, we can begin getting ROI from the software faster than if we waited until the originally scoped software was complete to release it.

ROI faster graph
This chart shows ROI versus time for a simplified example of delivering the most valuable requirements first, prior to completion of the least valuable requirements.
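
A small sketch of the idea behind the chart, with invented numbers: two requirements of unequal value, shipped incrementally versus all at once, and the cumulative value accrued by each month.

    # Hypothetical example: requirement A is worth 10 units/month once live, and
    # requirement B is worth 2 units/month. Incremental delivery ships A at month 2
    # and B at month 4; the single "big bang" release ships both at month 4.
    def cumulative_value(releases, month):
        """releases: list of (ship_month, value_per_month); value accrues after shipping."""
        return sum(max(0, month - ship_month) * value for ship_month, value in releases)

    incremental = [(2, 10), (4, 2)]
    big_bang = [(4, 10), (4, 2)]

    for month in range(1, 7):
        print(f"month {month}: incremental={cumulative_value(incremental, month):3.0f}, "
              f"big bang={cumulative_value(big_bang, month):3.0f}")

The incremental curve starts generating value two months earlier, and in this example the gap never closes.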

Higher absolute ROI

We completely overlooked this other big benefit of incremental delivery in our previous post. A key premise of why agile methods are better is that we learn as we go. Once we start writing the software, we begin to learn more about it. Through iteration and prototyping we gain a better understanding of the requirements.

When we take advantage of that knowledge, we improve requirements (make them more valuable) and replace requirements (with more valuable requirements). Therefore the requirements that we ultimately implement are more valuable than the ones we initially identified. The result is that the absolute ROI will be higher when we use incremental delivery.

Tarun’s points b & c are also addressed here. He points out that it costs less to change the software before it is written than after it is written. The iteration and feedback cycles definitely give us this benefit. To his credit, the argument can be made this way. We believe the argument is more compelling when presented in terms of differential value than differential costs.

Other possible benefits

There are other sources of benefit, though marginal in comparison with the big benefits. These benefits, however, don’t come from incremental delivery, nor are they prevented with waterfall delivery. Teams that deliver incrementally tend to also have other beneficial processes. Introducing incremental delivery processes at your company may create a vehicle for making these other things happen, but they aren’t strictly required.

Lower cost of quality

The earlier we catch bugs, the less they cost to fix. We can take the same testing approach with both waterfall and incremental project plans, so assuming that incremental delivery requires or forces better quality is wrong. It would also be wrong to say that a waterfall process prevents better quality. When comparing incremental delivery with alternative approaches, we have to isolate those things that must be different from those that might be different.

Think of it in terms of correlation and causality. Teams that deliver incrementally tend to have better processes – but that’s correlated, not causal.

The only difference that is caused by incremental delivery is that we get end-user bug reports earlier. These bugs might influence fewer users if our incremental releases are designed to have incrementally increased user-bases. With a smaller code base (at the time of bug-fixing), it might cost us less to fix the bugs that are reported in early releases.

Automated process steps are more efficient

The claim that an incremental delivery process must be more efficient than a waterfall process simply isn’t true. We’ve worked with teams that release new functionality every month with painfully manual build and test processes. A waterfall process may have nightly builds, automatically pulling the trunk from source control and running regression, performance and load tests every day on dedicated hardware. This type of development-process enhancement can happen with or without incremental delivery schedules, and incremental deliveries can happen without these beneficial processes.

Improved estimation

Respectfully, we disagree.

Estimation improves when the estimator reviews previous estimates and gets better at providing estimates over time. This is a personal development process, and can be accomplished by any developer working on any project. There is nothing that prevents a waterfall project from reviewing estimates throughout the course of the project. Without the benefit of constraining a development team to timebox-based delivery, incremental delivery is harder to estimate than waterfall delivery.

Incremental delivery is harder to estimate because we fully expect to change the scope of the project as we go. Individual task estimates can be updated in either process approach.

The benefit of waterfall process estimation is that we can confidently predict how long it will take us to implement the wrong requirements.

Summary

Incremental delivery is valuable because we get returns earlier, and by adapting to feedback from the early deliveries, we can improve the requirements resulting in higher absolute ROI.

Persona Grata


Different people approach the same goal very differently. When we don’t truly identify our users, we end up with software that dehumanizes, waters down, and otherwise fails to succeed at anything more than grudgingly tolerated functionality. Even worse, we may ignore the needs of our key demographic, resulting in software failure. When we use personas instead of generic use cases, we can avoid both the misery of a failed product and the mediocrity of marginal success.

Different Strokes for Different Folks

James Kalbach has a post at Boxes and Arrows about how to design for the four different modes of seeking information. This makes for a great analogy to how different people will approach the “same” task differently. All of these examples can be generically classified as “User searches for information on the internet.”

  1. Searching for information about a known item. Imagine that you know a bunch about growing chile peppers. You want to learn the specifics of how to grow a habanero plant in Zone 3 of North America. You know what to look for, you know what terms to use to look for information. You know what questions to ask, you just need answers.
  2. Exploring an area of knowledge. You are tired of your job, and you want to find out if starting your own franchise business is a good idea. You have a reasonable idea of a starting point and general questions, but no idea what detailed questions to ask.
  3. Not sure what you even need to know. You decide to invest in a vacation property. You have no idea what you need to understand in order to know what questions to even ask.
  4. Repeat searches. Two years ago you learned about what it means when the yield curve inverts. Now that it’s time to rebalance your portfolio, you need to refresh your understanding of macro-economics.

Each of these examples represents a distinctly different user goal, even though they are all users of the same search software, and they are all searching for information. The same solution is unlikely to be ideal for all of them.

Personas for Requirements Management Software

Consider requirements management software and the users that are forced to live with it (most of us). Only enterprise software is consistently more obtuse about the notion of different users requiring different interfaces. RM software like Caliber RM, DOORS, RequisitePro and their ilk all suffer from the design specifications of myopic requirements experts. They suffer from other things too – featuritis, failing to clear the suck threshold, and an expert-only interface.

Requirements management software is intended to be not only a central repository for documenting requirements, but also a dynamic hub of information. Everyone on the team should be able to use the system and consume information from it. When the software is designed for a single user (the requirements manager who inputs and manages the data), it makes it harder for other users to isolate the information that is relevant to them. An RM system is designed both for input and output, and where the major systems consistently drop the ball is in output.

Let’s look at three of the personas who need to consume status information that comes out of a requirements management system. We’ll start with their roles and their corporate goals.

  • Program Manager. Responsible for the management of the requirements for a software application.
  • Development Manager. Responsible for delivering the functionality required to support each requirement in each release of the software. Also responsible for growing the development team’s capabilities.
  • Project sponsor. Responsible for delivering increased sales across every channel in every region the company serves. Funded the project.

Program Manager – Joe K.


Joe has been with the company for two years, ever since the startup he was at was acquired. He has a passion for driving new product innovations, has friends all over Silicon Valley, and knows the latest ideas being explored by the current wave of startups. He’s driven some real innovation in embedded software development before. This is his first project with a user interface. Joe competes in an ultimate frisbee league every weekend, to burn off the energy he builds up with late nights at work. Joe’s biggest challenge isn’t getting the work done, it’s determining which work to do – he’s bursting with ideas. When Joe looks at the status of the product, he’s concerned with whether the dev team is hitting, or will hit, all of the releases. When they have to change the schedule, Joe works with the stakeholders to reprioritize and works with the development manager to find out what can be done.

Development Manager – Thom Jai

Thom is all but burnt out. He’s managing development for two products right now, and both teams are global, with people down the hall and people on the other side of the planet. When he’s not trying to meet deadlines, he’s trying to figure out how to scale the offshore operations to save the company money. Thom has a family at home, and the midnight conference calls are really starting to frustrate him. He can’t cancel them, because his developers always have questions about what to implement, and he has no time in the schedule to let anyone lose a day while he waits to get an answer to a question. He has to interpret the requirements for his developers and provide them with context, clarification, and design suggestions. Maybe next month he’ll be able to work on performance reviews. When Thom is looking at the status of the product, he is trying to make sure that requirements aren’t changing after they’ve been scoped or scheduled. He also wants to know whenever the schedule changes for a requirement, or anything that the requirement depends on.

Project Sponsor – Julie Rogers

Julie manages a two-hundred person global service operation. Julie quickly rose up the sales-management ranks because of her ability to create strategic relationships with her clients. Every account she ever signed is still a customer. Julie contacts a different customer every week just to ‘check in’ and make sure the customer is happy. Her managers told her that their number one problem in closing more deals is getting the right prices for the products. Julie commissioned the IT department to build her software that helps her regional managers set the right prices for the company’s products. The IT team promised to deliver an application that could report on historical prices by the end of the quarter. By the end of the year, the application will even provide profit information, lead-time data (another big problem), and the ability to review customer-history when closing a deal. Julie’s daughter is in high school and applying to colleges across the country. Julie is spending every minute she can helping her daughter when she isn’t running her sales organization. She wants updates on the status of the project, but does not care about the details – “it’s like making sausage” is her favorite quote. She knows what her team needs, and she knows what Joe has committed to deliver.

Different roles, different goals

Each of our personas has very different goals. Joe wants to deliver the most valuable requirements as fast as possible. Thom wants to make sure the ground doesn’t shift beneath his feet – he’s busy enough making sure his team can execute to meet their commitments. Julie wants to increase sales, and believes the software will help, once it’s been deployed to her team.

Persona non grata

If we ignore the different needs of our different users, we can use the same (and only) interface for all of our user tasks.

Julie will look at the schedule for the top-level goals in the system. Joe has explained that all of the structured requirements in the system roll up to those goals, so she can ignore everything else. Julie is immediately overwhelmed with information, but with some coaching is able to ignore the noise and focus on the signal (the scheduled dates for the goals). She’s content that everything appears to be OK. What Julie doesn’t realize is that the underlying requirements for one of her main goals have slipped by a release, and the date for the goal hasn’t been updated. She would not be able to work through the traceability matrix in the application to find the problem.

Thom is struggling – every morning he sees that at least a dozen requirements for the current release have changed. He has to keep an archived version of the SRS and compare the two files to see which changes are irrelevant (typos, etc.), and which ones affect his scoping, or change functionality after it’s been implemented. Thom can use the interface, but it’s yet another complex system that he doesn’t have time to learn. He’s frustrated that he has to spend any time at all doing it, and especially frustrated that the one thing he needs the most (to understand what has changed, at a detail level) has to be done manually by him.

Joe is snug as a bug in a rug. The software works just like he expects it to work. Sure some steps are manual and tedious, but hey – managing requirements is hard work. Since Joe spends so much time in the application, all of the traceability is intuitive, he knows where everything is, and he’s even learned the shortcut keys for jumping around and mass updating of files. Joe doesn’t spend any cycles thinking about the UI, because it was designed for him – he gets to spend all of his time on his work.

Persona grata

Just thinking about the problems from the perspective of our key users makes the problems glaringly obvious. Why didn’t the RM software vendors tackle these problems? Aren’t they supposed to be requirements experts? Shouldn’t they have an appreciation that a central tool for a team is used by more than one team member? They implemented row-level locking in the database to support multiple users (says so right in the press release). Shouldn’t they realize that the simultaneous use is by different people? Someone needs to hurry up and build a requirements management system that is designed for all of the people that use it. Or at least the most important ones.

We can learn from their mistakes

When we are gathering requirements, if we do it in the context of the user personas, we can create great software. Goal driven development is a great framework for doing this. When we keep things at an abstract level, we run the risk of making the software unusable by key users. It may not prevent a sale, but it certainly jeopardizes a renewal or future sale.