Monthly Archives: February 2006

Second-Mover Opportunities: Bringing a Gun To a Knife Fight

Thanks to Harry Nieboer for catching this one in his post, Spending time on features that are never used.

IBM developerWorks has an introductory article by Laura Rose in its Rational section titled Involving customers early and often in a software development project.
Laura’s main point

The main point of Laura’s article is the importance of engaging users to find out what they really care about. In this post we are going to pick up on another point she makes indirectly.

[chart: feature usage breakdown – image from IBM developerWorks]

What isn’t as obvious is that, of those 100 features, a total of 64 percent are rarely or never used by any customer. But to stay seemingly competitive to the other brands, we incorporate superficial solutions to the unneeded 64 features, allowing us to “checkmark” those rows (making our product more complicated and frequently more difficult to use).

Laura’s accidental point
Laura also points out indirectly that the inclination of companies is all too often to build software that looks good on paper instead of software that is good in practice. A sort of rat race of me-toos and mimicry. Companies that add features solely because the competition has them are in for trouble.

In our previous post on applying Kano analysis to requirement prioritization, we talk about how innovation, and more specifically differentiated innovation, is key to the success of software. If we base our requirements on keeping up with the competitors, we aren’t setting the stage for innovation, we’re setting the stage to be second-movers.

The second-mover advantage myth

It may not be bad to be a second mover. Atari moved first; Sony, Nintendo, and Microsoft now dominate the market for video games. But being a second mover doesn’t provide an advantage – it presents an opportunity. Activision, Coleco, and Intellivision were all second movers too, and they vanished into obscurity.

Sorting out the second movers

In one camp we have the failed companies, Activision, Coleco, and Intellivision (ACI, collectively). In the other camp, we have the thriving (or at least surviving) companies, Sony, Nintendo, and Microsoft (SNM).

The ACI companies did not differentiate their products. Team ACI had two major problems: they turned out a bunch of really poor products that would have failed even in a static market, and they failed to respond to changes in their market. Not only did they not innovate or differentiate, they failed to live up to existing standards. They also failed to respond to the changes in the marketplace caused by low-cost personal computers. The PC technology was disruptive, and the marketing was ruthless (“your college-bound kids need a computer more than a game console”) and effective. The new PCs could provide the same gaming experience at a comparable price, and they had additional value as computers. The ACI companies showed up for a gunfight with knives.

The SNM group innovated and differentiated. They had the opportunity to see how the market was changed by the PC companies, and to see that customers actually cared about the quality of the products. The SNM companies realized that as long as PCs could provide a comparable gaming experience, PCs would dominate the market because they were differentiated by their computing capabilities – so SNM created gaming experiences that could not be matched by the PCs of the day. The SNM group also saw that their products would need to be good to sell. Their strategy was effective. The SNM companies disrupted things by showing up to a knife fight with guns.

Being a second mover didn’t provide an advantage; it provided an opportunity. ACI failed to capitalize on that opportunity, while SNM created an advantage from the opportunity they were presented.
How to act as a second mover

  1. Show up to the knife fight with a gun. Redefine the rules of the game. Look at the feature lists of your competitors and guess at the underlying requirements. Find alternate problems to address, or address the same problems in innovative ways. Make those features irrelevant. When I use Gmail, I don’t care whether I can copy all my emails from one folder to another – Gmail doesn’t even have the concept of folders.
  2. Elicit the right requirements. See Top five requirements gathering tips for more.
  3. Ideate to determine which problems to address and which to make irrelevant. See From MRD to PRD […] for more.
  4. Prioritize requirements ruthlessly. See Prioritizing software requirements […] for more.
  5. Implement with high quality.

Prioritizing Software Requirements – Kano Take Two


Let’s try this again.

In our previous post on Kano requirements classification, we introduced the concepts and showed how to apply them. One of our readers commented privately that we didn’t show how to use the techniques for prioritization. We’ll do that in this post. Thanks very much for the feedback!

Kano analysis recap

Kano provides three relevant classifications of requirements (the fourth category is redundant). All requirements can be placed in one of these categories.

  1. Surprise and delight. Capabilities that differentiate a product from its competition (e.g. the nav-wheel on an iPod).
  2. More is better. Dimensions along a continuum with a clear direction of increasing utility (e.g. battery life or song capacity).
  3. Must be. Functional barriers to entry – without these capabilities, customers will not use the product (e.g. UL approval).

Using the categories to prioritize requirements (discussing them in reverse list-order)

[photo of a tiny Fiat – a mandatory decree *]

3. The first release of the software should include primarily ‘must be’ requirements.

We talked about how 37signals and other companies have taken the “less is more” approach to releasing their software. The first releases (or beta releases, which has become the misnomer du jour) of successful products have focused the majority of their effort on these highest-priority requirements.

This trend has evolved because of what we used to call internet time. Products are being released more quickly by startups, skunk-works, and other teams operating with Agile development approaches. The dot-bomb generation of software developers has almost a decade of experience now, and is increasingly influencing company decision making, with the benefit of the lessons it learned about hype and substance. These teams and leaders are driving relevant and differentiated innovation into the marketplace faster than ever before.

Geoffrey Moore is a big thinker and the author of Crossing the Chasm, Inside the Tornado, and now Dealing with Darwin: How Great Companies Innovate at Every Phase of their Evolution. (Chasm and Tornado are two of my favorite books about innovation – great ideas, and quick reads.) He recently posted an article, Top 10 Innovation Myths, that is in line with his new book and definitely worth a read. One of Mr. Moore’s points is that innovation isn’t the goal – product differentiation resulting from innovation is the goal. He’s absolutely right. An innovative way to minimize the window of an application isn’t likely to differentiate the product from its competitors. An innovative way to automatically validate requirements would be the proverbial better mousetrap.

Making things even more competitive is the increased speed and reduced cost of getting the message out. Idea virus is a term coined by Seth Godin, and every day another company learns how to do it. When a new product gets dugg or slashdotted, the stampede on the server is like the mobs that went after super-cheap computers this past Christmas. The traffic can be overwhelming enough to shut down the servers. And that still doesn’t stop the flow of traffic for really hot ideas – people will post links to a cached version of the page at Google, and the idea virus keeps spreading.

It isn’t enough to be fast, as Mr. Moore points out. But it also isn’t enough to be differentiated. Both are important – neither is sufficient alone. The twin dynamics of smarter, faster competition and cheaper, faster, effective marketing demand that we focus on reducing time to market.

All of the must be requirements need to be included in the first release. If we could release the software without implementing features to support a must be requirement, then either no one will use the software, or the requirement is not truly a must be requirement.


2. ‘More is better’ requirements need to be rationally defined and then prioritized based on ROI.

We showed in our previous post how to find the Pareto-optimal point for specifying a more is better requirement. This is the point where additional investments in the measured characteristic are not offset by comparable gains, due to the law of diminishing returns. We showed that optimal point to be where the slope of the cost-benefit curve is 1 (or 100%). What we didn’t account for is the opportunity cost of spending those development resources on other features, capabilities, or projects. If we have a hurdle rate of 20% for investments, we should find the point on the cost-benefit curve where the slope is 1.2 (120% benefit for 100% cost). This normalizes our cost-benefit decisions across projects.
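
To make the hurdle-rate version concrete, here is a minimal sketch in Python. The cost and benefit numbers are hypothetical (they come from no real project), and the sketch assumes the usual diminishing-returns shape, where each successive level of the requirement returns less per dollar:

    # Hypothetical (cost, benefit) estimates for successive levels of a
    # 'more is better' requirement, in dollars. Diminishing returns are
    # assumed: each level returns less per incremental dollar than the last.
    estimates = [
        (10_000, 18_000),
        (20_000, 32_000),
        (30_000, 42_000),
        (40_000, 48_000),
    ]

    HURDLE = 1.2  # 20% hurdle rate: each marginal dollar must return $1.20

    def optimal_level(points, hurdle):
        # Walk the curve, keeping the last level whose marginal
        # benefit-to-cost ratio still clears the hurdle.
        best = None
        prev_cost, prev_benefit = 0, 0
        for cost, benefit in points:
            ratio = (benefit - prev_benefit) / (cost - prev_cost)
            if ratio < hurdle:
                break  # diminishing returns: later levels only get worse
            best = (cost, benefit)
            prev_cost, prev_benefit = cost, benefit
        return best

    print(optimal_level(estimates, HURDLE))  # -> (20000, 32000)

With a hurdle of 1.0 (the slope-of-1 rule from the previous post), the same sketch would accept the $30,000 level as well; the 20% hurdle stops one step earlier, which is exactly the normalization across projects described above.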

The key to scheduling more is better requirements is to take advantage of the fact that they represent a continuum of performance, and a continuum of benefit from that performance. In the earliest release(s), include a minimal amount of the requirement – not the optimal amount. The optimal amount can be added later. We refer to this as requirement staging – implementing portions (or versions) of a particular requirement across multiple releases.

1. Surprise and delight requirements can be differentiators.

We care about surprise and delight features NOT because they are whimsical, but because they are valuable. A delightful splash screen doesn’t make software easier to use – but it has value in both branding and word of mouth marketing. It’s the buzz-marketing equivalent of great packaging for a physical product (like the ketchup bottle that stands upside down). Self-configuring software and error messages with “click here to automatically fix it” buttons are other examples of surprises with value. These examples can increase the number of possible users, or help novices clear the suck threshold more quickly. These are the kinds of things that make our customer-salespeople (as Seth Godin describes them) more likely to promote our products.

When prioritizing these requirements, we have to consider our marketing strategy to determine their importance. Are we a startup and is this our first product? Are we a behemoth rolling out another bolt-on package for our installed base? These types of features are most valuable when individuals are making purchasing decisions and when we’re relying on word of mouth marketing. They are least valuable when the decision maker is three layers (and six figures) removed from the actual users – someone who can pragmatically decide that usability is irrelevant because he doesn’t have to use it personally.

* The tiny car in the picture is a Fiat. A fiat is a legally binding command or decision.

Dilbert gathers requirements


Another great Dilbert – http://dilbert.com/strips/comic/2006-02-26/

I won’t show the cartoon here, but here’s a quote from the first two panels:

Pointy-haired boss: Why is your project four months behind?
Dilbert: I still don’t have the user’s requirements because she’s a complete nut-job.
[…]

This cartoon does point out the critical importance of eliciting the requirements, not requesting the requirements. We can’t expect our stakeholders to just put together a requirements doc and hand it to us. Even if they happened to be trained in requirements management, gathering requirements will not be their top priority.

Instead of asking his user for requirements, Dilbert should have interviewed her.

Great Dilbert products

The latest book (Nov 2005) from Scott Adams: Thriving on Vague Objectives

From D. Reller:

Another Dilbert collection, another great Dilbert collection. Highly recommended, of course, if you work in an IT/IS/MIS department, or just in an office. They never get old, or out-of-date. Mr. Adams hasn’t slowed down or backed off one bit – Ratbert the $100 million CEO-reject, multiple Wally updates, Dilbert and dating….

Reading too much to buy another book? Check out the DVDs
Dilbert – The Complete Series (1999)

From Adam Dukovich:

The voice acting is excellent. The characters all sound just as they would be expected to when reading the comic strip. Daniel Stern as Dilbert, Larry Miller as the clueless boss (who had to have been based off of one of Scott Adams’ bosses), Larry Charles as slacker engineer Wally, Kathy Griffin as Alice, the triangular-haired female engineer, and Chris Elliot as Dilbert’s sidekick Dogbert, not to mention a parade of guest stars, including Tom Green, Andy Dick, Jerry Seinfeld, and Jason Alexander. Put simply, this show had talent to burn.

Prioritizing Software Requirements With Kano Analysis


Using Kano analysis to prioritize software requirements

We’ve talked before about three ways to prioritize software requirements. We’ve also talked about incorporating risk analysis into ROI calculations for requirements. In this post we will look at how Kano analysis can be applied to prioritizing requirements.

Kano analysis allows us to prioritize requirements as a function of customer satisfaction. Kano defined four categories into which each feature or requirement can be classified (we’ll use an Apple iPod for examples in each of the categories):

  1. Surprise and delight. Capabilities that differentiate a product from its competition (e.g. the nav-wheel on an iPod).
  2. More is better. Dimensions along a continuum with a clear direction of increasing utility (e.g. battery life or song capacity).
  3. Must be. Functional barriers to entry – without these capabilities, customers will not use the product (e.g. UL approval).
  4. Better not be. Things that dissatisfy customers (e.g. the inability to increase song capacity via upgrades).

Surprise and delight requirements

For just a moment, think about software as a user, not as an accountant. We want software that is fun to use and interesting – affordances in the user interface that let us just “do what comes naturally” and have the software do exactly what we want, new ideas that make software better. We aren’t talking about a button that pops up dancing squirrels when we hit it – we’re talking about valuable features that make software great.

  • The nav-wheel on the iPod is a great hardware example.
  • Gmail’s use of labels instead of folders for organizing email is a good software example.
  • Contextual help buttons that open to exactly the right page in a help file.

All of the examples above are implementation details or the results of design decisions – which we’ve pointed out before are not part of specifying requirements. However, when converting from market requirements to product requirements, we can point our development teams in the right direction, and help them focus on innovating on the right problems. These might be the requirements behind the delightful features listed above:

  • Users must be able to select songs while holding the iPod in one hand.
  • The system must provide an efficient way to organize emails, with the assumption that users will never delete emails.
  • The system shall provide relevant help information for the context in which the user requests help.

More is better requirements

These are the most easily grasped concepts – bigger, faster, better, stronger. The challenge in writing a more is better requirement is knowing when enough is enough. If we were to write requirements that said “minimize” or “maximize”, they would be ambiguous. What’s the theoretical minimum response time for a search engine? A few hundred microseconds for the bits to travel from the server to the user, plus a few microseconds for switch latency, plus a few nanoseconds for a CPU to find the answer? If we were unambiguously requesting that our developers minimize search time, it would be completely impractical.

Specifying precise objectives can be very difficult as well. The law of diminishing returns comes into play. There’s a concept in economics called utility which represents the tangible and intangible benefits of something. We can consider the utility of a feature with respect to the target users. A graph of the utility for speed of search-result generation would look like the following:

[graph: utility increasing with search speed]

We can see that as the speed of results increases, the associated benefit to the user of further increases in speed diminishes. When writing a requirement, how do we determine the speed that is truly required? It would be ambiguous to say “as fast as possible” or “as fast as reasonable.” And we would be naive to think that we didn’t need to understand something about the implementation before specifying an unambiguous requirement like “search must complete in 2 seconds.”

We have only described the benefit side of the cost-benefit analysis needed to specify the requirement. We have to iterate and interact with our designers to determine the impact of a speed specification on costs. After getting feedback from our implementation team, we now have an understanding of the cost of implementing search, as shown in the following graph.

[graph: cost versus speed]

As we see, it gets progressively more expensive to make progressively smaller increases in speed. This is our “development reality” and we can’t ignore it when specifying how fast search needs to be. To determine the optimal specification, we have to find the point on the curves where the incremental benefit of searching faster equals the incremental cost. We can do that by graphing utility versus cost, as shown in the next figure.

[graph: utility versus cost]

The circle shows the point where the slope of the curve equals 1. Beyond this point, an additional increase in speed provides less benefit than the associated increase in cost. At any point to the left of the circle, we’re “leaving money on the table” because there is a better point to the right. This is the optimal speed to specify.

Reality check

In the real world, we won’t have the precise data that allows us to draw these graphs and quantitatively identify the Pareto-optimal point on the cost-benefit curve. It is still important to understand the fundamental principles of the tradeoff so that we can make informed decisions and judgment calls.

Some analyses will be relatively easy, as our development curves are usually discrete data points based upon estimates of the work required to implement particular designs. We also won’t have access to the full spectrum of design choices, because we will be limited by other constraints on the system, as well as by the creativity and capabilities of our development team in proposing alternatives.
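
Since the curve is really a handful of discrete estimates, the slope-of-1 rule collapses into something simpler: pick the design alternative with the greatest net value (utility minus cost). A minimal sketch in Python, with hypothetical numbers for the search-speed example:

    # Hypothetical design alternatives for search: (seconds, utility, cost).
    # On a diminishing-returns curve, maximizing utility minus cost lands on
    # the same point where the slope of utility-versus-cost drops below 1.
    alternatives = [
        (10.0, 40_000, 10_000),
        (5.0, 70_000, 25_000),
        (2.0, 88_000, 45_000),
        (0.5, 95_000, 80_000),
    ]

    best = max(alternatives, key=lambda a: a[1] - a[2])
    print(best)  # -> (5.0, 70000, 25000), a net value of $45,000

The equivalence only holds when returns genuinely diminish along the curve; if an estimate breaks that shape, comparing net values directly is the safer check.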

Must be requirements

These are the requirements that most people think about when they talk about requirements. These are the easiest requirements to elicit.

Stakeholders can usually tell us what they must have in the software. In our Am I hot or not? post on requirements prioritization, we saw that 37signals focuses on this as its primary criterion for inclusion in an initial software release. They choose to put only essential, or ‘must be’, requirements into the initial release of software.

Better not be requirements

This is really just the opposite of surprise and delight. If dreamers think about what makes something great, then critics complain about what holds it back. We don’t think this bucket really has a place in Kano’s analysis. Saying “Users don’t like confusing navigation” doesn’t provide any benefit relative to saying “Users prefer intuitive navigation”. We suggest not using this category at all.

Conclusion

We can apply the Kano techniques to make sure we’re making good prioritization decisions:

  1. Are our 1.0 release requirements all “Must be” requirements?
  2. When we specify “More is better” requirements, are they unambiguous, and are they optimal – or at least practical?
  3. Do we have any “Surprise and delight” requirements that will enable us to create an innovative product?

[Update: We’ve continued this analysis in Prioritizing software requirements – Kano take two with details about how to apply this classification system to prioritization decisions]

Definition of opportunity cost


Why won’t my boss approve my project? I’ve done the math – it’s a good investment. Perhaps because it isn’t good enough. In this article, we walk through the math and rationale behind these decisions.

Opportunity cost is a financial metric generally applied to investment decisions made by companies. These decisions can be made about very large potential investments (acquisitions and company mergers), or they can be applied to smaller investments (individual projects). In this post, we will talk about opportunity cost as it applies to project-level decisions.

Definition of Opportunity Cost

Opportunity cost is the value of the “lost opportunity” – the best alternate way to spend the money.

Consider the following example of an investment decision without using opportunity cost:

We are considering a project to spend $100,000 to build a new website. We have calculated the expected value for the increased sales from the new website to be $110,000. This represents an ROI of 10%.

Without any additional information, we would make this investment. Every dollar we spend yields us $1.10 in returns.

Evaluating with Opportunity Cost

What would we do with the money if we didn’t spend it on this project? Imagine we had the following opportunity:

We have the option to invest $100,000 for a year in corporate bonds at a 20% interest rate. At the end of the year, we would have $120,000. This investment option represents our other opportunity for the money, with an ROI of 20%.

The opportunity cost for our website project is $120,000, the value of investing in bonds. The opportunity cost exceeds the value ($110,000) of our website project. As a company, we should not build the new website, we should invest the money in the bonds. We would be better off at the end of the day.
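
The comparison is simple enough to sketch in a few lines of Python, using the numbers from the example (a real decision would use risk-adjusted expected values):

    investment = 100_000

    website_value = 110_000  # expected value of the new website (10% ROI)
    bond_value = 120_000     # $100,000 in bonds at 20% for one year

    # The best alternative use of the money sets the opportunity cost.
    opportunity_cost = bond_value

    if website_value > opportunity_cost:
        print("Build the website")
    else:
        print("Buy the bonds")  # printed here: $110,000 < $120,000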

Using Opportunity Cost to Make Project Decisions


Every company always has an opportunity cost for any investment. It may not be obvious, and it may be very small, but it is always present.

Any given project will have a sponsor who has the ability to decide how to spend the company’s money (up to some dollar limit). For our example, we’ll use an IT department director as our sponsor. The sponsor is not expected to know what the company’s investment alternatives are, or what the corporate opportunity costs are. Our sponsor will, however, be expected to exceed the rate of return that the corporation could otherwise get if she didn’t spend the money.

This rate of return is called the hurdle rate. All investments by our sponsor should be expected* to exceed the hurdle rate. In fact, the hurdle rate is the minimum requirement, not the goal. A project that barely clears the hurdle rate is a marginal project, and probably shouldn’t be done at all.

Determining the Hurdle Rate


Companies have two sources of money – cash from investors and borrowed money. The investors and lenders expect a particular rate of return on their money – and that determines the hurdle rate for the company.

At a high level, some percentage of the company’s cash comes from investors (private investors, stock holders), and the rest comes from lenders (banks, bond holders). Each group has an expected rate of return on their money. Imagine we funded our company with $50,000 in cash from our rich uncle, and a $50,000 loan at 10% from the bank. Our uncle expects a 20% return on his investment (he expects us to convert his $50,000 into $60,000 by the end of the year). At the end of the year, we have to pay the bank $5,000 in interest and we have to show our uncle that we have an extra $10,000 in the bank. We started the year with $100,000 and we have to end the year with $115,000. This represents our weighted average cost of capital (WACC) of 15%. A detailed explanation of how to calculate the WACC for public companies can be found in this investopedia article.
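
As a quick check on the arithmetic, here is the same weighted average in a few lines of Python:

    equity, equity_rate = 50_000, 0.20  # our uncle's cash and his expected return
    debt, debt_rate = 50_000, 0.10      # the bank loan and its interest rate

    total = equity + debt
    wacc = (equity / total) * equity_rate + (debt / total) * debt_rate
    print(f"WACC = {wacc:.0%}")  # WACC = 15%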

We should only consider investing in projects that we believe will have an ROI of at least 15%, if we plan to meet our cost of capital expectations. This therefore defines our hurdle rate – the minimum return required to satisfy our investors and lenders.

Survival of the Fittest

We’ve established the hurdle rate, or minimum rate of return we should even consider. Think of it as the initial audition – if our project can’t meet the hurdle rate, it won’t be considered at all. But when we defined opportunity cost, we defined it as the return of the best alternative investment – not the minimum expectation of our investors. We have to compare our project to other projects.

If our company has a hurdle rate of 15% and we have a $50,000 project with an expected rate of return of 20%, but another $50,000 project is also being considered with an expected ROI of 25%, our sponsor should pick the other project. The value of our project is $60,000 (120% of $50,000), which is less than the opportunity cost of $62,500 (125% of $50,000). These comparisons are generally done on a percentage basis, to allow us to normalize and compare projects of different sizes.

*The degree to which the investment is expected to exceed the hurdle rate is a function of how the company is run. Some companies don’t explicitly manage project ROI for projects under a certain dollar amount (or managers below a specific level in the org chart). It varies with companies and with individual managers.

What do you hate?


What do you hate about Tyner Blain’s blog?

ack/nak posted a great idea – ask customers what they hate about you.

Seth Godin has a book – Permission Marketing: Turning Strangers into Friends and Friends into Customers, and in his free ebook, Flipping the Funnel, he expands on what he originally wrote. He talks about transitions:

  • Turning strangers into friends
  • Turning friends into customers
  • Turning customers into salespeople

Seth explains friends as:

I define your “friends” as the prospects you’ve earned permission to talk with—even though they haven’t turned into customers yet. And your customers have crossed the Rubicon; they’ve been converted from total strangers to interested friends, and then all the way to dedicated users of your product or service.

We think of strangers as first time visitors – welcome aboard. Friends are repeat visitors – welcome back. Customers have bookmarked us or subscribe to our feed (RSS) – thanks. Salespeople have linked to our articles or added us to their blogrolls – special thanks to you!

So, we ask you: What do you hate about Tyner Blain’s blog? Please tell us in the comments on this post, and thanks in advance.

OnTime Bug tracking software – $5 (or free) from Axosoft

Seriously.

There’s a crazy deal being offered by Axosoft. Buy a 5-user version of their $500 software suite for $5, but the offer expires February 24th (2006). The link to buy the software is here – and only available on blogs. Axosoft is trying a social marketing experiment to see if they can promote their products and brand via the blog universe. It isn’t clear at what hour the offer expires, so you might want to get it on the 23rd.
Check out the demo (6 minutes). We bought the software immediately after watching the demo today. Axosoft is donating the $5 to the American Red Cross. They have already raised over $2400. I love their explanation of why it’s $5 and not free:

When we decided to move forward with this offer, we wanted to make it free. However, the Axosoft Online Store currently doesn’t have a way to do $0 transactions. All purchases through the store must have a positive dollar value. Rather than spending the time and resources to update the store to allow $0 transactions, we decided to go ahead and charge a nominal fee and donate the money to the Red Cross.

The first positive sign (for us) about the folks at Axosoft. They make rational investment decisions.
After digging around on their site, we found that they (at least for now) also have two ways to get the software for free.

OnTime 2006 installs as a 30-day multi-user trial for up to 12 users for evaluation. An activation key is NOT Required! However, for a free single-user activation key of OnTime that never expires, with no limitations (a $200 Value!), visit the Key Request Page.

Aren’t you glad you found the link at Tyner Blain?

The second positive sign about the folks at Axosoft.

After the purchase, we got a personal (form letter) email from their president, Hamid Shojaee. Marginally useful. In that email, he mentions that we should have received our product activation key in an automated email. We hadn’t. I replied to his email, and 9 minutes later had both a response from Mr. Shojaee (including my activation key) and the automated email with the activation key. With hundreds of purchases in two days, the fact that their president took the time to reply (with no way of knowing that it would show up as good PR here) impressed us again.

Is it any good?

Honestly, we don’t know yet. Since their offer is so short-lived, we wanted to get the post out asap – if it is great, then getting it for $5 (or free, if you’re a one-man shop) is fantastic. So, we’re joining in and spreading the word too. We can tell you that the Windows client installs very easily and also uninstalls very easily. It does require a SQL Server database connection, so it isn’t a 30-second install. But if it isn’t what you want, it is a 30-second uninstall.

Measuring the Cost of Quality: Software Testing Series


Should we test our software? Should we test it more?

The answer to the first question is almost invariably yes. The answer to the second question is usually “I don’t know.”

We write a lot about the importance of testing. We have several other posts in our series on software testing. How do we know when we should do more automated testing?

Deciding whether to do more testing is an ROI analysis. Kent Beck has a great position –

If testing costs more than not testing, then don’t test.

At first glance, the statement sounds trite, but it really is the right answer. If we don’t increase our profits by adding more testing, we shouldn’t do it. Kent is suggesting that we only increase the costs and overhead of testing to the point that there are offsetting benefits.

We need to compare the costs and benefits on both sides of the equation. We’ll start with a baseline of the status quo (keeping our current level of testing), and identify the benefits and costs of additional testing, relative to our current levels.

We should do more automated testing when the benefits outweigh the costs

We’ll limit our analysis to increasing the amount of automated testing, and exclude manual testing from our analysis. We will use the assumption that more testing now will reduce the number of introduced bugs in the future. This assumption will only hold true when developers have the ability to run the automated tests as part of their personal development process. We’ve written before about the sources of bugs in the software development process, and in other posts in this series we show how automated testing can prevent future bugs (unlike manual testing, which can only identify current bugs).

We are also assuming that developers are running whitebox unit tests and the testing team is running blackbox tests. We don’t believe that has an impact on this analysis, but it may be skewing our perspective.

Benefits

  • Reduced costs of bugs in the field. Bugs in the field can cause us to have “emergency releases” to fix them. They can increase the costs for internal teams using our software and working around the bugs. They can cause delayed sales. Bugs cause lost customers.
  • Reduced costs of catching future bugs. When developers can run a regression suite to validate that their code didn’t break anything before asking the testing team to test it, they can prevent introducing regression bugs. And thereby prevent the costs of finding, triaging, and managing those bugs.
  • Reduced costs of developing around existing bugs. Developers can debug new code faster when they can isolate its effects from other (buggy) code.
  • Reduced costs of testing around existing bugs. There is a saying we’ve heard when testers are trying to validate a release – “What’s the bug behind the bug?” A bug is discovered, and the slack time in the schedule is spent fixing it – then the code is resubmitted to test to confirm the fix. Another bug was hiding behind the first one, untestable because the first bug obfuscated it. Addressing the second bug introduces unplanned testing costs. Preventing the first bug reduces the cost of testing for the latent bug.

Costs

Most of these increased costs are easy to measure once they are identified – they are straightforward tasks that can be measured as labor costs.

  • Cost of time spent creating additional tests.
  • Cost of time spent waiting for test results.
  • Cost of time spent analyzing test results.
  • Cost of time spent fixing discovered bugs.
  • Cost of incremental testing infrastructure. If we are in a situation where we have to increase our level of assets dedicated to testing (new server, database license, testing software licenses, etc) in order to increase the amount of automated testing, then this cost should be captured.
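
Here is a minimal sketch of the comparison, with hypothetical dollar estimates relative to the status quo; real numbers would come from your own defect rates, labor costs, and infrastructure quotes:

    # Estimated annual benefits of additional automated testing, in dollars,
    # relative to our current level of testing. All numbers are hypothetical.
    benefits = {
        "fewer bugs in the field": 40_000,
        "regression bugs prevented": 25_000,
        "faster debugging around existing bugs": 10_000,
        "less re-testing of latent bugs": 8_000,
    }

    # Estimated annual costs of that additional testing, in dollars.
    costs = {
        "creating additional tests": 30_000,
        "waiting for test results": 5_000,
        "analyzing test results": 7_000,
        "fixing discovered bugs": 12_000,
        "incremental test infrastructure": 9_000,
    }

    net = sum(benefits.values()) - sum(costs.values())
    print(f"Net benefit of additional automated testing: ${net:,}")
    # Positive -> test more; negative -> don't (Kent Beck's rule).

If the net comes out negative, the response suggested by Kent Beck’s rule is to reduce the cost side (more efficient tools and approaches) rather than to abandon testing, which is exactly where the conclusion below goes.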

Conclusion

This is a good framework for making the decision to increase automated testing. By focusing on the efficiencies of our testing approaches and tools, we can reduce the costs of automated testing. This ultimately allows us to do more automated testing – shifting the Pareto-optimal point so that we can increase our incremental benefits by reducing our incremental costs.

The Reason Why

Seth Godin has a post titled The Reason.

He has several good examples, like this one:

The reason the typewriter keyboard is in a weird order is that original typewriters jammed, and they needed to rearrange the letters to keep common letters far apart.

In each of his examples, Seth asks and answers the reason why we do things that don’t have an obvious rationale.

Requirements elicitation is about asking why
When we ask why correctly, we get great insight, which enables great requirements, which can yield great software. When we ask why incorrectly, we can get a great big mess.

Some examples of asking why about software product requirements:

  • Why do the users need to be able to save work in progress and come back to it later? This is the killer feature of TurboTax on the web. The answer is that most users don’t complete their taxes in a single session.
  • Why must we be able to create a report of all purchases by a single customer in a single quarter? Because we have a rebate program that rewards customers based on aggregate purchases, not just per-order discounts.
  • Why must we update the price of shipping dynamically as users make selections? Because our customers are total-price sensitive, and this is how we differentiate our online-store.

We need to remember to ask why something is a requirement. Not just so that we can control scope, but so that we can focus our effort on the most important requirements, like we discussed in our recent post about requirements prioritization. The folks at 37signals ask why. They get it.

We need to ask nicely.

We must avoid sounding like a broken record – why why why?! We also have to avoid sounding accusatorial – imagine the emotions we would feel if someone asked us any of the following questions:

  • “Why did you think it was a good idea to drag race my car?”
  • “Why did you have a party when we were out of town?”
  • “Why is it ok for you to sit and watch TV all day while I’m working in the yard?”

We might feel apologetic, or defensive, or embarrassed, or worthless. This isn’t an inquisition, it’s elicitation.

We won’t intentionally ask threatening questions, but that doesn’t mean that our audience won’t feel threatened.

  • We may be in the process of discovering that their favorite feature has a very low ROI and is at risk of getting dropped from the product.
  • We may be asking questions that they can’t answer, and they may feel humiliated or at risk of losing their responsibilities or their job.

We have to be good listeners – pick up on the cues that we are making people uncomfortable, and adapt our approach. Thanks, Seth, for the spark of this post.

Software Requirements Specification Iteration and Prototyping


Developing great software requirements demands iteration

In our previous post with an example of the software development process, we showed a linear flow through the process, as depicted in several posts over a couple of weeks. What we failed to show was any of the iteration cycles, as Deepak points out by asking a great question in the comments on that post. In this post, we will show a little more about how the process works by showing how iteration fits into the machinery of software development.

We showed a simplified view of the requirements process (and roles) in an earlier post. Here’s a review of the steps described in that post.

  1. Identify market opportunities
  2. Express software requirements
  3. Design solution (high level)
  4. Design (low level) and implement solution

[diagram: linear view of the requirements process]

We iterated within the process for this project, following these steps:

  1. Define the market requirements (MRD)
  2. Define software requirements (PRD)
  3. Design a solution
  4. Validate requirements and design with the users, uncovering new requirements and improving our analysis of benefits
  5. Repeat steps 2-4 until satisfied (in our case – once was enough)
  6. Implement

[diagram: software requirements process with iterations]

This provides us with an incrementally more refined view of the overall software development process.

Iteration is most effective when combined with prototyping

In our iterative process, we validated the use cases and the cost-benefit analysis, and got usability feedback*.

We were introducing a new concept to our potential users, and initial conversations were awkward. By applying active listening techniques, we realized that we weren’t getting our message across. We created prototypes (screen mockups) and used them to demonstrate the proposed design, and we got the ah-ha moments we were looking for.

Once we got the mindshare, we were able to elicit inputs on the cost-benefits, usability improvements to the design, and other potential requirements. We cycled back into an update of our PRD and our design and validated again.

By iterating the requirements and design we were able to:

  • completely eliminate two screens in the user interface
  • improve the usability of the tool
  • reduce the cost to implement!

We were able to make these changes because during the validation steps we identified low-value use cases that we could eliminate entirely by adding simple steps to other use cases.

We then moved on to another step in the process – putting together a detailed estimate, work breakdown structure, and ROI analysis.

Thanks again to Deepak for asking the question and pointing out that we had failed to share this critical step.

*Usability is not technically what we analyzed – we got feedback on how two potential users believed they would use the tool. We did not perform a discount usability study, or anything more formal. The UX/HCI folks will appreciate the distinction between how users use a prototype and how they believe they will use a prototype. Everyone else will just roll their eyes.