Monthly Archives: February 2006

Software development process example


We’ve presented an example of the software development process across several posts over the last two weeks. In this post we tie them all together, showing the steps in process order.

  1. A discussion of the concept of tagging. Context and background on tagging as a technology, with pros and cons.
  2. The top five problems with test automation suites. We’ve talked repeatedly about how test automation suites are better than purely manual testing. Here we look at the “second order” problems. What are the main problems with unit test automation? These represent our market requirements for the creation of a software product designed to improve unit test automation suite usage.
  3. Converting from MRD requirements to PRD requirements. The ideation / triage process of determining which requirements should be addressed in software.
  4. Creating informal use cases to support our requirements. We define the use cases that support the high-level requirements.
  5. Writing functional requirements to support the use cases. With a user-centric approach to software development, it is imperative that we build out our functional requirements in the context of use cases – we keep our eye on the ball with this focus on the user.
  6. Design elements that support our functional requirements. Without going into esoteric details about how to design test automation software, we discuss the elements of the design that relate to the application of tagging to addressing some of the larger market opportunities.
  7. Iterating and prototyping. We show the iterative process from PRD to design to users and back again. [Update 21 Feb – added this step. Thanks again, Deepak]

Let us know if you’d like to see a discussion of any of these or other steps in more detail by leaving a comment on this post. Thanks in advance!

The evolution of software product development


The Lost Garden has an outstanding post by Danc – Software Development’s Evolution towards Product Design.

Danc writes about how the software development process has evolved over the years. He characterizes this evolution in four distinct phases.

  1. The technocrat era: Programmers serving programmers.
  2. The early business era: Programmers attempt to serve others.
  3. The late business era: Programmers and artists meet and do battle.
  4. The product design era: Can programmers and artists learn to work together?

His last era is the current era, and he details how game companies first came to dominate it, followed by Apple and web-design companies. He also goes into some detail about what the next era will be like. He shares Robert Cooper’s top nine core factors in product design success.
Quotes (and our comments)

From his third era:

None of the freshly introduced team members spoke one another’s language. The artists talked about fluff like color and mood. The marketing people made outrageous requests with zero comprehension about technical feasibility. The programmers were suddenly enslaved by bizarre, conflicting feature demands that they did not understand. “Make it friendlier” translates poorly into C++ code.

This is precisely the issue we talk about in Intimate domains, where we highlight the need for requirements managers to be able to speak all the languages of team members. Team members also need to be able to interoperate across their contextual boundaries. This issue comes up again in Software requirements – process and roles, where we talk about how several steps “have to happen”, and who may be best suited to perform each step.

In discussions for the future, Danc presents some compelling statistics:

The benefits of a product design process are well documented. New products that deliver superior, unique benefits to the customer have a commercial success rate of 98% compared to 18.4% for undifferentiated products. These products reach an outstanding 53.5% market share.

Wow! Hard to argue with that. Note that Danc lists 14 separate references for his data and arguments.

To the losers, the success of their rivals appears miraculous. “How is it that a slow web app can take away market share from our superior desktop application?” they ask in surprise.

I wonder if it was any different for the Neanderthals when Homo sapiens started to dominate Europe.

The feedback that Danc gets when telling this story:

“Well, it is about time.”

We agree – it is about time. And it was about time someone crafted the message as well as Danc and Lost Garden. Thanks!

Software Testing Series: Organizing a Test Suite with Tags Part Three


Organizing a test suite with tags (part 3)

This is the third in a three-part post about using tags as a means to organize an automated unit test suite.

Part 3 of this post can be read as a standalone article; if it were one, it would be titled Design elements of an automated unit test framework using tags. If you’re only reading this post and not parts 1 and 2, treat that as the title.

  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we defined the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will consider the key design elements associated with using tags as a mechanism for organization.

Setting expectations

We designed a custom unit test automation system based on the use of tags to organize automated tests for a client. That system isn’t the subject of this post, but it did provide us with context and insight into the problem we’ve addressed. In this post we won’t be presenting a completed design – there are many good tools out there already for automating unit tests. We will be talking about a subset of the design decisions that are associated with the use of tags as a mechanism to organize the unit testing within the suite. In Marc Clifton’s advanced unit testing articles, he walks readers through the creation of a test automation tool for C#. The concepts presented here can be incorporated into a design relatively easily, but the details of doing that are a little too off-topic for most of our readers.

We are writing about designing software that tests other software. To keep our language consistent and easy to follow in this post, we will use two terms. Tool represents the test automation software that we are designing. Application represents software being tested with the tool, or more specifically, with tests maintained within the tool.

Design approach

After identifying the use cases and functional requirements for the tool, we began iterating on screen designs, business (requirement) object models, and architectural (implementation) object models. We created an object-oriented analysis (OOA) diagram to represent the concept concisely.
OOA diagram

An object-oriented analysis diagram of the key relationships between scripts, inspections and tags.

A script is the embodiment of a user session in the application – it represents a set of actions that a user of the application would take. The user of the tool will create a script (as a separate file), and will create a reference to that script in the tool.

An inspection is a unit test of the application. The inspection evaluates a particular condition or makes a specific assertion about the state of the application (for example, that a properly filled-out order is submitted when the user clicks “submit”). The code that executes that inspection is maintained outside of the tool. The user of the tool will create a reference to the inspection within the tool.

Inspections can be associated explicitly with any number of scripts (including none). An association between inspection 1 and script A is an instruction to the tool to run script A within the application, and evaluate inspection 1 against the script.
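A minimal sketch of this run instruction, with hypothetical callback names (the actual script and inspection code live outside the tool, per the design above, so they appear here only as callables):

```python
def run_suite(associations, run_script, evaluate):
    """Execute each (inspection, script) association.

    Each pair is an instruction to the tool: run the script against the
    application, then evaluate the inspection against the resulting state.
    run_script and evaluate are callbacks into code maintained outside
    the tool.
    """
    results = {}
    for inspection, script in associations:
        state = run_script(script)  # drive the application through the scripted session
        results[(inspection, script)] = evaluate(inspection, state)
    return results
```

A tool built this way stays agnostic about how scripts drive the application; it only orchestrates which inspections run against which sessions.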

The processing of scripts and inspections is outside the scope of this document, but is covered in many other references, including Marc Clifton’s.

Any number of tags can be associated with each script. Tags could be used to represent different user actions (like deleting items from a shopping cart), different specific selections (user adds 1000 of an item to the shopping cart), different situations (shipping address does not match billing address), or any other relevant descriptor of the user session. A single script could have multiple tags.

Each inspection can be transitively associated with a set of scripts by explicitly associating it with one or more tags. By associating an inspection with a tag, we are instructing the tool to dynamically associate the inspection with all scripts that are associated with all of the identified tags. There are two benefits to this approach. First, it reduces the amount of time that a user of the tool must spend to associate an inspection with a set of relevant existing scripts. Second, this indirect mapping assures that an inspection that is mapped to a tag will also automatically become associated with any future scripts that are added – as long as they carry the same tag or tags as the inspection. This reduces the cost of maintaining the mappings as scripts are added to the suite.
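As a concrete sketch of this resolution logic (assumed names, an illustration of the approach rather than the client system), scripts and inspections each carry a tag set, and an inspection resolves to every script that carries all of the inspection’s tags, plus its explicit associations:

```python
from dataclasses import dataclass, field

@dataclass
class Script:
    name: str
    tags: set = field(default_factory=set)

@dataclass
class Inspection:
    name: str
    explicit_scripts: set = field(default_factory=set)  # direct associations
    tags: set = field(default_factory=set)              # transitive associations

def scripts_for_inspection(inspection, scripts):
    """All scripts the inspection should run against: its explicit
    associations, plus every script tagged with all of the inspection's
    tags (the dynamic, transitive mapping described above)."""
    matched = set(inspection.explicit_scripts)
    for script in scripts:
        if inspection.tags and inspection.tags <= script.tags:
            matched.add(script.name)
    return matched

# A script added later with matching tags is picked up automatically:
checkout = Script("checkout_flow", {"shopping-cart", "payment"})
bulk_order = Script("bulk_order", {"shopping-cart", "bulk"})
cart_check = Inspection("cart_total_correct", tags={"shopping-cart"})
```

Here cart_check resolves to both scripts, and any future script tagged shopping-cart joins the set with no extra mapping work.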

We expect this design approach to provide significant labor savings in maintaining a test suite. We built our business case for this project upon that assumption. We also expect that this design approach will result in better testing coverage of the application by users of the tool. We did not incorporate that expectation into our cost benefit analysis when calculating the ROI of this project.


We will follow up some months from now, when we can evaluate data and draw conclusions from the use of a tool built along similar lines for a client. Until then, we have confidence that it will work very well, but no tangible data.


  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we defined the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we considered the key design elements associated with using tags as a mechanism for organization.

– – – Check out the index of software testing series posts for more articles.

Prioritizing software requirements – am I hot or not?


Prioritizing software requirements

Jason at 37signals recently posted about essential vs non-essential requirements – the software equivalent of Am I hot or not?*
He talks about the prioritization decisions their team went through as part of bringing Campfire to its launch. Campfire is an online collaboration application that launched today. We will talk about how their prioritization approach affects the greatness of their software.

A great way to spot non-essential stuff is to run it through a filter. If the feature is attached to “wouldn’t it be nice if…” or “It would be cool if…” then it’s usually non-essential. Nice and cool are definite red flags.
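Taken literally, the filter is almost mechanical – here is a toy sketch (the flag phrases are lifted from Jason’s quote; this is not a real 37signals tool):

```python
# Phrases that signal a "nice to have" rather than an essential feature.
RED_FLAGS = ("wouldn't it be nice if", "it would be cool if")

def is_probably_nonessential(feature_request: str) -> bool:
    """Flag feature requests framed as 'nice' or 'cool' rather than essential."""
    text = feature_request.lower()
    return any(flag in text for flag in RED_FLAGS)
```

The real filter is a judgment call, of course; the point is that the framing of a request is itself a signal.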

[Update: Another post at 37signals about how some features just don’t matter and how that applies to Campfire]
37signals seems to have a (ruthless | spartan | minimalist) approach to determining features for the 1.0 releases of their products. If the feature isn’t essential, it doesn’t happen. In later releases, they again tend towards simpler software that is much easier to use.

Is this “release minimally functional software” approach the exact opposite of what we propose in Getting past the suck threshold? In that post we focus on the critical importance of making sure we have great stuff for the users in the product as soon as possible.

No. The two approaches are very well aligned.
Kathy Sierra’s point (the original source of the suck threshold meme) is that you want to maximize the rate at which you get user adoption. 37signals is doing exactly that, by putting fewer features in the software.

  • Fewer features in the 1.0 release means getting the product to users earlier, who then provide feedback faster, allowing the tool to be improved more quickly.
  • For new software, there are no expert users, just novice users. By building the first interface with a single user class in mind, they can present a clear, intuitive interface for those users. As those users develop expertise, 37signals can add functionality to support the emerging expert user class.
  • There is an optimal level of complexity/functionality in an application. Some applications fall short (most server log stats packages) by not giving us enough functionality, while others go way too far and become convoluted bloatware (Lotus Notes, for example) by trying to be all things to all people. The people at 37signals seem to be very anti-bloatware. By providing enough functionality for most people to use the application easily, they help those users cross the suck threshold. By not providing the extra clutter and capabilities for power users, they don’t make the software better for power users – but they don’t make it worse for the majority of users either.

We’ve shared techniques for prioritizing requirements in the past. We should update that list to include “Build this first” as another approach.

*Likely not a worksafe site (inappropriate content)

Top ten tips for giving a better presentation


[Update: At least two people have misinterpreted this post as a commentary on Seilevel’s presentation two days ago. It was not. Sorry for the confusion! Watching their very good presentations reminded me that I need to continually focus on my presentation skills, so I wrote this. As I think back on their presentation, they hit every single item on Guy’s list. I don’t know that they asked for a small room – I think they just had a better than expected turnout for the event. Great job Jerry and Joe, and thanks Seilevel for the event – it really helps increase awareness and will certainly help everyone who was there be better at managing successful products. Scott Sehlhorst]

Top ten tips for giving a better presentation
Guy Kawasaki wrote a great article last month about how to give a great presentation. You should be reading his stuff!

He goes into details about each of his eleven tips from his perspective. Here’s a quick summary of those tips with our thoughts.

  1. Have something interesting to say.
  2. Cut the sales pitch.
  3. Focus on entertaining.
  4. Understand the audience.
  5. Overdress.
  6. Don’t denigrate the competition.
  7. Tell stories.
  8. Pre-circulate with the audience.
  9. Speak at the start of an event.
  10. Ask for a small room.
  11. (bonus) Practice and speak all the time.

The most important themes in his post are:

  • Be relevant. If we aren’t saying something we passionately believe in, and it isn’t something our audience wants to hear, we shouldn’t speak. Presentations aren’t opportunities to engage in ego stroking, they are opportunities to communicate.
  • Engage with the audience. We should find out what’s important to the audience, and entertain them – not try and broadcast our own “sales message.” We should try and entertain and inform the audience about what they want to know not what we want them to know. Of course there’s a message we want to give, but we should think of it in terms of what they want to hear, and craft it as such. This applies to small presentations, like proposing a project to a management team – which we talked about in our post, It’s not business, it’s just personal.
  • Earn the respect of the audience. Guy points out that the opportunity to speak is a privilege, and we should treat it as such by not doing a sales pitch, not talking smack about the competition, and by showing respect. Overdressing is a great piece of advice – it is too easy for technical people to dismiss it, reasoning that what we wear is not an indicator of what we can do or know. But human nature trumps correlation. People will perceive casual or sloppy dress as a lack of respect.

Check it out.

Requirements Management Software Will Not Solve the Problem


Requirements management software will not solve our requirements problems.

Jerry Aubin of Seilevel made this great point in his presentation this evening at the IEEE Computer Society, Austin / A-SPIN event. This was a great event, focusing on how to take requirements management “to the next level” – not just being good at it, but being great at it. Seilevel’s speakers (Jerry and Joe Shideler) demonstrated that they have great insights into the art and science of requirements management – and presented some cutting edge ideas that extend the “known good ideas” in some interesting directions. Definitely a company to keep our eyes on and learn from. Their blog, requirements defined, is linked in our blog roll – check it out.

The Norm Abram analogy

Jerry used what he calls the “Norm Abram analogy”. Norm Abram is a great carpenter, and he does a weekly television show here in the US. He has an amazing workshop, with every tool imaginable for cutting, shaping, sanding or finishing wood. And Norm uses those tools to create beautiful products.

If you had those tools, Jerry asked us, would you be able to suddenly create products as beautiful as Norm does?


Norm became a great carpenter first, and then he became proficient with tools that help him do his work faster. The tools didn’t make him better; they just made it easier for him to do the work.

Would having typewriters make us write better, or just faster?

Applied to requirements management

The same holds true about requirements management software. Having access to software won’t make us better at managing requirements. The software will help someone who already knows how to manage requirements be more efficient.

The folks at Seilevel have seen the introduction of RM software (requirements management software) actually be counter-productive for teams who are new to managing requirements. We agree – we think it’s like getting an expensive car so that we can teach someone how to drive. The learning process has to come first.

Jerry showed statistics from the Standish Group’s 2004 CHAOS report. We’ve talked about that report earlier. That report shows that 71% of software projects fail. The issue isn’t the speed or cost of writing requirements, the issue is writing bad requirements.

For those 71% – the problem isn’t the tool, it’s the training. For the other 29%, there are absolutely solutions to help us do our jobs faster.

What should we do?

There are two ways to get better requirements. Buying RM software is neither one of those.

  1. Get help with your requirements. Companies like Seilevel and Tyner Blain have already invested in learning how to do requirements right. Let them or someone else help you manage requirements, or let them do it for you.
  2. Learn how to manage your requirements. Get training, read, study, practice, fail, improve, and succeed. As individuals, find mentors. As companies – get outside experts to come in and audit your projects.

When should we buy RM software?

When we’re already good at writing and managing requirements, and we’re looking for a cost reduction. Introducing a tool of this complexity to a team that isn’t accustomed to the process will actually hinder it.

Jargon gone amuck!

This video showing the abuse of jargon (2 minutes) is absolutely hysterical, and should be watched for humor alone. However, it also drives the point home about the effects of using jargon when writing requirements.

When we write a PRD or SRS, if we use the jargon of one domain, this is what we will sound like to everyone who isn’t in on it.

To avoid this mistake, start with Writing requirements unambiguously.

Writing Requirements Unambiguously


Writing requirements without ambiguity

This is one of the harder parts of writing good requirements. Marcus tells us to avoid it with a good example here. Jerry Aubin at Seilevel has written an outstanding post on the subject, The art and science of disambiguation. Jerry starts his post with a gripping example from Weinberg and Gause:

  • Mary had a little lamb.

What exactly does this phrase mean to you? Here are some possible answers.

  • Mary owned a lamb.
  • Mary gave birth to a small sheep.
  • Mary ate some mutton.
  • Mary conned a mild-mannered person.

As Jerry shows, ambiguity can result from variations in the use of language – a much more subtle problem than variations in the symbolism people associate with words.

Jerry points out that one of the key ways to avoid ambiguity is to leverage a shared context for conversation. This is very effective, but very difficult, especially when you consider how little overlap in context there is between different members of the same team. We talked about this in one of our earliest posts, Intimate domains, where we highlight the distinct contexts of different players on a typical software team.

A requirements manager must be able to communicate within each context, and translate between contexts to serve as an effective communicator.

domains of expertise

As our diagram (re-used from Intimate domains) shows, people in each area of expertise have a predominantly independent context. And within that context they have unique interpretations of the meanings of words. More than jargon or symbolism – many words carry a unique, rational interpretation within each context. In that post we go into more detail on how to navigate these areas of expertise.

The authors of the ambiguity handbook (PDF link available in the Seilevel post) highlight what we believe to be the insidious challenge of ambiguity when writing an SRS (or PRD) – unconscious disambiguation.

Unrecognized or unconscious disambiguation is that process by which a reader, totally oblivious to other meanings of some text that he has read, understands the first meaning that comes to mind and takes it as the only meaning of the text.


The PRD (or SRS – see Requirements document proliferation) is the most important document to clear of ambiguity, because it is intended to communicate across contexts more than any other document in the requirements process. We talk about these role transitions a bit in our post, Software requirements – process and roles. A review of that process puts context and communication in perspective.

Software Requirements – Process and Roles


Requirements vs design – who does what and why

Our previous post, Requirements vs design – which is which and why, describes our position on which parts of the software development process are requirements-activities, and which parts are design activities. The debate among professionals about these distinctions is ongoing, and continues in the comments on that post. The length of the debate, combined with the skills of those debating demonstrates that it isn’t a black and white issue.

In this post, we will try and explore the reasons why this debate is ongoing. We will do that by exploring the symbolism of the terms involved, as well as the roles of different members of the software development team.

One of the challenges in successful communication comes from the way people use symbols as part of the organization of their thoughts. The word design, to many people, encompasses any task that involves ideation, or the creation of ideas. In software, this label is commonly applied to the task of determining how to achieve a goal in software. The previous debate asks a different question.

Does this symbol apply to the act of determining which problems to solve with software?

When we take the symbolic association traits of people into account, this is not an unreasonable question. Without an over-arching context in which to answer the question, the term design can be applied to almost any activity.

We propose that when discussing the software development process, the term design should not be applied to the act of creating a PRD from an MRD.

The software development process can be described as a series of steps outlined below.
software development process diagram
Steps in a software development process

  1. Someone identifies market opportunities and captures the results of that analysis in a document like an MRD.
  2. Someone determines which of those opportunities should be addressed in software and creates a PRD (or FRS or SRS) capturing the results of that analysis.
  3. Someone designs a software solution based upon those software requirements, and captures that design in a design document.
  4. Someone implements a software solution (writes code) that conforms to the documented design.

Note that there are also steps involved in assuring the quality of the solution, and feedback loops in a good software development process. We covered this in our earlier post, Where bugs come from. We’ve also talked in more detail about the software development process in our post, Describing the software development process. In order to keep this discussion on task, we’ve simplified our presentation of that material in this post.

We talked a bit in our previous posts about the nature of the activities in the four steps identified above. Each activity is distinctly different, as shown in the next diagram.
software development process roles diagram

The software development process as shown has four distinct types of work that are involved.

The four types of work

  1. Opportunity identification – identifying and sizing the problems or opportunities that exist.
  2. Opportunity classification – determining which problems should be solved in software, and how.
  3. Solution architecture development – determining, broadly, how software implementations should be created.
  4. Software implementation – determining, narrowly, how software solutions should be created, and then creating those solutions.

Steps one and two are synergistic. Step four is a natural evolution of step three. The first two steps require distinct thought processes and frameworks from the last two steps. It is reasonable to combine either pair of steps (1 & 2, or 3 & 4) but not to intermix them (1 & 3 would be bad, for example).
The missing link

The crux of the ongoing debate is the following: Should step 2 be treated as two separate steps:

2A. Triage opportunities and select the ones best addressed with software solutions.
2B. Determine how software solutions should address the triaged opportunities.

And if so, should step 2B be considered “design”?

Our answer

The process of determining how software should address opportunities is tightly intertwined with the process of determining which opportunities to address in software. Having a vision of how the software solution might work is required to understand if software is the right mechanism for addressing a particular opportunity. There is a massive overlap in the skillsets required for these two tasks.

Since these two “half steps” are so intertwined, there is no benefit to separating them. In the overall flow of the process, there are clear distinctions between each of the “whole steps”, and it makes sense to separate them when possible.

Symbolism and Communication

One of the challenges in successful communication comes from the way people use symbols as part of the organization of their thoughts. Symbolic thinking and reasoning is an incredibly efficient process. It lets us create representational views of the world that allow us to process much more information than our brains have evolved to handle.

What does this have to do with requirements?
We see from our earlier article on requirements gathering techniques that communication is central to the most important requirements elicitation methods. Understanding how people associate ideas symbolically helps us communicate more effectively.

History of symbolic association

40,000 years ago, arguably the last time humans developed any increase in cognitive capacity, we had a tough life – but we had much less information to deal with. We learned how to be hunters, and how to escape hunters. We developed interpersonal relationships and communities. We began specializing in skills. And we taught skills to, and learned skills from other members of our communities. We advanced in knowledge, but very slowly (compared to today).

The reason we advanced slowly is that we could only communicate, retain, and re-communicate a finite amount of information. While that amount varied by individual, it was still a finite amount. At some point, we began developing associations between pieces of information.

There have been studies that show that the average person can remember between 5 and 9 unrelated items in a list. Memory improvement techniques teach us to create associations that allow us to remember much more.

One classic technique is to take the items in the list, visualize them, and place them in rooms of your house. You can create a mental image of a walk through your house and see the items in each of the rooms, and thereby remember a longer list of items.

A trick for remembering people’s names is to create an association between their name and some characteristic about them, or about how you met them, or even just a play on words with their name. Amar Rama – the human palindrome. Linda from Lima. Red head Fred.

These tricks work by creating associations. By creating these associations, we’re able to retain much more information. And humans began to learn more, develop sciences and societies, and evolve (socially). We discovered (or created) associations everywhere we looked.

As associative learning took hold for people, abstract reasoning skills became more important, and symbolic associations began to accelerate our intellectual development. As our societies and knowledge base evolved, we had to do more than learn the same things our ancestors knew – we were building on their knowledge, and had to learn everything they knew and then learn even more. Symbols became an accelerator for knowledge retention.

Symbols represented incredibly dense, compact, and efficient tokens that embodied complex ideas. We live in an age of symbolic reasoning, and we use symbols to communicate complex ideas with a minimum of prose. Freedom is a symbolic word. Just reading the word conjures up dozens of ideas and images. Honor, work, fairness – all are symbolic words. The challenge is that different people recall different images and ideas when they read or hear these symbols.

Communication with symbols


Symbolic reasoning is a double-edged sword. It makes communication more efficient, but it also makes miscommunication more efficient.

When we wrote Top five requirements gathering tips, we highlighted the use of prototypes to quickly validate requirements with people who have an “I know it when I see it” mindset. This leverages the associative reasoning centers of the brain too. We also touch on this with our post, A picture is worth a thousand requirements. In that post we use visual, symbolic diagrams to communicate very dense information efficiently.

When we wrote The top five ways to be a better listener, the most important technique we identify is active listening. Active listening allows us to resolve the different meanings that people associate with the same symbols.

Use symbols, but use them wisely.