Category Archives: Outsourcing

Articles that discuss or evaluate outsourcing as a component of successful software development.

Making Offshore Design Work


When companies first start off-shoring, they usually send the “low level” implementation work overseas, to work out the process kinks and manage risk. Over time, your valued, domain-aware offshore developers will perceive a lack of career opportunities in this limited role. Naturally, you will want to consider sending design work offshore too. You can make it work. If you do it wrong, you’re toast.

Continue reading Making Offshore Design Work

Making Offshore Development Work


Economic pressures are driving most companies in high-developer-salary markets to explore using offshore development teams as part of their approach to developing software. Developing software with a global team presents new challenges as well as new benefits. If you do it right, you can have a more cost-effective team. If you do it wrong, you can have a disaster.

Continue reading Making Offshore Development Work

Writing Unambiguous Requirements


One of the ten big rules of writing a good MRD is writing unambiguous requirements. Ambiguity is a function of communication. The writing can be ambiguous to the writer, when the language itself is imprecise, or it can be precise in intent but ambiguous in the reader’s interpretation. Understanding our audience is as important as precision in language. We write unambiguous requirements because misinterpretation of requirements is the source of 40% of all bugs in delivered software.

The Big Rule of Writing Unambiguous Requirements

From our previous article, Writing Good Requirements – Big Ten Rules:

A great requirement has a single interpretation. A good requirement has a single reasonable interpretation. As part of our development process, we will use listening skills like active listening to make sure that our engineering team understands the requirement we intended to write. The better the requirements, the less expensive and risky this communication process will be. Writing unambiguously is critically important when using outsourcing models that limit our interactions with other team members.

Ambiguous to the Writer

We introduce ambiguity with imprecise language. Using shall instead of should is one of the first things people suggest (or require!) when writing requirements. [Added 2010.08.17 – shall is ambiguous too – use must!] Red flags are also raised with words like can and might. Marcus wrote a good post about vague requirements language:

What do the terms: user-friendly, flexible, easy-to-use, fast, and intuitive mean to you? Do you think these terms mean the same thing to someone else? Generally, no!

from Speculative and Vague Terms

These are examples where the language is ambiguous, both to the writer and the reader. Any uninformed third party could read these requirements and identify the ambiguity in the language. This makes these mistakes easy to catch – all that is required is a good working knowledge of the language.
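Because these mistakes are mechanical, a simple check can flag the obvious offenders before a human review ever starts. The sketch below is only an illustration – the word list and the sample requirement are ours, not from Marcus’s post – but it shows the idea:

```python
import re

# Words and phrases that commonly signal ambiguity in requirements (illustrative list).
WEASEL_WORDS = {"user-friendly", "flexible", "easy-to-use", "fast", "intuitive",
                "should", "can", "might", "as needed", "appropriate"}

def flag_ambiguous_terms(requirement: str) -> list[str]:
    """Return the known weasel words found in a single requirement statement."""
    text = requirement.lower()
    return sorted(word for word in WEASEL_WORDS
                  if re.search(rf"\b{re.escape(word)}\b", text))

print(flag_ambiguous_terms("The report screen should be fast and intuitive."))
# ['fast', 'intuitive', 'should']
```

A check like this only catches the generic ambiguity; the reader-specific ambiguity discussed below still needs human review.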

Ambiguous to the Reader

Even precisely written, grammatically correct prose can be ambiguous to the reader. This ambiguity can come either from lack of expertise with the language, or from incompleteness of the requirement.

Language Ambiguity

With the ever-increasing outsourcing of teams, we have to think about writing requirements for outsourced team members. When we use a complete technical outsourcing model, we have to consider the possibility (or certainty in some cases) that the primary language of the readers of the MRD is not the language it is written in. Making a document easy to read (short sentences, common words) can be at odds with making the language of the requirements precise.

Some good research on vocabulary size data for comprehension of English can be found here. The average native English speaker knows 20,000 word families. With a vocabulary of 5,000 English words, only 98.5% of the words in a given text will be understood. 5,000 words is a lot for a speaker of a second language (3,000 words is considered a working knowledge).

An analysis of the reading level at which a document is written can be helpful in identifying whether the language is likely to be challenging for readers with limited vocabularies. The Gunning-Fog Index provides a measure of the education level at which a text is written.
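As a rough illustration, here is how a requirement could be scored with the Gunning-Fog formula (0.4 times the sum of the average sentence length and the percentage of words with three or more syllables). The syllable counter is a crude heuristic and the sample requirement is invented for the example:

```python
import re

def estimate_syllables(word: str) -> int:
    """Very rough syllable count: runs of vowels, minus a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def gunning_fog(text: str) -> float:
    """0.4 * (average sentence length + percentage of complex words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    complex_words = [w for w in words if estimate_syllables(w) >= 3]
    avg_sentence_length = len(words) / len(sentences)
    pct_complex = 100.0 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_length + pct_complex)

requirement = ("The system must authenticate users before displaying "
               "personalized account information.")
print(round(gunning_fog(requirement), 1))  # fog index for this one-sentence requirement
```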

Incompleteness Ambiguity

When the language is both precise and understood, we can still introduce ambiguity by failing to provide all of the information. When we find ourselves wanting to say “That was implied, you should have known that!”, we are still being ambiguous. Clarity exists not only in language but in intent. Should the reader assume that when we specified user-authentication and role-based functionality that we intended users to have roles? If we specify that the best-matching search results must be presented within 1 second, is it ok if the rest of the results are presented later? And how many of the best-matches must be found?

Writing requirements like this is definitely a risk, and probably ambiguous. If we have a history, a rapport, and synchronous feedback cycles with the readers of the document, this may not be vague. We may be able to rely on them to assume the same things we assume. The language in the document may be serving effectively as shorthand for this communication. If we are working as a team with a shorter history of working together, this almost certainly will not communicate what we intended. There is also a risk of going too far.


Writing unambiguous requirements requires us to write complete requirements. It also requires us to use precise language that communicates information across domains to our readers. To determine the right level of effort, we need to monitor the effectiveness of our communication, and balance that with the amount of time we can afford to dedicate to word-smithing instead of other product management activities.

Maine Mangles Medicaid – Charges CIO


Allan Holmes, for CIO Magazine, just posted a scathing and detailed autopsy of the disastrous Medicaid claims system project run by CNSI and launched in January of 2005. Requirements elicitation failures combined with incompetent vendor selection and project mismanagement led to a $30,000,000 oops for the state of Maine, jeopardizing its credit rating. The system failed to process 300,000 claims in the first 3 months of operation, causing many health care providers to close their doors – and presumably causing citizens of Maine to go without needed services. Maine is the only state in the union (as of April 2005) not complying with federal HIPAA regulations.

Autopsy results

There were crucial failures in essentially every step of the project. We’ll look at each of the following areas:

  1. Defining requirements and creating an RFP (request for proposal)
  2. Vendor selection
  3. Requirements validation
  4. Risk management
  5. Execution (Project management and development)
  6. Testing
  7. Deployment / Change Management / Training

Defining requirements

April 2001. Maine issued an RFP for the new HIPAA-compliant system. By the end of the year, only two bids were placed – one for $15 million and one for $30 million. Holmes tells us that this is a sign of a bad RFP:

…says J. Davidson Frame, dean of the University of Management and Technology in Arlington, Va. “Only two bidders is a dangerous sign,” he says, adding that the low response rate indicated that potential bidders knew the requirements of the RFP were unreasonable.

Requirements elicitation done poorly is the major source of defects in any project.

Taking a step back, we see from Holmes that Maine decided to use a new (to them) technology and develop the software themselves instead of outsourcing. The justification was that it would be easier to adapt to changing requirements (this becomes ironic later – read on).

The development of the new system was assigned to the IT staff in the DHS, which decided it wanted a system built on a rules-based engine so that as Medicaid rules changed, the changes could be programmed easily into the system.

Vendor selection

Quoting Holmes:

In this case, the low bidder, CNSI, had no experience in building Medicaid claims processing systems. In contrast, Keane had some experience in developing Medicaid systems, and the company had worked on the Maine system for Medicaid eligibility.

OK, maybe not so bad, but wait – more irony. The final costs (to the State) of going with the low-cost vendor exceeded the bid from the high-cost vendor.

Requirements validation

To begin with, the 65-person team composed of DHS IT staffers and CNSI representatives assigned to the project had difficulty securing time with the dozen Medicaid experts in the Bureau of Medical Services to get detailed information about how to code for Medicaid rules. As a result, the contractors had to make their own decisions on how to meet Medicaid requirements. And then they had to reprogram the system after consulting with a Medicaid expert, further slowing development. [emphasis ours]

We wouldn’t use the same language as Holmes; we would say “… the contractors decided to make their own interpretations of how to meet Medicaid requirements.” They never had to do it – they chose to do it. In Where bugs come from we show the impact of having or not having a feedback loop for validating requirements. Not having that feedback loop was a decision born of either incompetence or hubris.

No one is blameless for this mistake. Maine’s IT department is responsible for making sure the contractors are doing what the state really wants. The contractors are responsible for doing what Maine wants. At a minimum, the SMEs should have been interviewed, and the contractors should have at least used active listening techniques to validate their interpretations of the statutes. The responsibility runs all the way down to the developers, who should have insisted on understanding the context in which they were coding. They should have asked “why?” until they got answers.

Risk Management

New vendor. New technology. Maine knew that the requirements were not good.

Thompson decided that the six months that would have been needed to redo the RFP was too much. “We had a requirement to get something in place soon,” Thompson says.

No access to SMEs (subject matter experts). No system tests (more on that later). No backup system. No contingency plans if the system didn’t work.
If there was a risk management plan in place, it certainly didn’t change the course of events.


Execution

Starting with project management:

  • Oct 2001. CNSI is selected as vendor – project length: 12 months.
  • Fall 2002. Project timeline doubled to an Oct 2003 delivery.
  • Fall 2003. No delivery.
  • Fall 2004. No delivery.
  • Jan 2005. System goes live.
  • Apr 2006. System now (claimed to be) operating at same level as legacy system.

And with development (here’s the aforementioned irony):

The development of the new system was assigned to the IT staff in the DHS, which decided it wanted a system built on a rules-based engine so that as Medicaid rules changed, the changes could be programmed easily into the system.

Errors kept cropping up as programmers had to reprogram the system to accept Medicaid rule changes at the federal and state levels.



Testing

Hey, testing is optional.

testing the system from end to end was dismissed as an option. The state did conduct a pilot with about 10 providers and claims clearinghouses, processing a small set of claims. But the claims were not run through much of the system because it was not ready for testing.


Holmes presents excellent conclusions about the HIPAA project. Our conclusion – we need more people in Maine to read the blog. If you know someone in Maine, send them a link. In some seriousness, there’s a T-shirt that says “If you can’t be a good example, be a horrible warning.”

Thanks for the warning, Maine!

Outsourcing Conversation – One Topic, Two Blogs, Three Cs


Frederick Boulanger picked up on our earlier article about different application development outsourcing models, and extended it with his own good ideas – making it easier for teams to decide which outsourcing model is right for them. Frederick identifies the three key factors that determine which model is most likely to succeed for a given team. They are control, coordination, and communication. Anyone else want to join in? Blog away, and trackback or comment here.


Control

Are we outsourcing turn-the-crank type activities? What about tasks that require only entry-level skills? Frederick points out that there is a continuum of desired control for any company or project. When we maintain as much control as possible, we only outsource very specifically defined work elements. When we have less need for control, we can outsource more and more of the process. If we keep design in-house but outsource implementation, we still have a lot of control over the final results.


Coordination

Coordination of activity is central to the success of any team. When everyone on the team is in the same room, coordination almost happens automatically. Having everyone in the same building helps, and having everyone in nearby time zones is the norm these days for non-outsourced projects. A lot of people treat outsourcing as synonymous with offshoring – the term for outsourcing to teams on different continents.

Technology has eased the pain of a geographically distributed team – instant messaging, email, collaboration applications, and video conferencing have reduced the need to travel, and made coordination easier for dispersed teams. Offshoring also creates temporally distributed teams, as team members are working in very different time zones. When team members are eleven and a half hours out-of-phase with each other, coordination becomes much more important.

Asynchronous teams also face an efficiency challenge. The typical iterative, tactical communication among team members may take only minutes when people are working at the same time. It can take a day per iteration when the communication happens only through email (with a multi-hour delay between each exchange of information).
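To put rough numbers on that, here is a back-of-the-envelope sketch; the round-trip count, the co-located response time, and the email delay are assumptions chosen for illustration, not measurements:

```python
# Assumed numbers, for illustration only.
CLARIFICATION_ROUND_TRIPS = 4      # question/answer exchanges needed to resolve one issue
COLOCATED_MINUTES_PER_TRIP = 10    # walk over, ask, and get an answer
EMAIL_HOURS_PER_REPLY = 12         # roughly one reply per offshore workday

colocated_hours = CLARIFICATION_ROUND_TRIPS * COLOCATED_MINUTES_PER_TRIP / 60
# Each round trip waits once for the question to be read and once for the answer.
email_days = CLARIFICATION_ROUND_TRIPS * 2 * EMAIL_HOURS_PER_REPLY / 24

print(f"co-located: about {colocated_hours:.1f} hours")  # about 0.7 hours
print(f"email only: about {email_days:.0f} days")        # about 4 days
```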

Defining the communication process our team will use is important. Documenting this process helps to set expectations for the outsourcing team, and is required for most CMMI levels.


Communication

Precisely defined tasks (the turn-the-crank variety) require far less iterative communication than more strategic activities like validating requirements or architectural design. Offshoring projects will be more effective initially when the majority of the outsourced work is partitioned into narrowly scoped deliverables.

After developing a good relationship with our outsourcers, and working the kinks out of our communication process, we can begin to relinquish control by outsourcing work with greater scope of impact. I’ve had excellent success in doing this by leveraging documentation to communicate with overseas team members. When giving them greater responsibility, I would require that they document their design and their test design for review prior to any implementation work. This dramatically reduced the effect of ambiguity in the requirements, while implicitly providing an active-listening feedback loop. Most often, misinterpretation of requirements resulted in errors in the test designs. It also had the side benefit of helping my teammates grow their skills more quickly as it forced more critical thinking to occur early in their process.


Communication is important to any outsourcing effort, and increasingly critical when relinquishing increased levels of control. Process execution and coordination determine how repeatably we can communicate at any level of control.

Four Application Development Outsourcing Models


On March 30th CIO magazine published an article titled Do’s and Don’ts of Outsourcing Benchmarks. The article spurred us to write about outsourcing models for product development – it is otherwise unrelated, but interesting. [2015 Edit: The CIO article has been removed, check out these lessons from successes and failures instead]

Continue reading Four Application Development Outsourcing Models

Software design and specification and making movies


Alan Cooper presents the analogy that software development is like making movies in his book, The Inmates Are Running the Asylum. [This is a fantastic book for getting an understanding of exactly how Cooper’s perspective evolved over the last decade.] Cooper is presenting the analogy in the context of validating the business case for investing in interaction design.

Cooper points out that they’ve been making movies for a lot longer than we’ve been making software, and he’s exactly right that there is something to learn from the film industry.

How the movie industry works

The movie industry manages movies in three phases:

  • Pre-production. Determining what the movie will be about, raising funds, storyboarding and designing the movie, getting actors signed, writing the script, etc.
  • Production. Shooting the film. Directors, actors, and crew all working very expensively to get the film shot.
  • Post-production. Tweaking and finalizing the film.

How software development parallels movie making

Software development involves three phases as well: Decide what to do, do it, and deliver it.

The interesting thing to note is that the film industry universally invests time upfront in pre-production, to minimize the costs of production. They recognize that production is more expensive than pre or post-production. Many software teams take the same approach, although Agile development explicitly does not. We gleaned some insight into Cooper’s perspective from our coverage of a debate between Cooper and Kent Beck.

If we accept Cooper’s premise that production is more expensive than pre-production, then software should follow the same model.

It’s worth noting that an agile process results in more design, not less. Beck might argue that redesigning as we go is less expensive, because we improve our ability to understand what we actually want to create during the process of creating it. Cooper disagrees.

As much as we like Cooper’s insights, the movie cost structure is not paralleled in software development. When we hire developers, it is analogous to the old movie studios keeping actors on retainer – the cost is “fixed.” And the infrastructure costs of production (set creation, for example) are not affected by the time spent in production – they too are fixed. If we staff a project with contract developers, then we have a variable cost, and we lose money while those developers are “sitting around.” However, today’s projects leverage outsourced overseas contractors more and more – and these actors are a lot cheaper than script writers.

What we know in spite of the analogy’s flaws

We absolutely save time and money by defining requirements before we write the software. We also know that it is important to design before we code.

Neither of these statements conflicts with agile philosophies, if we treat “design everything” and “design this one thing” similarly. An agile approach will simply have multiple design/implement cycles, each focused on a subset of the software (and allowing for a redesign phase prior to delivery).

Fixing the Requirements Mess

Stephen Larrison posted an article on his blog, Survive Outsourcing, about how to compete with offshore-outsourced projects by fixing the requirements mess up front. His background in manufacturing, as well as software development, has led him to a perspective similar to the one I touched on briefly in my post about where bugs come from.

Stephen points out that “You have to eliminate scrap and rework” in order to compete with teams that have lower-cost implementation resources. He argues that this is the key to improving productivity in software development, just as it is in the manufacturing world.

I read a book a little over ten years ago, The Machine That Changed the World, by Womack, Jones, and Roos. It first introduced me to the concept of the value of investments made upstream in the process. They showed that a dollar invested during the design of a new product (their focus was automotive) yields the same results as ten dollars spent after the product enters manufacturing, or $100 spent fixing problems after the product has been released to the field. IIRC, their point, in 1991, was that Japanese manufacturers were investing more at the design stage than U.S. automakers. That seems almost prescient now, with Toyota set to overtake GM as the largest car manufacturer in 2006.

There’s an analogous rule of thumb in software development that requirements bugs cost ten times as much to fix after they reach development, and 100 times as much to fix once they’ve been released to the field. A requirements bug caught in development seems cheap (you only have some wasted effort and minor delays associated with re-working the software to meet the corrected requirement) when you compare it to the cost of releasing buggy software – which can result in lost sales and added costs (if the cost of “repair” is accounted for). But truly, the savings come from fixing the bug before anyone starts building part of the solution based upon the incorrect requirement.
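To see how quickly the multipliers compound, here is a small worked sketch. The per-fix cost and the defect counts are invented for the example; only the 1x/10x/100x multipliers come from the rule of thumb above:

```python
# Cost multipliers from the 1:10:100 rule of thumb; everything else is hypothetical.
COST_MULTIPLIER = {"requirements": 1, "development": 10, "field": 100}
UNIT_COST = 100  # assumed dollars to fix a defect caught during requirements review

def total_fix_cost(defects_caught_at: dict) -> int:
    """Total cost of fixing defects, given how many were caught at each stage."""
    return sum(count * COST_MULTIPLIER[stage] * UNIT_COST
               for stage, count in defects_caught_at.items())

# The same 50 requirements defects, caught mostly late versus mostly early.
late = total_fix_cost({"requirements": 5, "development": 30, "field": 15})
early = total_fix_cost({"requirements": 40, "development": 9, "field": 1})
print(f"caught mostly late:  ${late:,}")   # $180,500
print(f"caught mostly early: ${early:,}")  # $23,000
```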

In his article, Stephen references CIO magazine and an eye-opening statistic: “71 percent of software projects that fail do so because of poor requirements management”. It is seriously worth a read. Here are some more statistics on software project failures.