Monthly Archives: April 2006

Goldilocks and the Three Products

three bears

  • This product has too many features.
  • This product has too few features.
  • This product is just right!

Michael on High-Tech Product Management and Marketing has a fantastic “wish I wrote that” post about the importance of having the right number of features. He has several references, the best of which is Kathy Sierra’s Featuritis vs. the Happy User Peak post from June 2005. The two posts combined provide great insight into why having too many features is bad, while acknowledging that too few is just as bad. In this post we will look at what we can do to apply these insights and also change the rules some, making our software more desirable than our competition’s.

Kathy Sierra’s curve


Thanks Kathy Sierra for allowing re-use of your work.

Kathy’s basic point is that users get happier as we add features – up to a point – and then the confusion and complexity of dealing with extra features outweighs the benefits of having those features. In the discussion thread on her post, commenters raise the Microsoft Word example – most people use only 10% of the features – while others counter that different users use different features. Kathy’s post explores more than just software and addresses car radios and other interfaces.

Michael’s extension of ideas

Michael reviews the recent Business 2.0 article titled “Simple Minds” that in short says “more is more, and it always has been”. I guess there’s a bit of backlash about the quest to create minimally functional software. To quote Michael:

Simpler is indeed better, as long as your product meets your customers’ core needs. You may lose some customers because you don’t have some non-core features, but in most cases – I believe – that loss will be more than made up by those customers you gain since your product is simple, easy to use and yet meets their core needs.

His article is a fantastic and thought provoking read. I especially like his use of the pocket utility knife for feature comparison!

Tying ideas together

We’ve posted before about exceeding the suck-threshold by creating software that people can use. Another of Kathy’s great ideas. Visually, here’s what that looks like using the same framework Kathy has presented.

chart redrawn

suck threshold

We can see that to clear the suck threshold, we need to have more than some minimal amount of features, without having too many features. Our goal is to reach the peak of the curve, where we have the optimal amount of features (for competent users).

How do we reach the goal?

When we use Kano analysis to prioritize features, we’re already halfway there (and then some). Recapping from that post:

Kano provides three relevant classifications of requirements (the fourth category is redundant). All requirements can be placed in one of these categories.

  1. Surprise and delight. Capabilities that differentiate a product from its competition (e.g. the nav-wheel on an iPod).
  2. More is better. Dimensions along a continuum with a clear direction of increasing utility (e.g. battery life or song capacity).
  3. Must be. Functional barriers to entry – without these capabilities, customers will not use the product (e.g. UL approval).
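The three categories map naturally onto a prioritization rule. Here is an illustrative sketch (the example backlog is ours, not from the post), with must-be items sorted first since they are the functional barriers to entry:

```python
from dataclasses import dataclass
from enum import Enum

class Kano(Enum):
    MUST_BE = 1           # functional barriers to entry (e.g. UL approval)
    SURPRISE_DELIGHT = 2  # differentiators (e.g. the nav-wheel)
    MORE_IS_BETTER = 3    # utility along a continuum (e.g. battery life)

@dataclass
class Requirement:
    name: str
    category: Kano

def prioritize(requirements):
    """Must-be requirements first (they clear the entry barrier),
    then differentiators, then more-is-better improvements."""
    return sorted(requirements, key=lambda r: r.category.value)

backlog = [
    Requirement("longer battery life", Kano.MORE_IS_BETTER),
    Requirement("nav-wheel", Kano.SURPRISE_DELIGHT),
    Requirement("UL approval", Kano.MUST_BE),
]
for r in prioritize(backlog):
    print(r.category.name, "-", r.name)
```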

The must-be features are the first piece in the puzzle, and they are easy to overlay on the diagram.

must be diagram

What gets us to the goal is our differentiated innovations – the surprise and delight features.


Shifting the curve

As both Kathy and Michael point out, we still feel a lot of pressure to keep adding features. Even if we use Kano to hit the ideal software goals, what keeps us from feature-creep and bloat until it’s all worthless? They both suggest investing in making the software better, instead of making it do more. And we agree about making it better. If we make the user experience better, we can make the software do more too without falling back below the suck-threshold.

Consider the more is better requirements. Think of them in two categories – user interaction improvements, and application performance improvements.

User interaction improvements remove complexity, and make software easier to use. This results in more user happiness from a given feature, and also allows us to implement more features at a given level of happiness (appeasing salespeople).


Application performance improvements don’t create as dramatic of a shift (they don’t make the application easier to use). They do, however, make it more enjoyable for a given feature set – shifting the curve up.


Release Planning

We posted before about prioritizing requirements across releases. The initial release should focus 80/20 on must-be and surprise and delight requirements. After the first release, we should prioritize 50/50 effort on surprise and delight and more is better requirements. This split of effort balances the goal of product differentiation (adding features) with the goal of user happiness (shifting the curve).


We have to have a minimum set of features. Too many features is bad. The Kano approach helps us to pick the right requirements to prioritize. It also helps us change the shape of the curve for our software, allowing us to add more features while simultaneously increasing user satisfaction.

Thanks again to Michael and Kathy for their great contributions to this and other topics!

Foundation Series: Basic PERT Estimate Tutorial

estimation classroom

PERT = Program Evaluation Review Technique

PERT is a technique for providing definitive estimates of how long it will take to complete tasks. We often estimate, or scope, the amount of time it will take us to complete a task or tasks. PERT allows us to provide not only an estimate, but a measure of how good the estimate is. Good estimates are a critical element in any software planning strategy. In this post, we will present an introduction to using PERT, explain how it works and how to interpret PERT estimates.
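The full tutorial is behind the “Continue reading” link, but the standard PERT arithmetic can be sketched here. Given optimistic, most-likely, and pessimistic estimates, the conventional beta-distribution approximation gives an expected value and a standard deviation (the example numbers are hypothetical):

```python
def pert_estimate(optimistic, likely, pessimistic):
    """Standard PERT expected value and standard deviation
    (beta-distribution approximation)."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Example: best case 24 hrs, likely 32 hrs, worst case 40 hrs
e, s = pert_estimate(24, 32, 40)
print(f"expected: {e:.1f} hrs, std dev: {s:.2f} hrs")
# expected = (24 + 4*32 + 40) / 6 = 32.0 hrs; std dev = 16/6 ≈ 2.67 hrs
```

The standard deviation is what lets us report not just an estimate, but a measure of how good the estimate is.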

Continue reading Foundation Series: Basic PERT Estimate Tutorial

How To Use Timeboxes for Scheduling Software Delivery

watch in a box

Welcome Readers of the Carnival of Enterprise Architecture and Sharp Blue readers, and Carnival of Software Development followers. We hope you like what we’ve got here – please let us know what you think, and enjoy yourselves!

Roger had a great suggestion in the comments to our previous two-part post on scheduling requirements changes based on complexity. Roger pointed out that we had not explained what timeboxing is, but implicitly used the principles of timeboxing in our proposed process. In this post, we explain timeboxes and how they are used.

Definition of timebox

A timebox is a fixed unit of development capacity. An easy way to visualize a timebox is as a two-dimensional graph. Along the vertical axis is the cost of the development team (per unit time). Along the horizontal axis is time. The longer an iteration is, the wider a timebox is.

fixed capacity

The important thing to notice is that with Cost and Time fixed, the capacity of the timebox is fixed. There is only so much that can be accomplished with a given team and a given amount of time.
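As a rough sketch of that rectangle (the team size and cost figures below are invented for illustration), the fixed capacity of a timebox can be expressed as:

```python
def timebox_capacity(team_size, weekly_cost_per_person, weeks):
    """A timebox as a fixed rectangle: cost rate (vertical axis)
    times duration (horizontal axis). With both fixed, capacity
    in person-days and in dollars is fixed."""
    person_days = team_size * weeks * 5  # 5 working days per week
    total_cost = team_size * weekly_cost_per_person * weeks
    return person_days, total_cost

# Hypothetical: 5 developers, $4,000 per person-week, 3-week iteration
days, cost = timebox_capacity(5, 4000, 3)
print(days, cost)  # 75 person-days, $60,000
```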

A unit of work

A unit of work represents both the functionality being delivered and the quality of the functionality. This is the concept that many managers do not grasp.

unit of work

To deliver the functionality that supports any particular requirement, we can think of the time as having two tightly-linked components: implementing the functionality, and implementing the functionality with good quality. Poorly written code, for an isolated requirement, can take less time than well written code. The extra time spent doing it right is part of the quality component. Writing tests and documentation (when appropriate) are also part of the quality component.

How big should a timebox be?

With some exceptions, anywhere from 2 to 4 weeks. Smaller, more tightly knit teams can operate with shorter timeboxes. Teams with less release-process overhead can operate cost-effectively with smaller timeboxes. There’s a good post and discussion at The Pragmatic Architect on how long to make timeboxes. Mishkin Berteig also has a good post, with some differing opinions, identifying the pros and cons of short iterations.

We think a good way to approach it is to start with a 3 week cycle and extend or shorten it, based on what our stakeholders prefer, balanced with the reality of our development environment. For larger teams, we usually end up with a 4 week cycle. Keep in mind that the length of the cycle can be changed as we get feedback on our process efficiency.

Filling a timebox

We can fill up a timebox with the work-units representing several requirements. Ideally, they are the highest priority requirements. Different requirements will take different amounts of time to implement. We can visualize this in the following diagram, which shows a timebox with the “original” schedule.

original schedule

We see that each work unit has both a functionality and a quality component. We don’t want to intentionally plan to deliver functionality without quality.
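A minimal sketch of that filling step follows. The requirement names and day estimates are made up, we assume the list is already sorted by priority, and each estimate covers both the functionality and the quality work (tests, documentation):

```python
def fill_timebox(capacity_days, requirements):
    """Greedy fill: take requirements in priority order while they fit.
    Each estimate includes both the functionality and the quality
    components; we never plan to ship one without the other."""
    scheduled, remaining = [], capacity_days
    for name, estimate in requirements:  # assumed sorted by priority
        if estimate <= remaining:
            scheduled.append(name)
            remaining -= estimate
    return scheduled, remaining

reqs = [("login", 10), ("reporting", 25), ("export", 15), ("themes", 30)]
scheduled, slack = fill_timebox(60, reqs)
print(scheduled, slack)  # ['login', 'reporting', 'export'] 10
```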

Dealing with new requirements

The previous posts on scheduling were about how to manage the deadlines for receiving change requests. In those posts, we didn’t talk about how to manage the schedule after receiving a request. There are four methods of adjusting the plan once a request has been approved and committed.

Four methods

  1. Sacrifice quality to increase functionality
  2. Increase cost to increase functionality
  3. Increase time to increase functionality
  4. Delay some functionality to deliver other functionality

1. Sacrifice quality to increase functionality

We can, and too many teams do, sacrifice quality to deliver extra functionality without impacting costs or delivery dates. When we take this approach, we incur a code-debt. A code-debt is us taking a loan against our code-base in the short term to resolve otherwise impossible constraints (no extra budget, can’t miss the deadline, can’t delay anything). Poor quality code comes with a long term cost. It introduces risk into the delivery, which is the cost of poor-quality. This risk manifests as a negative expected value (think of it as the interest on the loan). Poorly written code also makes it more expensive to write new code in the future (think of this as the principal on the loan). Until we invest time to fix the quality of the code (refactor, test, etc), we will continue to incur costs.

The following diagram shows what this would look like.

sacrificing quality

We have sacrificed quality on some requirements (work components) including the new (red) requirements in order to squeeze them into our timebox.

2. Increase cost to increase functionality

Another approach is to increase the capacity of the team to meet increased demands. This can mean extra hours for the current team, re-tasking people from other projects to join the team, or bringing in contractors to temporarily increase capacity.

The following diagram shows that by increasing the cost (and shuffling requirements around visually) we can deliver more functionality without sacrificing quality.

increasing cost

There are always inefficiencies to adding capacity. If we add hours, people get burned out. If we add people, there is overhead in helping them get up to speed. The benefit of this approach is that we are not sacrificing quality or timing to be able to deliver the new requirements.

3. Increase time to increase functionality

When we have the ability to do so, extending a particular release may make sense. We can extend the period of a timebox, say from 4 weeks to 5 weeks, to incorporate additional functionality. The following diagram shows how a time extension creates more capacity for implementing the requirements.

extend time

Deadlines are often arbitrary. We should always explore the possibility of delaying the end of the timebox. Don’t extend the timebox more than 50%, or we lose the benefits of having incremental delivery.

4. Delay some functionality to deliver other functionality

Many times, there are political ramifications to delaying the release. And budget constraints are more common than they were ten years ago. When faced with no ability to extend the time or increase the cost, we are faced with a decision. We either sacrifice quality, or delay other functionality. Since we’ve prioritized our requirements based upon the value they provide to the business, it is usually an easy decision.

First we identify which previously scheduled requirements are lower priority than the new requirements. Then we understand which of those requirements has the lowest cost of delay. After confirming with our stakeholders, we delay those requirements to the next iteration. The following diagram shows this.

delaying requirements
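Those two steps – filter to lower-priority items, then delay the ones with the lowest cost of delay until enough capacity is freed – can be sketched as follows. The field names and numbers are our own illustration, not part of the original process:

```python
def pick_delays(scheduled, new_priority, needed_days):
    """Among items lower priority than the new request, delay those
    with the lowest cost of delay until enough capacity is freed.
    Hypothetical fields: priority (lower number = more important),
    days (estimate), cost_of_delay."""
    candidates = [r for r in scheduled if r["priority"] > new_priority]
    candidates.sort(key=lambda r: r["cost_of_delay"])
    delayed, freed = [], 0
    for r in candidates:
        if freed >= needed_days:
            break
        delayed.append(r["name"])
        freed += r["days"]
    return delayed, freed

plan = [
    {"name": "export", "priority": 2, "days": 5, "cost_of_delay": 8},
    {"name": "themes", "priority": 4, "days": 6, "cost_of_delay": 1},
    {"name": "search", "priority": 3, "days": 4, "cost_of_delay": 3},
]
delayed, freed = pick_delays(plan, new_priority=1, needed_days=8)
print(delayed, freed)  # ['themes', 'search'] 10
```

The final confirmation with stakeholders stays a human step; the sketch only narrows the candidate list.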


We’ve talked in the past about how scope-creep should be managed as a relationship, not a math exercise. When we establish rational deadlines for change requests, and then combine them with the four techniques here, we can provide our stakeholders with a number of choices.

Scheduling requirements changes – part 2


This process goes against agile principles on paper, but makes teams more agile in practice.

Scheduling delivery of a project is an exercise in managing complexity. Scheduling changes to the requirements on the fly is really only marginally more difficult. The key to managing changes is to set expectations with our stakeholders. By defining rational deadlines for change requests, we assure ourselves that we can manage the changes. We also demonstrate responsiveness to our stakeholders. Rational deadlines are not arbitrary deadlines nor are they unreasonable deadlines. Deadlines that vary with the complexity of the changes are rational, easy to communicate, and easy to manage.

In part one of this article: We presented a scheme for organizing requirements based upon the complexity of implementing them.

In part two of this article: We show how to define deadlines for change requests based upon the complexity of the proposed change.

Complexity of change (review)


We defined four buckets; every change request drops into one of them:

  1. Simple implementation (less than 2 hours) or minimal risk
  2. Easy implementation (less than 1 day) or low risk
  3. Hard implementation (less than 1 week) or appreciable risk
  4. Major implementation (less than 1 release cycle) or high risk

For this article, we will work with the assumption that each release cycle is 4 weeks long, and the development team is between 2 and 10 people. When there is a single developer, it is much easier to handle change, and with more than 10 developers a development team should be grouped into sub-teams that operate in a coordinated but independent way on different elements of the project.

Release schedule timing


When we talk about a schedule, we will talk in terms of a countdown to a release date. A release date is the date that developers stop. If our team uses a code freeze, or delivers to another internal team prior to customer-delivery, that first delivery is the release date. All of the dates we talk about in this post are relative to that development-terminating release date. Everyone is familiar with NASA countdowns – “T minus 20, 19,….” which count down to the point of ignition of the engines. We will use the same language, but instead of talking in terms of seconds, we will be counting in terms of days – specifically week days. “T minus 5” is 5 days prior to the release date.

Incremental delivery sometimes refers to delivering to the customer with each release, and sometimes refers to internal releases that happen between external releases. When we are scheduling releases, we are referring to each incremental delivery (either internal or external). The following diagram shows a timeline for a single release:

single release timeline

  • R1: The release (where the subscript ‘1’ represents the release number).
  • S1: The deadline for simple change requests: T-2. Changes that take less than 2 hrs to implement must be vetted at least two days prior to the release.
  • E1: The deadline for easy change requests: T-5. Changes that take less than 1 day to implement must be vetted at least five days prior to the release.
  • H1: The deadline for hard change requests: T-10. Changes that take less than 1 week to implement must be vetted at least ten days prior to the release.
  • M1: The deadline for major change requests: T-20. Changes that take less than a full release cycle to implement must be vetted prior to the start of the release.
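The countdown arithmetic is mechanical. Here is a minimal sketch, assuming “week days” means Monday through Friday and using a hypothetical release date:

```python
from datetime import date, timedelta

def t_minus(release_date, weekdays):
    """Count back the given number of week days (Mon-Fri) from the
    release date, skipping weekends."""
    d = release_date
    remaining = weekdays
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday..Friday
            remaining -= 1
    return d

DEADLINES = {"simple": 2, "easy": 5, "hard": 10, "major": 20}

release = date(2006, 4, 28)  # hypothetical release date (a Friday)
for bucket, t in DEADLINES.items():
    print(f"{bucket:>6}: T-{t:<2} -> {t_minus(release, t)}")
```

With a Friday release, T-5 lands on the previous Friday and T-20 (a full 4-week cycle) lands four Fridays earlier.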

[Update 12 Apr 2006]

Thanks Roger for the great comment (below) suggesting that we incorporate timeboxing into this post. We just posted an article on how to use timeboxes when scheduling software delivery. In our diagram above, we show the timing for the vetting of a change request, relative to a single timebox’s release date. These timeboxes would be strung together as part of an incremental delivery plan, as the following diagram shows.

multiple timeboxes

[end update 12 Apr 2006]

Vetting a change request

A fully vetted change request is more than a properly documented request. Vetting is the process of validation and verification. A change request is a requirement. It is either a previously scheduled requirement that must be changed, or a requirement newly scheduled for this release, or a new requirement.

A requirement is validated through communication with the stakeholders. Usually a stakeholder submits a change request. The product manager or program manager will then verify the requirement with the stakeholder, usually in an interview. The PM will also determine the proper priority for the requirement, as well as identify the desired release for the requirement.

A requirement is verified by the development team – usually the development lead or a senior developer. Verification includes the following steps:

  1. Confirm the correct interpretation of the requirement. Does the developer understand the change request? Is his understanding correct?
  2. Estimate the implementation time. The developer must commit to a PERT estimate for delivery of the change request. Creating a good estimate may require design effort or prototyping for hard or major changes.
  3. Assess the risk associated with the change request. The developer and project manager may need to collaborate to determine the risk.

Without validation of the change request, we risk building the wrong functionality – the hardest source of bugs to eliminate. Change requests usually come with a sense of urgency, making it even more likely that we will misinterpret them. Without verification, we don’t know how big the impact of the change is (or might be). Until the requirement is vetted, it must not be accepted by the PM for inclusion in a particular release.

Zero-sum game

All changes are scope creep, because they are asking us to do something more, or something different. Sometimes, we have to do something again. We made the assumption in part 1 of this post that we already have a fully committed development team at the start of the release. To incorporate a change, something must be removed from the release.

Which something should we remove? There is no general answer for that question. We have to look at the skills and availability of our team members. We have to understand the interdependence of tasks in our current schedule. Interdependence is especially tricky because it not only affects sequencing, it affects scoping. Developers will make assumptions when estimating the work to implement a particular requirement. One of those assumptions will usually be that something else is already implemented. For example, implementing a new report is dependent upon the reporting engine being implemented.

Our PM needs to work with the development team (usually the dev lead) to understand which committed features can be pushed out. Actually, every deliverable can be delayed, but sometimes, pushing out feature X also means delaying features Y and Z. A Gantt chart will reveal these dependencies if properly documented and managed. Requirements traceability can also be a source of dependency information – a requirement to show per-item shipping charges will depend upon the ability to show an itemized quote. While the development team may be able to implement the per-item shipping charge functionality in the current release, if the itemized quote functionality is pushed to a future release, the stakeholders will not get any benefit from the per-item shipping charge display capability. This is why we communicate release content in the form of use cases, or enabled capabilities.

We’ve posted in the past that managing scope creep is not a zero-sum game. Scope creep is managed best at the relationship level. This process details how to execute within that relationship, not in spite of it.


This may seem like a burdensome process. It isn’t. We are not pro-process, we are only pro-valuable-process. We’ve had success using this process both to introduce predictability into the development process, and as a simple and clear communication vehicle to stakeholders who may not appreciate the challenges of software development. Our experience is that this process allows more changes to be implemented earlier. The process goes against agile principles on paper but makes teams more agile in practice.

We’ve worked with teams that required their stakeholders to wait for months to deliver functionality – even though they used a monthly release cycle for their applications. Their stakeholders complained about the lack of responsiveness of the IT organization. The IT organization complained about the inability to deliver what the business users asked for. The root cause of their pain was inadequate vetting of the requirements combined with no vetting of change requests – if a request was important, it was approved. The IT team struggled and juggled every month to get stuff done. They relieved the pressure by pushing out commitments until the business was waiting for months to get anything other than bug fixes. Change management was a special event, and required management attention.

For those teams that implemented a process like this one, within a few release cycles, the changes were almost astounding – better quality, better quality of life (for the developers), more predictability and higher satisfaction for the stakeholders. For the teams that didn’t, they still struggle and juggle.


Scheduling requirements changes – part 1


Software product success requires timely delivery

There are many factors that influence our ability to properly scope, schedule, and deliver software. When we propose changes in requirements we introduce risk to the schedule. We can set reasonable expectations for our stakeholders while maintaining a realistic work environment and schedule. In part 1 of this post we detail a requirements triage process that organizes requirements by complexity and allows us to set and meet expectations of delivery.

In part 1 of this article: A method of classifying requirements based upon the complexity of their implementation.

In part 2 of this article: A scheduling approach that uses variable lead times combined with our classification scheme.

    Assumptions

    We will work with the following assumptions, not because they are realistic, but because we want to isolate the clear benefits of this approach. Solutions to the problems that we are ignoring in this post do not conflict with the techniques we present.

    • Someone plays the role of product manager, specifically gathering requirements and validating their delivery schedule with the stakeholders
    • Someone plays the role of project manager, organizing the team’s activities to deliver according to the schedule.
    • The team members are competent, and their estimates are realistic (i.e. we can capture and address estimation risk effectively with PERT).
    • We are starting with a project that is currently scheduled properly, and on time.
    • We are using an automated test process that allows us to deliver without a designated code freeze.
    • We have an incremental delivery model, with multiple releases in the schedule.

    Classifying requirements by complexity


    We group our requirements based upon the complexity of the implementation (a simply worded requirement could have a very complex implementation). We interpret complexity by two measures, level of effort and risk. Effort represents the amount of work required to code/test/document and deploy the features required to implement the requirement. Risk represents the risk of introducing a bug or slipping the schedule for the project.

    There are four buckets, each representing a level of risk and a level of effort.

    1. Simple implementation or minimal risk
    2. Easy implementation or low risk
    3. Hard implementation or appreciable risk
    4. Major implementation or high risk

    Simple implementation or minimal risk

    Any requirement or feature that requires fewer than 2 hrs to implement, and has no known impact on the rest of the code. Further, this is easily within the skills of the assigned developer to implement – a no brainer. The developer is able to provide a narrow PERT estimate for the work (+/- 30 minutes). For all of these categories, the time to implement is expressed in developer-hours, not calendar hours. A 2-day task could be completed (potentially) in half a day by a team of four developers.

    • Help file links, release notes, collateral documentation or content updates
    • Changes to data that is processed by the application
    • Minor UI layout changes without behavioral changes
    • Correction of spelling errors, missing labels, etc.

    Easy implementation or low risk

    A requirement that can take up to a day to implement, and does have some dependencies on other areas of the code. From a risk standpoint, the implementation approach is well understood, and has been done before – totally do-able. The developer is confident in a PERT estimate with no more than an hour of “worst case” buffer.

    • Small and straightforward interaction changes in the UI, like changing from radio buttons to a combo-box, or duplicating menu functionality in a context menu
    • Minor bug fixes – correcting errors in previous releases or builds, discovered after a release schedule has been “defined”
    • Small extensions of existing functionality – adding sorting to tables, simple extensions to existing data schema, etc.

    Hard implementation or appreciable risk

    Requirements that can take up to a week to implement. An implementation approach that is expected to work but the team members may not have a lot of experience with it. PERT estimates may have a “worst case” buffer of up to a day (for example: best case 24 hrs, likely 32 hrs, worst case 40 hrs). If there are strong dependencies on external groups, there is also a risk factor involved.

    • Significant extensions of existing functionality
    • New, but straightforward functionality
    • Complex user interaction changes
    • Easy additional integration requirements (with external systems)

    Major implementation or high risk

    Requirements that take over a week to implement. There is a practical upper bound of requirements that exceed the length of the release cycle – the work supporting these requirements should be decomposed into separate deliverables and scheduled independently. Work that requires scheduled innovation or that the current development team has never done before is high risk. External dependencies on unproven teams can also make a deliverable high risk. Work that requires inadequately tested code to be modified is also high-risk.

    • Support for additional platforms (new operating system, new database, new browser)
    • An overhaul of the user interface
    • Significant change to existing functionality
    • Complex additional system integration


    Each team may need to slightly adjust the definitions of the buckets, based upon their level of risk-tolerance, their development skills, the effectiveness of the project manager (or organization) to execute and make things happen. The examples listed above are representative – if they don’t apply for a given team or project, create comparable examples, using risk and level-of-effort as guidelines.

    When new or modified requirements are presented to the team, the first thing that the team should do is estimate or scope the amount of effort required to implement them in the desired release (taking into account leverage of existing and concurrent development). Determining which bucket to put the requirement in is independent of the decision to implement the requirement.

    Part 2

    In part 2 of this post we will talk about how to manage the scheduling of requirements that are defined in each of the buckets.

Product Manager Role Definition


Michael has posted a great definition of the product manager role on his blog, Product Management and Product Marketing – A Definition. He covers a whole host of activities in six separate areas. Some of the responsibilities, while not product management, are often the responsibility of the product manager. It’s a good real-world assessment of what product managers are often asked to do.

The six areas

  1. Market Research
  2. Product Definition and Design
  3. Project Management
  4. Evangelize the Product
  5. Product Marketing
  6. Product Life Cycle Management

Market Research

Definitely part of the role of a product manager. This is where we determine what opportunities exist and document them. Analysis of the market identifies those problems worth solving. We disagree with Michael only in that we believe the MRD is a deliverable from this area, and not part of the next area.

Product Definition and Design

We see these as two distinctly different activities, as the nature of the work involved requires such different skillsets. The conversion of an MRD into a PRD is the process of determining which valuable problems should be solved in software. The application of interaction design and program design are completely distinct activities from this prioritization process. There was a heated debate a couple months ago across a few of the blogs in this space about the difference between requirements and design. We believe that combining them in the same area adds to the confusion, and would suggest splitting the area up into two areas.

Project Management

Absolutely part of developing great software. Product managers are often asked to do this, but this area isn’t about deciding what to do, it’s about executing in the context of a decision. Michael’s point that product managers are often asked to do this is valid – but we believe that it falls in the “do more than your job” bucket, and should not be part of the canonical definition of product management.

As one of Michael’s commenters points out, this role is often performed by a program manager, who is responsible for coordinating the efforts of the rest of the team to achieve the product manager’s vision. That’s how the responsibility is assigned at Microsoft, as the commenter points out. Scott Berkun, in The Art of Project Management, who also has a background from Microsoft, talks about the dual-nature role of program manager as well – part product manager and part project manager.

Evangelize the Product

We agree with Michael that the second most important part of a product manager’s role (after defining what the product should do) is getting everyone on board and excited about what it will do. One of the reasons this is so important is that it opens up avenues for two-way communication with customers. Evangelists aren’t corporate bullhorns – they apply listening skills not just to tailor the message, but also to adapt and adjust the product direction. This then sets the stage of iterative requirements elicitation and validation. And that’s why it is important that this be a product manager playing the role of evangelist.

Communication of the delivery schedule isn’t really evangelism, but it otherwise matches the characteristics of this type of communication. To us it feels more like it belongs in the project management area, since communicating project status is part of that role. But we like the consistency of roles and skills we get from placing it in the evangelism area.

Product Marketing

This is the creation of outbound messaging (the bullhorn). When there isn’t an explicit evangelism role, product marketing can provide a pretext for incremental market research. As Michael points out, this is primarily a communication role, not ideation, prioritization or organization.

Product Life Cycle Management

This is basically product portfolio management and strategy. Another analysis to add to Michael’s list – should we continue to invest in the software, or stop/minimize investment and milk it for whatever revenue we can get at a >90% profit margin?


Product managers have to be able to do just about everything. And the smaller the company is, the bigger the responsibility for the product manager. Great summary and classification, Michael!

Office 2007 UX Victory

office 12 video

Microsoft Office 2007 has a completely new user interaction paradigm.

The old interfaces for Microsoft Office 2003 (and earlier) organized the menu structures around features or capabilities. Each grouping represented tasks that appeared to be related in functionality. Unfortunately, this doesn’t help the user very much. The new interface is task-based, organizing capabilities around the task the user is currently performing. What the Office team has done is innovate, and the innovations differentiate them from every other business application I’ve ever seen.

Check out the 13-minute video from Microsoft’s user experience team here – it’s worth watching!

The best quote, from Jensen Harris, lead program manager, Office user experience team:

We realized that people weren’t trying to find commands in the product, they were trying to get great results.

This really captures the spirit of the changes – and they are compelling and dramatic!

Julie Larson-Green / Scoble interview on Channel 9

Robert Scoble filmed a 41-minute interview and demo with Julie Larson-Green, manager of the Office user experience team (Sep 2005). The comment thread on this Channel 9 post is really interesting – several people expected to be unimpressed and ended up converted. If you only watch one video, watch the other one. If you’re the kind of person who re-watches a DVD to listen to the director’s voice-over about the decisions, check out this one too.

Jensen Harris has a blog devoted to Office 2007

After you watch the video, and decide that you must have this app (and you will), subscribe to Jensen’s blog to stay up to date. He provides great insight into how the UX force at Microsoft has driven these changes and why. A great post to start with: Ye Olde Museum of Office Past (Why the UI, part 2). Note that it is a repost of an earlier article he wrote, but it is still fun to look back at the evolution of the Office UI.


Suddenly, OpenOffice seems much less compelling. The Office team has put together truly compelling and differentiated innovations. It isn’t the ribbon (which replaces the menus and toolbars) that makes it exciting, it’s the task-centric design of the application. The new UI puts Office 2007 clearly in the revolutionary-change corner of the magic square of innovation.

Epicenter software design – 37signals applies Kano

campfire (logo for Campfire a business group-chat application from 37signals)

Jason at 37signals has started a discussion about feature prioritization with his recent post. He describes the epicenter of software as the most important, must-have feature, and argues that this feature should always be built first, since without it you don’t have an application. This is the same approach we recommended in our recent post about prioritizing requirements with Kano analysis. The epicenter, while critically important, isn’t sufficient to drive the software’s success.

Epicenter == must-be requirement

Jason provides an example of a must-be requirement in his post:

But, what’s the epicenter? It’s the events themselves. Without those you don’t have a calendar. You can have a calendar without color coding. You can have a calendar without alarms. You can have a calendar without dragging and dropping. You can have a calendar without displaying a mini calendar of the next three months. You can have a calendar without a lot of things. But you can’t have a calendar without being able to add events. That’s where you start.

From our first post on Kano analysis:

These are the requirements that most people think about when they talk about requirements. These are the easiest requirements to elicit.

Stakeholders can usually tell us what they must have in the software. In our Am I hot or not? post on requirements prioritization, we saw that 37signals uses this as their primary criterion for inclusion in an initial release. They choose to put only essential, or ‘must-be’, requirements into the first version of the software.

Veto power

Jason points out that 80% of the value of the software comes from the epicenter feature. He’s probably right. But the must-be features don’t create a saleable product in and of themselves. They absolutely must exist, or the software won’t sell – they have veto power over success. Without these features, the product will fail. But product success also requires innovation, especially when we are entering a domain as second-movers.

Don’t forget innovation

Innovation – more precisely, differentiated innovation – sells software. It also provides a barrier to entry for competitors (for a short time). In Kano terms, these are the surprise-and-delight requirements. We should spend 80% of the first release on the must-be requirements and the other 20% of our time on differentiated innovation via surprise-and-delight requirements.
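Jason’s calendar example maps directly onto this rule. Here is a toy sketch of the prioritization (all feature names and costs below are made up for illustration): take every must-be requirement unconditionally, then spend whatever budget remains on delighters.

```python
# Toy Kano-style release planning: must-be features are non-negotiable,
# delighters fill the remaining budget, cheapest first.

def plan_release(features, budget):
    """Pick all must-be features, then fit delighters into the leftover budget."""
    must_be = [f for f in features if f["kano"] == "must-be"]
    delight = [f for f in features if f["kano"] == "delight"]
    plan = list(must_be)
    spent = sum(f["cost"] for f in must_be)
    for f in sorted(delight, key=lambda f: f["cost"]):
        if spent + f["cost"] <= budget:
            plan.append(f)
            spent += f["cost"]
    return [f["name"] for f in plan]

features = [
    {"name": "add events",   "kano": "must-be", "cost": 40},
    {"name": "month view",   "kano": "must-be", "cost": 40},
    {"name": "drag-to-move", "kano": "delight", "cost": 15},
    {"name": "color coding", "kano": "delight", "cost": 10},
]
print(plan_release(features, budget=100))
# ['add events', 'month view', 'color coding']
```

Note that “add events” gets in regardless of cost – the epicenter has veto power – while drag-and-drop waits for the next release.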

Innovation can take many forms, and we talk about different types of innovation in our post, Magic Square of Innovation.

Outsourcing Conversation – One Topic, Two Blogs, Three Cs

chatting ducks

Frederick Boulanger picked up on our earlier article about different application development outsourcing models and extended it with his own good ideas – making it easier for teams to decide which outsourcing model is right for them. Frederick identifies three key factors that determine which model is most likely to succeed for a given team: control, coordination, and communication. Anyone else want to join in? Blog away, and trackback or comment here.


Control

Are we outsourcing turn-the-crank activities? What about tasks that require only entry-level skills? Frederick points out that there is a continuum of desired control for any company or project. When we maintain as much control as possible, we outsource only very specifically defined work elements. When we have less need for control, we can outsource more and more of the process. If we keep design in-house but outsource implementation, we still have a lot of control over the final results.


Coordination

Coordination of activity is central to the success of any team. When everyone on the team is in the same room, coordination almost happens automatically. Having everyone in the same building helps, and having everyone in nearby time zones is the norm these days for non-outsourced projects. A lot of people treat outsourcing as synonymous with offshoring – the term for outsourcing to teams on other continents.

Technology has eased the pain of a geographically distributed team – instant messaging, email, collaboration applications, and video conferencing have reduced the need to travel, and made coordination easier for dispersed teams. Offshoring also creates temporally distributed teams, as team members are working in very different time zones. When team members are eleven and a half hours out-of-phase with each other, coordination becomes much more important.

Asynchronous teams also face an efficiency challenge. The typically iterative, tactical communication among team members may take only minutes when folks are working simultaneously. It can take a day per iteration when the communication happens only through email (with a multi-hour delay between each exchange of information).

Communication

Defining the communication process our team will use is important. Documenting this process helps set expectations with the outsourcing team, and is required for most CMMI levels.


Precisely defined tasks (the turn-the-crank variety) require far less iterative communication than more strategic activities like validating requirements or architectural design. Offshoring projects will be more effective initially when the majority of the outsourced work is partitioned into narrowly scoped deliverables.

After developing a good relationship with our outsourcers, and working the kinks out of our communication process, we can begin to relinquish control by outsourcing work with greater scope of impact. I’ve had excellent success in doing this by leveraging documentation to communicate with overseas team members. When giving them greater responsibility, I would require that they document their design and their test design for review prior to any implementation work. This dramatically reduced the effect of ambiguity in the requirements, while implicitly providing an active-listening feedback loop. Most often, misinterpretation of requirements resulted in errors in the test designs. It also had the side benefit of helping my teammates grow their skills more quickly as it forced more critical thinking to occur early in their process.


Communication is important to any outsourcing effort, and it becomes increasingly critical as we relinquish greater levels of control. Process execution and coordination determine how repeatably we can communicate at any level of control.

Learn to Fly with Software Process Automation

flying squirrel

We can reach the next step in our software process evolution by automating much of our process. Flying squirrels evolved a technique* to quickly move from one tree to another without all the tedious climbing and dangerous running. Software teams that automate their processes achieve similar benefits. Automation allows us to increase efficiency while improving quality. And we spend less time on tedious and mundane tasks.

Benefits of process automation

Tim Kitchens has a great article at developer.* where he highlights the benefits of process automation. Here are our thoughts on the benefits he lists:

  • Repeatability. The first step in debugging software is the isolation of variables. A repeatable build process eliminates many variables, and likely many hours of wasted effort.
  • Reliability. A repeatable process eliminates the possibility of introducing errors into our software by messing up a step in the build.
  • Efficiency. An automated task is faster than a manual task.
  • Testing. Reductions in overhead of building and testing allow us to test more frequently.
  • Versioning. The scripts that drive our build process are essentially self-documenting process documents. And tracking versions of the scripts provides us with precise records of the process used for prior builds. This documentation, and re-use of it can reduce the cost of running our projects at any CMMI level.
  • Leverage. We get much more efficient use of our experts’ time – they spend less effort on turn-the-crank processes and more effort on writing great software.

What and when should we automate?

The short answer is: automate everything, unless there’s not enough ROI. We have to examine each process we use to make a final decision – in uncommon situations, some automation will not make sense. Also, if we’re nearing the end of an existing project, there is less time to enjoy the benefits of automation, so we may not be able to justify the costs. We may be under pressure to deliver ROI within a short payback period. We would suggest exploring automation of the following activities:

Automate the build process

Most people underestimate the benefits of an automated build. The obvious benefit is time savings during the normal build cycle. Imagine the build takes an hour and usually happens twice per month. Two hours per month doesn’t seem like a lot of savings. However, chasing down a bug caused by the build process is at best expensive, and at worst nightmarishly expensive (because we aren’t looking in the right place for the problem). Add an estimate of the probability of this happening to the expected-value calculation of the savings.
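The expected-value arithmetic is simple enough to sketch. Every number below is a made-up estimate for illustration; plug in your own figures:

```python
# Back-of-the-envelope expected value of build automation.
# All inputs are illustrative estimates, not measurements.

builds_per_month = 2
manual_build_hours = 1.0     # hours spent per manual build
p_build_bug = 0.05           # chance a manual build introduces a bad-build bug
debug_hours_if_bug = 40.0    # hours chasing a bug hiding in the build itself

direct_savings = builds_per_month * manual_build_hours          # 2.0 hours
expected_debug_cost = builds_per_month * p_build_bug * debug_hours_if_bug  # 4.0 hours
total_monthly_savings = direct_savings + expected_debug_cost
print(f"{total_monthly_savings:.1f} hours/month")  # 6.0 hours/month
```

Even with these modest numbers, the expected cost of build-process bugs dwarfs the visible two hours of cranking the build by hand.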

The largest potential benefit of an automated build is changing the way we support our customers. Monthly builds aren’t scheduled because the business only wants updates once per month. They are scheduled at a monthly rate because that’s a balance someone has achieved between the cost-of-delivering and the cost-of-delaying a delivery. When we automate our delivery process, we dramatically reduce the cost of delivery, and can explore more frequent release schedules.

Automate unit testing

We significantly improve our team’s delivery efficiency by shortening the feedback loop for developers. On a utopian dev team, we would run our test suite as often as we compiled our code. Realistically, developers should run the relevant automated whitebox tests every time they compile. They should run the full suite of whitebox tests every time they promote code. And an automated process should run the full suite against the latest tip nightly (to catch oversights). It would be great if the check-in process initiated an automated test run and allowed a promotion only if all the tests passed.
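That last idea – gating promotion on a green test run – can be sketched as a hypothetical check-in hook. The only assumption is that the whitebox suite can be driven by a single shell command (the unittest discovery command below is just an example):

```python
# Sketch of a check-in gate: run the test suite, and allow the promotion
# only when the suite exits cleanly.

import subprocess

def promotion_allowed(test_command):
    """Run the test suite command; return True only if it exits with status 0."""
    result = subprocess.run(test_command, shell=True)
    return result.returncode == 0

# A promotion hook might then do something like:
#   if not promotion_allowed("python -m unittest discover -s tests"):
#       reject the check-in and report the failing run to the developer
```

The hook stays dumb on purpose: it doesn’t know anything about the tests themselves, only the exit status, so the same gate works for any suite the team adopts later.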

Automate system and functional testing

End-to-end and blackbox tests should be automated next. These are the big-picture tests, and should be run nightly on a dedicated box against the latest code base. We’ve had the most success with teams that used a nightly testing process that emailed test results to the entire team whenever the results changed. We’ve had the pleasure of working with a team that included performance testing in the nightly runs and reported statistically significant improvements or degradations in performance.
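The “mail the team only when results change” step is worth making concrete. In this sketch the test names are illustrative, and the email call is replaced with a plain notify callback so the example stays self-contained:

```python
# Compare tonight's results to last night's, and notify only on a change.

import json

def changed_tests(previous, current):
    """Map of test name -> new status, for tests whose status changed."""
    return {t: s for t, s in current.items() if previous.get(t) != s}

def nightly_report(previous, current, notify=print):
    diff = changed_tests(previous, current)
    if diff:  # stay silent when nothing changed, so the email means something
        notify("Nightly results changed: " + json.dumps(diff, sort_keys=True))

last_night = {"checkout_flow": "pass", "search": "pass"}
tonight    = {"checkout_flow": "fail", "search": "pass"}
nightly_report(last_night, tonight)
# prints: Nightly results changed: {"checkout_flow": "fail"}
```

Only the regression shows up in the notification; suppressing the no-change nights is what keeps the whole team actually reading these emails.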


Automate documentation

Generate tactical documentation whenever possible. Use javadoc or the equivalent to automatically generate well-formatted, organized reference materials for future developers.

Marginally relevant reporting

If our team is asked to report metrics like lines of code, cyclomatic complexity, or code coverage, we should automate the reporting. This work is the definition of tedium, while presenting tenuous value to the manager who requested it. If we can’t convince someone that they don’t want this data, we should at least eliminate the pain of creating it.

Code coverage statistics can provide better-than-nothing insight into how much testing is being done, or how much functionality is exercised by the test suite. But code coverage metrics carry the danger of false precision. There’s no way to say that a project with 90% code coverage has higher quality than a project with 80% coverage.


Automation makes sense. We save time, increase quality, and ensure a more robust process. We also spend less time on turn-the-crank activities and more time creating differentiated software.

*Technically, they don’t fly – they fall. With style.