Welcome, readers of the Carnival of Enterprise Architecture, Sharp Blue, and the Carnival of Software Development! We hope you like what we’ve got here – please let us know what you think, and enjoy yourselves!
Roger made a great suggestion in the comments on our previous two-part post on scheduling requirements changes based on complexity. He pointed out that we never explained what timeboxing is, even though our proposed process implicitly relies on its principles. In this post, we explain timeboxes and how they are used.
Definition of timebox
A timebox is a fixed unit of development capacity. An easy way to visualize a timebox is as a two-dimensional graph: the vertical axis is the cost of the development team (per unit time), and the horizontal axis is time. The longer an iteration is, the wider the timebox.
The important thing to notice is that with cost and time fixed, the capacity of the timebox – the area of that rectangle – is fixed. There is only so much that can be accomplished with a given team in a given amount of time.
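The fixed-capacity idea reduces to trivial arithmetic: capacity is cost rate multiplied by duration. A minimal sketch, using illustrative figures we've made up (the post itself names no numbers):

```python
# Capacity of a timebox: team cost rate (fixed) x iteration length (fixed).
# The figures below are hypothetical, chosen only for illustration.

def timebox_capacity(cost_per_week: float, weeks: float) -> float:
    """Total development capacity (in cost units) of one timebox."""
    return cost_per_week * weeks

# A hypothetical team costing 10,000 per week, on a 3-week iteration:
capacity = timebox_capacity(10_000, 3)
print(capacity)  # 30000.0 -- fixed once cost and time are fixed
```

Once both axes are pinned down, the only remaining variables are which work goes into the box and at what quality.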
A unit of work
A unit of work represents both the functionality being delivered and the quality of that functionality. This is a concept that many managers fail to grasp.
To deliver the functionality that supports any particular requirement, we can think of the work as having two tightly linked components: implementing the functionality, and implementing it with good quality. Poorly written code, for an isolated requirement, can take less time to produce than well written code. The extra time spent doing it right is part of the quality component, as are writing tests and documentation (when appropriate).
How big should a timebox be?
With some exceptions, anywhere from 2 to 4 weeks. Smaller, more tightly knit teams can operate with shorter timeboxes, and teams with less release-process overhead can run shorter timeboxes cost-effectively. There’s a good post and discussion at The Pragmatic Architect on how long to make timeboxes. Mishkin Berteig also has a good post, with some differing opinions, identifying the pros and cons of short iterations.
We think a good approach is to start with a 3-week cycle and extend or shorten it based on what our stakeholders prefer, balanced against the reality of our development environment. For larger teams, we usually end up with a 4-week cycle. Keep in mind that the length of the cycle can be changed as we get feedback on our process efficiency.
Filling a timebox
We can fill up a timebox with the work-units representing several requirements. Ideally, they are the highest priority requirements. Different requirements will take different amounts of time to implement. We can visualize this in the following diagram, which shows a timebox with the “original” schedule.
We see that each work unit has both a functionality and a quality component. We don’t want to intentionally plan to deliver functionality without quality.
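Filling a timebox can be sketched as a greedy scheduling step: take the highest-priority requirements first, counting each work unit's full size (functionality plus quality), and stop when the box is full. The data model and figures below are our own assumptions, not from the post:

```python
# Sketch (assumed data model): fill a timebox with the highest-priority
# requirements first. Each work unit's size includes both its functionality
# and its quality component -- we never plan one without the other.

from dataclasses import dataclass

@dataclass
class WorkUnit:
    name: str
    priority: int         # higher = more important (our convention)
    functionality: float  # effort to implement the feature, in person-days
    quality: float        # effort for tests, docs, doing it right

    @property
    def size(self) -> float:
        return self.functionality + self.quality

def fill_timebox(units, capacity):
    """Greedily schedule the highest-priority units that fit in the capacity."""
    scheduled = []
    remaining = capacity
    for unit in sorted(units, key=lambda u: u.priority, reverse=True):
        if unit.size <= remaining:
            scheduled.append(unit)
            remaining -= unit.size
    return scheduled

backlog = [
    WorkUnit("reporting", priority=3, functionality=5, quality=2),
    WorkUnit("login", priority=5, functionality=4, quality=2),
    WorkUnit("export", priority=1, functionality=6, quality=3),
]
plan = fill_timebox(backlog, capacity=14)
print([u.name for u in plan])  # ['login', 'reporting'] -- export waits
```

Note that quality is baked into each unit's size up front; shrinking it to fit more in is exactly the anti-pattern discussed next.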
Dealing with new requirements
The previous posts on scheduling were about how to manage the deadlines for receiving change requests. In those posts, we didn’t talk about how to manage the schedule after receiving a request. There are four ways to adjust the plan once a request has been approved and committed:
- Sacrifice quality to increase functionality
- Increase cost to increase functionality
- Increase time to increase functionality
- Delay some functionality to deliver other functionality
1. Sacrifice quality to increase functionality
We can – and too many teams do – sacrifice quality to deliver extra functionality without impacting costs or delivery dates. When we take this approach, we incur a code-debt: a short-term loan against our code-base, taken out to resolve otherwise impossible constraints (no extra budget, can’t miss the deadline, can’t delay anything). Poor quality code comes with a long-term cost. It introduces risk into the delivery – the cost of poor quality – which manifests as a negative expected value (think of it as the interest on the loan). Poorly written code also makes it more expensive to write new code in the future (think of this as the principal on the loan). Until we invest time in fixing the quality of the code (refactoring, testing, etc.), we will continue to incur these costs.
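The loan analogy can be made concrete with a toy model. All of the rates below are assumptions we've invented for illustration; the point is only that interest accrues every iteration until the principal is repaid:

```python
# Illustrative model of code-debt (assumed numbers throughout): skipping
# quality work saves time now, but we pay "interest" (rework, defects, risk)
# each later iteration, until the "principal" (the deferred quality work,
# now more expensive) is finally repaid by refactoring.

def total_debt_cost(time_saved, interest_rate, iterations_until_refactor,
                    principal_growth=1.5):
    """Lifetime cost of a quality shortcut, in the same effort units.

    interest_rate: fraction of the saved time paid back each iteration
    principal_growth: how much more the deferred work costs when finally done
    """
    interest = time_saved * interest_rate * iterations_until_refactor
    principal = time_saved * principal_growth
    return interest + principal

# Skipping 5 days of quality work, paying 20% interest per iteration,
# and refactoring after 4 iterations:
print(total_debt_cost(5, 0.20, 4))  # 11.5 days -- over double the 5 "saved"
```

The longer the refactoring is deferred, the more the interest term dominates, which is why this option is the most expensive one that looks free.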
The following diagram shows what this would look like.
We have sacrificed quality on some requirements (work components) including the new (red) requirements in order to squeeze them into our timebox.
2. Increase cost to increase functionality
Another approach is to increase the capacity of the team to meet increased demands. This can mean extra hours for the current team, re-tasking people from other projects to join the team, or bringing in contractors to temporarily increase capacity.
The following diagram shows that by increasing the cost (and shuffling requirements around visually) we can deliver more functionality without sacrificing quality.
There are always inefficiencies in adding capacity. If we add hours, people get burned out. If we add people, there is overhead in getting them up to speed. The benefit of this approach is that we sacrifice neither quality nor timing to deliver the new requirements.
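Those inefficiencies mean the net gain is less than the headcount suggests. A rough sketch of that intuition, with ramp-up and mentoring rates that are pure assumptions on our part:

```python
# Illustrative (assumed) model of the overhead of adding people mid-iteration:
# newcomers contribute below full capacity while ramping up, and mentoring
# them costs the existing team some of its own capacity.

def effective_extra_capacity(new_people, weeks, ramp_up_efficiency=0.5,
                             mentoring_cost_per_person=0.25):
    """Net person-weeks actually gained by adding people to a running iteration."""
    contributed = new_people * weeks * ramp_up_efficiency
    mentoring = new_people * weeks * mentoring_cost_per_person
    return contributed - mentoring

# Adding 2 people for the remaining 2 weeks of an iteration:
print(effective_extra_capacity(2, 2))  # 1.0 net person-weeks, not 4
```

The exact rates vary wildly by team; the model only captures the direction of the effect, echoing the familiar point that added capacity arrives heavily discounted.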
3. Increase time to increase functionality
When we have the ability to do so, extending a particular release may make sense. We can extend the period of a timebox, say from 4 weeks to 5 weeks, to incorporate additional functionality. The following diagram shows how a time extension creates more capacity for implementing the requirements.
Deadlines are often arbitrary, so we should always explore the possibility of delaying the end of the timebox. That said, we shouldn’t extend a timebox by more than 50%, or we lose the benefits of incremental delivery.
4. Delay some functionality to deliver other functionality
Many times, there are political ramifications to delaying the release, and budget constraints are more common than they were ten years ago. When we can neither extend the time nor increase the cost, we face a decision: either sacrifice quality or delay other functionality. Since we’ve prioritized our requirements based on the value they provide to the business, it is usually an easy decision.
First, we identify which previously scheduled requirements are lower priority than the new requirements. Then we determine which of those requirements have the lowest cost of delay. After confirming with our stakeholders, we defer those requirements to the next iteration. The following diagram shows this.
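The two filters just described can be sketched as a small selection step. The field names and numbers are assumptions for illustration only:

```python
# Sketch (assumed fields): to make room for a new requirement, consider only
# scheduled requirements with lower priority than the new one, and defer
# those with the lowest cost of delay first.

def pick_to_defer(scheduled, new_priority):
    """Return scheduled requirements eligible for deferral, cheapest-to-delay first."""
    eligible = [r for r in scheduled if r["priority"] < new_priority]
    return sorted(eligible, key=lambda r: r["cost_of_delay"])

scheduled = [
    {"name": "audit-log", "priority": 2, "cost_of_delay": 1},
    {"name": "billing", "priority": 4, "cost_of_delay": 9},
    {"name": "theming", "priority": 1, "cost_of_delay": 3},
]
candidates = pick_to_defer(scheduled, new_priority=3)
print([r["name"] for r in candidates])  # ['audit-log', 'theming']
```

Here "billing" is never a candidate because it outranks the new request; the stakeholders then pick from the cheapest-to-delay items on the list.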
We’ve talked in the past about how scope-creep should be managed as a relationship, not a math exercise. When we establish rational deadlines for change requests and combine them with the four techniques described here, we can offer our stakeholders a meaningful set of choices.