Just because your requirement is not a user story does not mean you have to throw it out when planning your next sprint. Here is one approach (that is working in practice) for managing non-functional requirements with an agile team.
Product Backlog Stories
Every article* I can remember reading that explains how to manage a product backlog talks about user stories. Those articles are necessary, but not sufficient. You’ll create better products by developing them from the outside-in, with a user-centric point of view.
*One loophole – scheduling refactoring (or the payback of technical debt). A lot of articles talk about the need to do this, and refactoring is pretty much the opposite of a user story, since by definition, refactoring improves the software without introducing new capabilities. The best idea I’ve come across for incorporating refactoring as part of a sprint is to write a user story, with the system as the user, and the lead architect / developer as the primary stakeholder. This actually works really well, but it works by treating the work as a user story.
Atomicity
What is really important, when scheduling sprints (or releases, if you are not doing scrum), is that you are scheduling solutions to atomic, measurable, valuable problems. Atomicity is the reason for breaking epics (really big stories) up into smaller stories. It is also important that you communicate them unambiguously. That might mean writing user stories, or writing use cases instead when things are complicated.
The problem is, not every atomic requirement can be represented with a user story. Some things that must be scheduled simply are not stories.
Non-Functional Requirements
Non-functional requirements always seem to be under-emphasized when writing requirements. The Twitter fail whale has become famous because Twitter could not scale to meet the demands of a rapidly growing user base. Maybe the Twitter team planned for scalability, but demand simply outstripped it. Or maybe they failed to plan for it. Either way, they failed to meet the non-functional requirement of supporting the growth that they did have. (Un)Luckily, this type of problem self-corrects: scaling failures drive away users, reducing the need to scale, until balance is achieved.
Product managers and business analysts tend to neglect non-functional requirements. Unfortunately, this is especially true when managing with a focus on users and their goals – not because goals don’t drive non-functional requirements (they do), but because historically, non-functional requirements were treated as an afterthought. In reality, they are explicitly driven by goals. I proposed an equal rights amendment to the structured requirements domain model almost three years ago; in short, it explicitly calls out the relationship between goals and non-functional requirements.
Agile Non-Functional Requirements
Getting non-functional requirements into your sprint planning is actually not that hard. You only have to make two tiny adjustments to get from the waterfall world to the agile world.
The first adjustment is that you have to treat non-functional requirements incrementally. Non-functional requirements often affect all of the other requirements – so they seem massive and unwieldy. You have to decompose them. Consider the platform-compatibility requirements for a web application. You may have to support IE 6, 7, and 8; Safari on Windows; Safari on OS X; and Firefox on Windows XP and Vista. That could be incredibly daunting. So break it down. Your first group of users (key persona) are primarily Firefox/XP users, so the first platform you support is that one. The next big platform for your persona group is Safari on OS X. Add support for that next, without breaking the previous support for FF/XP. With each release, you add a platform (or two, or none). You are conspicuously addressing the needs of your target users. The key is that once support is added for a platform, all future development is required to “not break it.”
Each non-functional requirement is cumulative. This is the second adjustment. All development, once a non-functional requirement is in place, must continue to honor it. You wouldn’t break a previously released capability (functional requirement), so don’t break a non-functional requirement. You have to determine, in each sprint, if additional functionality is more important than additional platform support. And add in the platforms as they become the most important “next things to do.” In waterfall projects, I’ve seen many teams break and re-break platform support throughout the development process, with the knowledge that it only has to work “at the end.” Include platform-specific support in your continuous integration tests.
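For illustration, here is a rough sketch of what that cumulative rule could look like in a CI suite: a supported-platform list that only ever grows, with a smoke test parametrized over it. The platform names and the run_smoke_suite() helper are placeholders for whatever browser automation you actually use (Selenium Grid, a cloud testing service, etc.).

```python
# Sketch: platform support as a cumulative NFR in continuous integration.
import pytest

# Order reflects when support was added; entries are never removed.
SUPPORTED_PLATFORMS = [
    "firefox-windows-xp",  # first release: primary persona
    "safari-os-x",         # added later, without breaking FF/XP
]

def run_smoke_suite(platform: str) -> bool:
    """Placeholder for the browser automation you actually use."""
    return True

@pytest.mark.parametrize("platform", SUPPORTED_PLATFORMS)
def test_platform_still_supported(platform):
    # A failure here means new work broke a previously released platform.
    assert run_smoke_suite(platform)
```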
You have a launch event coming up in six weeks. You have an established user base. You’re also developing a key new set of capabilities for your product that you believe will be a big hit and drive significant growth for your product. You have a small group of people in a private beta, providing you with feedback about the new development. If you believe the launch will cause your customer base to double very quickly (maybe over a month), how do you plan for that? This is a serious scalability non-functional requirement.
Break the non-functional requirement up into cumulative requirements. Assuming your plan is to add 10,000 users “at once” – have your implementation team brainstorm what that could/would mean for the system. [Also, make sure you coordinate with your community manager and marketing folks, both to validate the anticipated growth, and to devise any contingency strategies in advance.] After talking with your development team, perhaps you learn that “at once” is a nuanced proposition. Literally at once is very bad. Spread out over a few days, not so bad. OK – you can deal with this too.
Imagine you already have the following non-functional requirements in place:
- The system must be available 24/7, with no more than one hour of down time per day, and no more than one outage per day.
- The system must respond in under 2 seconds for >95% of uses [in key user story].
- The system must respond in under 20 seconds for 100% of uses [in same key user story].
Based on what we’ve said above, these non-functional requirements must not be broken. They essentially define the acceptance criteria for our scalability requirements.
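For illustration only, those constraints translate directly into a measurable check. The sample timings and the nearest-rank percentile calculation below are assumptions; the thresholds mirror the 2-seconds-for-95% and 20-seconds-for-100% bullets above.

```python
# Sketch: the response-time NFRs as an automated acceptance check.
import math

def percentile(samples, pct):
    """Nearest-rank percentile; good enough for an acceptance check."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_response_time_nfr(response_times_s):
    return (percentile(response_times_s, 95) < 2.0   # 95% of uses under 2s
            and max(response_times_s) < 20.0)        # 100% of uses under 20s

# Timings (in seconds) from replaying the key user story under load:
print(meets_response_time_nfr([0.4, 0.9, 1.1, 1.8, 3.5]))  # False: p95 is 3.5s
```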
Consider the following as the non-functional requirements that must be deployed by the time of launch (six weeks, or three releases from now):
- The system must support 10,000 additional users added in the month following SXSW.
- The system must support 500 new users signing up and initiating [troublesome user story] within the same hour.
Your dev team does not really know exactly what needs to be done – they just know that the current solution won’t scale – it barely meets the existing non-functional requirements. They propose a couple redesigns that may get the job done. But they point out the need to actually test that the designs work. Schedule the following for the first release:
- The system must support 100 additional private beta users.
- The additional users will all have [troublesome user story] initiated within the same hour.
The second one is really a pragmatic solution – artificially creating a spike in demand, to test out the scalability of the new code. From that data, we can determine what we need to do to hit 10,000 additional users, and to support 500 concurrent [troublesome user story] instances. By managing expectations of the new users (that you’re queuing them up to test scalability), you can get the data you need. And you have a couple more iterations to improve if needed.
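Here’s a hedged sketch of that artificial spike: initiate the troublesome story for the ~100 beta accounts concurrently and collect timings to compare against the existing response-time NFRs. The endpoint, account names, and payload are hypothetical stand-ins for your real staging setup.

```python
# Sketch: artificially spike demand to test the scalability of the new code.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BETA_ACCOUNTS = [f"beta-user-{i}" for i in range(100)]
ENDPOINT = "https://staging.example.com/api/troublesome-story"  # hypothetical

def initiate_story(account: str) -> float:
    start = time.monotonic()
    requests.post(ENDPOINT, json={"account": account}, timeout=30)
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=len(BETA_ACCOUNTS)) as pool:
    timings = sorted(pool.map(initiate_story, BETA_ACCOUNTS))

# Compare against the NFRs that must not be broken.
print(f"~p95: {timings[94]:.2f}s  worst: {timings[-1]:.2f}s")
```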
As a benefit, you get to completely avoid the fail whale. The other half of this is making sure you can constrain the rate of new-user sign-ups. Work with your community manager and marketer to make sure you position this effectively. You are creating scarcity, which may increase demand. A crashed server won’t. If your team can’t support 10,000 in time for the launch, plan on 2,500.
Conclusion
You can plan and schedule more than just user stories. And your product will be better for it. Give those non-functional requirements a chance.
I like to attach the nonfunctional requirements to user stories. If you’re using index cards for user stories, you include acceptance criteria underneath the natural language description of the story. These acceptance criteria express the nonfunctional requirements.
And you’re right that you can spike or iterate on the stringency of the nonfunctional requirements. It’s very much like basing iterations on constraint-based use case versions.
Thanks Roger! I agree when the context of the non-functional requirement is scoped to the single story, then it really is an acceptance criterion. But too many non-functional requirements affect all stories (support screen readers, scale to 100 concurrent users, be PCI compliant, encrypt traffic, etc). I guess I’m trying to call out the difference between “verify that [story operates within parameter X]” and “all stories must operate within parameter X.”
One of the ways you can iterate is by relaxing the nonfunctional requirements for a subset of the user stories, even if those nonfunctional requirements will apply to all of the user stories as the product matures.
From your description of the scope of the impact of non-functional requirements, they sound like “cross-cutting concerns.” Aspect-oriented programming (AOP) was developed to deal with cross-cutting concerns.
AOP works by intercepting interobject messaging, doing some processing, and then sending the message on to the original target object. It’s a system of sideband consumption.
You can use it as a means of extending your testing budget. To do this, you could code the core, and never change it. Instead, when you need it to change, you would build the change as an aspect. Since you didn’t change the core code, you don’t have to do regression on it. You might still do that until you are convinced, but most of the testing would be focused on the new code.
In terms of non-functional requirements (NFR), you would write your core at some level of “how well”. Then, when you wanted to increase your “how well” by some factor, you would write an aspect to do that. It wouldn’t affect the full scope of the non-functional requirement. It would affect only some small portion. You could then write another aspect for the next small portion. There is no need to break the whole thing, or to have the whole thing comply with a single metric NFR.
There might even be a business case for tiered compliance with the NFR. Sure, it’s nice if everything is operating on the same NFR levels, but it’s more important that the premium functionality meets it, or maybe the base functionality depending on your pricing scheme, and segmentation.
It might sound like a nightmare, but with an AOP approach your NFRs would be encapsulated, where today they are spread throughout your code.
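A loose Python analogue of the idea (the functions and the 2-second limit are invented for illustration): the core stays untouched, and an NFR increment is layered on as a wrapper that intercepts the call, does its processing, and forwards to the original target.

```python
# Sketch: an "aspect" that adds response-time monitoring without editing the core.
import functools
import time

def response_time_aspect(limit_s):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = func(*args, **kwargs)   # forward to the original target
            elapsed = time.monotonic() - start
            if elapsed > limit_s:
                print(f"NFR breach: {func.__name__} took {elapsed:.2f}s")
            return result
        return wrapper
    return decorate

def search_products(query):  # "core" code, never edited
    return [p for p in ("anvil", "rocket") if query in p]

# Apply the aspect only to the slice the NFR covers today; widen it later.
search_products = response_time_aspect(2.0)(search_products)
print(search_products("an"))
```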
I can’t say I fully agree with the approach of adding one or a couple of non-functional requirements to each sprint. To keep the browser example, I would definitely think from the very beginning that the application should support all target browsers. With Windows and OS X it’s a bit easier, since browser capabilities are pretty similar. However, if you also want the site accessible from mobile devices, it all becomes tricky. If you rely heavily on JavaScript, to pick just one example, it won’t work on many handsets.
It’s pretty much the same with performance-related non-functional requirements. Let’s say our goal is 20k requests per hour (whatever a request is – it doesn’t matter). At the moment the application gets about 7.5k requests at peak. If we put our milestones at 10k, 12.5k, 15k, 17.5k, and finally 20k for consecutive sprints, we risk that our current architecture will die in the middle of the march and all that work will have been wasted. I would rather spend some effort in one of the early sprints (possibly the very first), or at the beginning of the project, on extensive load testing, to learn how far the current architecture can be pushed. If the results weren’t satisfactory, I would start working on the major changes that enable performance improvement from the beginning (hopefully the second iteration) instead of somewhere in the middle of the process.
Non-functional requirements are always tricky to deal with, whether your project is done the agile way or you’re a waterfall zealot (are there any of those left?), because they’re hard to pack into the frames we use for functional requirements.
Late to the party, but the Cranky PM likes…
Just a follow-up regarding nonfunctional requirements and whether they apply to the entire system or just a subset of user stories. Let’s take one of the examples:
“The system must be available 24/7, with no more than one hour of down time per day, and no more than one outage per day.”
When you think about it, this availability requirement likely shouldn’t apply to all – or even most – of the user stories of the system. Likely, there is system functionality for which it would be acceptable to be unavailable for more than one hour per day.
For example, imagine it’s possible for the user to rename a system configuration profile. Would it really be a significant problem if this functionality were not available for an entire day? Maybe, but maybe not.
As a general rule, a product manager should treat a system-wide nonfunctional requirement with skepticism and probe to determine the subset of user stories to which it applies.
Wow – some great conversation here that I haven’t participated in – my bad!
@Pawel – There’s value in drawing a distinction between “when do we need it?” and “when do we build it?” when expressing more-is-better non-functional requirements. As a former code-slinger, for your “increasing support for # of requests” example, I agree with your inclination to build the initial solution such that it can support all of them. However, I also appreciate the trust that is implicit in having the needs expressed to me in priority order, leaving the implementation sequence up to me – so that I can choose how to build the solution.
When I put my agile hat on, there is really clear guidance to only build “just enough” – satisficing the needs of the moment (or of the iteration). This can be really tough as a developer, where the costs of discarding an earlier implementation are very visible, but the benefits of doing so are often hidden (from development). Here’s how I think about it.
Think about the model that says bugs cost 10x to fix in the field (versus QA), 10x to fix in QA (versus dev), 10x to fix in dev (versus design), and 10x to fix in design (versus requirements). This is true not just because of the cost of change, but because of the opportunity cost of change. The same model applies to capturing value. Delaying a properly prioritized requirement (story) to implement the “full solution” of a more-is-better non-functional requirement follows a comparable cost curve. The cost of refactoring is far lower than the opportunity cost of delaying the other, valuable functionality.
Completely agree about the trickiness of this!
@Cranky – thanks for the props!
@Roger –
GREAT point. A particular non-functional requirement does not necessarily apply to all of the stories. Your system availability requirement is a fantastic example.
I haven’t done the math, but I think the bias is towards applicability to “most stories” for examples like this one. Most stories will be end-user stories (versus other stakeholders like sysadmins), so uptime requirements will tend to apply to most stories. I’m basing this purely on hazy memories of past projects, and a quick look at a current project.
Other non-functional requirements (like platform support), will also apply to “most stories”, again, because most stories bias towards end users (or primary personas among end users).
I completely agree that the default should not be “all”, now that I think about it more (thanks again!). The expectation I will approach non-functional requirements with is an assumption of “most,” and then a look to see which stories can be excluded (versus identifying the ones to include).
Great stuff.
Scott, I agree that many nonfunctional requirements will in a sense apply to most stories. However, I’d clarify to state that most attributes (e.g. availability or, more specifically, uptime) will apply to all or most stories, and that what Weinberg calls constraints (the measurement tied to an attribute, e.g. uptime >= XX%) will often vary by story.
For example, for most web sites, it’s much more important for it to be available to visitors than it is that the administrator can change the content or formatting of the web site.
@Roger – Great distinction! I’ve always approached it as understanding the constraints that truly apply to the user’s context, so alignment with stories makes a ton of sense. Pages on an eCommerce site may “need” to load in under two seconds (generally), but search results may “need” to be presented in under a second – or vice versa.
I put “need” in quotes because response time (an attribute) leads to a constraint (under two seconds) that is a (current) manifestation of a more-is-better requirement, which has to clear a must-be hurdle, then exhibits diminishing returns.
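To make the attribute-versus-constraint split concrete, here is a tiny sketch; the story names and thresholds are invented. One attribute (response time), with a constraint that varies by story:

```python
# Sketch: same attribute, different constraint per story.
RESPONSE_TIME_CONSTRAINTS_S = {
    "browse product page": 2.0,
    "show search results": 1.0,
    "admin: edit page template": 10.0,  # relaxed; not a visitor-facing story
}

def meets_constraint(story: str, observed_s: float) -> bool:
    limit = RESPONSE_TIME_CONSTRAINTS_S.get(story)
    return limit is None or observed_s <= limit

print(meets_constraint("show search results", 1.4))  # False: too slow for this story
```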
I love the nuances of this stuff!
Wouldn’t an NFR impact implementation, rather than just design? Who picks the algorithm, and why would that choice affect the architecture of the system?
If you want to further develop the notion that an NFR applies to some, but not all, features, look at the frequency of use of all features. It should be a power law distribution. Where a feature network extends deeply into the long tail, that’s where the opportunity lies to live with a lower service level.
Interesting conversation. I admit, NFRs are the one area that makes me a tad uncomfortable in the Agile framework, too. It seems to me, the apparent disregard for NFRs in the “biz value target bullet” approach is best countered by adjusting our perspective of the “R” in NFR. As previously stated, these qualities and constraints can be baked into the Acceptance Criteria, and they can be part of our Definition of Done for various stories. But shouldn’t the non-functional expectations also be part of “Sprint 0,” which sets up the infrastructure? If so, then Sprint 0 can be used as a touchstone for refactoring during development sprints. And to round it out, the non-functionals can be part of a periodic Hardening Sprint, the purpose of which is to ensure a robust infrastructure.
Hey Karen, thanks for the great comment and welcome to Tyner Blain!
I don’t bake the NFRs into sprint 0, because (1) they tend to change – since they are usually overlooked, they tend to be unearthed or refined after releases – and (2) they provide a great way to split epics into workable stories.
As an example: As a reseller, I want to receive your OEM product catalog electronically, so that I can sell your products to my customers. For sprint 1, I may only release to resellers in the US (deferring the need for translated product descriptions), or to only 10 key partners (deferring the need to scale), or only provide unformatted data (deferring the need to provide semantically marked-up HTML) for partners that create “parts lists.”
I’ve never heard of a “Hardening Sprint” before – but I have created “compliance themes” and “scalability themes” that defined specific NFRs designed to meet goals around compelling events (analyst approval, trade-show demo, etc). The only difference, I think, is to take it one step further and ask the question – “why do you need a robust infrastructure?” Maybe it is risk mitigation (but risk of what, exactly?).
how does adding a functional requirement affect non-functional requirements, and vice versa?
Hey prot, thanks for the question, and welcome to Tyner Blain.
Let me try and reword your question into two questions, and please let me know if I got it right. I’ll answer each separately.
When implementing new functional requirements, for a product that already has some non-functional requirements (NFRs) “completed”, what is the impact on those non-functional requirements?
The really short answer is that the NFRs are unaffected. The short answer is that they are unaffected, because they still apply – however, they may apply in new ways. The long answer is that how they apply to previous functional requirements is unchanged. They now also apply, when appropriate, to the new functional requirements. As Roger points out in an earlier comment, the existing NFRs may not affect the new requirement – you may have an NFR that says that search results are generated in under 2 seconds, so that NFR would not make sense when adding support for two-factor authentication. However, adding the two-factor authentication capability does not “release” your product from the 2-seconds-per-search requirement. That NFR still applies. If, however, the new functional requirement is to allow people to find their friends in the system, and you happen to implement that with search, then the old NFR still applies – the search for friends must return results within 2 seconds.
For the reverse of the situation – how does adding new NFRs impact already implemented functional requirements, the answer is different (but the derivation of the answer is the same).
The new NFR will impact how all relevant existing functional requirements perform. Imagine that you have implemented search already, for product data, and user reviews. When you add the NFR to assure that search results are displayed in under 2 seconds, that NFR will apply to all existing search capabilities. It will “technically apply” to everything that already exists, but only makes sense in some situations.
If, for example, your previous “find your friends” capability was implemented by always showing all of your friends on the screen – users are never asked to “search” for them – then it would not really make sense. Imagine that, over time, adoption of your product has grown, and you decide to refactor the user interface because showing all friends all the time creates a bad experience. When you refactor your product to enable “friend searching,” the (previously implemented differently) “see your friends” capability would then need to honor the NFR for searching in less than 2 seconds.
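A small sketch of how that plays out in a test suite (the capability functions are placeholders): once a capability is built on the search path, it joins the list that the search response-time NFR is checked against.

```python
# Sketch: the search NFR travels with every capability built on the search path.
import time
import pytest

def search_products(query):   # original search-backed capability
    return [query]

def search_friends(query):    # newer capability, also built on search
    return [query]

SEARCH_BACKED_CAPABILITIES = [search_products, search_friends]

@pytest.mark.parametrize("capability", SEARCH_BACKED_CAPABILITIES)
def test_search_results_under_two_seconds(capability):
    start = time.monotonic()
    capability("alice")
    assert time.monotonic() - start < 2.0
```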
Does that address your question?
Imagine that the entire project lacks new functionality. Say we inherited a home-grown authentication/access system as part of our application infrastructure but that system costs a lot of money to maintain or is based on a technology we no longer want to support, so we are going to replace it with an Open Source or Commercial alternative. The purpose of the project is to reconfigure our application system to rely on the new authentication/access mechanism. There is no new functionality, but we want to be certain the existing functionality is unaffected by the infrastructure change. What do the items in our backlog look like?
Hey Skip, thanks for the great question and welcome to Tyner Blain!
I wrote a couple articles about “migration projects” in the past, that may answer your questions. Or they may lead to more specific questions.
The crazy short answer is “the backlog items look the same” – you are standing up capabilities (in a new system) to replace capabilities* in the old system.
*You only need to replace the capabilities you actually use – some of the functionality that exists today in the home-grown system is undoubtedly unused. And some capabilities are more valuable than others (giving you prioritization).
Here are the previous articles:
Let us know (here, or in the discussions on those articles) if you have more questions, and thanks for contributing to this discussion!
Scott (@sehlhorst)
Skip, you need to consider whether the backlog items are really the same.
A proper backlog contains functional requirements (expressed as user stories or use cases that are not fleshed out with individual steps) paired with acceptance criteria that represent the nonfunctional requirements attached to each function. Together, these items state the least stringent conditions that must hold to verify that you’ve solved all the problems the system is intended to solve.
Presumably, there is a reason that you are restructuring the system. What is the problem you are trying to solve? You’ve stated it relates to maintenance costs. Maintainability is a nonfunctional requirement, and you can attach measurable acceptance criteria that must hold to know the system has achieved an acceptable level of maintainability and cost of maintenance.
Thus the backlog items will NOT be the same. In particular, the acceptance criteria will need to change to reflect the more stringent maintainability requirements.
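One made-up illustration of that shape for the auth migration: the backlog item keeps the story-plus-acceptance-criteria structure, but the criteria now carry the more stringent maintainability constraint. The specific thresholds and wording are invented.

```python
# Sketch: a migration backlog item with its acceptance criteria, including NFRs.
backlog_item = {
    "story": "As a returning user, I can sign in via the replacement auth system",
    "acceptance_criteria": [
        "Existing accounts sign in without re-registering",             # unchanged behavior
        "Sign-in completes in under 2 seconds for 95% of attempts",     # existing NFR
        "Auth component maintenance effort <= 2 developer-days/month",  # new maintainability constraint
    ],
}
print(backlog_item["story"])
```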
Great additions, Roger, thanks!
I was confused at first by the “backlog items will NOT be the same” – then I realized you meant “the backlog items used to create the existing system cannot be re-used via copy-paste” – because new acceptance criteria (maintainability, not-on-platform-X) were not present previously. Agreed.
The structure / format of requirements for a migration project are still the same (as we both alluded) – stories + acceptance criteria.
I like that you called out maintainability as a measurable NFR. It opens an interesting sub-topic, how to (a) define acceptability and (b) measure how good we were.
If we were to migrate from (for example) Java to .NET, so that we can save money, the project should be considered a failure if that did not actually result in saving money – because ‘maintaining in .NET turned out to be just as expensive as maintaining in Java.’
How that plays out will ultimately be a function of the people you have available to maintain the system. Good programmers usually have (a) languages they know now, and (b) languages they don’t know yet. Unless you’re doing something dramatic like moving from a very unsuited (to the task) language to a very-suited-to-the-task language, I wouldn’t expect to get efficiency gains from a rewrite. Re-architecting often can make sense. Moving to a highly-cohesive, loosely-coupled design in a complex system enables rapid parallel development and increases business agility (or lowers the cost to be agile).
When I’ve seen “technology mandates” in the past, they’ve usually been expressed as constraints that are trickled-down mandates. They can still be articulated as acceptance criteria. The “is it worth it?” discussion usually (in my experience) happens at a higher level than, and outside the context of, a particular project. My mental model is company-wide standardization. Is that a good requirement? Depends on your team and your staffing models and your recruiting capabilities – as well as the product. Sometimes yes, sometimes no.
The biggest impact someone can have on a “migration project” is to make sure that the only stuff that gets (re)built is the stuff that still provides value.
I cringe whenever I hear “like for like” or “pin-compatible” migration projects. They might make sense, but are (I think) more likely to be “we’re too busy/lazy/rushed to define what _today’s_ requirements are. Let’s just blindly use the requirements from 5 years ago.”