Monthly Archives: November 2005

Feedburner Feed Added

I’ve added a feedburner feed for this blog – http://feeds.feedburner.com/tynerblain

Until wordpress.com allows us to customize the templates, I can’t embed a link to the feedburner feed. We get some tracking stats from people who subscribe via that link, so if you’re subscribing – take a second to change to that link.

If you’re not subscribing – leave a comment and tell us why not.

Scott

[update 29 May 2006: You can also add Tyner Blain directly to your Google Toolbar]

Stop Wasting Your Time – Don’t Bother Writing Functional Specs

burnt spec

OK – got your attention.

But I wasn’t being original. 37signals got my attention with their article, Getting Real, Step 1: No Functional Spec. It was written back in Feb 2005, and when I found it today, it had 69 comments (and was closed to future comments). Read the article to get started, and then read the comments – a fantastic debate full of insights on both sides of the argument.

My English teacher in high school had a great nugget of wisdom about writing argumentative essays. Mrs. Voss taught us that the viewpoints with the greatest potential for good writing were the ones with the strongest opposing viewpoints. I took her advice in college and wrote a compelling argument about why it was important to kill baby seals. My college professor believed that the grade of a paper was driven by the inherent “rightness” of the idea, and gave me a D. Eventually I realized that the professor had taught me that knowing your audience is even more important than writing with technical proficiency.

Since you just finished reading a bunch of great comments on the linked article, I won’t reiterate those arguments here. I will also try to avoid what Scott Adams noticed people do – “misinterpret [their] position, and then attack that misinterpretation.”

Creating a document no one uses is a waste of time. Spending time on documents people do use isn’t.

Don’t do it. Don’t use a functional spec to get superficial agreements and navigate the bureaucracy that accompanies large projects. Don’t validate the specification trivially. Don’t deploy with a waterfall process (the spec is done, whew, now – on to design) and never revisit the spec. Don’t work with new developers, or remote developers, or anyone else who doesn’t have the context of direct eyeball-to-eyeball conversations with the customers. Also don’t hire any programmers without complete domain expertise in the customer’s business.

A functional spec is a single artifact, generated from a living repository of information about what needs to happen for a software project to succeed. It is not a static document. A solution can be described differently for different audiences and to achieve different objectives. The source code is a description of the functionality – a quite explicit one, describing exactly how it works (if you know how to read it). A design document, or a UML diagram, or an E-R diagram is another description of functionality – it describes how a system is supposed to work. Most technical people can read a design document, but stakeholders may still be in trouble.

A functional spec describes what a system is supposed to do without describing how the system does it. Larger audience, different objective. “Above” the functional spec lives a set of use cases, which describe how someone uses the system – again, different objective. “Above” the use cases lives a product strategy, or the objectives and benefits inherent in the system – what the system should enable, and why the system should do it.

When you have a cohesive strategy (Use price optimization software to maximize profits for my online bike shop), combined with complete, correct and consistent requirements (The software will model demand versus price…, The system will calculate the optimal prices based on that model, Product managers will approve prices before they are published to the website, and more…), you can create elegant designs, and implement great code.

The challenge is in using the functional spec appropriately in communication. Each “consumer” of the spec has a different objective – validating ROI, scheduling user-training, scoping the delivery, and more. It makes sense that they will want to see different pieces of the repository, presented in different ways. Misuse of a functional spec is a bad thing, but so is misuse of a car or an education. That doesn’t mean a spec is a bad thing, any more than an education is.

Waterfalls are just as bad for requirements as they are for development.

When you go sailing, do you spend all night reviewing the charts and planning your course, and making notations on the map? The next day, when you are on the boat – do you set the heading, validate that the compass reads what it’s supposed to, and then lash down the wheel and go below decks to watch TV until you arrive at your destination? Writing a functional spec, “freezing” or “baselining” it, and then moving on to development is no different.

My analogy is a little flawed – software is a moving target. Imagine that you’re sailing not toward a fixed harbor, but toward another sailboat, which you expect to be at location X at time Y. Kent Beck got it right with Extreme Programming Explained: Embrace Change (2nd Edition). Iterative development is key to success. If you try to make that work with a static spec, you’ve missed the boat. During the time that you spend developing the first rapid prototype, or the first release of the software, your goals have probably changed. Your prioritization of features almost certainly has. You develop new insights. Your environment changes. Your competition is not complacent. Revisit the specification the same way you revisit the implementation. Go back on deck, check your position and heading, and update your course.

Now those traceability diagrams have a lot more value. Your ability to describe the big picture, change it, and quickly appreciate the ramifications of making those changes will make you more competitive.

[Update: Waterfalls aren’t always bad. In this comparison of waterfall and incremental processes, we see when they can be good]

Conclusion.

I had the opportunity to meet with Kent a few years ago, and he made a great point. His experience was that people who were burned by lack of testing swear by testing. People whose projects failed due to lack of design tout the benefits of up-front design. People who failed by delivering great software that didn’t meet the users’ needs become champions of requirements. We’re all reactionary, says Kent.

The key is to realize that all of these things are important, but none of them is a panacea. Leave any of them out at your own risk. Over-emphasize or mis-apply any of them at your own cost.

Requirements are important. Functional specs are useful. Just don’t misuse them.

How To Deal With Untestable Requirements – Rewrite Them

I just found Roger Cauvin’s blog, Cauvin, and was reading through his archive. I came across a posting from July, Should all Requirements Be Testable, that is worth thinking about.

In his post, Roger uses an example of an untestable requirement: “We might specify that the car should last seven years without repairs as long as the owner maintains the car according to a certain maintenance schedule and doesn’t have a collision.” He makes a great point: just because you can’t directly test a requirement, you should not ignore it. And I agree.

The premise behind the rule that requirements must be testable is driven by the goal of avoiding ambiguous language in your requirements. Statements like “the application must have a clean user interface” or “search response times must be fast” are also untestable, but more because of language than anything else.

You can rewrite these hypothetical ambiguous requirements in a testable way –

“The application will meet the (attached) user interface guidelines,” where the UI guidelines describe detailed, inspectable criteria (a common navigation bar at the top of each page, no horizontal scrolling on an 800×600 interface, controls at least 10 pixels apart, etc.).

“Search results must return the first page of results within 2 seconds, when the user is connected to the same LAN as the server. If there are multiple pages, each additional page must be presented to the same user within 2 seconds of selection.”
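Once a requirement is rewritten this way, it can be verified mechanically. Here is a minimal sketch of what an automated acceptance check might look like; the search function is a hypothetical stand-in, not any real API:

```python
import time

def first_page_of_results(query):
    """Hypothetical stand-in for the real search call,
    purely for illustration."""
    time.sleep(0.05)  # simulate some work
    return ["result-1", "result-2"]

def check_search_response_time(query, limit_seconds=2.0):
    """Return True if the first page of results comes back
    within the specified limit."""
    start = time.perf_counter()
    results = first_page_of_results(query)
    elapsed = time.perf_counter() - start
    return elapsed <= limit_seconds and len(results) > 0

print(check_search_response_time("mountain bikes"))  # → True
```

The point isn’t the five lines of code – it’s that the rewritten requirement gave us an unambiguous pass/fail criterion to automate.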

Back to Roger’s example…

While you can’t wait 7 years to test the car before you decide to build it, you can rewrite the requirement to make it testable.

First, I would point out that the example requirement is ambiguous. Do they mean that none of the cars will have a warranty repair? Or no more than 1% of the cars? The requirement needs greater specificity – let’s add the 1% number. We will also want to specify “normal usage patterns” – which can mean no off-road driving for sedans, a specified temperature range, maximum miles per month, etcetera.

We still can’t directly test the requirement. And it’s not actionable – you haven’t told the engineers how to know when they’ve completed the design of the car.

How do car manufacturers build quality cars today? They test components and assemblies of components, and characterize their failure rates statistically. Then they combine that empirical data with a statistical model of the expected wear and tear of the vehicle over time. The result is a statistical prediction of when the car is likely to have its first warranty repair. And that statistical prediction is a continuum. But it’s testable, if you rewrite the requirement:

“The results of running our existing lifetime-quality-test* for sedans on the vehicle design will predict fewer than 1% of cars will have a warranty repair during their first 7 years of usage, with a 90% confidence level.” The lifetime-quality-test is a referenced document in the requirements, and it describes how components are tested.
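As a toy illustration of what such a statistical acceptance test might compute: the sketch below uses a one-sided normal-approximation confidence bound on an observed failure rate. The numbers, function name, and choice of approximation are all my assumptions; a real lifetime-quality-test would be far more sophisticated.

```python
import math

Z_90_ONE_SIDED = 1.2816  # z-score for a one-sided 90% confidence bound

def passes_quality_requirement(failures, tested, max_rate=0.01):
    """Check whether test data supports 'fewer than max_rate of units
    fail, with 90% confidence', using a normal-approximation upper
    bound on the true failure probability."""
    p_hat = failures / tested
    std_err = math.sqrt(p_hat * (1 - p_hat) / tested)
    upper_bound = p_hat + Z_90_ONE_SIDED * std_err
    return upper_bound < max_rate

# 2 simulated failures observed in 1,000 tested vehicles:
print(passes_quality_requirement(2, 1000))   # → True
# 50 failures in 1,000 would clearly miss the 1% target:
print(passes_quality_requirement(50, 1000))  # → False
```

The key property is that the check is deterministic and repeatable, even though the thing being predicted (7 years of field usage) can’t be observed directly before release.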

Anyone have an “untestable” example for me?

Fixing the Requirements Mess

Stephen Larrison posted an article in his blog, Survive Outsourcing, about how to compete with offshore-outsourced projects by fixing the requirements mess up front. His background in manufacturing, as well as software development, has led him to a similar perspective as the one I touched on briefly in my post about where bugs come from.

Stephen points out that “You have to eliminate scrap and rework” in order to compete with teams that have lower cost implementation resources. He points out that this is the key to improving productivity in software development, just as it is in the manufacturing world.

I read a book a little over 10 years ago, The Machine that Changed the World, by Womack, Jones, and Roos. They first introduced me to the concept of the value of investments made upstream in the process. They showed that a dollar invested during the design of a new product (their focus was automotive) yielded the same results as ten dollars spent after the product entered manufacturing, and the same effect as $100 spent fixing problems after they’ve been released to the field. IIRC, their point, in 1991, was that Japanese manufacturers were investing more at the design stage than U.S. automakers. Seems almost prescient now, with Toyota set to overtake GM as the largest car manufacturer in 2006.

There’s an analogous rule of thumb in software development: requirements bugs cost ten times as much to fix after they reach development, and 100 times as much to fix once they’ve been released to the field. A requirements bug caught in development seems cheap (you only have some wasted effort and minor delays associated with re-working the software to meet the corrected requirement) when you compare it to the cost of releasing buggy software – which can result in lost sales and added costs (if the cost of “repair” is accounted for). But truly, the savings come from fixing the bug before anyone starts building part of the solution based upon the incorrect requirement.

In Stephen’s article, he references CIO magazine, and an eye-opening statistic: “71 percent of software projects that fail do so because of poor requirements management”. It is seriously worth a read. Here are some more statistics on software project failures.
Scott

Telescopes, Microscopes, and Macro-scopes – How to View Requirements

Writing good requirements is more than just taking dictation. It is about documenting the goals and needs of the stakeholders (users, project sponsors, etc), in language that the creators of the system (developers, testers, etc) can read. The requirements have to be complete and correct, and they also have to be unambiguous. Determining the right level of abstraction for our requirements is an art, and can be the difference between usable requirements, and a lot of wasted time (or worse, a failed project).

In a recent presentation at St. Edward’s University, I described functional requirements documentation as seeing the solution at one of three levels – a telescopic, macroscopic, or microscopic view. Each view represents a different level of abstraction, from too abstract to too detailed. When functional requirements are documented at the wrong level, they aren’t actionable.

Telescopic view

When functional requirements are described too abstractly, they don’t provide enough information for implementation teams to properly scope the project. They also introduce too much risk – risk that the delivered product will not meet the needs of the stakeholders. Some examples:

  • Users can access the parts of the system that they are authorized to access.
  • The system will support 100 salespeople who submit quotes.

Microscopic view

When functional requirements are documented to an excruciating level of detail, they become just as unusable. There is the risk that too much detail, esoteric data, jargon, or logic can hinder validation with the stakeholders. We also run the risk of limiting the creative process and constraining the development team to an overly narrow definition of a solution. By writing these low-level, micro-managing documents, we are pulling the design process into the requirements process, where it doesn’t belong. In this post, we talk about how Shakespeare was responsible for both the spec and the design of his plays. As several commenters point out – that’s not the role we ask people to play in IT organizations.

In any system, the size of the requirements document grows proportionally with the level of detail – increasing the cost of maintaining the documents. In large systems, this quickly turns a project into a quagmire of clarification and confusion. An example:

  • The system will synchronize transaction records between databases on the regional servers by transmitting change information via XML, updating every hour at the top of the hour. Concurrency conflicts will be resolved with “latest wins” semantics, adjusting for the time-zones in which the servers run, using the transaction times (associated with the underlying data) as recorded on the regional servers.

Macroscopic view

As Goldilocks would say, this one is just right. The minimum level of detail required to be unambiguous in expressing a required capability of the system. No implementation ideas or constraints. Not too abstract. Not so complex that a “shorter version” is required to communicate with the stakeholders, and not so over-simplified that the developers can not implement it without getting extra information. Some examples:

  • The system will allow the user to specify her state and county of residence (in the USA). When the user indicates her state of residence, the selection of possible values of county are filtered to only the counties in the selected state.
  • Users will have read-only access to their payroll information, but they will not be able to view other users’ payroll information. Some users will be identified as managers, and all users will have their manager identified in the system. Managers will have read-only access to the payroll data for all users who have them listed as their manager.
  • The system will support 100 salespeople. Each salesperson will submit an average of 100 quotes into the system per week. The system must validate (or invalidate) each quote within ten seconds of quote submittal when the salesperson accesses the system via the local LAN.
  • The system will maintain synchronized transaction databases on each of the regional servers. The servers will be synchronized on at least an hourly basis. Concurrency conflicts will be resolved with a “latest wins” semantic, based on transaction time as recorded on the regional server.
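As a sketch of how the “latest wins” rule from the last requirement might behave (the record layout and field names are assumptions for illustration; the requirement deliberately doesn’t dictate a design, and normalizing transaction times to UTC is just one possible design choice):

```python
from datetime import datetime, timezone

def resolve_conflict(record_a, record_b):
    """'Latest wins' conflict resolution: keep whichever record has
    the later transaction time. Storing times as UTC sidesteps the
    regional-server time-zone question."""
    return record_a if record_a["txn_time"] >= record_b["txn_time"] else record_b

# The same quote, updated on two regional servers (hypothetical data):
east = {"id": 42, "price": 100,
        "txn_time": datetime(2005, 11, 1, 14, 5, tzinfo=timezone.utc)}
west = {"id": 42, "price": 105,
        "txn_time": datetime(2005, 11, 1, 14, 30, tzinfo=timezone.utc)}

print(resolve_conflict(east, west)["price"])  # → 105
```

Note how the macroscopic requirement constrains the behavior (latest wins, hourly sync) without constraining this implementation at all.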

When teaching others how to write functional requirements at the right level, we emphasize finding this balance between vagueness and verbosity.

Do you have any examples of requirements that you had to deal with that were either telescopic or microscopic in their detail?

Concept Maps – Great Tool for Eating the Elephant (Brainstorming Ideas for a New Product)

elephant

I was chatting with a co-worker last week about the challenges of coming up with a great new software idea / product / project. His comment to me was

“Once I have a spec that describes what needs to be done – I’m set. If I don’t know how to implement something, I know how to figure out how to do it. My problem is in knowing what to implement.”

We weren’t talking about how to write specs, or prioritize features, but really about the “writer’s block” that comes when you try to figure out how to write a piece of software – at the synthesis stage. Assume a premise, or high-level goal for the software: “I want a piece of software that lets me manage my schedule.” Now what? How do you get from a vague notion of a calendar application to actually identifying what you need to do to create your calendar application? At some level, this is a product strategy development problem, and there are a lot of interesting discussions you can have about how to do that. For this post, I want to focus on a great tool to help you visualize thoughts cohesively, and iteratively develop an understanding of what you want to do.

Concept mapping is a tool I use for the brainstorming process of defining a product’s specification. IHMC developed CmapTools (available at their website). Thanks to Amir Khella and his blog, Elements of Passion, for leading me to this site. His post, On Creativity, covered a lot of great resources for creative thinking. One of his links was to FreeMind – a freeware mind-mapping program available via sourceforge. The developers of FreeMind did something great – they included a section with a list of alternatives to using FreeMind. I have used mind maps before, and been successful with them – my one complaint was that they optimize for viewing information as trees – essentially spatial outlines. Definitely a valuable tool, but I always had a nagging feeling that what I needed was to express ideas in a graph (instead of each element having exactly one parent, it can have more than one).
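To make the tree-versus-graph distinction concrete, here is a tiny sketch; the calendar-themed concepts and edge labels are my own invented example, not anything produced by CmapTools:

```python
# A mind map is a tree: every node has exactly one parent.
# A concept map is a graph: labeled edges, and a node may have
# several incoming links - something a strict tree cannot express.
edges = [
    ("work schedule", "feeds into", "my calendar"),
    ("family schedule", "feeds into", "my calendar"),
    ("my calendar", "helps prevent", "conflicts"),
]

def parents(node):
    """All nodes linking into `node` - more than one is allowed."""
    return [src for src, _, dst in edges if dst == node]

print(parents("my calendar"))  # → ['work schedule', 'family schedule']
```

In a mind map, “my calendar” would have to hang under either work or family; in a concept map it can connect to both, which is exactly what my schedule problem needs.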

I had finished putting together a proposal for introducing automated testing for a client, where there was a small team (6 developers) working on a code base with over 100 developer-years of legacy code. To put together the proposal, I had to cover a lot of ground – to make sure that I bid correctly, and that my client would get the benefits that they should. I used a concept map to help me identify the different areas where costs would be incurred (installation, training, retro-fitting of tests to “high risk” areas of the legacy code, ongoing time spent writing tests on all code developed moving forward, “pollination sessions” for the team members to share ideas and patterns with each other, etc). I probably would have missed some areas without using this approach to defining the problem for myself, and my client would have missed out on opportunities to benefit from the proposed project.

Back to the example, rephrased – “Develop a software tool to help me manage my schedule”. First I would create a box in the center of the page, containing my central idea. Then I would add whatever associated ideas came to mind.

First thoughts

In this diagram, I show the idea that I live “two lives” – work and family. And each has scheduled events. And the reason I care about my schedule is that I need to prevent conflicts between the two. I use an Outlook calendar at work – people send invites, reschedule events, etc. When I remember to do it, I block out some time for family events, if they get into “normal work hours” – which is often when “normal” is not 9-5. And my wife doesn’t know what my work schedule is – she has to ask me “are you doing something on such-and-such a date?”

As I look at the diagram, I think about the fact that my co-workers and clients will schedule and re-schedule meetings, and depend upon my calendar being “current” to know if I’m available. And wouldn’t it be great if my wife knew my “current” schedule too? Of course, she shouldn’t be able to see the details of my work meetings (like “Top Secret product planning meeting”), and my client doesn’t need to know when I’m going to the dentist – she just needs to know that I’m not available before 10AM on Monday. This leads to a couple more boxes on the diagram.

Calendar 2
Now I have the information staring me in the face – I should have the calendar hosted/shared/replicated in such a way that both professional and personal contacts can see a subset of the information. Everyone should see what time is allocated, and some people should be able to see a subset of the contents of the scheduled time blocks. Maybe the answer is to have two calendars, each with its own ACL (access control list), which replicate “allocated time” information, but never the details. Maybe the answer is a task-specific ACL, and universal access to a central calendar. With multiple clients, each using their own scheduling systems, the multi-calendar approach seems more likely. So my spec may move toward writing an Outlook plug-in that pushes data to my home calendar.
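A rough sketch of the “everyone sees allocated time, some people see details” idea; the event fields and ACL scheme here are assumptions for illustration, not a design commitment:

```python
def shareable_view(events, audience):
    """Produce a filtered copy of a calendar for a given audience.
    Every viewer sees which time blocks are taken; only viewers on
    an event's access list see its details."""
    view = []
    for event in events:
        if audience in event["acl"]:
            view.append({"start": event["start"], "end": event["end"],
                         "title": event["title"]})
        else:
            view.append({"start": event["start"], "end": event["end"],
                         "title": "Busy"})
    return view

# Hypothetical sample data:
calendar = [
    {"start": "Mon 09:00", "end": "Mon 10:00",
     "title": "Dentist", "acl": {"family"}},
    {"start": "Mon 13:00", "end": "Mon 14:00",
     "title": "Top Secret product planning", "acl": {"work"}},
]

for e in shareable_view(calendar, "work"):
    print(e["start"], e["title"])
# Mon 09:00 Busy
# Mon 13:00 Top Secret product planning
```

My client learns I’m unavailable Monday morning without learning why – which is exactly the user goal the concept map surfaced.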

Moving to the next level of detail isn’t the point in this post. The power of building diagrams that describe a problem space, to articulate user goals is the point. I can use this approach to identify the other things I need to do to make this product useful to me, and potentially others.

Two thumbs up from me for CmapTools from IHMC.

Scott

Collision Detection

I was doing a code-read for a team member earlier this year, and stumbled upon an elegant algorithm. This is super-simple, I realize, but I believe it’s a great example of avoiding complexity. Einstein said it best – “as simple as possible, but no simpler”.

Problem: Given two solid rectangular shapes in an X-Y layout, detect whether they overlap (collide), considering both their horizontal and vertical placement.

Proposed Solution: My team-mate proposed a solution that was rigorous and thorough. To simplify the explanation, we’ll start with the analysis in one dimension. I’ll address the two rectangles as A and B, and the X-axis as left to right (left being lower numbers). He identified that we could have an overlap of the two rectangles in any of the following cases:

1. when the right edge of A was between the left and right edges of B

2. when the left edge of A was between the left and right edges of B

3. when the left edge of A was left of the left edge of B AND the right edge of A was to the right of the right edge of B

Note that when “A” is completely inside “B”, both conditions 1 & 2 would apply.

He also accounted for colinear edges.

His analysis was good – he identified all of the possible overlaps, and his implementation was straightforward (NOTE: I tried to write this in pseudocode, but the wordpress parser was very unhappy with all of the less-than and greater-thans):

He implemented simple checks for conditions 1,2 and 3.
Not too bad, if only looking at one dimension. However, this was complicated by being two-dimensional, so within each of the above checks there were permutations to consider. In the expression below, consider “X1” to be “condition 1, applied along the X axis”, and “Y3” to be “condition 3, along the Y axis”. His algorithm then became:
IF (
(X1 AND (Y1 OR Y2 OR Y3))
OR
(X2 AND (Y1 OR Y2 OR Y3))
OR
(X3 AND (Y1 OR Y2 OR Y3))
)
THEN collision == TRUE;
So he ultimately checked all 9 permutations (overlap of left or right, or inclusion; versus each of overlap of top or bottom, or inclusion).

Elegant Solution:

There were 9 ways that an overlap could exist. It was much easier to define that an overlap did NOT exist:

Conditions refactored:

  1. the right edge of A is to the left of the left edge of B
  2. the left edge of A is to the right of the right edge of B
  3. the bottom of A is above the top of B
  4. the top of A is below the bottom of B

Each of these conditions describes a circumstance that can never be an overlap (of rectangles) – so only if none of these conditions is true will there be an overlap.

My proposed algorithm, which we implemented, was

IF NOT(1 OR 2 OR 3 OR 4) THEN collision == TRUE;

Much simpler. This is the type of thing that I strive for, and hope to share with you when I describe “elegance in design” of algorithms.

We actually implemented “IF NOT(2 OR 1 OR 3 OR 4)…” because “2” was the most common case: this operation was done thousands of times in a layout calculation for a real-time user interface, and we wanted the minor performance benefit of short-circuiting the operation.
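Since the WordPress parser wouldn’t let me post the original pseudocode, here is a sketch of the refactored check in Python; the tuple representation of a rectangle is my choice for illustration, not the original implementation:

```python
def rectangles_collide(a, b):
    """Rectangles are (left, bottom, right, top) tuples.
    A collision exists only if none of the four 'cannot possibly
    overlap' conditions holds. Colinear (touching) edges count as
    a collision here."""
    return not (a[2] < b[0]      # 1: A entirely left of B
                or a[0] > b[2]   # 2: A entirely right of B
                or a[3] < b[1]   # 3: A entirely below B
                or a[1] > b[3])  # 4: A entirely above B

print(rectangles_collide((0, 0, 2, 2), (1, 1, 3, 3)))  # → True
print(rectangles_collide((0, 0, 1, 1), (2, 2, 3, 3)))  # → False
```

Four comparisons, one negation – versus nine compound permutation checks in the original approach.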
Scott

Welcome to Tyner Blain

President, Tyner Blain

Howdy!

I’ve set up this blog to keep track of thoughts I have in the software development space. I’m Scott Sehlhorst, president of Tyner Blain LLC. I wear a bunch of hats, playing different roles throughout the software development process. Tyner Blain was founded with two goals – helping customers and developing software. We pursue both consulting engagements and software development. We’re located in the surprisingly great city of Austin, TX (well, I was surprised when I got here), and we started operations on 05/05/05. I’m just now getting around to the blog.

There are a lot of topics in this space that I hope to post about. I have passions about process, requirements, development as an artistic expression, quality, and HCI (human-computer interaction). I often find things as simple (or mundane) as an elegant algorithm or a brilliant UI affordance to be uplifting. I’ll also be writing collections of posts in series format, covering topics like use cases, structured requirements, and general introductory material (for people who need a little more context than the regular posts contain). I will also post lists from time to time.
I have two goals for this blog.

  1. Make a positive impact on the community that cares about creating great software by stimulating thought, provoking debate, and sharing ideas.
  2. Become better at what I do. Writing helps me form better thoughts, and learning from the comments of readers makes me that much better.

If one person reads this blog and has a novel idea, or improves their team’s performance, or makes their software easier to use, I consider that a win.
Anyway, welcome aboard, please comment and link. Let me know when I’m wrong, and let me know when you agree with me. Feedback is a big part of how we learn. When you do put an idea inspired by this blog to work – please tell us about it!

Thanks,

Scott