Monthly Archives: July 2006

Communicating Intent With Implementers

grey puzzle piece

Giving a functional spec to developers and testers is not sufficient for creating great software. To a developer, a spec is only the what and not the why. And for a tester, the software requirements specification is neither. Use cases provide the why that explains the intent of the system for the implementation team.


We started a series of posts exploring why we apply use cases as part of product management, identifying 8 goals for which use cases are relevant. That series post contains links to detailed articles on each goal as we write them over the next weeks. Bookmark it and come back to it to see the puzzle come together.

We recently wrote an article, Communicating Intent With Stakeholders, that shows how use cases are consumed from the perspective of users and customers, or stakeholders. In that article, we showed a diagram that compares the different perspectives of the software development artifacts.

Why, What, and How diagram

Use cases fall in the “requirements” row of this diagram. The requirements row represents the documents used to articulate the value of a proposed solution. The article, Requirements Documents – One Man’s Trash, provides a more detailed explanation of these differing perspectives.

Communication of intent with the implementation team is different from communication with stakeholders. The implementation team comprises people performing one of two activities: building the solution and assuring that the solution is correct. Different teams staff these roles differently. Some teams have separate QA organizations, and others rely upon the developers to also be responsible for quality.


Some people will argue that a development team only needs the specification to create software. That’s absolutely true in a turn-the-crank situation. And it’s very common in many outsourcing arrangements that use a low-level outsourcing model. The downside of these approaches is that we hamper our developers’ ability to apply their creativity to building innovative solutions.

By providing developers with an understanding of why they are being asked to implement software that conforms to a specification, we get the opportunity to benefit from their feedback. A free electron developer [scroll down to the People section of this article for a definition] may have an epiphany about a significantly better way to solve the problem. Without insight into the intent of the software, that star developer is hamstrung.

Quality Assurance

Quality assurance personnel (QA) are responsible for assuring that the software does what it is supposed to do. While developers can write whitebox tests to assure that their code behaves as anticipated, QA has to rely on blackbox tests to assure that the intent of the system is being met. This requires that QA understand the intent of the system.

Blackbox tests are generally described as a series of user actions (or the automated equivalent), usually referred to as a script. Each script, to be as valuable as possible, should be designed to mimic what a user would do when trying to achieve a particular goal. Functional requirements are written to support use cases. While we can write short tests that validate individual functional requirements, these tests would really only be scriptlets, because they do not represent an entire user session.

Good tests are atomic. When a test fails, we want to be able to say that test X for functional requirement Y failed. And scripts should be written with these atomic assertions in mind. There are two reasons, however, to group these assertions together into scripts that match use cases.

First, when using a structured requirements approach, a single functional requirement may support multiple use cases. Developers will want to know which functional requirement failed, and the context in which it failed. However, when we communicate project status with stakeholders, we will be talking to them in the language of use cases. Providing an association between functional requirements, their tests, and the relevant use cases makes this much easier.

Second, because these are blackbox tests, they are written without insight into the implementation by definition. It is possible that a particular functional requirement will pass all of its tests when performing one use case, but fail them when performing another. We can use pairwise testing or other techniques to find these circumstances by brute force. But we can also re-use the assertions (an individual test of a functional requirement) across multiple scripts (use cases).
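The scriptlet-and-script structure described above can be sketched in code. This is a minimal illustration, not a prescribed harness: the application class, its methods, and the requirement IDs are all hypothetical stand-ins. The point is that one assertion per functional requirement can be reused across multiple scripts, each of which mirrors one use case.

```python
class FakeOrderApp:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self.authenticated = False
        self.lines = []

    def login(self, user, password):
        self.authenticated = bool(user and password)
        return self.authenticated

    def add_item(self, sku, quantity):
        self.lines.append((sku, quantity))

    def line_count(self):
        return len(self.lines)

    def submit(self):
        return "confirmed" if self.authenticated and self.lines else "rejected"


# Scriptlets: one atomic assertion per functional requirement.
def assert_fr_login(app):
    """FR-1 (hypothetical): a user can authenticate."""
    assert app.login("pat", "secret")

def assert_fr_add_item(app):
    """FR-2 (hypothetical): an item can be added to the current order."""
    before = app.line_count()
    app.add_item("SKU-42", quantity=2)
    assert app.line_count() == before + 1


# Scripts: each mirrors one use case, reusing the scriptlets above.
def run_place_order_use_case(app):
    """Script for a 'Place an Order' use case: login, add item, submit."""
    assert_fr_login(app)
    assert_fr_add_item(app)
    assert app.submit() == "confirmed"
```

When a scriptlet fails inside a script, we know both which functional requirement failed (the scriptlet) and the use-case context in which it failed (the script).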


Members of the development and quality organizations benefit from understanding why they are implementing or testing a particular functional spec. Use cases provide them with that context. Use cases are also the artifacts most easily understood by all team members.

domains of expertise

People in all three groups can easily consume use cases. If an individual is unfamiliar with them, a quick lesson on how to read use cases can help.

Communicating Intent With Stakeholders

teal puzzle piece
We can build a prototype of what the stakeholders don’t want, and then get feedback and fix it. Or we can review the use cases of what we intend to build, confirm that each stakeholder wants it, and build it right the first time.


We started a series of posts exploring why we apply use cases as part of product management, identifying 8 goals for which use cases are relevant. That series post contains links to detailed articles on each goal as we write them over the next weeks. Bookmark it and come back to it to see the puzzle come together.

Stakeholder Communication

There are two elements to communicating use case content. At a high level, we share the vision of what the software is intended to achieve by showing the use cases that support the market requirements. At a lower level we are also verifying that the details of a particular use case are correct. This may require an impromptu course on how to read use cases.


Intent is expressed as market requirements. We have previously defined (or are continuously defining) the market requirements for our software, and captured them in an MRD (market requirements document). This is an expression of what from the perspective of the stakeholders. Stakeholders will think in terms of use cases when they think about how.

If you haven’t already read Requirements Documents – One Man’s Trash, this would be a good time to review it – it shows how different people view the artifacts of product management differently. Customers will view the software from a similar perspective to the product manager in the following diagram:

Perspective Diagram

The what of our proposed solution is the intent, or explicitly what we intend to deliver. The use cases (from a user’s perspective) are the how. For a user, the use cases represent how the user can achieve value by using the software. Stakeholders are not software developers, and can easily get bogged down in the details of a spec. By keeping our communication at the use case level, we avoid overwhelming them with stuff that is irrelevant to them.

The driver of a car doesn’t care about how many degrees from top-dead-center the timing belt is adjusted. The driver cares that she’s getting a car that can accelerate fast enough to pass a truck on the highway. Our stakeholders have a similar perspective on software.


Some stakeholders want to be more hands-on, and will want to review the details of the use cases to make certain that they accurately represent the desired business processes. For them, this next level of detail is simply a more refined expression of intent. Think of this as a driver who wants to understand the torque and horsepower curves for the engine in their car. They still ultimately want to be able to pass a truck on the highway, but they want to understand more detail. They still should not inspect the car with a timing strobe.

As business analysts, we also want to review the content of the use cases, as part of making sure that we are getting the details correct. These reviews should be conducted with users, client-analysts, and line managers – people who understand the process details.

Foundation Series: Data Dictionary Definition


What is a data dictionary and how is it used when communicating and managing requirements?


A data dictionary is a collection of the definitions of the structure of information that is relevant to a set of requirements. That’s a lot of words for a simple concept. We need to know (and constrain) a set of information about some business element when managing our requirements. We use a data dictionary to define what that information is, and any constraints on how it must be used.

Viewing The System

When using object oriented analysis (OOA) as part of defining requirements, we represent business concepts as objects and processes. For example, an order management system might define orders as having line items and customers. We can represent that information graphically with a UML diagram like the following:

uml diagram

In prose, we could also capture the same information as follows:

  1. The system shall include a representation of customer orders.
  2. Each order will have a single associated customer, and each customer can have multiple orders. Note that a customer is not required to have any orders.
  3. Each order will have at least one, and possibly multiple line items. Each line item is uniquely associated with a single order.
  4. Each line item represents a single product. Note that a product is not required to be represented in a line item. A product can be represented in multiple line items (even within the same order).

While this diagram tells us about the structure at a high level, it doesn’t tell us enough information to go implement the solution. What exactly is a line item? What information does it contain? And what format must that information be in?
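The relationships captured in the diagram and prose above could be sketched as types. This is an analysis model with hypothetical names, not an implementation – but it shows how the multiplicity rules translate directly into structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Product:
    product_id: str

@dataclass
class LineItem:
    product: Product    # each line item represents exactly one product
    quantity: int

@dataclass
class Order:
    customer: "Customer"                                      # exactly one customer per order
    line_items: List[LineItem] = field(default_factory=list)  # at least one when complete

@dataclass
class Customer:
    name: str           # a customer may have zero or more orders
```

Note that the model says nothing about price, for the reasons discussed in the dictionary entry below – structure first, properties as the requirements dictate.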

A Dictionary Entry

We could create a data dictionary entry for the line item object as follows:

Line Item

A line item represents a portion of a customer order that describes a product being ordered, as well as the quantity of that product being ordered. Each line item must include the following information:

  • A reference to the product being ordered, using the product ID per constraint X1. [Note, the constraint is imposed by the existing product data management system, with which our software is required to integrate.]
  • The quantity of the product being ordered, where the quantity is a positive integer. [Note, we would include a maximum value, if there were a constraint imposed by some other part of the system.]
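A dictionary entry like this one is directly enforceable. The sketch below is hypothetical: the membership check stands in for “constraint X1,” which in the article is imposed by an external product data management system we cannot see.

```python
def validate_line_item(product_id, quantity, known_product_ids):
    """Check a line item against its data dictionary entry.

    known_product_ids stands in for constraint X1 (the external
    product data management system's ID rules)."""
    errors = []
    if product_id not in known_product_ids:
        errors.append("product reference must resolve to a valid product ID")
    if not (isinstance(quantity, int) and quantity > 0):
        errors.append("quantity must be a positive integer")
    return errors
```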

Note that we have not specified that a line item includes a price. It is very likely that a line item would have a price, but we would be specifying implementation details if we did. Pricing may be done per product, or may be unique for each product for any given customer. Discounts may be applied based upon quantity of products in a line item, or dollar amount for an order. Discounts may be applied based upon all products ordered by a customer over a period of time. These different possibilities are a function of the requirements of the system.

When those business requirements are defined, they will dictate the ownership of properties by business objects. With that information, we can include the data as appropriate. For example, a list price property may be defined for the product object, or a customer-price may be defined for a line-item as a function of (product, quantity, customer). We would add that data as part of the business modeling. Note that this is a description of the problem domain, not a description of the implementation.
Another Data Dictionary Example

Here’s an example of a “Customer”

A customer represents the business or person for whom an order has been placed. Note that all character fields are to be represented in Unicode 4.1.0 or later per corporate policy ABC. A customer has the following information:

  • Name. 50 characters representing the name of the customer.
  • Shipping Address 1. 100 characters representing the first line of the address to which all customer shipments are made.
  • Shipping Address 2. 100 characters representing the second line of the address to which all customer shipments are made.
  • Shipping Country. 50 characters…
  • Billing Address.
  • Customer Contact.
  • etcetera.

This list is intended to show all of the elements of information that must be present in the “customer object” to support the requirements of the system.
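Field-length constraints like those in the customer entry can be captured once, in a small schema, and checked wherever the object is handled. The field names and limits below mirror the entry above; the schema format itself is just an illustration.

```python
# Limits taken from the customer dictionary entry above.
CUSTOMER_FIELD_LIMITS = {
    "name": 50,
    "shipping_address_1": 100,
    "shipping_address_2": 100,
    "shipping_country": 50,
}

def check_customer_fields(customer):
    """Return the names of any fields that exceed their dictionary limits."""
    return [f for f, limit in CUSTOMER_FIELD_LIMITS.items()
            if len(customer.get(f, "")) > limit]
```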

Further Reading

Joe, at Seilevel, wrote a post back in March with a good explanation of data dictionary entries. As Joe points out, requirements can drive the need for specific information.

For example, my business users have told me that the number of decimal places of each weight value tracked by the system is very important for monitoring and reporting. It stands to reason that other objects and attributes might require the same level of specification. If you figure it out once, you can use it in many places.

Barbara, at B2T Training points out the importance of understanding the details of the data for a system. She also touches on the value of having that information in a separate document.

Many BAs document data as part of the business process or part of the Use Case. Our recommendation is that you document data in a separate part of the requirements package because it is often used in multiple places.


A data dictionary is a repository of all data definitions like the examples above. Those entries should be referenced in all requirements documents that rely on the defined objects. Requirements documents should not specify the content of the objects; they should defer to the referenced dictionary entries.

Some projects, especially migration projects, have many constraints tied to data formats and structure. These projects will have extensive data dictionaries, and multiple references to entries throughout the requirements document. Other projects will have far fewer constraints on data formats, but will still have explicit structural definitions for business objects (like our line item example).

– – –

Check out the index of the Foundation Series posts which will be updated whenever new posts are added.

Four Assumptions of the Apocalypse

Four Horsemen of the Apocalypse

Business Analysts often start with four erroneous assumptions when eliciting requirements. 50% of errors in software projects are caused by requirements errors. These four faulty assumptions, presented by James A. Ward, can exacerbate the error-prone process of gathering requirements.

The Assumptions

In his article, James points out that analysts tend to fall into one of two camps – those who make these assumptions and those who don’t. The course of action an analyst takes when gathering requirements is driven by these assumptions. From James’ article, they are:

  1. Customers can define their systems requirements.
  2. The software development organization is a “customer”–not the “owner” of the process.
  3. Requirements management starts after requirements have been defined.
  4. The customer “owns” the requirements.

Our Analysis

While agile principles aren’t generally founded on these assumptions, some types of agile processes do appear to be built on them. Kent Beck, author of extreme programming (XP), argues that customers don’t know their requirements. Feature-driven development (FDD) is agnostic on this point.

Beck’s argument is that because customers don’t know what they want, we should start building stuff right away, with the expectation that by seeing tangible results (good or bad), they will have epiphanies about what they really do want. This approach treats requirement extraction as an emergent process, like strip-mining. With each new layer of mining, we unearth a new set of hidden requirements.

While this approach has strengths, it also has weaknesses. Those weaknesses stem from the assumption that the requirements cannot be identified up front.

This reminds us of the old Thomas Edison response to a reporter asking about his 700 failed attempts at creating an electric light:

“I have not failed seven hundred times. I have not failed once. I have succeeded in proving that those seven hundred ways will not work. When I have eliminated the ways that will not work, I will find the way that will work.”

(quote via Med League Support Services)

Alan Cooper’s argument is that with proper analysis, more accurate requirements can be discovered. This approach is much more like exploratory drilling and surveying than strip-mining. By leveraging different requirements gathering techniques with business process modeling and analysis, a business analyst can discover and invent the appropriate requirements to satisfy market needs.


We accept the premise that customers can rarely articulate their requirements in sufficient detail to develop a software solution. Their areas of expertise are their businesses, not software development. We do not believe the customer should be responsible for developing those requirements. We believe that the requirements can be identified before delivering something to the customer. We like the ideas proposed by James Shore a while ago about moving from the general to the specific.

This belief – that requirements exist and can be identified – should be combined with an incremental delivery process that focuses effort on the most important requirements first. This results in faster delivery of value.

Outside Reading: Top 10 Signs You Should Not Write Requirements

reading outside

Seilevel has a post that presents the top 10 signs that you should not pursue a career writing requirements – check it out. Thanks to Joy for the great article!
Our favorites:

#10 You cannot quickly understand new concepts

#9 You don’t have the patience to deal with customers

#4 You cannot form a mental model of all the pieces

Our take

Writing requirements is much more than taking dictation.  To develop great software, you have to develop an understanding of the needs of the customer.  From those needs, you have to synthesize a solution approach.  And you have to communicate that approach, both with customers (to validate it) and with the engineering team.  All of Joy’s entries (except the bonus #0 item) support this general framework.

We agree with Roger’s comments on the post that agile processes are not mutually exclusive to writing requirements.  Charles posted recently about the new product manager – implying that an agile product manager is different than a non-agile product manager.

There are many different agile processes, which use differing amounts of up-front planning, and differing formats for documentation.  Feature-driven development (FDD) does high level planning to understand the general approach of the product.  Details are then defined incrementally.  Incremental development works best when the most important stuff is worked on first.  This doesn’t preclude the need to communicate with customers and developers.  The fact that this communication happens incrementally doesn’t make documentation irrelevant.
The conversation about item #0 continues on Seilevel’s discussion forum, join in there, or add your thoughts here.

Verify Correct Requirements with Use Cases

red puzzle piece

The next piece in the puzzle of how and why we apply use cases to product management. Verification of requirement correctness.


We started a series of posts exploring why we apply use cases as part of product management, identifying 8 goals for which use cases are relevant. That series post contains links to detailed articles on each goal as we write them over the next weeks. Bookmark it and come back to it to see the puzzle come together.

Verification of Requirement Correctness

One of the challenges of writing requirements is assuring that we are capturing the requirements correctly. Correctness represents two elements. First we must document the requirement as articulated for it to be correct – any errors are “typographic” in nature. Second, we must document the valid requirement. This requires understanding the business objectives, as well as the source of ROI associated with a proposed requirement. If we properly record a stakeholder’s desire for a low-value requirement, that does not mean that the requirement is correct – it is only correctly documented.

In the past, we’ve shown how to apply object oriented analysis to validate requirement correctness. We identified a few downsides to using OOA diagrams to represent requirements – one of which is that people must be able to read UML to correctly interpret the diagrams. This is usually not a problem when communicating requirements with the development teams. The percentage of stakeholders who can read UML, however, is probably the inverse of the percentage of developers who can – and that causes a problem.

When we are using a structured requirements approach, we can use the use cases to help validate correctness. In structured requirements, use cases are the means of achieving goals, or market requirements.

Structured Requirements Structure

Putting Things in Perspective

An alternative way to understand a requirement is to review the use cases that support it. After we have validated the completeness of a requirement, we can review the use cases to help make sure the requirement is correct. People can easily form a mental image of what a requirement is/does/achieves by reviewing the use cases that enable the requirement.

This allows us to answer the question, “What does this requirement intend?” From that answer we can then apply critical thinking to identify if it is the right intention. Knowledge of the use cases will help prevent a flawed understanding of the requirement.

We mentioned earlier that an aspect of requirement correctness is that it is valuable (e.g. we’re doing the right thing). If we have a faulty analysis of the business model, then reviewing the use cases won’t identify that flaw. Reviewing the use cases will allow us to confirm that the goal is correct with respect to the business model. Every company will use a different approach to creating a business model or analysis. Generally, they will be based on ROI analysis or expected value calculation. Some may use a payback period as a means of evaluating the value of requirements.

Product Managers Play Tug-of-War

tug of war

63% of product managers report to marketing and 24% report to development. 22% of requirements managers report to marketing with 55% in the development organization. These reporting structures can over-emphasize the needs of new users and super-users, while shortchanging the needs of the majority of users. Product managers will constantly be playing tug-of-war to get time to do the right thing.

The Softletter Survey

Softletter executed a survey earlier this year, which found that almost two thirds of product managers report to marketing, and a majority of requirements managers report to development. Detailed survey results are available by subscription.


The Silicon Valley Product Group recently published an article about this very issue. They point out significant problems with both reporting structures. Hat tip to Nils for finding this one. Nilsnet is on our blogroll and makes good reading – you should check it out.

Product management in the marketing group:

Further, what usually happens is that the product marketing role and the product management role get combined. These roles and the skills required are so different that what usually ends up happening is that one or the other (or both) gets poorly executed.

Product management in the development group:

Moreover, it’s easy for the product management team to be consumed in the details and pressures of producing the detailed specs rather than looking at the market opportunity and charting a winning product strategy and roadmap.

Alan Cooper’s Take

In The Inmates Are Running The Asylum, Cooper points out that marketing people tend to over-emphasize the needs of new users. Their interactions are primarily with people who we want to buy the software. As such, they spend most of their time understanding the needs of people who haven’t used the software, or who have just started using the software.

Developers, or as Cooper calls them, homo-logicus, are a special breed of people who are much more capable of dealing with complexity than average users. They appreciate good algorithms, customizability, and full-featuredness. They don’t run into the problems that most people have when dealing with software that has too many features.

Johanna Rothman asked the question in April, “Do engineers use their own software?” Her point was simply that if engineers have to “eat their own dog food” they will introduce fewer bugs. There are definitely benefits to this mindset. However, the engineers should not be specifying the features for the products (unless the products are to be used only by other engineers).

Competent Users

Our priorities as product managers should focus on competent users. Most people develop a level of competence quickly, and most people stop learning when there is no additional benefit to learning more. Therefore most people don’t become experts. With the lion’s share of our users being competent, we need to make sure we emphasize them in our requirements and design decisions.

Organizational Bias

As SVPG points out, product managers will tend to evolve into the activities most valued in their organizations. Combine this with Cooper’s take on the needs of the everyman, and we end up having to devote energy to overcoming organizational bias in order to prioritize the needs of the majority of users.


Product Management should be represented in its own organization. This allows a focus on the right users, and will likely make it easier to avoid tactical tangents that take away time from strategic decision making.

Requirement Completeness Validation with Use Cases

blue puzzle piece

In our article, The 8 Goals of Use Cases, the first goal is that our use cases must support requirement-completeness validation. In this article, we explore how to address this goal and how use cases can help. There are many pieces to this puzzle, and this article is one of them.


There are several good books and articles about how to use use cases, but little if anything is written on why we use them. For folks who are new to Tyner Blain and Use Cases in general, here is an introduction on how to read a formal use case.
From our previous article:

Why Write Use Cases?

We write use cases for the same reasons that people use our software – to achieve goals. In our case, we want to assure that we are creating the right software. By looking at this high level goal in more detail, we can make decisions that drive the best possible use case creation. Let’s apply our product management skills to writing better use cases by writing an MRD for use cases.

The 8 Goals of Use Cases

We also wrote a series on use cases, including descriptions of formal and informal use cases, as well as use case diagrams. Check out the introduction to brush up on concepts.

Requirement Completeness

We also wrote an article on the importance of validating requirement completeness, as part of our series, Writing Good Requirements – The Big Ten Rules.

Simply put, if the requirement is implemented as written, the market need is completely addressed. No additional requirements are required. When writing a specification, we may use decomposition to break individual requirements into more manageable, less abstract criteria.[…]

Completeness comes from analysis. And our degree of completeness comes from the quality of our analysis. There is no silver bullet, we just have to think. Remembering to validate completeness, and base our decisions on data gets us half-way there, but we have to get ourselves the rest of the way there.

Writing Complete Requirements

The Use Cases of Completeness Validation

Requirements completeness validation is performed, when using structured requirements, by validating that the supporting documents completely support the requirement being validated. Each market requirement is supported by one or more use cases, as shown in the following diagram:

structured requirements


Determining if a goal will be completely achieved is a function of confirming that it will be completely enabled. This requires us to review the use cases that have been written to support the goal (or market requirement).

There are therefore two activities – writing the use cases, and reviewing the use cases. We’ve already described the activity of writing the use cases to support a market requirement. We only now need to cover the activity of reviewing the use cases.

Reviewing the Use Cases that Support a Goal

We will use the format of an informal use case to document this activity.

Title: Review Supporting Use Cases for a Goal

Trigger: Validation of the completeness of a market requirement is required.

Actors: Business Analyst and Stakeholder

Steps:


  • Business Analyst (BA) organizes all use cases that have been identified as supporting the market requirement to be validated for completeness.
  • BA and Stakeholder (SH) review the goal.
  • BA and SH confirm that the goal can be achieved solely by performing the assembled use cases.
  • BA and SH confirm that none of the assembled use cases can be ignored and still achieve the goal.
  • BA and SH record their agreement that the use cases are both necessary and sufficient to achieve the goal.

Additional Notes:

  • The reviewers are not evaluating the quality of the use cases, and work under the assumption that the use cases are successfully completed. Reviewing the quality or accuracy of the use cases is out of scope for this procedure (but is still required).
  • If additional use cases are required, the BA and SH will identify their titles and brief descriptions to act as placeholders for the purpose of this review.
  • If use cases are determined to be redundant or extraneous, their existence must be justified. For example, there may be more than one way to achieve the supported goal. Each method of success must be explicitly valued and prioritized.

Execution Tips

The process above tells us what we need to do, but doesn’t provide guidance on how to do it. Two common mistakes made when validating completeness are overlooking missing activities, and not discovering alternative activities.

To assure that missing activities are not overlooked, we can imagine playing a game, where the actors in the use cases are only allowed to perform the actions that are documented in the use cases. With one person verbally walking the actor(s) through the steps, the other person will likely find any missing steps. Changing from reading to listening triggers our brains to process the information differently and uncover gaps that we might overlook when reading (because our brains will fill in the gaps in the prose).

To discover alternatives to the documented use cases, we can apply brainstorming techniques. We can start the brainstorming process with a simple question, applied to each use case. “How else could someone possibly achieve the results of this use case?” We must make sure that we don’t constrain our answers to the easy, practical, or relevant – these constraints will make it harder for us to stumble upon a novel idea. We can also ask the question “How else could someone possibly achieve the goal?” For this question, answers are allowed to re-use existing use cases or ignore them completely.

While finding a better alternative approach could be a nice surprise, our primary goal is to explore the standing solution approach from all angles, in hopes of finding and addressing its weaknesses.


Validating the completeness of a market requirement can be accomplished by reviewing that goal and the use cases that are designed to enable it. If the use cases are necessary and sufficient, then the requirement is complete. Brainstorming about alternatives can help us discover issues with completeness that are not obvious from a traditional, top-down review. Brainstorming may also uncover a novel and more valuable solution.

This is only the first of the 8 uses for use cases. We will cover the rest of them in the future. Refer back to the original article, The 8 Goals of Use Cases, to find links to further detailed articles.

Make Your Meetings 60% More Effective

effective meeting

While effective meetings may not be the key to success, ineffective meetings are inarguably one of the largest time wasters in corporations. Applying these tips before, during, and after meetings will make us much more effective.

A software team will have many meetings, especially surrounding gathering and managing requirements. These can be brainstorming meetings, stakeholder interviews, or prioritization meetings. They can be requirement validation sessions, status updates, or any of a number of meetings surrounding the application of use cases to software product development.


Before the Meeting

The old chestnut, failing to plan is planning to fail, rings true here. An unplanned meeting isn’t an assured disaster, but the odds are that you will waste time. Always define the goal(s) of a meeting – if you don’t have clear goals, you don’t need to have a meeting. Communicate those goals and a schedule in an agenda, and make sure the prep work is done before the rest of the attendees are “on the clock.”

  • Define Goals for the Meeting

The meeting must have a purpose. Define it as a goal or goals of the meeting. Include the list of goals in the invite. This sets everyone’s expectations, and helps keep topics relevant throughout the meeting. By consistently using goals in meetings, people quickly adapt to the rhythm. Let people know that if the goals are achieved early, the meeting ends early.

  • Prepare an Agenda in Advance

We should always send an agenda the day before the meeting (at the latest). The agenda provides preparatory value in that it allows people to review any relevant background information in advance. This avoids 90% of the need to bring people up to speed on the topics at hand. Only the chronically unprepared attendees will both ignore the agenda and expect to derail the meeting because they didn’t do their homework.

Target specific times for each step in the meeting. We can monitor our progress throughout the meeting and adapt to the schedule if needed. The more lead time we have, the better the decisions we can make. Every item should be 15 to 60 minutes in length. Shorter times increase overhead, and longer times defeat the purpose of having checkpoints.

  • Administrivia

Remember to allocate time for setting up projectors, finding chairs, dialing into a conference call, etc. If possible, show up early to do this; if not, let people know that the first five minutes will be spent on setup. When working on building relationships, this setup time can also be used for small talk. It also gives latecomers a chance to arrive without disrupting the meeting.

Include the list of attendees, and when everyone doesn’t know everyone else, include a one-liner about each person’s role on the project. Not their title, or a family history, just a blurb explaining how they are involved in this project.


During the Meeting

With a well-planned meeting, the main benefits come from using the allocated time efficiently. Make sure the right people are present, and the wrong ones aren’t. Show that you respect people’s time and appreciate their investment in attending the meeting – start on time, plan to possibly extend the meeting a fixed amount of time, and schedule a followup meeting if you overrun your contingency.

Remember, your meeting has specific goals. Make sure you run the meeting with the express purpose of achieving them. And make sure they are concrete and explicit. When everyone is investing their time in the meeting, they deserve to have a tangible return on that investment.

  • Keep Topics Relevant to the Attendees

Keep a meeting topic focused, and only invite (on the “To:” line) the people who need to be there. You can use “CC:” to inform others about the meeting. Make sure that the invite specifies that “CC:” people are only being notified, and their attendance is neither required nor expected. All “To:” people are expected to attend or be represented.

When topics need to vary, such as a status update (à la Scrum) on multiple related projects, organize the topics into different parts of the meeting, and invite people to attend specific portions of it. Essentially, you’re running multiple meetings back to back in the same room. That’s OK. When people can skip part of it, they will appreciate it.

  • Demonstrate Respect for People’s Time

It may be unreasonable to ask people to turn off their phones. At a minimum, turn yours off, visibly putting it on the table while doing the rest of the meeting setup. If there is a possibility that an urgent call may come in (you are expecting a baby, for example), let everyone know that you might have to take a call, and apologize in advance should that happen.

  • Administrivia

Meeting start, end, and break times should be published in advance to set expectations. Strive to honor them, but remember the reasons for the meeting in the first place. If the team is really accomplishing things, it doesn’t make sense to terminate the meeting at an arbitrary time just because it represents the end of the originally scheduled slot. Include an extra 30 minutes at the end of the schedule as a contingency. If you think it might be worth it to use the contingency, get buy-in from all of the attendees before running over – they may have other commitments. Regardless, they will appreciate that you are showing that you value their time.

If you use the contingency time and still haven’t achieved the goals of the meeting, agree to a continuation meeting to be scheduled immediately after the conclusion of the current one. Many people won’t have access to their schedules during the meeting, so don’t try to nail down an exact time while everyone is still in the room. Agreeing on a day (next Tuesday) and a duration is sufficient.

  • Deliverables

Strive to make the results of the meeting concrete. Consensus building and decision making are both important, but the specific results should be tangible. When the meeting’s goals are measurable, this is very easy.

breaking boards

After the Meeting

In karate, we’re taught to punch through the board, not into the board. Follow-through is important with meetings too. When we hold a meeting with a worthwhile goal, and then execute that meeting efficiently, we’re only hitting the board. We have to follow up after the meeting to punch all the way through.

Start with a wrap-up at the end of the meeting. In addition to being a valuable active-listening technique, this summary allows everyone to leave the meeting with a fresh idea of exactly what was accomplished, and a reminder of what they have agreed to do.

Write a summary of the meeting results – basically a written document of the verbal wrap-up. Do this immediately after the meeting, and send it to all of the meeting invitees.

Make sure to deliver everything you committed to doing during the meeting.

  • Wrap-up

Always leave a short time at the end of the meeting to verbally summarize the decisions made in the meeting, confirm responsibility (and dates) for follow-up action items, and schedule the date of the next meeting (if needed).

As soon as possible after the meeting, send out meeting notes that document the responsibilities and major conclusions of the meeting. Make sure that all people originally invited to the meeting get this follow-up note (even if they did not attend), and include the information of who attended for future reference.


We were inspired or reminded of ideas by the following posts and their comment threads. Thanks to the authors and their readers!

Customer Independence Day

Spirit of 76*

If This Be Treason, Make the Most Of IT! (Patrick Henry)

The customer is always right, except when he is wrong. When we have bad customers, we should fire them. Declare today as Customer Independence Day, where we declare our independence from bad customers.

The Loophole

While the customer is always right, there’s a big loophole – when the customer is wrong, he should stop being our customer.

Hawkins on Firing Customers

Christopher Hawkins provides a list of 11 customer archetypes who should be fired. Having them as customers is just bad business. Christopher provides a lot more detail, and here is his list of abusive client types:

  1. The disillusioned
  2. The suspicious
  3. The chiseler
  4. The bully
  5. The something-for-nothing
  6. The slow-pay
  7. The flake
  8. The liar
  9. The blackmailer
  10. The money pit
  11. The clinger

Our Take

We agree that people who exhibit these traits tend to make bad customers, friends, bosses, etc. We should always strive to resolve conflicts or change the behavior of our clients when it is unacceptable. Only when all else fails should we abandon a customer because of a personality problem.

There are times when we have great relationships with our customers, and we simply become misaligned. We should fire those customers too. Perhaps the client’s strategy is changing, and it isn’t one we want to support. Perhaps the customer’s needs are such that they ask us to perform work that we choose not to do. Perhaps our direction has changed, or is evolving, away from an existing customer’s business needs. We will dilute our efforts if we try to be all things to all people.

As a small company starting out, it can be very hard to walk away from a bad “opportunity” with no paying alternative in sight. Someone once said “the hard thing to do is often the right thing to do.” It could be for you. It has been for us in the past.

Be Professional

We’ve decided to fire customer X. We don’t want to dump a shipment of tea in the harbor. Customers have memories, and they also have friends. Even if we never want to work for customer X again, we don’t want a bad reputation.

Some things to do when separating from a customer:

  • Be courteous and polite.
  • Provide ample lead-time. Two weeks is a minimum, even if the customer wouldn’t have provided it to you. Some situations require more lead time.
  • Review your documentation and update it as needed. Presumably, someone else will take the post we decline (or abandon). Make sure they have all the information they need about the work you did.
  • Be proactive about knowledge transfer. Don’t wait for the client to ask for knowledge transfer – actively contribute to the plan, and drive it forward.
  • Recommend alternatives. If you can propose your own replacement (and vouch for her) – even better.

– –
*(based on The Spirit of ’76 by Archibald Willard, 1836–1918)