Monthly Archives: September 2006

Burndown Bullied Into Business Analysis

burndown visual

Burndown is a technique used in Scrum projects for tracking the progress within or across sprints. It is an exciting way to track how a team is progressing against a deadline – and we can apply it to any form of project status. In this article, we will apply it to documenting business processes.

Thanks in Advance

Thanks in advance to Mishkin for pointing us to a great pdf about Scrum from Yahoo. Also thanks to Michael Cohn of Mountain Goat Software, for his innovative extension of burndown for tracking progress across iterations. The clear explanations and differing applications of this simple visualization technique are inspiring. We plan to use burndown for tracking progress on our current project.

Tracking Business Analysis

Consider a project involving a team of business analysts, defining as-is processes and requirements for a portion of a large enterprise software deployment project. This project is helping a company migrate from a legacy system to a new solution.

For this hypothetical project, assume that we are documenting 50 as-is processes, with an average estimate of 10 hours of business analyst effort per process – or a 500 hour chunk of work. With 5 analysts working 5 “on-task” hours per day, that translates to 125 hours of capacity per week – or a 4 week cycle for the team.
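
As a quick sanity check, the arithmetic behind that four-week figure can be written out in a few lines. The numbers are the hypothetical ones above, not data from a real project:

```python
# Hypothetical numbers from the example above.
processes = 50           # as-is processes to document
hours_per_process = 10   # average BA effort per process, in hours

analysts = 5
on_task_hours_per_day = 5
days_per_week = 5

total_work = processes * hours_per_process                            # 500 hours
weekly_capacity = analysts * on_task_hours_per_day * days_per_week    # 125 hours/week

print(total_work / weekly_capacity)                                   # 4.0 weeks
```
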
Linear Forecast

The burndown principle is that we’re tracking a fixed amount of work, over a fixed amount of time. What is interesting is that we don’t monitor how much work we’ve done – just how much work we have remaining. This is important, because effort already spent is a sunk cost, and should not drive our decisions. We should focus on the time remaining to complete the committed deliverables.

Erratic Progress

Even if we spend five hours per day on a task, we will not be eliminating five hours of remaining work each day. Estimates are never perfect. We discover unanticipated problems along the way that increase our estimates of remaining work. We have brilliant ideas that can save time – reducing our estimates of remaining work.

The Ah-Ha! Visual

Combine the ideas of a linear forecast with the notion of tracking time remaining instead of time spent. Then track daily updates of estimated time remaining.

burndown graph

Velocity is what Scrum practitioners call the reduction in remaining work per unit time. Visually, this is the slope of the curve. The steeper the slope, the greater the velocity (imagine a downhill skier), and the greater the progress. We’ve zoomed in on the first half of the burndown graph in the chart above. The remaining line is our current estimate of remaining effort (the solid red line). The forecasted line is the original “project management” forecast of equal progress every day. We can take away a few observations from the visual (a small code sketch of the day-by-day tracking follows the list):

  • When the remaining line (solid red) is above the forecast line (dashed blue), we are behind schedule.
  • When the slope of the remaining line is steeper than the forecast line, we are gaining ground. When shallower, we are losing ground.
  • The gap between the two lines shows exactly how far ahead or behind schedule we are.
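
Here is a minimal sketch of how that tracking might be recorded. The remaining-work numbers are invented for illustration – in practice they come from the team’s daily re-estimate of the work left, not from hours already spent:

```python
total_work = 500                                  # committed hours (from the earlier example)
working_days = 20                                 # 4 weeks x 5 days
daily_forecast_burn = total_work / working_days   # 25 hours/day on the forecast line

# End-of-day estimates of remaining work, starting at day 0 (invented data).
remaining = [500, 480, 470, 440, 430, 395]

for day, rem in enumerate(remaining):
    forecast = total_work - daily_forecast_burn * day
    gap = rem - forecast
    status = "behind" if gap > 0 else "on/ahead of"
    print(f"day {day}: remaining={rem}  forecast={forecast:.0f}  {status} schedule by {abs(gap):.0f}h")
```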

The Excitement

This clear communication of status allows our project manager to know exactly how we’re doing. It also allows our team to see exactly how we’re doing, and makes it easy to visualize goals like “make up lost ground” as well as providing nearly instant feedback. When the remaining line deviates significantly from the forecast line, we can revisit schedules and scope.

Estimating the Effort of Documenting an As-Is Process


Estimating the gathering of requirements is hard. Not as hard as scheduling innovation, but easier than estimating implementation effort. One step in gathering requirements is often the documentation of the “as-is” process – how things exist today. We provide a framework for building those estimates – making the job a little bit easier.

As-Is Process Documentation

There are many methods of eliciting requirements. Different types of projects will use different methods of gathering the requirements. When working on migration projects where an existing system or process is being replaced, a good approach is to start by documenting the as-is process. This documentation provides a starting point for defining what will change. This approach has the added benefit of making it very easy to crisply define scope for the project.

The as-is process is as simple as it sounds – it is a documentation of the existing steps that a business or individual takes in order to achieve an objective. An as-is process is usually documented as a series of steps and decisions. A great way to do that is with a diagram of the process combined with supporting prose.

One benefit of using the combination of flow and prose is that the documents can be used to target communication about the process at either a high or low level, as needed. By “talking to” the flow diagram, we can ask questions or make statements about the process while “waving our hands at” the details. This allows us (later in the process) to elicit requirements about the big picture, and overall goals. We still have the supporting documentation to provide the detailed information when we need it. People who don’t think about what they do as “processes” – and many don’t – can still easily follow along a diagram, and nod their heads in understanding as we manage a discussion about the process.

Approach To Estimation

When we’re planning to use as-is process documentation as part of our requirements definition process, we need to be able to estimate how much time and effort will be spent documenting the “as-is” world. In NeverNeverLand, our client would already have as-is process documents that were detailed and accurate. Most contracts these days are in StarkRealityLand, where our client has out of date, incomplete, and inaccurate documentation, if he has it at all. Therefore, we have to estimate how long it will take to create it.

Our approach to estimation is to decompose our documentation effort into its constituent steps, estimate the time for each step, and add it up. While we’re doing that, we will also identify the people who will be involved in each step and define the scope of the process to be documented.

Process Scope

Each process should be defined as the work required to achieve a particular objective. The size of the objective does matter – we want an objective that takes between 6 and 12 hours of analyst time to document. This is a circular reference, but one easily resolved. If, after completing the estimation effort for the process, you find that it takes 20 hours to document, then the process is “too big” and needs to be decomposed into two or more smaller processes. Some example processes could be:

  • Hire a new salaried employee
  • Send out monthly invoices to customers
  • Rebalance investment portfolio

Decomposing Elicitation

It would be fantastic if requirements elicitation followed this simple formula:

  1. Interview Expert
  2. Record Information
  3. Distribute Information

But it never does. Our experience has been that the following steps are more realistic:

  1. Initial interview of subject matter expert (SME).
  2. Create draft document (diagram of flow + documented prose).
  3. Review draft document (expect 25% to 50% changes).
  4. Revise document.
  5. Review document (expect 5% to 25% changes).
  6. “Final” revision of document.
  7. Approval / signoff process (varies with client).

If the process is “right sized”, the time estimates should look like the following:

  1. Initial SME interview – 90 minutes
  2. Create Document – 120 minutes
  3. Review Document – 60 minutes
  4. Revise Document – 60 minutes
  5. Review Document – 60 minutes
  6. Revise Document – 30 minutes
  7. Approval Process – 30 minutes

Total time – 7.5 hours.
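
The 7.5-hour total is just the sum of those per-step estimates, and the same roll-up gives a bottom-up estimate for the whole project. A minimal sketch:

```python
# Per-process estimate, in minutes, for a "right sized" as-is process.
step_minutes = {
    "Initial SME interview": 90,
    "Create document": 120,
    "Review document": 60,
    "Revise document": 60,
    "Second review": 60,
    "Final revision": 30,
    "Approval process": 30,
}

per_process_hours = sum(step_minutes.values()) / 60
print(per_process_hours)        # 7.5 hours per process

# Rolled up across, say, 50 processes:
print(per_process_hours * 50)   # 375 hours of BA time
```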


One Size Does NOT Fit All

Every team, and every process, is a little different. There are factors that you need to consider to adjust the previous estimates. Here are some common things to consider (a rough sketch of folding such adjustments into the base estimate follows the list):

  • BA Experience Level – Less experience increases the likelihood of having an additional unplanned iteration. It also means that interviews and reviews might take longer – because staying on-topic may be harder.
  • BA Domain Experience – Context is everything. Without the ability to ask relevant follow-up questions, rework is likely to go up. Interview time may also be increased in order to gain an understanding of the domain on-the-fly. And documentation time can go up, as explicit time “thinking about the problem” may be required, beyond the implicit time spent thinking while doing (an important senior BA skill).
  • SME Expertise – Some SMEs are people who “do the job everyday” while others are people who understand the relevance of each step of the job. The more of a “doer” an SME is, the more rework we can expect in the first review. Also – the first review must include an actual expert, even if the SME isn’t one. This may mean extra work for the business owner. Without this understanding, we can’t hope to write complete documents.
  • BA Skill – Work isn’t just about experience, it is also about aptitude. A BA who can ask good “assimilation questions”, write unambiguous prose, diagram with clarity, control a review meeting, and navigate signoff will meet their time estimates. One who can’t will add time wherever skills are lacking.
  • Business Owner Self-Awareness – Some business process owners (people who approve documentation and are responsible for the existing process) do not think in terms of processes. They may be following an ad hoc process today, and may not have rigid processes. Worse yet, there may be multiple owners, who disagree with each other and/or the SME about what the process really is. These rudderless ships will increase rework, possibly add a review-revise cycle, and can extend meeting times.
  • Interdependent Processes – Most enterprise software projects will have a dozen to a hundred business processes (at the right size) that are being affected by the migration to the new solution. When these processes are highly entangled (calling each other, referencing each other, and relying on or deferring to each other), extra time may be required to validate the touch-points between each pair of processes. This time should be captured in the steps that involve updating the documents.
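
None of these factors comes with a fixed number attached – the multipliers below are purely hypothetical placeholders, shown only to illustrate how a team might fold such adjustments into the base estimate:

```python
# Purely hypothetical adjustment multipliers -- the article gives no fixed
# numbers, so treat these as placeholders a team would calibrate for itself.
base_hours = 7.5

adjustments = {
    "junior BA": 1.2,                          # extra review/revise iteration likely
    "BA new to the domain": 1.15,              # longer interviews, explicit thinking time
    "SME is a 'doer', not an expert": 1.1,     # more rework in the first review
    "multiple, disagreeing owners": 1.25,      # possible extra review-revise cycle
    "highly interdependent processes": 1.1,    # validating touch-points between processes
}

applicable = ["junior BA", "SME is a 'doer', not an expert"]

estimate = base_hours
for factor in applicable:
    estimate *= adjustments[factor]

print(round(estimate, 1))   # e.g. 9.9 hours instead of the 7.5 hour baseline
```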

Participants

Generally, there are three roles being played in the documentation of an as-is process.

  • SME – Subject Matter Expert. Someone with intimate familiarity with the steps of the process – ideally, but uncommonly, someone who also understands the relevance, underlying requirements, frequency, and cost of each step in the process. Participates in the initial interview and the first review (making the first review also an active listening exercise).
  • Business Owner. Person responsible for the execution of the process being documented. Usually the person responsible for approving that the process has been documented correctly. Participates in both reviews and in the approval process.
  • BA – Business Analyst. Person documenting the process, as a precursor to requirements definition and gap analysis. Participates in every step of the process.

Summary

A process should take between 6 and 12 hours of BA time to complete. Roughly half as much time will be spent by SME/reviewers. Basically “an hour of doc for every hour of talk.” The skills and perspective of the team can affect the time estimates too. Rolling up low-level estimates will provide good estimates at the start of a project – better than top-down, order-of-magnitude estimates based on a shallow analysis of the area to be documented.

Vote Early And Often – Getting Value From Brainstorming


Brainstorming can be a simultaneously fun and effective technique for identifying software features or requirements. We’ve written previously about how to facilitate a brainstorming session and how to leverage the results. Timothy Johnson shares another way to use brainstorming results effectively. His way is more fun, and may be just as effective.

A Simple Formula

Here’s the idea, graphically.

brainstorming economics

Brainstorming yields ideas.

Ideas, combined with voting, yield value.

The voting piece yields value, because it helps to focus our efforts on the best ideas.

Means of Voting

We previously proposed a way to gather everyone’s inputs, by having them value each idea. Our method included the following:

Count the requirements. We’re going to create three evenly sized priority buckets and place the requirements in the buckets (1, 2, 3). Each person will rate every requirement as a 1, 2 or 3 (1 being most important). Give each person a stack of post-it notes and a marker, and have them make out a fixed number of 1, 2, and 3 post-its (evenly divided, with the remainder as 2s). It’s important that people be forced to divide the scoring evenly so that they don’t make every requirement a 1.

Everyone prioritizes the requirements. Have everyone physically get up, mill about, and stick their post-it-note priorities on all of the requirements. The scoring is somewhat subjective and individual. Provide guidance about how ideas should be rated (value, feasibility, alignment with strategy), but ultimately each person will make a judgement call, and that’s ok.

from Five Steps to Picking the Best Requirements
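
The only fiddly part of that scheme is splitting each person’s chits evenly across the three buckets. A small sketch of the allocation rule described above, with the remainder handed out as 2s:

```python
def chit_allocation(num_requirements):
    """Split one person's votes into equal piles of 1s, 2s and 3s,
    with any remainder handed out as 2s (per the scheme above)."""
    per_bucket, remainder = divmod(num_requirements, 3)
    return {1: per_bucket, 2: per_bucket + remainder, 3: per_bucket}

print(chit_allocation(20))   # {1: 6, 2: 8, 3: 6}
```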

Timothy proposes a simpler, easier, more fun way to get everyone’s inputs, including the following:

  • Count up the number of items on the list and divide by three. This is how many dots (or votes) each person receives. For example, if you brainstormed 60 items, then there will be 20 voting dots given to each participant.
  • If possible, assign a color to each person. Some facilitators like to give everyone the same color to keep things anonymous. I prefer accountability over anonymity any day.
  • Each person spends their votes like currency. They may place all their dots on a single item if they truly believe it is important. The bottom line is that it makes people intersect their priorities and their passions.

from See Spot. See Spot Vote

Analysis

These two approaches seem to have the same general effect, in applying the “wisdom of crowds” to weed out the bad ideas and center attention on the good ideas. Since brainstorming tends to create a very broad range of ideas, the approaches tend to be very effective at weeding out the impractical and valueless ideas. There are a couple interesting additional factors at play in the two approaches.

The first approach, which we proposed in January of 2006, has the positive benefit of forcing each person to think about every idea in a relative sense. Because each person has to vote on every idea, they have a limited number of “bad idea” chits and “good idea” chits to spend. This will force a number of independent “X is better than Y” analyses.

The second approach, where each vote is treated as currency, is much more fun. People are going on a shopping spree. This approach allows someone who believes passionately in an idea to “spend” all of their votes on that idea. This has the positive result of allowing someone who wants to champion an idea to have a dramatic impact on that idea’s results. It also introduces the risk that a single passionate person will force their pet “good idea” to the top of the list. There is also the risk that people who can’t make good decisions will dilute their inputs across too many ideas.

The first approach takes the totals and uses those numerical values in a form of idea-triage, to drive future investment (in the investigation of the ideas). The second approach includes a discussion of ideas and voting afterwards.

Both approaches capture an assessment of the group’s perspective on all of the ideas, and barring extreme behavior by any of the voters, will result in reasonable results. Timothy’s approach is definitely more fun, and allows people to express their desire to champion one or two ideas with disproportionate voting. Our previous approach feels a little more like work, and forces people to apply a valuation framework to the ideas.

Best of Both Worlds

The elements that are most important in the two approaches are:

  • Capturing passion
  • Using a valuation framework to compare ideas
  • Having fun

Proposed Combined Approach

We would propose that the “rate every idea” approach is important, and that the ability to express passion is important as well. We suggest keeping the {1,2,3} rating approach, but also giving every person in the session a “5 spot” that they can “spend” on any idea. They don’t have to spend it, but should if they really believe in something. This will help capture the passion that Timothy’s approach uses so effectively.
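
A rough sketch of how that combined scoring might be tallied. The ideas, voters, and votes are all invented, and the scoring convention is an assumption: the {1,2,3} ratings are inverted into points (1 = most important = 3 points), and the optional “5 spot” adds 5 bonus points to the idea its owner champions:

```python
# Assumption: invert the 1-2-3 scale (1 = most important) into points.
RATING_POINTS = {1: 3, 2: 2, 3: 1}

def tally(ratings, five_spots):
    """ratings: {voter: {idea: 1|2|3}}, five_spots: {voter: idea or None}."""
    scores = {}
    for voter, votes in ratings.items():
        for idea, rating in votes.items():
            scores[idea] = scores.get(idea, 0) + RATING_POINTS[rating]
    for voter, idea in five_spots.items():
        if idea is not None:
            scores[idea] = scores.get(idea, 0) + 5   # the champion's "5 spot"
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ratings = {
    "ann": {"idea A": 1, "idea B": 2, "idea C": 3},
    "bob": {"idea A": 2, "idea B": 1, "idea C": 3},
}
five_spots = {"ann": "idea C", "bob": None}   # Ann champions idea C

print(tally(ratings, five_spots))   # idea C jumps to the top on passion alone
```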

Free BPMN Stencils for Visio 2003 and Visio 2002

bpmn template screenshot

In support of our series of BPMN Tutorial posts, we’ve created a series of Visio 2003 stencils (*.vss) and a template (BPMN_Template.vst) of BPMN symbols. Anthony Britton has created a Visio 2002 version of the free stencils – thanks Anthony!

Download this free resource today courtesy of Tyner Blain!


Cost Reduction Potential


Not all process improvements are created equal. How should we select which processes (or process steps) to improve? How do we approach this for a really large migration project? Start with understanding the potential for improvement and then narrow it down from there.

The Basic Concept

When re-engineering a business process, ultimately we want to maximize the ROI of our improvements. That means understanding the costs of today’s process, the costs of tomorrow’s process, and the costs of creating and transitioning to the new process. On a large project, this can be a herculean task. We don’t have enough time to do all the math. We don’t want to get paralyzed with analysis activities. Where should we start?

We should start with the processes that have the highest potential for improvement. Since profit can be simplified to savings minus deployment costs, we should start by finding the processes (or process steps) with the highest initial costs (and therefore the highest potential savings).

A Simplification

To simplify the presentation of this idea, we will ignore probabilistic costs (risks, errors, modeling) – not because they aren’t relevant, but because they cloud the issue. Imagine for the rest of this article that the only costs in a process are operating costs. The same approach can be extended and refined to include other very real costs.

We will also use an example that depicts process steps – it could easily depict processes. The only difference is the level of detail. On really large projects, use this technique to identify high potential processes, then drill down into them to identify high potential process steps.

Potential

To calculate potential, we calculate the costs of existing process steps. Consider the following simple (existing) process:

simple process

The process has five steps, A through E. Step B is a decision, meaning that sometimes we execute step C, and other times we execute steps D and E.

We want to know which step of the process to focus on improving – so we have to identify the step with the greatest potential for savings.

There is a simple formula for defining the cost of any step in the process.

Frequency x Effort x Burden x Period

  • Frequency (number of occurrences per unit of time)
  • Effort (units of time spent in the task)
  • Burden (money per unit of time)
  • Period (unit of time for the analysis)

We can represent this with a simple spreadsheet template.
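
The same template can also be expressed as a one-line calculation. The numbers below are placeholders, not the sample data in the next section:

```python
def step_cost(frequency, effort_hours, burden_rate, period):
    """Frequency x Effort x Burden x Period.
    frequency    -- occurrences per unit of time (e.g. per week)
    effort_hours -- hours spent per occurrence
    burden_rate  -- fully loaded cost per hour
    period       -- number of time units in the analysis (e.g. 52 weeks)"""
    return frequency * effort_hours * burden_rate * period

# Hypothetical step: runs 30 times a week, takes half an hour each time,
# at a $25/hour burden rate, analyzed over a 52-week year.
print(step_cost(frequency=30, effort_hours=0.5, burden_rate=25, period=52))   # 19500.0
```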

template

Sample Data

By creating sample data for each input, and calculating the cost, we can compare the potential for each step in the process.

sample data

If we were to ignore frequency information, then step C would appear to have the most potential, because it has the highest per-occurrence cost (with an effort of 5 hrs at $20/hr). However, by recognizing that step C is only executed 10% of the time, we see that the cost of step D is actually the highest (at $36,000 per year).

real data

Interpretation

Our interpretation is that step D has the greatest potential for savings, because it has the highest cost. Steps A and E are next, followed by step C. Step B is so cheap that it isn’t likely to be worth evaluating.

Next Step

The next step is to propose a replacement for the highest potential step (D). Alternatively, we may look for a way to combine steps D and E with a single replacement (as mentioned in this article about process improvement). After exploring this solution approach, we would look at steps A and E as candidates for replacement. Estimating the costs of replacing these steps is the next thing we do. The costs of performing the steps today, minus the costs of performing the new steps in the future (plus the development and transition costs), determine whether we should consider this as a step for replacement.

We will only consider those steps where the profitability of change exceeds our hurdle rate for investment.

Once we have profit data for each proposed step change, we can prioritize these changes, and then schedule them in conjunction with the skills of our development team.
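
A small sketch of that screening step – the replacement cost, the future operating cost, and the hurdle rate below are invented placeholders:

```python
def passes_hurdle(current_annual_cost, new_annual_cost, change_cost, hurdle_rate):
    """Keep a proposed step replacement only if its simple annual return on
    the development-and-transition investment clears the hurdle rate."""
    annual_savings = current_annual_cost - new_annual_cost
    return (annual_savings / change_cost) >= hurdle_rate

# Hypothetical: step D costs $36,000/yr today, $10,000/yr after replacement,
# and the replacement costs $80,000 to build and transition to.
print(passes_hurdle(36_000, 10_000, 80_000, hurdle_rate=0.20))   # True (32.5% simple return)
```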

Strategic Change Loophole

Executives like loopholes. Perhaps because their accountants find so many for them. We have a loophole here too – a process step may need to get replaced because of internal office politics, or for “strategic” reasons. We need to make sure our stakeholders have a way to include choices that the financial analysis would otherwise exclude.

Summary

Identify the most expensive processes to run. Determine ways to replace them. Calculate ROI and use it to drive prioritization.

Flesh Out Those Wireframes


Stephen Turbek, at Boxes and Arrows, tells us how to get better results from our wireframes. Wireframe prototyping can provide feedback early in the design cycle, reducing costs and improving the quality of the final software. By putting a little flesh on the bone, we can get even better results.

Hooked From the Start

Stephen starts his article with the following quote…

How many times have you been asked, “So, is the new website going to be black and white too?” after presenting your wireframes to a client or a usability test subject?

A very solid opening to his position that when you only use wireframes, you introduce problems. Wireframes are designed to eliminate problems and “clutter” from the feedback session. Feedback sessions provide us with a lot of information, and the challenge is to separate the noise from the signal. The goal of wireframes is to eliminate sources of noise, to make it easier to focus on the signal. But using wireframes also introduces noise into the data.

He goes on to provide a real world example of a website under development for Verizon – showing wireframes and “low fidelity prototypes” that include more information than just the wireframes.

Why Wireframes?

Stephen makes an interesting point – wireframes (named after 3D modeling techniques) were initially designed to provide quick feedback and insight into 3D models, without the expense of complete rendering. As technology reduced the cost of rendering, rendering cycles began to replace wireframes as early prototyping tools.

A wireframe, in the user interface world, is a minimalist visualization of a website or application. It shows where information will reside on a page, and what information will be shown. When using a wireframe to get feedback, it allows a designer to (attempt to) isolate the feedback about content and layout from other data (like feedback on color schemes and graphics). It also allows for more rapid prototyping because the prototype can be built as soon as a layout is done, without waiting for colors and graphics and branding to be incorporated into the design.

Why Not Wireframes?

Expanding on Stephen’s point, the lack of information detracts from an understanding of the usability of a design. Colors (such as blue hyperlinks) do provide visual cues for users. Branding and navigation elements provide a sense of context and comfort. The absence of these things can distract the people we were trying so hard to not distract.

What About Cost?

Another goal of using wireframes is to reduce the cost (and time equals money) of prototyping. Stephen shows how in less than 15 minutes, a wireframe prototype is converted into a prototype that leverages existing branding, navigation, colors, and images. 15 minutes is not a long time to spend to achieve this striking difference (check out the images in Stephen’s article). When showing multiple screens in a site with structured navigation, most of those investments will be re-used across multiple screens.

Why It Will Work

The ability to re-use the “extra bits” is the key to why it will work and not just the key to why it won’t cost more.

People who think about website advertising worry a lot about ad-blindness, or the ability of people to ignore ads over time. We actually depend on it here, to allow our regular readers to tune-out ads that appear in consistent locations and formats and instead focus on the content of the articles. This same phenomenon applies to these augmented wireframes.

The lack of context that a wireframe creates can be disconcerting or even confusing. The (valid) concern that drives the use of wireframes is that providing all the other stuff will distract the users and prevent them from providing feedback on content and layout.

The ad-blindness effect will quickly allow people to ignore the “other stuff” and focus on providing feedback on the content and layout of a page. People are good at scanning pages – they have an autopilot that takes them directly to the “meat” of the page. And the presence of the extra content allows them to do that – where the absence of that richness will immediately trigger a “what’s wrong?” or “what’s different?” analysis.


Tips to Making It Work

Stephen provides eight tips, which we really like. We are concerned about one of the tips – to use “real data” instead of fake data. I had always learned that real data risked distracting the users, who might fixate on the fact that you chose incorrect data to display.

Today, after a prototype review session, we got feedback from our reviewers that it would help them if we used real data.  The reviewers of our (fleshed out) wireframes were unable to visualize how the interface might behave with some real-world data examples. The data we were presenting and manipulating is fairly complex, and the contrived examples didn’t do a very good job of showing how the interface would handle the complexity it would need to support.

Using representative data is definitely a good idea – and as good as Stephen’s other ideas are, we’ll take his word for it that real data is even better.

Conclusion

Spend the incremental effort to make wireframes “feel real” by fleshing out some of the context that wraps the prototyped pages. That context will provide more comfort than distraction for users.

Process 2006 – Day 1 by Sandy Kemsley

Sandy Kemsley, of Column 2 fame, is blogging the Process 2006 convention “live” as it goes. Subscribe to her blog to stay on top of things. For now, here are the articles she’s posted from day 1.

Some choice quotes from Sandy’s posts:

“One thing that I really liked was his analogy about BPM: buying MS-Word doesn’t make you a novelist, and buying a BPMS doesn’t make you a process-oriented company. The technology is an important enabler, but there’s much more to it than that.”

“[OMG, we just hit a slide with 249 words — the print is getting smaller and smaller. Keith, you’re killing me!]”

“This is definitely the first presentation that I have ever seen, anywhere, that quotes Winnie the Pooh, and gives an example of strategic objectives as illustrated by Christopher Robin and Edward Bear. By the time he got to his main case study, which was the hospital administration process around having a vasectomy, I was wishing that he had been my prof at university.”

“Surprisingly, he then went on to talk about the emotional aspect of decision-making as it relates to customer centricity: how some decisions are almost purely emotional (like buying a convertible or having plastic surgery) and the financial practicalities of those decisions become less important. Not the argument that I expected from a reason-driven German banking COO, but he posits that being a customer-centric organization is understanding the balance between reason and emotion.”

I won’t tell you which quotes are from what articles – you’ll just have to check them out to find out more about the stuff you like. :)

Thanks Sandy for the blow-by-blow.

BPMN Deadlock


One danger of using a precise language like BPMN to describe business processes is that you can precisely get yourself into trouble. Deadlock (in BPMN) is a condition used to describe a process that can’t be completed. By designing (or describing) the wrong business process, you can create a process that never finishes.

Rewarding the Pedants

At the end of our earlier article about compensation events, we hinted that there was another problem with the diagram we debugged. That problem was deadlock. We diagrammed a process that could never complete. Congrats to the people who caught it!

A Simple Deadlock Example

Take a look at the following process diagram and see if you can identify the deadlock condition. Here’s a hint – you might want to review how gateways work in BPMN.

Deadlocked BPMN process

The process splits at an inclusive OR gateway, and then joins together with a parallel gateway. The deadlock case (a process that can’t complete) comes from the precise nature of the definitions.

  • Flow from an inclusive OR gateway will follow at least one, and could follow some or all of the paths that leave it.
  • Flow into a parallel gateway requires that all incoming paths be resolved before continuing.

When the salesman sells more than twenty cars, both paths are taken from the OR gateway, and then both paths are combined in the parallel gateway. There is no deadlock condition. We have a problem when the salesman sells twenty or fewer cars.

In the problem case, one of the paths (“More than 20”) is not taken by the process (the salesman does not get a bonus). The parallel gateway still requires both paths before completing – it has no way of knowing that the bonus-path was never started (or rather, the parallel gateway doesn’t care). So the parallel gateway will “hold up” the process until both input paths are completed. Since the “bonus path” is never started, the parallel gateway will never be satisfied and the process will never finish. This is what BPMN folks call deadlock.

Simple Solution To A Simple Problem

The solution for this illustrative example is as simple as the problem.

Corrected BPMN Deadlock example

The parallel gateway that joins the flows together before “Pay Bills” has been replaced with an inclusive OR gateway. This gateway specifically waits for every path that was activated. When only one path is initiated, only one path is required. When both paths are started, then both paths must be completed before proceeding.
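
BPMN is a notation rather than code, but the difference between the two joins can be sketched in a few lines. This is an informal illustration of the token-flow rules described above, not a real BPMN engine:

```python
def and_join_ready(arrived, incoming):
    """Parallel (AND) join: needs a token on every incoming flow,
    whether or not that flow was ever activated upstream."""
    return incoming <= arrived

def or_join_ready(arrived, activated, incoming):
    """Inclusive (OR) join: waits only for the incoming flows that the
    upstream OR split actually activated."""
    return (incoming & activated) <= arrived

incoming  = {"base pay path", "bonus path"}
activated = {"base pay path"}   # the salesman sold 20 or fewer cars
arrived   = {"base pay path"}   # the bonus path never delivers a token

print(and_join_ready(arrived, incoming))             # False -- deadlock
print(or_join_ready(arrived, activated, incoming))   # True  -- process completes
```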

A More Complex Example

This example is more along the lines of the mistake we referenced in our earlier article.

Deadlocked BPMN Process

In this example, our actor attempts to walk and chew gum at the same time. If he trips, he will reach one of the end events, but he won’t achieve the multitasking award. At first glance this seems acceptable – the process starts and ends, and the actor doesn’t get a multitasking award.

However, this process is also deadlocked, because of the join gateway. The join gateway requires both of the incoming sequence flows to complete – but if the actor trips, the flow from the XOR gateway (trip?) will not be activated and the gateway will never complete.

A More Complex Solution

The key to solving this deadlock problem is to make sure that all of the paths into the parallel gateway are completed.

BPMN Deadlock removed

The XOR gateway (trip?) is now processed after the two sequence flows are reconnected. If the actor tripped, he will not receive the multitasking award. If the actor didn’t trip, he would.

Summary

When using parallel gateways to join paths, we must make sure that all incoming paths are handled. We can avoid the deadlock problem by replacing a parallel gateway with an inclusive OR gateway. We can also resequence our process steps to assure that all incoming sequence flows to a parallel gateway are activated.

Before And After – A Rule For Improving Processes


Nils proposes his rule of three boxes as a consideration when developing software or software features to improve business processes. In short, make sure that you can actually execute the new process. It isn’t enough to create a good “replacement process” – you have to be able to transition to the new process and then back out of it. The new process is plugged into a business ecosystem, and it must coexist with the existing processes.

From Nils…

Improving the process is all well and good. But this rule addresses the fact that processes don’t live on their own – there are processes that come before and that come after.

Nils Davis

Nils presents an easy-to-understand diagram of the original process (A->B->C) and the end state after replacing “B” (A->B’->C).

Nils also presents four common causes of friction between the new improved process and the existing processes with which it works.

  • Platform Mismatch
  • Technology Change
  • Unfamiliar Look and Feel
  • Legacy Data Integration

From Tyner Blain

Nils uses Photoshop as a great example – Photoshop is easily an order of magnitude better at editing images than previous manual solutions. However, getting the image from a film negative to Photoshop, and then onto paper (for use by the printer) was still hard. This slowed the growth of Photoshop (or drove growth in the digital imaging and digital printing industries).

In addition to considering how best to integrate B’ with A and C, we should also consider A and C as “fair game” for re-engineering. We should consider the possibility of going from {A->B->C} to {A->D} or {D->C} or just {D}.

By keeping our requirements at the proper level of abstraction (value focused, not design focused), we can question the importance of replacing each element of the previous process. We can also develop an understanding of why the original process was put in place, and how best to modify it. This is a great approach when dealing with migration projects.

Another consideration is change management – what does our customer have to do when migrating from their old process to our new one? Perhaps the change management should break that transition down into stages (e.g. initial dual systems, then mixed systems, then finally a new system).

Conclusion

Make sure you’ve targeted the highest priority problems to fix. Then make sure the solution can be integrated into the existing process. Then determine how best to manage the transition of actually running the business from the old system to the new.

BPMN Compensation Event Correction

bpmn diagram

One of our readers (thank you!) pointed out that another blogger was critiquing one of our earlier business process modeling notation (BPMN) diagrams. Turns out we made a couple mistakes. Here’s a more detailed look at the compensation end event.

The Web-Publishing Process

One of the great things about writing and publishing on the Tyner Blain blog is the feedback loop. In BPMN, it looks a little like this:

blogging process

Usually, when a critic finds a mistake, he will point it out to me directly, or traffic will come from the critic’s site, and we’ll find out via our analytics.

It turns out that we did make a couple mistakes, so thanks Bruce for pointing them out.

First, The Mistakes

In one of our articles on end events, we created an example to demonstrate compensation end events, and we had a couple mistakes in the diagram – not too good for a tutorial. Here’s the offending diagram, with a couple annotations in red showing the problems.

bpmn compensation event mistakes

These two mistakes (circled in red) indicate a mis-application of the compensation end event.

The compensating activity, “Record Possible CC Fraud”, is a mis-application of the compensation intermediate event, because that task doesn’t actually undo, or compensate for, the task with which it is associated (Process Credit Card).

The second mistake is that there are two gateways (a fork and an exclusive OR) that can be reached before the compensation end event, but no other activities. The compensation events, when triggering compensation, are designed to allow you to “go backwards” in the process, and as such, expect that at least one activity is processed before deciding to compensate. The presumption is that if compensation can be initiated without performing any other tasks, then it could happen immediately from within the task that has a compensation intermediate event on its boundary. The gateways don’t count as activities, because they only analyze existing data.

The Corrections

Here’s a new example diagram, without those two errors.

BPMN compensation event example

The first correction shows that the compensating task, “Record Transaction Recision”, is truly something that is done to compensate for recording the transaction. The second change is that there are two distinct tasks between the activity being compensated and the compensation end event. The first task, “Request Credit Card Charge”, sends and receives messages from the bank. This is the task that could cause a compensation (controlled by the gateway’s analysis of the results of that request). The second task, “Notify Customer of Failure” will also occur, but only if the bank denied the charge to the credit card.

Summary

Compensation events require that tasks be processed between the task being compensated and the triggering of that compensation. Compensating tasks must compensate for the task to which they are associated.

For the Truly Pedantic

There is also a semantic concern with the original diagram – the original diagram shows that the product would be delivered, even if the credit card charge failed to go through. This could indeed happen – when a vendor is unable to contact the bank, but unwilling to lose a sale. Think about the old “paper charge” handheld machines – they would create a slip that records the transaction in exchange for goods or services – and only later process that transaction, which could fail. But this is a minor concern, since the situation is very uncommon these days (only us old folks remember).

There is also another problem, although it may be more of a problem with the BPMN spec than the original diagram. We’ll talk about that in a future article. [Update – it is definitely a problem with the diagram, and not the spec. – Discussion to appear in 20 Sep article]