Is Agile Bad For Software Development?


Last week, Ivan Chalif, a product manager and blogger, tapped into a thread criticizing product managers for not adopting and espousing agile, or at least rapid-release, techniques. In this article we look at Ivan’s comments and one of the articles he referenced. We also share our own perspective and an alternative analysis of what may have happened.

Short Release Cycles are Bad

At the end of his article, Ivan references a well-presented position on why short release cycles are bad, published by Justin James on TechRepublic last June.

Ivan’s article basically says that agile approaches are fine for non-mission critical software. Justin’s article includes criticism of software he has used, and implies that the problems are due to the development methodologies used.

In a nutshell, they are both saying that short release cycles are bad because they result in bad software being released. Ivan qualifies his perspective by saying that for some applications, like “consumer web applications” that’s ok – because being bad doesn’t hurt as much.

Our perspective is a little bit different – short release cycles are the best thing ever. Immature or incomplete releases are bad. We define incomplete as meaning “not all of the must-be requirements have been satisfied.” We define immature as meaning “quality, usability, and performance are not where they need to be for the audience.”

Ignorance is No Excuse

On the point of requirements, we’ll add that failure to implement an undiscovered / undocumented must-be requirement still qualifies as incompleteness. A product manager’s failure to document a critical requirement may absolve the implementation team from addressing and testing it – but the “entire team” is still on the hook for it.

In short, we don’t think agile approaches need to be forced to live in the “software that isn’t important to users” box. Agile techniques and approaches can be used on any project. Any project can succeed or fail, and any process can succeed or fail on a particular project. What is more important is the team. And unfortunately, the environment – people trump process, but politics trumps people. If your organization can not or will not support an iterative development cycle, your project will fail if you use an agile process.

Mission critical and non-mission critical software have mandatory requirements that must be satisfied. Must-be requirements, by definition, must be satisfied. A build of the software that does not satisfy those requirements is one that is incomplete, and should not be released for production use.

Note that we specified “production use.”

Anti-Agile Argument

We’ll take a point-counterpoint approach to one of the arguments raised by Justin. Our position is that his criticism, while anecdotally valid, should not be a criticism of the process, but of its poor execution.

Here is an example of what I mean: today, I made hotel reservations with a Hyatt hotel on their Web site.[…]Except there is one huge flaw: it does not use HTTPS! So not only is my credit card number being sent in plain text (along with my first and last name), but it is going to be stored in my browser’s auto-fill system in plain text!

Justin James

Is that a problem? Absolutely. Is secure transmission of user financial data a must-be requirement? Unquestionably. I explained to my mom that her credit card number is safer on the Amazon form than it is on the receipt at a restaurant. The whole ecommerce world is dependent on that assumption. The team that implemented the website Justin was using screwed up big time. But how did they screw up?

The team that built this Hyatt website screwed up in one of four ways:

  1. They did not identify the security requirement.
  2. They did not write code to satisfy the security requirement.
  3. The code they wrote did not satisfy the security requirement.
  4. They wrote code that broke a previously working solution to the security requirement.

This clearly can be blamed on the “entire team” but can not be clearly blamed on agile software development.

Is Agile To Blame?

Let’s look at each of the possible explanations from above in more detail…

They did not identify the security requirement

The first possibility – failure to identify the requirement – is a risk with every development process. Identification of the critical requirements is a matter of product manager competence (or product champion, or whoever on the team is gathering the requirements). More rigorous requirements gathering will identify more requirements. You don’t necessarily have to have a big up-front requirements (BUFR) exercise before writing the first line of code. You do have to have “enough” requirements gathering. A BUFR process does not assure a complete set of requirements any more than an incremental process prevents it.

All teams will reach a point when they decide they have “enough” to risk developing against an incomplete spec. A product manager has to develop a sufficient breadth of understanding to make an informed decision about when “enough” is enough. That’s why we encourage defining the space with use case names and brief descriptions. Understanding the domain is the goal of this exercise, and it is not the same as defining all of the requirements up front.

This team failed to get an understanding of the domain, and overlooked something as obvious as the need for customers to have secure transmission of their credit card numbers. That is a failing of the product manager, not an agile approach. Note: we dismiss any process that says “don’t develop an understanding of the domain” as a bad one, agile or otherwise.

They did not write code to satisfy the security requirement.

Was the requirement mis-prioritized as being less than critically important? If so, blame the product manager again.

What if the requirement was defined, identified as a must-be requirement, and still wasn’t implemented? Whose fault is it then? Did the team have poor execution, or did they follow a process that says “define the requirements, but don’t address them if you don’t feel like it?”

I’m not aware of an agile process that says you should release the software to production without implementing a solution to a known critical requirement. Definitely can’t blame this on agile. You could potentially blame it on an executive or project manager who said “release on date X, I don’t care if it’s done!” In which case, Justin just had bad timing, because any agile process would put this at the top of the list for the next release. If not, it isn’t agile.

The code they wrote did not satisfy the security requirement.

OK, now we’re looking at a team that 1) identified the requirement, and 2) attempted to implement it in their release, but 3) failed to test it (or released with a known but uncommunicated bug). I say uncommunicated because Justin didn’t get the message, even if someone on the team tried to send it.

Lack of testing and failure to communicate are not characteristics of agile processes. Just because you’re delivering code rapidly, managing the amount of work to be delivered in a timebox, does not mean that quality is ignored. That is certainly a choice that any foolish team (or irresponsible team) can make. But it is not unique to agile development, or even characteristic of agile development.

The agile projects that we’ve seen tend to have better quality than waterfall projects. Not because they are agile, but because the developers who are following an agile process have tended to (this is anecdotal data) have a greater appreciation for the benefits of continuous integration and quality in general. They live by the test cases. If it isn’t tested, it hasn’t been completed. And if it fails the tests, it hasn’t been completed.
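To make that concrete, a must-be requirement like Justin’s “secure transmission” can be pinned down as an executable test. This is only a sketch – the `form_is_secure` function and the URLs are hypothetical, not from any real booking site – but it shows how a team that lives by its test cases keeps a requirement from being quietly ignored:

```python
from urllib.parse import urlparse

def form_is_secure(action_url: str) -> bool:
    """A payment form counts as 'secure' only if it submits over HTTPS."""
    return urlparse(action_url).scheme == "https"

# The must-be requirement, expressed as tests the build cannot pass without:
assert form_is_secure("https://reservations.example.com/book")
assert not form_is_secure("http://reservations.example.com/book")
```

If a check like this runs on every build, the requirement is verified continuously rather than hoped for at the end.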

There is a greater possibility that a waterfall project will allow developers to write code for months on end without creating tests – with the expectation that it will get tested in the end. We’ve worked with a client whose timeline for a release of a key set of features included three months for development and one week for testing. When we asked “Where’s the time allocated to fix the code when you discover that it is broken?” we got the strangest looks. As an aside, in the year since we helped that team implement a continuous integration process, they’ve reduced their outstanding bug list by almost 80%, and the team lead expects it to be cut in half again in the next month. They’re making great progress just by adding one agile technique (continuous integration). The team has to create, review, and pass tests before they check code into the trunk for the current release cycle.

Absolutely not a failing of agile software development – this would be a failing of the team to deliver quality product.

They wrote code that broke a previously working solution to the security requirement.

This is another variation of the previous situation. 1) They identified, scheduled, implemented, and tested the functionality. 2) They later broke it, but didn’t retest it. If they discovered that they broke it and didn’t fix it, that falls in the previous bucket. More likely, they didn’t discover that they broke it.

Regression testing is an important element of continuous integration, and from a cost perspective, the primary reason to automate testing as much as possible. Technically, as much as practical – if it is cheaper to test manually, test manually. As the code base builds, so does the test suite. And a simple addition to the check-in policy is all that is needed.

Without regression testing: If it doesn’t work, don’t check it in (and therefore don’t release it).
With regression testing: Also, if it breaks something else, don’t check it in without fixing the other thing (and therefore don’t release it).
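The two policies can be sketched as a simple check-in gate. The function and flag names here are ours, not from any particular tool:

```python
def check_in_allowed(change_tests_pass: bool,
                     regression_suite_passes: bool,
                     regression_testing: bool = True) -> bool:
    """Gate a check-in to the trunk under the two policies above."""
    if not change_tests_pass:
        return False  # If it doesn't work, don't check it in.
    if regression_testing and not regression_suite_passes:
        return False  # If it breaks something else, don't check it in.
    return True

# Without regression testing, a change that breaks existing code slips through:
assert check_in_allowed(True, False, regression_testing=False)
# With regression testing, the same change is blocked:
assert not check_in_allowed(True, False, regression_testing=True)
```

The point of the second policy is exactly the failure mode described above: code that works on its own but silently breaks something that used to work.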

The team we worked with before was working on a mammoth code base with over 100 developer-years and hundreds of thousands of lines of code. They had such a huge backlog of outstanding bugs because 1) there were another 5 to 10 developer-years of new functionality being added to the code every year, and 2) they had no practical way to discover or prevent regression bugs. The developers rarely checked in code that didn’t work, but regularly introduced errors or unearthed latent errors in the existing code. And effort spent fixing those errors (after they were discovered) introduced more errors. It was a vicious and never-ending cycle. [Ed: Here’s the case study we published on integrating testing automation.]

When they adopted the continuous integration philosophy, they included regression testing as part of their required process for release – no existing tests could be broken. And they’ve been able to start dedicating time to addressing old bugs – some of them years old – with confidence. They’ve also been able to refactor the existing code to reduce the cost of ongoing work, allowing them to accelerate the improvement of their quality levels. It is a great story.

Failure to regression test is not a characteristic of an agile process.

Conclusion

There’s no disputing Justin’s bad experience. We believe that a poorly delivered product is a result of poor team execution, not poor process decisions.

  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.

30 thoughts on “Is Agile Bad For Software Development?”

  1. The key point that you mention in your analysis is that agile methods (particularly XP) tend to be test driven. If the Hyatt web site is not secure, it’s because the developers did not create, implement, and regressively run a security test. So they weren’t really doing agile (unless the product manager errantly decided web security wasn’t important, which is orthogonal to the methodology).

  2. The more probable scenario (in my experience) is that the web site was written with security in mind (SSL at least), however it was deployed incorrectly. Enabling SSL is a deployment issue. Coding for SSL is a development issue. You can’t reliably say which failed (or both) without actually getting on the server to see how the site was deployed.

  3. Tim,
    I have to disagree. Development includes deployment. The entire team is responsible for delivery of a product to the user. If any piece of the team has a hiccup, it is the responsibility of all team members to make sure that it is corrected. Saying it was a “deployment” issue creates an us vs. them environment that is NOT conducive to a productive shop.

  4. Thanks guys, great discussion so far! I have to say that I am definitely in the “deployment is part of delivery” camp, and therefore a responsibility of “the entire team.” Some teams will organize so that the people writing the code are not the ones deploying it, and some won’t. But it doesn’t matter – deployment is still part of the product.

  5. Scott –

    First and foremost, I would like to thank you very much for bringing some debate and excellent criticism to a post that I thought received very little attention!

    Second, I will say that I actually am in agreement with every single point you make. Indeed, I have actually softened my stance quite a bit on “Agile” (I am sticking with your use of “Agile”, as that is how I use it as well) development methodologies (http://blogs.techrepublic.com.com/programming-and-development/?p=361).

    At this point, I feel that “Agile” methods can be employed “safely”, provided a few requirements are met up front:

    1) The software cannot be mission critical for something like air traffic control, life support systems, anti lock brake systems, credit card processing, or other applications where matters of health, safety, and the such are on the line. Those systems require such a massive amount of testing that “Agile” is nearly impossible. Indeed, imagine the lawsuit on it: “the defendant was so obsessed with ‘market share’ that they ignored the ‘industry best practice’ of long release cycles in their rush to market.” Ack.

    2) The code team has to be top notch. The quickest way to a disastrous project is to put a shoddy or inexperienced coder in a situation where they *feel* pressured to write code quickly (regardless of whether or not the pressure is actually there). As your sub-par coder is rushing to keep up with the imaginations of the rest of the team, he or she is introducing bugs. If your test team is less than 100%, you have bugs.

    3) The architecture of the code must be such that quick changes are not likely to damage existing code. A new but broken feature is much more forgivable than breaking a previously working feature. This is one reason why XAML is pretty darn nifty to me. I like the idea of building core code components, and then rapidly, easily building them into a reconfigurable workflow as needed. Likewise, ASP.Net’s “Web Parts” and the Java Faces equivalent are winners in my mind. The additional overhead pays off in reusing isolatable, testable code that can be brought into new areas and pulled out easily if it fails to work right.

    4) The product manager needs to be absolutely ruthless! Every new feature needs to be carefully evaluated to see if it is something which most of the users need, or if only one or two really vocal users are demanding. That’s one of my biggest reactions against “Agile”, is that the squeaking wheel gets the grease. The idea of one or two users who have the email addresses of the “right people” being able to blow away months of planning, usability testing, and so on, just to get some pet feature added is against my principles.

    5) If the “Agile” process touches the UI in any way, usability folks need to be involved. The features added must be rigorously tested to ensure that the usability of the application is maintained.

    6) Test, test, test! Even on a “Waterfall” project, I like to have nearly as much time devoted to the testing and refinement stage (especially on Version 1!) as I had for initial development. Why? Because once the end users get the beta, they identify new needs, original requirements are no longer thought to be valid, the business rules have changed, and so on. I have yet to meet the project that reached users at version 0.8 and they said, “it is exactly what we want and need, and there are just a few bugs in it.” More typically I get, “gee, we did not think that requirement all of the way through” or “now that we see it, we think XYZ should be added too.” It’s just the nature of the beast. :)

    Thanks again for the thought provoking article!

    J.Ja

    1. 3) The architecture of the code must be such that quick changes are not likely to damage existing code…

      ….requires an architecture to break.

      I wonder how many projects miss the mark because they did not design the software within an architecture/framework/”the way we do things”/”the way we name things”/”What a thing means”/”list of definitions?”

      1. Great point, Will!

        Like many aspects of agile, I have found this to be a nuanced aspect as well. All code “has architecture” – either intentional or emergent. In the early 2000’s we would look at refactoring of the architecture to meet emerging needs as an expected and “good” thing. I think it was Kent Beck who said the architecture needs to support only what the code is attempting to do today – and when greater needs are introduced, the architecture may need to evolve.

        While that is true, I’ve also found that some thinking ahead about future needs does help reduce the effort to introduce those new capabilities. There are times where the assumptions implicit in architectural choices all but neuter the ability of the team to adapt to changing requirements. An example that comes to mind is adding support for multiple users of a tablet device. Many assumptions within a mobile OS break when the notion of “is this user allowed to do X?” is introduced, because the notion of multiple users is simply non-existent.

        On the flip side, I’ve seen teams incur excessive costs because they had to jump through hoops to meet the needs of an architecture that anticipated future needs that may never come to pass. It is a tough balancing act.

        I also wonder how many teams make these choices poorly.

        Thanks again for the great comment!

  6. @Scott S.
    A provocative discussion indeed, especially since it referenced me in the first sentence :-)

    @Justin J.
    Items 2, 3, and 5 on your list seem like very high hurdles to get over in order to justify using Agile. What are your thoughts on the maturity of the organization (early stage startup vs. established company vs. big, honkin’ enterprise) affecting the successful use of Agile?

    Item 4 is a tough nut, too, especially for young organizations that HAVE to be responsive to prospects and customers in order to keep the product/company going. I have been in situations where I have been expressly told by the executive staff to do whatever it takes to close whatever the next big deal is. Not a great way to manage a product, but it sometimes has to be that way in order to continue to have a job as a PM (at least at that company).

  7. This is an awesome discussion! I don’t know that I can even keep going on all of the topics. I’ll pick Justin’s #2:

    Team capability matters. Justin makes a great point that rapid-release cycles will crater with an unskilled team. So will waterfall processes. The difference is that you spend a lot more time and money with a waterfall process before you discover how bad of a job the team did. If you let the developers release the code “into the wild” without some form of validation and testing, then yes – agile is a death sentence with the wrong team. Again – at least you’ll find out earlier.

  8. On Justin’s #3:

    Code entanglement. When I was still doing coding/consulting, I was airlifted over and parachuted into some spaghetti code nightmares that would blow your mind. And usually with a complete lack of testing automation.

    There are three ways to fix that problem.

    1) Rewrite the whole thing but do it right this time. Almost never a good business decision, as much as it appeals to technologists.

    2) Change the way you write code starting right now. All new code requires tests, and should be as well designed as reasonable. Over time, you’ll be able to gain confidence that “at least you didn’t break the new stuff.” And that will gradually afford you the opportunity to start refactoring the spaghetti a strand at a time. It may take a year, but there’s no incremental cost (In my anecdotal experience, the breakeven was 3 months, including the time spent creating a testing framework and training the team to use it – AND including the extra time it initially took per work item to write tests. Within a month, there was enough time/cost savings to “pay for testing by the developers.”)

    3) Sell the company and change your cell phone number.

  9. On #4 –

    Product managers HAVE to focus on multiple customers. Companies, especially young ones, as Ivan points out, HAVE to focus on key customers.

    My wife used to work as a mediator, and she used to say “if both sides are unhappy, you know you’ve reached a reasonable compromise.”

    Don’t know that I can top that!

  10. Great discussion! Personally I think you can do business-critical projects using an agile development approach. As long as you are clear about delivering must-have requirements, and you must give the right emphasis to testing and quality to reflect the nature of the application, in whatever methodology you choose. Must admit I would think twice for air traffic control system though :-)

    Kelly Waters
    http://www.allaboutagile.com | Blog all about agile

  11. The meaning of the sentence “Short Release Cycles are Bad” depends on what is the definition of “short”, “release”, and “bad”. Those definitions are different for hardware driver, e-mail client, web site, and enterprise documentation management system. Different people have their own different definitions even for the same product.
    To make the discussion about short releases useful, you need first make an explicit definition of terms, and then speak only in those terms.
    For example, for the device driver the “release” means making the executable part, the help system, the user manual in printed and downloadable form, the installation package, and the supporting web site. Each of those components must be tested for functionality, usability, compatibility, and many other “abilities”. Then you make the distributive CD and integrate driver documentation with device documentation. The essential attribute of the release is documented acceptance testing of the distribution package on customer’s side.
    For this kind of product, “short” releases (6 months or less) inevitably should have a small amount of changed functions (10% or less), to provide an acceptable quality level for the given cost of development.
    In this context short releases are “bad” (inappropriate for both manufacturer and user), because:
    – From the manufacturer’s point of view, the usefulness of the small change does not justify the cost of development.
    – From the user’s point of view, the usefulness of the small change does not justify the risk of product replacement.
    The discussion of the example makes sense only when all participants agree to use definitions introduced in that example. If participants introduce their own definitions, this will be another, different example. From this perspective, consecutive comments of Tim Weaver, Aaron Korver, Scott Sehlhorst, and Roger L. Cauvin (April 17-18) do not look like a discussion (for me), because basic definitions in those comments are implicit, and readers may assume other meaning of terms than authors.
    It would be great to have side-by side comparison of various products: what is release, what is short, what is bad, which development processes are most suitable.

  12. Kelly – completely agree, and thanks for commenting!

    AVA – I think you make great points, and you’re right – the details definitely vary with type of project. They also vary with skill of the team, nature of the office politics, etc.

    When the cost of a deployment outweighs the benefit of an incremental deployment – and your example is a great one – you definitely shouldn’t do it. However, if you believe there is a benefit (ignoring costs for a second) to accelerating your release cycle, I would suggest that you
    1) quantify that benefit somehow
    2) determine how much you would have to reduce deployment costs to make it a good business decision, and
    3) figure out how much it would cost you to make those changes. If there’s enough ROI, you should do it.

    You can look at ways to automate some of the testing (for example permutations of dev environment via virtualization) that might work for you. You can explore the possibility of making physical media distribution optional (and/or for a fee) with website downloads available more frequently. You can take a page from the agile folks, and offer “latest stable” + “latest tip”, so that people who WANT to risk it all on the bleeding edge can try, and people who don’t want to will stay away. My intuition is that these things would help with any market, but ultimately it is your call.

  13. Sorry for the late follow up here, it has been a heck of a week…

    On the maturity of the organization, I think it is less about the maturity and more about the commitment. Start ups tend to have a lack of focus in the drive to get something *anything* out the door before their opportunity passes. Established companies have just enough legacy code and habits to be a challenge, but are still short on time. Mature companies are set in their ways. Each situation is different, but regardless of how it is approached, the whole team needs to be committed to “Agile”, and know how to do it right, to make it happen.

    On the topic of #4, the “Cranky PM” post is 100% right. “Agile” only works when the product is for only one customer! Go ahead, try being 100% responsive to your first and only customer, and you end up with a product that only they will use. Here is an example. Let’s say that you are working on a “just in time” parts ordering system. Your first customer is General Electric. So you are totally responsive to them, and build exactly what they want. Do you think that Toyota or IBM or anyone else for that matter will want it the “GE Way”? Nope! Instead, you *properly architect your code* (bye bye “fast release cycle”!) and make it so that customizations can occur at implementation time, via database, config file, etc. You need a rather monolithic code base to do that, because you are abstracting so much of the code.

    I agree that a poor team will blow up a waterfall project, but the explosion is so much more contained. When your first milestone is missed, you know something is rotten in Denmark. With “Agile”, you wake up one day and find out that some rogue programmer has been taking direct calls from the customer, and cowboyed his way into a real mess. With waterfall, there is just as much testing, it just occurs at a more “built” state. Indeed, I could argue that waterfall is *more* efficient, since you are not retesting the code every twenty seconds. After a certain point, the tests take much longer than the code changes. Therefore, it is much slower to run them constantly than during set points.

    Finally, let’s get honest with ourselves. *No one*, not even the biggest “Agile” proponents, will use “Agile” for an air traffic control system or something equivalent. If “Agile” can’t handle a “life and death” project, why would you trust it for “mission critical” applications? At that point, what do you have? Something good for free consumer Web sites and that is it? On the other hand, the Java license states that it is not suitable for air traffic control systems, nuclear power plants, etc…

    So I really am going to stick to my “softened stance”. “Agile” can be useful, but only in a very specialized role. To me, it is like a C-Section, one way of getting the baby out, but most doctors do not recommend doing it without a good medical reason. If you have a valid business reason, like a single customer for your product who requires constant evolution of the product, or who is willing to accept the “initial release” provided that their “wish list” gets built quickly, go for it. If you are building mass market software that requires reliability and is complex enough for users to have to be trained in it, “Agile” is simply the wrong tool.

    J.Ja

  14. >Go ahead, try being 100% responsive to your first and only customer, and you end up with a product that only they will use.

    I’m pretty certain that’s a consequence of Conway’s Law and not necessarily of Agile adoption.

    By the way, one of the progenitors of Scrum, Jeff Sutherland, runs one of the most proficient Agile development teams in the world at PatientKeeper, which is mission-critical decision support software for physicians and nurses. Contradicts assertion #1.

  15. MSM – very cool, had not heard of PatientKeeper.

    Another thought I’ll throw out there – “100% responsive” does not equate to “do 100% of the things they ask you to do.”

    In the open agile project we’re running here, we’re being responsive to every customer who shares their inputs. We may not do everything they ask, but we will respond to, and incorporate all of their inputs in our decision making.

  16. I know I am late into this discussion, and have not read everything, but I want to get right back to the beginning and quote this statement:

    “…failure to implement an undiscovered / undocumented must-be requirement still qualifies as incompleteness.”

    Well, I guess this can’t be realized until after the fact, so you can be complete…then, oops, you are incomplete. Is the whole insecure Hyatt thing what you are referring to with this statement? I mean, securing a customer’s credit card can’t be considered an undiscovered requirement at this point, I would call it a forgotten one. An undiscovered requirement has to be something no one has thought of before within the domain, and if no one has thought of it yet, YOU CAN’T IMPLEMENT IT. …sorry, it’s late, but when I read the above statement the first time, I thought my head would explode, better to type in CAPS for a bit to let off the pressure.

    OK, the Nyquil is kicking in, will check back tomorrow… Dave W

  17. Hey Dave, thanks for joining in on the thread (even if it is a year later :)). It proves out (to me) that the flashback posts are worthwhile, and that many of our articles are still relevant a year or two later. Also, of course, thanks for being a long time participant here – your contributions always make the discussions better. You should set up a gravatar, because it will make your contributions stand out even more.

    OK, as to “undiscovered requirements” – yes, I’m driving a hard line about them being undiscovered by the team doing the work, not necessarily that they are undiscoverable by anyone.

    Sure, the security example may have been discovered but forgotten (I put that in possible-failure-mode #2 from the article). But if it were not discovered, that does not mean it does not exist. Nor does it mean that the team should be absolved of it.

    One of the challenges that companies face when outsourcing product development is that the third-party may not understand their domain. The Hyatt example is especially egregious, but it could be something more arcane (and just as critical).

    WordPress software (at least as of a year ago) was not hardened against SQL injection attacks. I know – my blog was successfully compromised, and I had to update to the current version, replace all of the source code (and customizations), back up and process the database, and do a lot of due diligence to block the exploit in the future and ensure that the damage had been repaired. WordPress has been around for a long time, has extreme market penetration, and has many people scouring the freely available source code. SQL injection attacks have been around a long time too. I expected (a year ago) that WordPress was hardened against them. But it wasn’t.
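    For readers who haven’t seen the attack up close, here’s a minimal sketch in Python (using sqlite3 as a stand-in; WordPress itself is PHP/MySQL, and the table and input here are made up for illustration) of why string-built SQL is injectable and a parameterized query is not:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    # Attacker-supplied input
    name = "nobody' OR '1'='1"

    # VULNERABLE: interpolating input straight into the SQL string lets the
    # attacker rewrite the query -- the OR clause matches every row.
    rows = conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()
    print(len(rows))  # 1 -- returned a row the attacker had no business seeing

    # SAFE: a parameterized query treats the input as a literal value,
    # so the malicious string matches nothing.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
    print(len(rows))  # 0
    ```

    The fix is mechanical – never concatenate user input into SQL, always bind it as a parameter – which is exactly why “undiscovered” is hard to accept as an excuse for this particular requirement.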

    Just because that team had apparently not discovered that requirement, it did not absolve them of the responsibility. A better team would have found it.

    In a particular domain, there will be other examples that are just as obvious within the domain.

    • Of course an insurance agent can be licensed in multiple states
    • Of course the UPS must provide enough power to run the equipment
    • Of course the signup form must support visually impaired users

    You can make a compassionate argument, with respect to any particular team, that that team would not be able to discover requirement X. I contend that if X is required, you have staffed the project incorrectly. The blame may not lie with the team member, but it doesn’t just evaporate.

    A process, such as agile, that incorporates frequent feedback cycles, will present opportunities to uncover otherwise undiscovered requirements.

    A process, such as agile, that encourages the team to respond to these “after the train has left the station” requirements is better than a process that requires them to languish because they were discovered “too late.”

  18. This is still a great conversation, even a year later.

    I’ve revised a lot of my thoughts regarding Agile and Agile-like development over the last year or so. Some of it was because I discovered that much of my negative feelings stemmed (rightfully!) from people promoting project anarchy as “Agile”. And some of it is because of things like what Scott says right above. I’ve seen more and more the sheer incompetence and mediocrity of our industry, enough so that I am blindingly sick and tired of it.

    It is clear that probably 90% of the people involved with requirements gathering, specification writing, etc. have no business doing these things. They don’t know what to ask, or how to ask it. You have BAs asking users and managers who have never been involved with development, “what do you need?”, expecting the user/manager to essentially do their job for them, write it down in a way that a programmer can understand, and call it a day. The users/managers don’t know how to identify their real requirements (that’s the BA’s job!), so they give surface-level fluff like, “I need a drop-down menu here…” And what do you get? A project spec that essentially says, “replicate what we do on paper with a computer program please: replace the paper with a Web form, the filing cabinets with the RDBMS, the clerk with a search screen.” Guess what, folks? If the existing paper process was so great, you wouldn’t be looking to dump it.

    In reality, what should be happening is an identification of the goals, a complete process rewrite, and then an implementation of that process in software.

    What Agile/Agile-like methods bring to the table is a means of compensating for the overall incompetence of those involved in the process, by allowing them “bonus rounds” or “extra credit” opportunities frequently to make up what they missed. The problem is, the people who stink at this so badly that they are constantly missing major requirements are the same people who turn “Agile” into “Anarchy”.

    We can’t win, and frankly, I have become incredibly pessimistic about this industry as a whole. We are not delivering value to our users, and for every ounce of productivity we deliver, we create an ounce of pain. Email is a great communication tool, but it is used to swamp people, for an overall loss of productivity. Applications replace paper, but don’t figure out how to handle “edge cases” that on paper can be easily solved by writing a note in the margin and crossing out a few items on the form. And so on and so on.

    Sigh.

    J.Ja

  19. Hey Justin, thanks for chiming in, and welcome back.

    I really hope that you get to work with some good teams in the next year. It doesn’t have to be like that.

    I will pick on one thing you said, since this is such a fun conversation. You said “what should be happening is…a complete process rewrite…”

    It really helped me, as a former coder, to be able to apply the opinion I expressed in comment #10 (scroll WAY up) about complete code-rewrites, to process-rewrites. The only reason to rewrite a process is if you can make that process more effective (by increasing top-line or bottom-line revenue). If the value of the change does not justify the cost of the change, don’t do it. You should certainly know, as an organization, what your processes are. And for an organization to “know” something over time, it must create documentation.

    If your processes are not documented (at a process level, not a procedural level), then you can’t really know where you will get the most benefit from re-engineering. There is always more work to be done than can be done. The best organizations invest their resources where they can get the greatest return. And to do that, you have to understand your processes. Some of them will most certainly benefit from rewrites. But some of them won’t.

    Thanks again for helping keep this alive – really good stuff!

  20. “The only reason to rewrite a process is if you can make that process more effective (by increasing top-line or bottom-line revenue). If the value of the change does not justify the cost of the change, don’t do it.”

    I agree, and I’ll up the ante by $10. I consider “automating this process” or “writing software to handle this” to be a “process change”, even if it is supposed to faithfully reproduce the existing process. So in a nutshell, unless someone can prove to me that spending time & money to pay project managers to manage, BAs to analyze, programmers to write code, etc. will have real ROI, it’s not worth even thinking about. And so many of these projects are supposed to save a half dozen clerical positions at $9/hour a whole 10 minutes per day, or compensate for their inability to follow the process (usually because the process is bad, or poorly documented/trained)…

    Say the project took 2 months to complete with 1 full-time programmer involved the whole time, a PM spending 10 hours/week on it, a BA for 20 hours, a QA person for 80 hours to test, and 20 hours of sys admin time to deploy/support (not unreasonable for a small project). That is a cost of 420 hours of “very expensive employees” (PM, BA, programmer) and 80 hours of “pricey” employee (QA), for a total cost of about $33,400 (assuming $70/hour cost for “expensive” and $50/hour for “pricey”). So, at $9/hour for those clerical employees (total cost, about $12/hour including benefits and overhead), we would need to save roughly 2,783 man-hours *just to break even*. Wow, that’s a lot of time! That is over a YEAR of employee time. Or to make it more explicit, that project would have to allow 2 clerical employees to be laid off (or reassigned) to show ROI within 6-9 months. All of a sudden, this project looks like a really, REALLY bad idea!
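    The arithmetic above can be sketched in a few lines – all the rates and hours are the figures from the comment (illustrative assumptions, not real data):

    ```python
    # Break-even sketch for the hypothetical automation project above.
    EXPENSIVE_RATE = 70   # PM, BA, programmer ($/hour)
    PRICEY_RATE = 50      # QA ($/hour)
    CLERICAL_RATE = 12    # clerk, fully loaded with benefits/overhead ($/hour)

    # Hours: programmer 2 months full time (~320 h), PM 10 h/week for
    # ~8 weeks (80 h), BA 20 h => 420 h "expensive"; QA 80 h "pricey".
    expensive_hours = 320 + 80 + 20
    pricey_hours = 80

    project_cost = expensive_hours * EXPENSIVE_RATE + pricey_hours * PRICEY_RATE
    breakeven_clerical_hours = project_cost / CLERICAL_RATE

    # Months to break even if two clerks (2 x ~160 h/month) are freed up:
    monthly_savings = 2 * 160 * CLERICAL_RATE

    print(f"Project cost: ${project_cost:,}")                        # $33,400
    print(f"Clerical hours to break even: {breakeven_clerical_hours:,.0f}")  # 2,783
    print(f"Months to break even: {project_cost / monthly_savings:.1f}")     # 8.7
    ```

    Plugging in your own rates is the whole exercise – the point is that almost nobody bothers to do even this much before greenlighting the project.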

    Yet, I see companies greenlighting this project all of the time!

    It’s simple math, but people fail to get it. The sad truth is, IT is all too often NOT leading to “productivity gains”, but is leading to “wasted time and money”. All too often, it is CHEAPER to pay high school graduates $9/hour to do Mickey Mouse work at a reasonable level of quality than it is to engage IT. I hate to say it, but it is true. Eventually the bean counters will figure this out and a lot of IT folks will lose their jobs.

    And of course, no one is asking the question, “if the current process is so great, why are we emulating it in software?”

    I posit that the moment it is suggested to write software to replace manual work, that should automatically trigger a full re-evaluation of the process: determine if it is worth writing in code, and if it is, re-engineer it to be appropriate to software, not merely replacing paper with a Web form and the filing cabinet with a SQL database.

    J.Ja

  21. Love it!

    I touched on the need to think this way in Business Analyst Profit Center, which, surprisingly, was written a couple weeks before this article :).

    Here’s the problem. The business is a profit center, with an overhead charge for IT. Even when it is not true, that IT cost is perceived as a fixed cost by many business managers.

    IT is almost always managed as a cost center. IT teams have budgets, and they do projects. They manage to capacity. This removes the incentive for IT teams to manage to ROI. At least in the couple dozen or so Fortune-500 IT organizations where I’ve worked and consulted.

    You’re right, and in my experience, it is usually a lack of education / awareness that causes the problem. Unfortunately, the management accounting systems (with this profit center vs. cost center approach) don’t encourage people in IT to manage towards ROI, so people don’t get the context or training needed to do the simple exercise you outline.

    I just gave myself a heart attack a few minutes after your comment when I brought the site down during an upgrade. If you’re reading this, both the site and my ticker are fine. But I gotta go rest now. Have a great one, I’m really enjoying this conversation. Anyone else want to join the fun too?

  22. Scott –

    Good stuff there. The connection between BAs and IT is crucial. People do not realize that a BA *should* be doing more than acting as a translation layer. When I first got interested in learning about the software development process beyond writing the code, I did a lot of learning through my employer’s online training, in the direction of management. I learned a lot about risk management and Six Sigma fundamentals. Reading that stuff really opened my eyes. Since then, I have tried to bring the spirit of both of those disciplines to the table. Indeed, Six Sigma is, in my mind, a type of risk management, since by reducing the potential points of failure, you are managing risk.

    But my experience in the business world has been that companies simply don’t “get” these concepts. Risk management? The only risk management that occurs is almost completely emotion-based. “Risk” is when someone feels queasy about an idea, and “managing” risk means to shoot down the idea. This is why so many people in the business world act like dogs backed into a corner, lashing out at anyone trying to improve things. In an emotion-based “risk management” environment, “risk” means “will this change the status quo?” regardless of the safety nets, the condition of the status quo, or anything else.

    People feel a fundamental lack of control over their own destinies. When they achieve a modicum of control, they defend it. That’s why the manager who rejects any kind of process re-engineering from outside of their group will then turn around and go to IT with a diktat of how the software should work. It’s all about control.

    The approaches that I’ve had the most success with involve spending very little time discussing the software or even the project. They approach the customer in a manner designed not to collect requirements, but to instill trust. You talk about past projects, and the success that they’ve been. You talk about how the clients tried to have ultra control, but it caused problems until they loosened their grip and let you do a proper *analysis* of their *business* and then draft specs based on that. You talk about things that have happened, both good and bad, so that the customer sees your honesty. And so on and so on. After a while, you’ve built the trust.

    Next, you spend a lot of “hands-on” time with the customer. You learn their job. You spend time with the people doing the grunt work; find out from them what’s right, what’s wrong, what they would like to see changed. Hint: people are a lot more comfortable being brutally honest and open when they are not at their desk; go to lunch, or visit the smoke break area (for the record, even after I quit smoking, I found that spending time in the smoking area was critical to getting my job done). Do the same thing with the supervisors and the department heads. Don’t take notes right then and there. That scares people. Just talk, listen, ask questions that show that you’re learning. Keep asking, “why?” If you can, spend some time trying to do their job.

    So what has happened? At this point, I’m quite familiar with their process, what’s right, what isn’t, what management thinks they’ve said the process is, and what the workers are actually doing (rarely in agreement!). Management has said, “we do this because it is a legal requirement, we do that to make the records easier to handle”, and the workers have said, “we avoid that part of the process because it is hard to do right, and we ‘fake’ this part because they didn’t buy the equipment needed for it to work.” Now I have a realistic idea of the challenges.

    At that point, you disappear for a while, and draft a document. This document should contain the following items:

    * Introduction to the topic
    * Description of the current scenario, including “challenges”, “successes”, and “failures”; be brutal but tactful
    * Goals identified by management and through analysis of the current scenario; most of the goals will NOT be driven by the client, but by your analysis
    * Point-by-point breakout of features/plans/whatever in your solution, with in-depth details explaining precisely how these will address the goals
    * Section providing them with some choice, and a full disclosure of the pros/cons of the choices. Remember, you are fighting against their fear of being powerless; it is better to let them make some choices and feel like they have control, even if they make bad ones. By setting up their choices for them, you ensure that they can’t do TOO much damage to the overall project
    * “Next steps” outlining precisely who is responsible for what, including what decisions need to be made
    * Conclusion

    This document format has yielded outstanding success for me when I have been able to employ it, as has this entire procedure. It flips the traditional BA role on its head. Instead of saying, “what are your requirements?” and letting the customer dictate outside of their domain (software projects, software design, UI, etc.), you are allowing them to use your expertise, and giving them control over what is properly their domain (“these are three ways that I have found to meet your needs; which of these works best for you, and why?”).

    J.Ja

  23. Loved your segue on elicitation, Justin! I think that is the key to “Elicitation 311.” The 101 class focuses on techniques, but as you say, you’re only as effective as your relationships. I think you’ve reached an enlightened stage. Still, no amount of trust will help you elicit requirements without good elicitation skills.

    A lot of the business analysts I’ve worked with really need to walk before they run. But I’ll make sure to incorporate the trust factor as an underlying theme when I coach someone in the future. Thanks!

  24. The things that lead to a successful product release all stem at root from the dedication and professional pride of skilled developers and designers. Such people make sure that they properly understand the problem or application domain, are aware of best practice in their specialisms, are skilled in the use of their software tools, and are committed to making the project a success.

    How they are organised is secondary. Just make sure that the methodology is appropriate to the project and neither gets in the way, nor adds pointless overhead and “busy work”.
