Making Offshore Development Work

Economic pressures are driving most companies in high-developer-salary markets to explore using offshore development teams as part of their approach to developing software. Developing software with a global team presents new challenges as well as new benefits. If you do it right, you can have a more cost-effective team. If you do it wrong, you can have a disaster.

Different Models for Offshore Development

There are essentially four different models for managing a software development team, with respect to onshore and offshore roles.

  1. No offshore development, also known as insourcing.
  2. Low-level outsourcing, having the implementation (but not design) team members offshore.
  3. High-level outsourcing, having the implementation and design team members offshore.
  4. Complete technical outsourcing, having all technical implementation team members offshore.

Each model involves the same set of people and the same channels of communication. The differences are in which communications happen across geographic and temporal boundaries, and which happen within the same time zone.

For this article, we are focusing specifically on low-level outsourcing, where the communication channel most affected by different time zones is the one between design and implementation.

Low-Level Outsourcing

Consider the following software development process diagram (from Four Application Development Outsourcing Models):

[Diagram: the low-level outsourcing process flow]

Each area surrounded by a dashed line represents a different type of work, requiring a different dominant skill set; all areas also share a set of common skills. When exploring different offshoring models, teams are most effective when they identify the distinctions in dominant skill sets and divide responsibilities along those boundaries. When you create an artificial (or arbitrary) boundary within one of the regions in the diagram, you create opportunities for misunderstanding. With those misunderstandings, you can have people redundantly working on the same thing or, even worse, tasks that never get accomplished. You also introduce the possibility of discord within your team as people proclaim either “that’s my job” or “that’s not my job.” You can split a team within the areas shown above (and we’ll talk about that in a future article), but it is harder to do successfully.

Low-level outsourcing is an approach where the implementation team members who write the code and tests are offshore. The requirements work, the interpretation of the requirements, the design of the test plan, and the design of the solution are all done onshore (where salaries are high). The creation of tests and the implementation of the code are done offshore.

In the diagram above, testing of requirements happens on the left, and testing of the implementation happens implicitly on the right. If you haven’t been reading Tyner Blain for a couple of years, that may not make much sense. Testing the requirements and testing the implementation are different activities, and you need to do both.

Testing: Isolation of Variables Reduces Costs

Testing can be approached from a few different perspectives, and the word “testing” means different things to different people. In this process flow, we are focusing on two main areas of testing: testing the requirements and testing the implementation. When you test the requirements, you are asking the question “does this solution do what the product manager intended?” When you test the implementation, you are asking the question “does the solution behave as the designer intended?” This is a nuanced difference, and non-technical people may ask, “What is the difference? Doesn’t the designer intend exactly what the product manager intends?”

If you think back to your high school science class, you’ll remember the concept of controlled experiments: the practical application of logic to scientific experimentation. By taking a logically rigorous approach to designing an experiment, you can isolate variables and test each of them independently. This prevents you from drawing false conclusions from your data. The same reasoning leads you to test both the requirements and the implementation. If you’ve ever submitted a bug and had the implementation team close it out with the statement “working as designed,” you already know the benefit of testing both. Just because something is designed to do X does not mean it was supposed to do X; the requirements may have called for Y. By introducing a designer between the product manager and the testing, you introduce the possibility that the designer is the source of the bug, by misinterpreting the requirements.

It is possible to “test” the design and then test the implementation; this would isolate the design from the implementation. Unfortunately, the only way to “test” a design (without also testing the implementation) is conceptually, with a thought experiment. And that’s exactly what the designer already does as part of designing; no one else is going to understand the design well enough to do it. If the designer is doing the thought-testing, there is no way to determine whether the designer misinterpreted the requirements in the first place. That same misinterpretation will influence his testing in the same way that it influenced his design (see error source E3 in Where Bugs Come From for more details). Thought experimentation is critical to designing, but it does not work for testing a design.

It is possible to test just the requirements. You create tests based solely on the documented requirements. You run those tests against the implemented solution. When the test passes, every step in the process worked. These are known as black-box tests, because you can run the tests without any insight into how the software is written (it is a “black box”). The problem comes when a test fails – you know something is wrong, but you have to do (expensive) research to figure out the source of the problem. It could be that the implementation failed to do what was designed. Or it could be that the software design failed to meet the objectives of the requirements. There is a way to reduce the cost of this analysis – by testing both the requirements and the implementation.
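
To make this concrete, here is a minimal sketch of a requirements (black-box) test in Python. The requirement, the pricing module, and the final_price function are all hypothetical; the point is that the test is derived from the documented requirement alone, with no knowledge of how the solution is built.

    # Black-box requirements test: exercises the delivered solution only
    # through its public interface, based solely on the documented requirement.
    # Hypothetical requirement R-17: "Orders of $100 or more get a 10% discount."
    import unittest

    from pricing import final_price  # the solution, treated as a black box

    class TestRequirementR17(unittest.TestCase):
        def test_discount_applied_at_threshold(self):
            # A $100.00 order qualifies for the 10% discount
            self.assertAlmostEqual(final_price(100.00), 90.00)

        def test_no_discount_below_threshold(self):
            # A $99.99 order does not qualify
            self.assertAlmostEqual(final_price(99.99), 99.99)

    if __name__ == "__main__":
        unittest.main()

Notice that when a test like this fails, it tells you that something is wrong, but not whether the design or the implementation is at fault.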

You can easily test the implementation. An implementation test, commonly known as a white-box test and usually implemented as a unit test, verifies that a particular implementation does what the designer intended. When you combine implementation testing with requirements testing, you isolate the designer variable. If a requirements test fails but the implementation tests pass, the problem is with the design (or with the design of the test). When both requirements and implementation tests fail, you know that at least the implementation is wrong.
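
Continuing the hypothetical pricing example, here is a sketch of a matching implementation (white-box) test. It assumes the design specifies an internal helper, discount_rate, that returns the rate to apply; both the helper and its contract are illustrative assumptions.

    # White-box (implementation) test: verifies that an internal piece of the
    # solution does what the designer intended. The discount_rate helper and
    # its design contract are hypothetical.
    import unittest

    from pricing import discount_rate  # internal helper specified by the design

    class TestDiscountRateDesign(unittest.TestCase):
        def test_rate_at_threshold(self):
            # The design says totals of $100 or more get a 0.10 rate
            self.assertEqual(discount_rate(100.00), 0.10)

        def test_rate_below_threshold(self):
            # Below the threshold, no discount
            self.assertEqual(discount_rate(99.99), 0.0)

    if __name__ == "__main__":
        unittest.main()

If the requirements test from the previous sketch fails while this test passes, the implementation matches the design, so the design (or the requirements test itself) becomes the suspect.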

In the diagram above, the testing on the left side represents testing of the requirements. The right side of the diagram implicitly includes testing of the implementation as part of implementing. You need to ingrain implementation testing into your development philosophy. Would you deliver code without compiling it? Then why, as a developer, would you consider delivering it without testing it? Compilation is not just a build step; it is also an implicit test of compilability. You should make “does it do what I intended?” just as implicit a test.

Combining the discipline of continuous integration with test-driven development is the most effective way to accomplish this. Note: remember this part, as it is a critical component of making low-level offshore development work. Without it, you may as well give up; you certainly aren’t going to be more cost-effective.
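
As a sketch of how the test-driven half of that discipline plays out with the hypothetical pricing example: the tests exist first, and the implementation below is written only to make them (and the rest of the suite) pass on every integration.

    # Test-driven development, in miniature: the tests sketched above were
    # written first, and fail until this (hypothetical) implementation makes
    # them pass. Continuous integration then re-runs the whole suite on every
    # check-in.

    def discount_rate(order_total: float) -> float:
        """Return the discount rate the design specifies for an order total."""
        return 0.10 if order_total >= 100.00 else 0.0

    def final_price(order_total: float) -> float:
        """Apply the designed discount policy to an order total."""
        return round(order_total * (1 - discount_rate(order_total)), 2)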

Communication Across Time And Space

The key to making offshoring effective, as with any development process, is to make the communication work. For communication between people in the same location (or at least roughly the same time zone), the problems and solutions are no different than with an insourcing model. What’s different is the communication between members of the onshore team and the offshore team. This communication is not just remote (a problem technology largely solves with instant messaging, phone calls, and other real-time or near-real-time techniques), but also phase-shifted in time. When some team members are working while others are sleeping, you slow down the collaborative process. You introduce a near-crippling latency into the communication channel.

Imagine the following expensive question and answer session:

  • Person A asks person B a question.
  • 12 hours later, person B responds with a request for a clarification.
  • 12 hours later, Person A clarifies the question.
  • 12 hours later, Person B responds with an answer.
  • 12 hours later, Person A acknowledges the answer.

When this exchange happens between an onshore person and an offshore person, it takes 48 hours instead of 48 minutes. The more this happens, the more expensive it is to outsource. The key to making low-level outsourcing cost-effective is to minimize the impact of this communication latency while realizing the benefits of lower salaries in the offshore location.

Communications On Which To Focus

The “happy path” communication channels (shown with blue arrows in the diagram) are the transfers from test design to testing, and from implementation design to implementation. You have to communicate the designs in a way that minimizes misinterpretation. Never prevent needed communication; the only thing worse than taking too long because you communicate a lot is failing to communicate enough. Your goal is NOT to stop communication, but to preempt it by eliminating the need for it.

The “trust but verify” communication involves making sure that the implementation meets the design. In (requirements) testing, it means reviewing that the tests exhaustively cover everything identified in the test design, and that each test actually (and effectively) tests what it is designed to test. In implementation, it means verifying that the code does what the design requires. As team members demonstrate their capabilities, they require less oversight, which is true of any mentoring relationship. You could read the code and make a determination, but that is a manual inspection, and manual inspections have been shown to be at best 80% effective as a testing method. What you need to do instead is create a unit test suite, run it continuously, and only allow developers to check in their code (to the trunk) when the entire suite passes. Then all you have to do is review the test suite to assure that it tests the design effectively. It wouldn’t hurt to also run the test suite locally (onshore) as a verification, but fundamentally, you are trusting that your developers will follow this continuous integration process.
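
One way to enforce that gate is a small script wired into a git pre-push hook. Here is a minimal sketch in Python; the tests/ directory layout is an assumption, and the suite is discovered with the standard unittest runner.

    # Minimal check-in gate, suitable for a git pre-push hook: run the entire
    # unit test suite and block the push to trunk if anything fails.
    # The tests/ directory layout is an assumption.
    import subprocess
    import sys

    def main() -> int:
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
        )
        if result.returncode != 0:
            print("Test suite failed: fix the failures before pushing to trunk.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())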

You are using testing and test design as a mechanism to validate effective communication. You can think of it as a form of active listening. When you (or, more appropriately, your design document) say “X,” you can review the test design to confirm that your listener designed tests that assure “X” will happen. Do not rely solely on informal communication and acknowledgement. Cross-cultural communication introduces a lot of complexity and misinterpretation. People who do not share a common language or culture tend to interpret symbols very differently, and will rarely attach the same connotations to the same words.

Definition of tests and validation of testing are important beyond the immediate communication of design. There are also the feedback loops that come from the “unhappy path” when something goes wrong (and a bug is introduced). Each of these “where was the bug introduced, and how do we fix it?” cycles is also subject to the latency of cross-shore communication. Good testing reduces the number of these costly communication cycles.

Developing good design docs is also critical to the success of this communication. The definition of “good” for a design doc is so dependent on the exact circumstances that it is impractical to define it here (at least within this article). A design doc needs to be written with the reader in mind, not the author. Beyond that, we won’t try to make any statements of truth.

Conclusion

A successful strategy for using offshore resources for development and test-implementation work starts with communication. It also ends with communication.

  • Create artifacts (good design docs) that minimize the clarification cycle across the onshore-offshore time boundary.
  • Review implementation tests as an active-listening mechanism to confirm that communication of design intent was effective.
  • Practice continuous integration (both as an offshore developer and as an onshore designer or development manager) to assure that your solution stays true to the design and the requirements.

And, as always, have great people – because people trump process.

  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.

9 thoughts on “Making Offshore Development Work”

  1. sdei, thanks for commenting, and welcome to Tyner Blain!

    I agree with you that good process is critical, and domain knowledge can be an important factor to the success of any given project. I left both of those out of my analysis, because those factors have just as much influence when teams are co-located.

  2. Scott-

    Communication is the key, especially relating to time and space. I’d like to hear more about offshore + agile. This seems like a concept that is still propagating.

  3. Dang it!

    I wrote a really good response, I promise. Had some issues with the site, had to upgrade (which is why it was down for a little bit), and apparently lost my comment.

    I’ll sum-up:

    1. Yes. Absolutely agree.
    2. I will write an article (or 2, or 20) about this in specific.
    3. The secret is to minimize the amount of “highly chatty” communication across the high-latency channels, to have any hope of efficiency.
    4. It can be done.

  4. Offshore agile does work!
    We ran a Ukrainian team of testers using Scrum daily standups to keep track of the effort.

    Chunk your effort and follow a daily plan so you minimize your risk and catch any problems quickly.

  5. Great insight.
    I am keen to understand experiences where aspects of product management and business analysis have been offshored. Do you have best practices to share for that scenario?

  6. Subrata, thanks for the comment and welcome to Tyner Blain!

    And I especially like the question – we’re building up to an article on exactly that. So far, we’ve covered sending low-level implementation work offshore, and sending technical design work. Next up – complete technical outsourcing, and then product management.

    I do think you’ve inspired another topic – low-level business analysis offshoring.

    Thanks again, and stick around – we’ll get to it eventually.

  7. Outsourcing is supposed to bring more brains into the effort. It is also supposed to free up the managerial focus of the customer: you, the one who outsources the work. If you are managing that work, it should be pretty obvious that you have not freed up any of your managerial focus.

    The spec is the contract. They do the work. If they fail, you go out and hire someone who can do the work. No communication is required. If you have to talk to them while they work, then outsourcing by definition is not working, and they are not applying their brains to the effort.

    It’s like your auto mechanic and his warranty. If he didn’t do the job right the first time, why do you think he will do the rework any differently?
