Software Cost Estimation With Use Case Points – Technical Factors


The technical factors are the first thing you assess when doing a use case point analysis. Technical factors describe the expectations of the users for the delivered software. Generally, it is an assessment of non-functional requirements. There are 13 technical factors that you have to analyze.

Background

This is the second article in a series on applying use case points to create reliable software cost estimates. What makes use case points different is that they allow the project cost estimation to happen much earlier in the process. This cost estimation technique was developed by Gustav Karner for Rational Software Corporation in the mid-1990s.

The introduction to software cost estimation is the right place to start if you came to this article first.

Technical Factors

When applying any general cost estimation technique, you have to account for many variables. Every software project is different, and if you don’t account for those differences, your estimation will not be reliable. In the use case points method there are 13 factors that have to be considered. Not all factors have the same potential impact on a project cost estimate, so each factor has a multiplier, representing the relative weights of the factors.

Here are the 13 technical factors of use case points estimation. Each factor is listed as Name (multiplier) – Description. For each factor, you will assign a relative magnitude of 0 (irrelevant) to 5 (critically important).

  1. Distributed System Required (2) – The architecture of the solution may be centralized or single-tenant, or it may be distributed (like an n-tier solution) or multi-tenant. Higher numbers represent a more complex architecture.
  2. Response Time Is Important (1) – The quickness of response for users is an important (and non-trivial) factor. For example, if the server load is expected to be very low, this may be a trivial factor. Higher numbers represent increasing importance of response time (a search engine would have a high number, a daily news aggregator would have a low number).
  3. End User Efficiency (1) – Is the application being developed to optimize for user efficiency, or just capability? Higher numbers represent projects that rely more heavily on the application to improve user efficiency.
  4. Complex Internal Processing Required (1) – Is there a lot of difficult algorithmic work to do and test? Complex algorithms (resource leveling, time-domain systems analysis, OLAP cubes) have higher numbers. Simple database queries would have low numbers.
  5. Reusable Code Must Be a Focus (1) – Is heavy code reuse an objective or goal? Code reuse reduces the amount of effort required to deploy a project. It also reduces the amount of time required to debug a project. A shared library function can be re-used multiple times, and fixing the code in one place can resolve multiple bugs. The higher the level of re-use, the lower the number.
  6. Installation Ease (0.5) – Is ease of installation for end users a key factor? The higher the level of competence of the users, the lower the number.
  7. Usability (0.5) – Is ease of use a primary criterion for acceptance? The greater the importance of usability, the higher the number.
  8. Cross-Platform Support (2) – Is multi-platform support required? The more platforms that have to be supported (this could be browser versions, mobile devices, etc. or Windows/OSX/Unix), the higher the value.
  9. Easy To Change (1) – Does the customer require the ability to change or customize the application in the future? The more change / customization that is required in the future, the higher the value.
  10. Highly Concurrent (1) – Will you have to address database locking and other concurrency issues? The more attention you have to spend to resolving conflicts in the data or application, the higher the value.
  11. Custom Security (1) – Can existing security solutions be leveraged, or must custom code be developed? The more custom security work you have to do (field level, page level, or role based security, for example), the higher the value.
  12. Dependence on Third Party Code (1) – Will the application require the use of third party controls or libraries? Like re-usable code, third party code can reduce the effort required to deploy a solution. The more third party code (and the more reliable the third party code), the lower the number.
  13. User Training (1) – How much user training is required? Is the application complex, or supporting complex activities? The longer it takes users to cross the suck threshold (achieve a level of mastery of the product), the higher the value.

Note: For both code re-use (#5) and third-party code (#12), the articles I’ve read did not clarify if increased amounts of leverage would increase the technical factors or decrease them. In my opinion, the more code you leverage, the less work you ultimately have to do. This is dependent on prudent decisions about using other people’s code – is it high quality, stable, mature, and rigorously tested? Adjust your answers based on these subjective factors.

Assigning Values To Technical Factors

For each of the thirteen technical factors, you must assign a relative magnitude of 0 to 5. This relative magnitude reflects that the decisions aren’t binary. They represent a continuum of effort / difficulty. Those (0-5) values are then multiplied by the multiplier for each factor. For example, a relative magnitude of 3 for cross-platform support would result in 6 points – because cross-platform support has twice the impact on work effort as a focus on response time.
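The magnitude-times-multiplier step can be sketched in Python. The multiplier values are the 13 weights listed above; the dictionary key names are my own shorthand, not part of the method:

```python
# Multipliers (relative weights) for the 13 technical factors,
# keyed by an illustrative shorthand name for each factor.
MULTIPLIERS = {
    "distributed_system": 2.0,
    "response_time": 1.0,
    "end_user_efficiency": 1.0,
    "complex_processing": 1.0,
    "reusable_code": 1.0,
    "installation_ease": 0.5,
    "usability": 0.5,
    "cross_platform": 2.0,
    "easy_to_change": 1.0,
    "highly_concurrent": 1.0,
    "custom_security": 1.0,
    "third_party_code": 1.0,
    "user_training": 1.0,
}

def weighted_points(factor: str, magnitude: int) -> float:
    """Points contributed by one factor: assigned magnitude (0-5) times its multiplier."""
    if not 0 <= magnitude <= 5:
        raise ValueError("magnitude must be between 0 and 5")
    return magnitude * MULTIPLIERS[factor]

# A magnitude of 3 for cross-platform support yields 6 points --
# twice the 3 points the same magnitude would yield for response time.
print(weighted_points("cross_platform", 3))  # 6.0
```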

Technical Complexity Factor

The final step of technical complexity analysis is to determine the technical complexity factor (TCF). You only have to remember TCF when talking to other folks about use case points. The acronym has meaning only in this context.

The TCF is calculated first by summing up the relative magnitudes (multiplied by the multipliers for each factor). That sum is divided by 100 and added to 0.6 to arrive at the TCF.

For example, if the relative magnitude of every technical factor were 2, the adjusted sum would be 28. The TCF would then be TCF = 0.6 + 0.28 = 0.88.
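The full calculation, reproducing the example above, can be sketched as:

```python
# TCF = 0.6 + 0.01 * sum(magnitude_i * multiplier_i)
# Sketch assuming every factor is assessed at magnitude 2.
multipliers = [2, 1, 1, 1, 1, 0.5, 0.5, 2, 1, 1, 1, 1, 1]  # factors 1..13
magnitudes = [2] * 13

# Weighted sum of the relative magnitudes.
tfactor = sum(m * w for m, w in zip(magnitudes, multipliers))

# Divide by 100 and add the 0.6 constant to get the TCF.
tcf = 0.6 + 0.01 * tfactor
print(tfactor, round(tcf, 2))  # 28.0 0.88
```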

Next Step

The next step is to calculate the Environmental Complexity, a representation of the capability of the team and the environment in which the software is being developed.

5 thoughts on “Software Cost Estimation With Use Case Points – Technical Factors”

  1. Great points!

    I would add the following:

    1. Quality – still very difficult to quantify in software projects, therefore difficult to plan. A quality coefficient should be developed indicating how much of it will be incorporated into the product.

    2. Portability – if the killer app needs to run on Linux and Windows, on WebLogic and WebSphere and OracleOS, on IE and Firefox and Opera, you get the picture…, someone needs to factor it into the estimates

    3. Configurability – Point 9. Easy To Change touches part of the aspect, however some applications require more configuration than others. This should also be made part of the estimation.

    Finally, the 13 points of estimation seem to assume that developers are trained in the language and the platform they develop in. What if an innovative language or protocol is proposed? How will developer training and the degree of novelty affect the use-case LOE?

  2. Nice article!

    But how do you measure the relative magnitude of a particular factor? For example, for the highly concurrent factor, suppose the db receives 30 requests per minute. I may assign the value 5, considering it high, but the original value would be 3. Is there any benchmark to calculate the magnitude?

  3. Great help in understanding the factors.
    What is the reasoning behind the constant 0.6 and dividing by 100 in the formulae? I am looking for this information; can anybody help explain the reasoning behind the formulae for both TCF and ECF?

    1. Hey Sita,

      Great question. I believe the constant exists because each value can be from 0 to 5, so if someone were to assess every technical complexity factor to be a zero, there would still be some amount of work involved. Dividing by 100 is just, I suspect, a normalizing effect, to allow people to estimate with “commonplace” numbers.

      I honestly forgot that I wrote this 7 years ago. While all of the aspects / factors still apply today, the coding languages, architectural norms, libraries, and platform usage patterns have evolved, so I would question the validity of the relative weightings of the different factors. As an example – multi-platform support still adds complexity, but is more likely to manifest as a single server supporting multiple browser clients. The different libraries abstract different amounts of differentiation, and the different browsers have an evolving set of differences (and associated complexity required to account for them). Net net, this complexity factor may be more or less relevant than it was. I have no idea which it would be.

      I would personally want to find a more recent version from the original author, before relying on the recipe reflected in this series, as of 2014.
