Category Archives: Polls

Polls that gather the inputs of Tyner Blain’s readers.

Product Manager Role Details and Survey Results

survey

Pragmatic Marketing runs an annual survey of product managers. We looked at 440 results from the 2006 Product Manager Survey to uncover the trends in how different product manager roles are defined. The survey involved questions breaking down the allocation of time to different activities. In this article we look at how those activities varied for product managers, product marketing managers, segment / market managers, and technical product managers.

Previous Analyses

For most people, the first thing they want to do is understand product manager compensation data. That article included an analysis of gender bias in product manager compensation. We quickly followed with another article that provided details on product manager compensation versus company size. In response to reader questions, we took a look at product manager staffing levels. In that article, we tried to determine how many product managers to have for X products.

Now that we know how many product managers to hire, what should we have them do?

Product Manager Role Details

The role of a product manager is strategic. There are six areas of activity that are critical to product management.

The six areas

  1. Market Research
  2. Product Definition and Design
  3. Project Management
  4. Evangelize the Product
  5. Product Marketing
  6. Product Life Cycle Management

Product Manager Role Definition

Within those six areas are a number of activities, and respondents to Pragmatic Marketing’s survey provided a lot of data about what they do on a weekly basis. The survey asked product managers how much time they spent on each of seventeen different activities.

Pragmatic’s Activity List

Each respondent was asked if they spent less than an hour, less than half a day, a full day, or more than a day on each of the following product management activities:

  1. Researching Market Needs
  2. Preparing Business Case
  3. Writing Product Requirements
  4. Writing Detailed Specifications
  5. Monitoring Development Projects
  6. Writing Copy for Promotional Material
  7. Approving Promotional Material
  8. Creating Sales Presentations and Demos
  9. Training Sales People
  10. Going on Sales Calls
  11. Visiting Sites (Without Sales People)
  12. Performing Win/Loss Analysis
  13. Planning and Managing Marketing Programs
  14. Measuring Marketing Programs
  15. Work with Press or Analysts
  16. Creating Original Content For Customers
  17. Creating Original Content For Employees

Survey Results By Activity

Here’s the breakdown of time spent by activity for all survey respondents.

Combined Product Manager Activity Data


Each activity has a row in the table, and each column represents an amount of time spent on the activity. The column headings repeat the exact text presented in the survey. From left to right, the columns are

  • The name of the activity as described in the survey
  • “Under an hour” spent per week
  • “Under a half a day” spent per week
  • “A full day” spent per week
  • “More than a day” spent per week

Each cell shows the number of respondents who selected that level of effort for the activity. When a cell represents more than 25% of the respondents, the text is colored red and marked in italics.

For each activity, the level of effort that had the greatest number of respondents is also bold with a gold background.
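
To make the highlighting rules concrete, here is a minimal Python sketch of the logic (the activity names and counts are made up for illustration – they are not the survey’s actual figures):

    # Respondent counts per level of effort, per activity (hypothetical data)
    levels = ["Under an hour", "Under a half a day", "A full day", "More than a day"]
    activities = {
        "Researching Market Needs": [40, 180, 150, 70],
        "Preparing Business Case": [210, 160, 50, 20],
    }

    for activity, counts in activities.items():
        total = sum(counts)
        modal_index = counts.index(max(counts))  # the bold, gold-background cell
        for i, (level, count) in enumerate(zip(levels, counts)):
            marks = []
            if count > 0.25 * total:
                marks.append("red italics (over 25% of respondents)")
            if i == modal_index:
                marks.append("bold, gold background (largest group)")
            print(activity, "/", level, ":", count, marks)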

The results show a fairly even distribution of activities in each product manager’s week. The areas that received concentrated attention were

  • Researching Market Needs
  • Writing Product Requirements
  • Monitoring Development Projects
  • Creating Sales Presentations and Demos

This is very consistent with the elevator pitch (30 seconds or less) description of what a product manager does, and all but the last one (sales support) are identified as strategic activities. More than half of the respondents spent a day or more monitoring development activities, though. That seems a little high. Perhaps a more detailed analysis of the data will shed some light. The survey also asked people to describe their titles, so next we evaluated the levels of effort by title.

Product Management Titles

The survey results included data from people who identified their titles as being most like one of the following:

  1. Product Manager
  2. Product Marketing Manager
  3. Segment/Industry/Market Manager
  4. Technical Product Manager

Here are the same tables, but filtered to include only the responses by title.

Product Manager

Product Manager Activity Levels


The data for respondents with the title Product Manager is very consistent with the overall group data.

Product Marketing Manager

Product Marketing Manager Activity Levels

Product marketing managers have a very clear focus on sales and marketing support. They spend as little time as possible monitoring development activities. They also don’t appear to be sacrificing a subset of the marketing activities – their effort appears to be relatively evenly distributed.

Segment / Industry / Market Manager

Segment Manager Activity Levels


There were very few segment / industry / market manager responses in the data, but the areas of distinction are that these respondents spend more time preparing business cases and approving promotional material. They also spent far more time planning and managing marketing programs. This is good – these are the activities best leveraged across products.

Technical Product Manager

Technical Product Manager Activity Levels


Technical product managers spend much more time on inbound activities like monitoring the development team, and they are more heavily involved in writing detailed specifications. They still have healthy levels of market research and requirements writing, and they minimize the time they spend on outbound activities like sales and marketing support.

Conclusion

The levels of effort are generally reasonably well distributed across the many activities identified in the survey. Further, the roles that have distinct focus (inbound, outbound, multi-product) spend their time appropriately.

CMMI and RMM One Minute Survey

timed test

Please take one minute to answer the following two questions, so everyone can see how their process maturity compares with everyone else’s. You’ll be answering two easy questions –

  • What is your CMMI level?
  • What is your RMM level?

Background

We just completed a series of six articles about CMMI levels and RMM levels. We discussed how the two frameworks can be connected, and explored the reasons for trying to reach the next level on either scale.

Question 1

[poll=2]

Question 2

[poll=3]

Thanks for taking the survey, and if you have any thoughts about the results, just comment here.

Pragmatic Marketing 2006 Survey

survey

The polls are open! Go to Pragmatic Marketing’s announcement to take the annual Product Management and Marketing Survey!

Previous Results

Salary Trends

Pragmatic has some good detailed analysis of the data within each year’s survey results. We thought it would be interesting to look at trends over time. Interaction design tells us to focus on personal goals as the framework for how someone approaches their job. Surveys aren’t really going to capture those driving goals, or things like utility and job satisfaction. The closest thing we have to a normalizer is product management salary trends over the years of the survey. We also don’t have normalizing data that would show years of experience or cost of living, or a stock-option valuation method (like Black-Scholes) that would support an “equivalent compensation” analysis across the years.

Within each year’s results, there are some demographic breakdowns by region of the country – but those only help a little. Markets like Silicon Valley, Austin, and Boston will skew the data relative to smaller markets. It would be interesting to see (in future survey results) what the salary data looks like as a scatterplot versus a cost-of-living index for the locale (city, not region) of the respondents.

salary trend data

We saw salary rises immediately following the dot-com bust, followed by some stagnation and deflation in recent years.

If we adjust for inflation, we see less optimistic annual changes in real earnings:

  • 2001: 0.7% Loss in buying power
  • 2002: 3.2% Increase in buying power
  • 2003: 4.0% Increase in buying power
  • 2004: 3.3% Loss in buying power
  • 2005: 4.2% Loss in buying power
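
For clarity on the arithmetic behind those numbers: the real change in earnings is the nominal salary change deflated by that year’s inflation. A minimal sketch in Python, using made-up figures rather than the survey’s actual data:

    # Real change in buying power = (1 + nominal raise) / (1 + inflation) - 1
    # The raise and inflation rates below are illustrative, not the survey's data.
    def real_change(nominal_raise, inflation):
        return (1 + nominal_raise) / (1 + inflation) - 1

    # A 1% nominal raise during 3.4% inflation is still a loss of buying power:
    print("{:+.1%}".format(real_change(0.01, 0.034)))  # prints -2.3%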

Looks even worse. If we show the same graph as above, but in 2000 dollars, we get the following:

inflation chart
This highlights the fairly rapid decay in product manager salaries over the past few years.

The Gender Gap

Notice also the unreasonably large gap between blue (female) and maroon (male) overall compensation data.

Next: Go take the 2006 survey.

Take this poll or we’ll shoot this kitten

Really cute kitten

[Ed: If you read Tyner Blain via RSS you have to visit the site to vote in the poll. Also, we’ll use a camera.]

An earlier post on CRUD use cases started a fantastic debate (both public and private) about what it means to write great software, and if it’s even possible to write good software when we start with requirements. This leads to a discussion of the value of requirements driven development (RDD). If you search on Google, you’ll see at least one whitepaper from every RDD-application vendor. Not exactly impartial.

So, here’s a poll. Coerced, maybe. Impartial – probably. If you’re new to the Likert scale: the unlabeled numbers (2, 3, 5, 6) just graduate the space between the “well described” positions.

Our poll asks how you feel – on a McLaughlin-style scale – about the impact of requirements on the greatness of software.

1. Metaphysical dependency. Great requirements enable great software (required, but not sufficient for greatness)

2.

3.

4. Take it or leave it. The benefits of requirements balance out the cost of managing them – no more, no less.

5.

6.

7. Inverse dependency. Requirements suck the life out of our team and our project – we’d be better off without them.

The poll:

Thanks for voting! And add comments if you want to explain your vote.

Top Ten Use Case Mistakes

broken glasses
The top ten use case mistakes

We’re reiterating the top five use case mistakes from Top five use case blunders and adding five more. For details on the first five, go back to that post.

There’s also a poll at the end of this post – vote for the worst mistake.

  1. Inconsistency.
  2. Incorrectness.
  3. Wrong priorities.
  4. Implementation cues.
  5. Broken traceability.
  6. Unanticipated error conditions. The error conditions are explicitly called out in a formal use case as exception courses. When we fail to think about how things can go wrong, we take a bad situation (an error) and make it worse by leaving our users with no reasonable way to deal with the errors.
  7. Overlooking system responses. When people use computers, the computers respond. It is a cause-and-effect relationship – and ideally one that is predictable and comfortable for the user. Reading a use case should be like watching a tennis match, with activities performed alternately by the user and the system: “The user does X, the system does Y, the user does Z…” (see the sketch after this list).
  8. Undefined actors. Novice and expert users have different ways of using an application. Different design tradeoffs will be made to accommodate these users. Understanding the domain of the user can also be important. Imagine a calculator application – the use case of “get a quick answer to a calculation while doing something else” will be very different for a loan application officer than for a research scientist.
  9. Impractical use cases. We have to remember to validate with our developers that they can implement the use cases, given the current project constraints. As a former co-worker is fond of saying, “It’s software – we can do anything” – which is true. But considering the skills of the currently staffed team, the budget and timeline for the project, and the relative priority of the use case is prudent.
  10. Out of scope use cases. If we don’t define the system boundaries, or the scope of our effort, we risk wasting a lot of time and money documenting irrelevant processes. To start with a specious example: although our user has to drive to the office to perform her job, we don’t include her commute in the scope of our solution. An online fantasy sports league application would certainly include a use case for picking players for individual teams – it may or may not include researching player statistics. Knowing where the boundary is will prevent us from defining and building undesired or unnecessary functionality.
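
As promised in mistake #7, here is a minimal sketch of the alternating, tennis-match structure, using a hypothetical login use case:

  1. The user enters a username and password.
  2. The system validates the credentials and displays the home page.
  3. The user selects an account.
  4. The system displays the account’s transaction history.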

More discussion on common use case mistakes

I liked this article by Susan Lily on use case pitfalls. Susan goes into more detail on out of scope use cases (#10 above), where she talks about defining the system boundary in UML use case diagrams as a means of helping to avoid out of scope use cases. She also encourages using a standard template for use cases (Inconsistency – #1) and proposes a minimum set of criteria for creating your own templates. She provides a good argument against CRUD use cases – in a nutshell, they do not represent primary user goals (but rather tertiary goals).

At one point she proposes a compromise of including low-fidelity screen mockups in use cases as a means to make them easier to understand and more efficient to communicate. I disagree with her here – this is at best a slippery slope, and more likely the use case equivalent of my requirements documentation mistake. Because images can be so powerful, even the simplest screen design elements will provide design guidance (Implementation cues – #4) to the developers – IMHO, it is unavoidable.

We’ve added a new feature to Tyner Blain – polls on individual posts! We’re going back and adding polls to the most popular posts, and including them in many of the new ones. Each poll can have up to 7 entries – if an item isn’t displayed, hover over the up or down arrows and the list will scroll. If the text for an entry appears truncated, hover over it with the mouse and the text will scroll. Vote early and vote often, and thanks for your vote!

Poll: The worst use case mistake is

If you selected ‘Other – not on the list’ please add a comment and tell us why!

Software Testing Series: Black Box vs White Box Testing

Armwrestling

Should I use black box testing or white box testing for my software?

You will hear three answers to this question – black, white, and gray. We recently published a foundation series post on black box and white box testing – which serves as a good background document. We also mention greybox (or gray box) testing as a layered approach to combining both disciplines.

Given those definitions, let’s look at the pros and cons of each style of testing.

Black box software testing

Black box

pros

  • The focus is on the goals of the software, with a requirements-validation approach to testing. Thanks, Roger, for pointing that out on the previous post. These tests are most commonly used for functional testing.
  • Easier to staff a team. We don’t need software developers or other experts to perform these tests (note: expertise is required to identify which tests to run, etc). Manual testers are also easier to find at lower rates than developers – presenting an opportunity to save money, or test more, or both.

cons

  • Higher maintenance cost with automated testing. Application changes tend to break black-box tests, because of their reliance on the constancy of the interface.
  • Redundancy of tests. Without insight into the implementation, the same code paths can get tested repeatedly, while others are not tested at all.
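
To make this concrete, here is a minimal sketch of a black box test in Python. It exercises only the public contract (inputs in, outputs out) of a hypothetical shipping_cost function in an assumed store module – the test knows nothing about how the function is implemented:

    import unittest

    # Hypothetical function under test; 'store' is an assumed module, not a
    # real library. The contract: orders of $50 or more ship free, all other
    # orders cost a flat $5.
    from store import shipping_cost

    class BlackBoxShippingTests(unittest.TestCase):
        def test_order_at_threshold_ships_free(self):
            self.assertEqual(shipping_cost(order_total=50.00), 0.00)

        def test_small_order_pays_flat_rate(self):
            self.assertEqual(shipping_cost(order_total=10.00), 5.00)

    if __name__ == "__main__":
        unittest.main()

Note that the tests state the requirement directly; if the implementation is refactored, they still pass as long as the contract holds.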

White box software testing

White box

pros

  • More efficient automated testing. Unit tests can be defined that isolate particular areas of the code, and those areas can be tested independently. This enables faster test suite processing.
  • More efficient debugging of problems. When a regression error is introduced during development, the source of the error can be more efficiently found – the tests that identify an error are closely related (or directly tied) to the troublesome code. This reduces the effort required to find the bug.
  • A key component of TDD. Test driven development (an Agile practice) depends upon the creation of tests during the development process – implicitly dependent upon knowledge of the implementation. Unit tests are also a critical element for continuous integration.

cons

  • Harder to use to validate requirements. White box tests incorporate (and often focus on) how something is implemented, not why it is implemented. Since product requirements express “full system” outputs, black box tests are better suited to validating requirements. Careful white box tests can, however, be designed to test requirements.
  • Hard to catch misinterpretation of requirements. Developers read the requirements. They also design the tests. If they implement the wrong idea in the code because the requirement is ambiguous, the white box test will also check for the wrong thing. Specifically, the developers risk testing that the wrong requirement is properly implemented.
  • Hard to test unpredictable behavior. Users will do the strangest things. If they aren’t anticipated, a white box test won’t catch them. I recently saw this with a client, where a bug only showed up if the user visited all of the pages in an application (effectively caching them) before going back to the first screen to enter values in the controls.
  • Requires more expertise and training. Before someone can run tests that utilize knowledge of the implementation, that person needs to learn about how the software is implemented.
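
For contrast with the black box sketch above, here is a minimal white box sketch (again with hypothetical code). Because the tester can read the source, the test targets an internal helper directly and covers each of its code paths, including the fall-through default:

    import unittest

    # Hypothetical internal helper - visible to the test only because we can
    # read the implementation.
    def _discount_rate(customer_tier):
        rates = {"gold": 0.15, "silver": 0.05}
        return rates.get(customer_tier, 0.0)  # unknown tiers fall through to 0.0

    class WhiteBoxDiscountTests(unittest.TestCase):
        def test_each_branch_of_the_lookup(self):
            # One assertion per code path, including the default branch
            self.assertEqual(_discount_rate("gold"), 0.15)
            self.assertEqual(_discount_rate("silver"), 0.05)
            self.assertEqual(_discount_rate("unknown"), 0.0)

    if __name__ == "__main__":
        unittest.main()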

Which testing approach should we use?

There is also the concept of gray box testing, or layered testing – using both black box and white box techniques to balance the pros and cons for a project. We have seen this approach work very effectively for larger teams. Developers utilize white box tests to prevent submission of bugs to a testing team that uses black box tests to validate that requirements have been met (and to perform system level testing). This approach also allows for a mixture of manual and automated testing. Any continuous integration strategy should utilize both forms of testing.

Weekend reading (links with more links warning):

White box vs. black box testing by Grig Gheorghiu. Includes links to a debate and examples.

Black box testing by Steve Rowe.

A case study of effective black box testing from the Agile Testing blog

Benefits of automated testing from the Quality Assurance and Automated Testing blog

What book should I read to learn more?

Software Testing, by Ron Patton (the eBook version, which is cheaper).

Here’s a review from Randy Rice, “Software Testing Consultant & Trainer” (Oklahoma City, OK):

Software Testing is a book oriented toward people just entering or considering the testing field, although there are nuggets of information that even seasoned professionals will find helpful. Perhaps the greatest value of this book would be as a resource for test team leaders to give to their new testers or test interns. To date, I haven’t seen a book that gives a better introduction to software testing with this amount of coverage. Ron Patton has written this book at a very understandable level and gives practical examples of every test type he discusses in the book. Plus, Patton uses examples that are accessible to most people, such as basic Windows utilities.

I like the simplicity and practicality of this book. There are no complex formulas or processes to confuse the reader who may be getting into testing for the first time. However, the importance of process is discussed. I also have to say a big THANK YOU to Ron Patton for drawing the distinction between QA and testing! Finally, the breadth of coverage in Software Testing is super. Patton covers not only the most important topics, such as basic functional testing, but also attribute testing, such as usability and compatibility. He also covers web-based testing and test automation – and, as in all topics covered in the book, Patton knew when to stop. If you want to drill deeper on any of the topics in this book, there are other fine books that can take you there!

I love this book because it is practical, gives a good introduction to software testing, and has some things that even experienced testers will find of interest. This book is also a tool to communicate what testing and QA are all about. This is something that test organizations need as they take the message to management, developers, and users. No test library should be without a copy of Software Testing by Ron Patton!

– – –

Check out the index of software testing series posts for more articles.

Foundation Series: Black Box and White Box Software Testing

Blackbox tests and whitebox tests.

These terms get thrown about quite a bit. In a previous post, we referenced Marc Clifton’s advanced unit testing series. If you were already familiar with the domain, his article could immediately build on that background knowledge and extend it.

Software testing can be most simply described as “for a given set of inputs into a software application, evaluate a set of outputs.” Software testing is a cause-and-effect analysis.

Continue reading

Use case series: UML 2.0 use case diagrams

The UML way to organize and manage use cases.

Pros

  • Provides a high level view of the use cases in a system, solution, or application.
  • Clearly shows which actors perform which use cases, and how use cases combine to form business processes

Cons

  • Presents an “inside-out” view of the system. This description reflects “what it is,” not “why it is” – and it is easy to lose sight of why a particular use case is important.
  • Poor communication tool when speaking to users and stakeholders about why and when the system will do what it will do.
  • Time consuming to create and maintain

Instead of duplicating the explanation and summary work already done by Chris at grillcheese.blogspot.com, I’ll point you to his post, Introduction to UML-2 use case diagrams. Agile modeling has a detailed post on UML-2 use case diagrams.

There are ultimately four pieces of information you want to know about use cases. UML diagrams will show you two of them.

  1. Which actors perform a particular use case? UML diagrams show this.
  2. Which use cases are combined to create a business process? UML diagrams show this.
  3. When is a use case scheduled for availability? UML diagrams do not show this.
  4. Why are we doing a particular use case? UML diagrams do not show this.

Knowing that we can’t answer all 4 questions with a single communication tool, here’s what we should do:
(1&3) Create a matrix view of use cases versus actors to show which actors perform each use case, and when they will be available.
use case matrix
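For illustration, such a matrix might look like the following (the use cases, actors, and releases are hypothetical):

    Use Case                | Shopper | Admin | Available
    ------------------------|---------|-------|----------
    Search for a product    |    X    |       | Release 1
    Check out               |    X    |       | Release 1
    Manage product catalog  |         |   X   | Release 2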
(2) Create a UML 2.0 use case diagram if you find that the benefits for your communication outweigh the costs of maintaining the diagrams. In projects I’ve worked on in the past, a simple flow chart with use case names has been used. These simple charts can be made in a fraction of the time, are more easily scannable, and present information more densely. If you are managing requirements with a tool that automatically generates the diagrams, then use them – but don’t spend a lot of time on them. A flowchart takes almost no time to draw, and communicates the information just as effectively (and more succinctly). Suggestion: use the flow chart.
(4) Ultimately, UML diagrams (often referred to as “use case cartoons”) focus your attention on what you are building, at the expense of losing focus on why you are building it. Create a mapping or maintain links (traceability) from use cases to goals.

The why of the use case is the most important information. Don’t let use case cartoons distract you from it.

Poll: Which use case format do you use?

If you answered ‘Other’ please comment and let us know what you use!
Quick links to posts in this series

Use case series: Informal Use Case

The informal use case is the tool of the Agile Requirements Manager. It is a paragraph describing the user’s goals and steps. Also referred to as a basic use case.

Pros:

  • Easy to create – quick development, iteration, and collaboration. This enables a rapid approach to documenting use cases, and minimizes the cost of developing the use cases.
  • When done correctly, yields the most bang for the buck of any use case approach.

Cons:

  • Challenging to be rigorous – the short format makes it difficult to capture all the relevant information (and difficult to avoid capturing irrelevant information).
  • Lack of consistent structure – the structure can change from use case to use case, since the format is free-form.
  • Capturing the right level of content for your team can be tricky.

Note that the paragraph format can also be replaced by a numbered series of steps – the key differentiator of this form relative to a formal use case is the lack of structured fields for “everything else” about a use case (preconditions, assumptions, etc.).
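
As a sketch of the format, here is a hypothetical informal use case (not the example linked below):

A shopper searches for a product by name, scans the results, and adds the item she wants to her cart. She reviews the cart, enters her shipping and payment information, and confirms the order. The system emails her a confirmation with the expected delivery date.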

An example of the informal use case format in the wild, in direct contrast to a formal format for the same use case.

[Update 2007/01/20: Download our free informal use case template today]

Rosenberg and Scott published a series of articles about incorporating use cases into their ICONIX software development process – the first article is Driving Design with Use Cases (free subscription). They describe a “semi-formal” use case format, which falls between informal and formal. They also describe ICONIX as a process that lives in the space between RUP (Rational Unified Process) and XP (Extreme Programming). Their process is a UML-centric approach to system representation, which incorporates the use case information into a structured and larger framework.

The rest of the articles in the series are:

Driving Design: The Process Domain

Top Ten Use Case Mistakes

Successful Robustness Analysis

Sequence Diagrams, One Step at a Time

The goal in this agile approach is to be “just barely good enough.”

That does raise an interesting question – is good enough good enough? And how do we define it? There are several factors that weigh into making this decision.

  • Domain expertise of the current team (and whether there are any switch-hitters).
  • Amount of time the current team has spent working together (and how well they know each other).
  • Geographic and temporal displacement of team members (are we working through emails and document repositories, or scribbling on whiteboards together?).
  • Language barriers, pedants and mavens – the personalities on our team

The bottom line is that it all comes down to communication. If brevity is inhibiting our ability to be unambiguous, we should use a semi-formal or formal format for our use cases. If the project schedule requires it, and our team enables rapid iteration, we should use an informal structure for our use cases.
Quick links to posts in this series