Our agile project is to create a site that lets you rate articles. Among our corporate goals, we defined the goal of making it easier for people to find and read great content. Last night I was doing some research on social networks and thinking about the nature of our ratings approach. In this article I share some of those thoughts, and the reasons for changing the ratings approach relative to previous designs.
An interesting article on the history of social software (going back to the 1940s) got me thinking about the timelessness of content, and the fact that there are different approaches to scoring or rating things.
Our Content Is Almost Timeless
Generally speaking, the topics we need to read about are “timeless” – at least their relevance can be measured in multiple years, not multiple minutes. An article can become obsolete, someone can build a “better” mousetrap, etc. But short of that, great articles will be great for a long time. The Mythical Man-Month is decades old, and still relevant. We need to make sure that our rating approach will keep “great” articles at the top of the heap, rather than bury them under the stack.
I was also looking at one of the real-time pages at digg. What an extremely cool thing to watch. However – one thing jumped out at me – the articles with the most immediacy of content were getting all the attention. A fifteen minutes of fame thing. If you look at the “most digged [Kevin and Alex say digged on their podcast, not dugg] of all time” – none of those articles were getting noticeable attention. They had become old news.
When there is a great resource, like any of Scott Ambler’s UML 2.0 pages, it needs to be easy for people using the site to find. Perhaps using a “rate it 1 to 5” approach would be more effective.
I created a couple of bulleted lists of the pros and cons of two general approaches – aggregating ratings and averaging ratings. Here’s what I came up with – feel free to augment or dispute it in the discussion on this article.
Sum Of All Scores
An approach that sums all of the scores, so that the number of “votes” affects the score. For example, if 10 people voted “yes” for an article, it would have a score of 10. If 20 people voted, the score would rise to 20. This is the general approach used at digg.com.
- The score reflects the number of people who liked the article enough to vote on it.
- As the number of people using the site grows, newer articles will tend to get more votes and therefore more of the attention.
- The “runaway” article dynamic will happen – and the crowd will drive attention towards the most attention-getting articles.
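To make the dynamic concrete, here's a minimal sketch of sum-based scoring in Python. The article names and vote counts are illustrative only, not part of our domain model:

```python
from collections import defaultdict

# Each digg-style "yes" vote adds 1 to an article's score,
# so the score grows with the number of voters.
votes = defaultdict(int)  # article id -> vote total

def vote(article_id):
    votes[article_id] += 1

def ranked(article_ids):
    # Articles with the highest vote totals sort first.
    return sorted(article_ids, key=lambda a: votes[a], reverse=True)

# A hot new article quickly outranks an older, better one.
for _ in range(20):
    vote("hot-new-article")
for _ in range(10):
    vote("timeless-classic")

print(ranked(["timeless-classic", "hot-new-article"]))
# -> ['hot-new-article', 'timeless-classic']
```

Note how nothing in the ranking distinguishes "many people liked it recently" from "many people liked it over the years" – the newer article wins simply because more eyeballs are voting now.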
Average Of All Scores
An approach that averages all of the scores for an article. For example, if 10 people rated an article an average of “3”, the article’s score would be 3. If 20 people rated it a 3, the score would be a 3. This is the general approach used at Amazon (and Tyner Blain) for rating books (and articles).
- The score obscures the number of people who liked the article enough to vote on it.
- Older “best” articles will be at the top and all articles will have to play “king of the hill” to see which become the top scoring articles. Tie-breakers can be based on the number of ratings for an article.
- Would need some minimum number of ratings for the score to have credibility. The community would probably self-police this by quickly voting on articles that have “biased” initial votes.
- As the community grows, obsolete articles will gain new (lower) ratings pushing them off the top of the heap naturally.
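As a sketch of the averaging approach with the minimum-ratings rule and the rating-count tie-breaker described above (the threshold value, names, and data structures are all hypothetical, assuming a 1-to-5 scale):

```python
MIN_RATINGS = 5  # arbitrary credibility threshold for illustration

ratings = {}  # article id -> list of 1-5 scores

def rate(article_id, score):
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    ratings.setdefault(article_id, []).append(score)

def average(article_id):
    scores = ratings.get(article_id, [])
    if len(scores) < MIN_RATINGS:
        return 0.0  # too few ratings to be credible yet
    return sum(scores) / len(scores)

def ranked(article_ids):
    # Sort by average score; break ties by number of ratings.
    return sorted(article_ids,
                  key=lambda a: (average(a), len(ratings.get(a, []))),
                  reverse=True)

for s in (5, 5, 4, 5, 5, 4):      # older, great article
    rate("timeless-classic", s)
for s in (3, 4, 3, 3, 3):         # newer, merely popular article
    rate("hot-new-article", s)

print(ranked(["hot-new-article", "timeless-classic"]))
# -> ['timeless-classic', 'hot-new-article']
```

Here the classic stays on top of the heap as long as its average holds up, no matter how many people are rating newer articles.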
Based on this analysis, the “scoring” approach should be the same general approach used to rate articles at Tyner Blain and books at Amazon. I’ll update the domain model to reflect this change. I think a simple 1-5 scale (where 1 is bad and 5 is good) would work effectively.
Chloe had some great comments on the Use Case Briefs article discussion about people being able to learn from articles with negative reviews or low scores – as ideas to stay away from. That “unintended” use would also be better served with an averaging approach, if it were easy to display “low score, high number of ratings” articles.
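A quick sketch of how that “low score, high number of ratings” view could be queried – the thresholds and the ratings structure are hypothetical, not part of the domain model:

```python
def cautionary(article_ids, ratings, max_avg=2.0, min_count=10):
    # Articles many people rated poorly: "ideas to stay away from".
    out = []
    for a in article_ids:
        scores = ratings.get(a, [])
        if len(scores) >= min_count and sum(scores) / len(scores) <= max_avg:
            out.append(a)
    return out

ratings = {
    "bad-idea": [1, 2, 1, 2, 1, 2, 1, 1, 2, 1],  # low average, many ratings
    "new-article": [1],                           # too few ratings to judge
}
print(cautionary(["bad-idea", "new-article"], ratings))
# -> ['bad-idea']
```

The count requirement matters here too: a single disgruntled rating shouldn't land an article on the "stay away" list.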