
Friday, September 26, 2008

Ratings & Enterprise 2.0

Be cautious when rating employees' E2.0 contributions.

Andrew McAfee recently questioned the need for, and value of, rating employees on their usage of E2.0 applications. The chart on the left is his example of some of the metrics he would consider.

Although these metrics may be of interest to those managing or designing E2.0 programs, the statistics can be quite misleading. Specifically, I caution those who want to use these indicators as 1) the sole measure of the success of their E2.0 programs and 2) the sole means of ranking the value of a contributor.


The success of your E2.0 program cannot be based on adoption/usage percentages alone. The value of E2.0 includes giving people an opportunity to participate, enabling self-organization among the most relevant participants, identifying valuable conceptual outliers, reinforcing culture, and keeping content accessible for future benefit. None of these show up in adoption rates. I discuss them in greater detail in an earlier post, "5 benefits of social computing that adoption rates don't show". Success isn't a simple metric; it requires perspectives well beyond adoption rates.


Without any ratings, how would you know whether to trust the content? Rating a contributor based on activity levels is intended to give participants a gauge of the "quality" or "accuracy" of the content. This is a dangerous and false assumption. Just because someone posts a lot, or interacts a lot, doesn't mean their content is necessarily of high quality. As an analogy, I spend a long time doing house repairs, not because I am good at them but for the exact opposite reason!


A heavy focus on individual ratings will also diminish the real value of tapping into the long tail. Clay Shirky gives a good example of how one can completely miss the point. He points to comments made by Steve Ballmer dismissing the open source model behind Linux as not really valuable, since the majority of the work is done by a small group of participants. Ballmer's flaw is that he associates "value" with "quantity". The question isn't how much input you provide. Even if you make only a single contribution, what if that contribution turns out to be a major breakthrough? Or, in the software example Clay gives, what if that one patch fixes a major security hole? What's that worth?


My suggestion for those designing E2.0 programs is that ratings are valuable, but they should be applied at the content level. Individual ratings could then be derived (as aggregates or averages) from the ratings of a person's individual contributions, as in the rough sketch below. There is sometimes a desire to use ratings as a means to motivate employees to contribute, but if you focus on "quantity" you are incenting the wrong behaviour; focus instead on "value".
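
To make the idea concrete, here is a minimal sketch in Python of deriving a contributor's rating from content-level ratings. The names and scores are purely hypothetical, and averaging is just one of several reasonable aggregations:

    from statistics import mean

    # Hypothetical content-level ratings for two contributors.
    content_ratings = {
        "prolific_poster": [2, 3, 2, 2, 3, 2, 3],  # many posts, modest value
        "rare_poster":     [5, 4, 5],              # few posts, high value
    }

    def contributor_rating(ratings):
        # Derive the individual's rating from the ratings of their
        # contributions, so value outranks sheer quantity.
        return mean(ratings) if ratings else None

    for person, ratings in content_ratings.items():
        print(person, round(contributor_rating(ratings), 2))
    # prolific_poster 2.43
    # rare_poster 4.67

Note that the prolific poster contributes more than twice as often yet scores lower, which is exactly the behaviour we want the rating to reward.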


One final point. SIMPLICITY. Just because we can measure something doesn't mean we should. From my perspective, having six metrics can be confusing and intimidating to participants. I prefer to get it down to a single metric with intrinsic value. For example, in a recent application we've designed, we allow (and encourage) participants to rate content by asking, "Did this content help you?". The rating is then simply the total number of people helped by the content, which provides meaning to both readers and authors. Do we track other metrics that we don't display? Absolutely. Is this method perfect? No, but it is simple, and in my opinion the trade-off is worth it.
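
Mechanically, this single metric is trivial to implement. Here is a rough sketch, again in Python, using an in-memory store and hypothetical identifiers; a real system would persist this, but the one-vote-per-reader rule is the important part:

    from collections import defaultdict

    helped_count = defaultdict(int)   # content id -> people helped
    voters = defaultdict(set)         # content id -> readers who voted

    def mark_helpful(content_id, reader_id):
        # Count each reader at most once so the metric stays honest.
        if reader_id not in voters[content_id]:
            voters[content_id].add(reader_id)
            helped_count[content_id] += 1

    mark_helpful("vpn-setup-guide", "reader-17")
    mark_helpful("vpn-setup-guide", "reader-17")  # duplicate ignored
    print(helped_count["vpn-setup-guide"])        # -> 1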

1 comment:

Anonymous said...

Good advice considering most of us have been trained that more metrics are better.