Undue influence and the wisdom of crowds

I was listening to an interesting podcast from the guys who wrote Freakonomics (it’s available as a free download on iTunes, if you’re interested).

The subject they were discussing was “Do Expensive Wines Taste Better?”

To summarize: in the first section, one of the guys recalled a time when, as a young professor, he was part of a rather exclusive group at his university that held weekly fancy dinners (with lots of good wine).  These were folks who claimed to enjoy fine wines, food, and so on.  He wasn’t a drinker and thought it was a bit of a waste of money, so the next time they were going to have a wine tasting, he volunteered to organize it.

He got two expensive wines (roughly $100/bottle) and one really cheap wine (about $8/bottle), all of the same varietal.

In decanter 1 went good wine #1

In decanter 2 went good wine #2

In decanter 3 went the cheap wine

In decanter 4 went one of the good wines, again.

On average, the four wines received almost identical scores.

On top of that, on an individual level, people rated the two pours of the same wine as the most different from each other.

So what does this mean?  Perhaps this was just a group of people who didn’t know as much about wine as they thought?

They then took the question to wine experts, and the results showed the same thing: people thought the expensive wine was better only when they knew it was more expensive.  They did some more tests, but I think you get the point by now.

How does this apply to online communities?

I suspect that as social media, crowd-sourcing, and so forth become more and more prevalent, if we (as online community leaders) are not very careful, it will become increasingly easy for us to skew data and improperly (if unintentionally) influence user data points.

One of the chief benefits of social media and crowd-sourcing is the vast amount of unfiltered, raw data and feedback you can get from customers/prospects/users.  Having an expert opinion can be very helpful, but many so-called experts are subjective (and perhaps biased).  Real-world feedback is now easy to get, and it has enormous value.

So as community leaders, there is a delicate balance we must maintain between being leaders in our area and not influencing the very data points we seek as independent validation.

If a 95-point rating or a $75 price tag can make wine taste better, how is that sort of influence affecting our users’ opinions?  Perhaps there is also some critical thinking we should do about the so-called experts who populate our communities.
