Organizations are turning to Big Data because they believe more information will improve decision-making, whether it’s whom to target for a sale or whether a product should be recalled.
But what if the real value of the data isn’t in providing us with more information, but in replacing us as decision makers?
Andrew McAfee, co-director of the Initiative on the Digital Economy in the MIT Sloan School of Management, goes way meta in two recent Harvard Business Review blog posts that question not just how to use data — but who should be using it.
It started with his post, “Big Data’s Biggest Challenge? Convincing People NOT to Trust Their Judgment.” His overall theme in that piece was the performance gap between human judgment and algorithms in decisions that rely on predictions. For instance, algorithms outperform parole boards when it comes to determining which prisoners are more likely to re-offend, he writes.
That’s worth a read, but he does a better job of defending his theory in this month’s post, “When Human Judgment Works Well, and When it Doesn’t.”
The discussion mostly focuses on the development of human intuition. Primarily, he’s drawing on the works of Nobel prize-winner Daniel Kahneman and Gary Klein, both of whom agree that good intuition develops only in the presence of two circumstances: an environment predictable enough to have regularities that can be learned, and the opportunity to learn those regularities through prolonged practice and prompt feedback.
Those two conditions don’t exist as often as we’d like to think. For instance, he (rightly) points out that radiologists rarely learn the outcomes of their readings, and so get little feedback about the accuracy of their diagnoses. This means they’re in less of a position to develop “intuition” than anesthesiologists, who receive immediate feedback and monitor the effects of their decisions.
This is a pretty high-level debate, and you may wonder what it has to do with your job. But I actually think it has everything to do with the value of data. For instance, reader John Forrest raises an excellent question when he notes:
“You quote Kahneman however you don't mention the conclusive evidence he presents that shows people are very bad at heeding the objective predictive results which their experiments elicit. Until you fix the judgement at the meta level, you can have the best backward looking validations you want - it won't improve the quality of decision making. If a prediction emerges in a forest of data but no one hears it, is it still a prediction?”
To me, that raises another good question: Is there a point in investing in data to support decisions, if the decisions are being made by the wrong entity in the first place?
Both of these articles are definitely worth your time, but I especially enjoyed the discussion on the second post. The posts are free to read, though you do have to create a login for HBR blogs.