After a few minutes of pondering exactly what Google is trying to accomplish with an ultra-moderated commenting feature on Google News, I think I've got it. I don't like it all that much, but I think I understand.
The "experimental" feature -- you can see an early example here -- will be different from the commenting free-for-all you get on Digg and many other aggregation/social tagging sites. At its own blog, the Google team says it will manage the commenting process by ensuring that only people involved in the general topic area of a hot "story" are commenting on it; this would be accomplished by Google actively inviting what it considers qualified sources to comment, and by setting the barrier for submitting a comment pretty high. The process will be by e-mail (not the traditional Web form), and uninvited would-be commenters must include info like telephone numbers by which their identities can be verified.
Another big difference from traditional media, Google contends, is that it won't edit these comments. Quoting the Google blog:
"But we're hoping that by adding this feature, we can help enhance the news experience for readers, testing the hypothesis that -- whether they're penguin researchers or presidential candidates-- a personal view can sometimes add a whole new dimension to the story."
Disclaimer: I'm an old traditional media elitist.
In journalists' parlance, Google is simply trying to add additional, qualified sources to what it calls a "story," which on Google News is a cluster of articles about a given topic. A lot of the coverage this morning on the development seems a tad confused by the very term "comment," which on the Web evokes the idea of a flood of completely anonymous chatter -- and an army of poor Google staffers vetting all that noise. That's not what Google is shooting for here, clearly.
However, I could go on for pages about the dangers here: letting even a "qualified" source have an open, unchallenged platform to express his or her "personal views" -- aka, bias -- is a dicey proposition, at best. How much should you trust a 10-graph essay from a real estate developer about why he wants to pave over a park?
But, I'm in the minority on this one, even in my own office, I know. I've never seen a more crystalline definition of the philosophy that makes me nuts about info on the Internet than this comment by Google-watching blogger Ionut Alex Chitu, about the credibility risk I just described:
So what if the denial is a lie? Let the public decide that. In fact, that's the whole philosophy behind Google News: aggregate different perspectives on a subject so you can see the whole picture and decide who's right and who's wrong.
Rant warning: Well, let's go get those Duke Lacrosse players and pitch 'em in jail, shall we? And I'll rush right out and sell off that Apple stock -- again. Not all information is perspective; the chemical equation for water is H2O, period. By definition, a perspective cannot be a lie. "News" is the tricky and enormously resource-intensive business of trying to establish a few objective points of reality, then presenting and evaluating perspectives that are at least kinda tethered to that reality.
In his interesting post on the subject, Chitu notes that the mechanism for posting a comment on Google News is cumbersome and will probably take too long for the breakneck, ADD pace of the Internet. He also criticizes as unwieldy the information flow of expecting readers to leave Google News, read several articles about a topic, and then come back to read Google's hosted comments -- and I'd agree. What you'll get is a lot of folks reading comments about stories they haven't yet read -- yuck.
Cade Metz at The Register employs a frail Tooth Fairy metaphor to illustrate the enormous hassle of verifying people's identity on the Internet. Both Metz and Chitu suggest Google has a lot of tools to facilitate video and other "citizen journalism" models, but has not aggressively tied them together.
I think these criticisms are missing the point of what Google seems to be up to here. With Google News, it appears to be trying to straddle the fence between the Web 2.0 model of "all information is equal" and the more "filtered" approach of traditional media, which does crazy stuff like verifying someone actually is who they say they are before publishing his or her comments. We at IT Business Edge are on Google News, and like everyone else, we have to submit to a review process to even qualify for the channel.
I want to applaud Google for its intentions here, but ultimately it seems it's trying to get off a little easy by employing a technology that simply doesn't support the goal. We have commenting features on our blogs; we don't moderate them actively. Some comments are great, highly valuable input from our readers; some are gibberish. We simply aren't in a position to moderate all this stuff; how could Google be expected to? Even the Digital Millennium Copyright Act and the 1996 Communications Decency Act recognize this isn't realistic in their provisions that shield "service providers" from immediate copyright and libel actions. (I might cynically suggest that Google's decision not to edit the Google News comments might have something to do with maintaining these legal shields.)
It seems to me that Google News would be better served to go the extra mile and hire a bureau of journalists who could do follow-on podcast interviews and video conferences with sources about the "hot" stories of the day, as determined by its Web 2.0 metrics that already drive the service. Open it up to the verified "sources" who have some skin in the game. Queue up and, yes, filter questions from readers. Wear out all that technology it owns.
It's more work, but that's the original content game, and that's what Google News is nibbling at here. Lord knows, Google has the money.