Debating Human vs. Computer Analysis


I've said that opposing viewpoints over human vs. computer analysis of social media content don't constitute a debate, because I've never heard both sides at the same time and place. Now, thanks to an email exchange between Mike Daniels (Report International) and Mark Westaby (Spectrum) for Research magazine, I have to stop using that little observation. It's now—finally—a debate.

Tracking online word-of-mouth: The people vs machines debate

After an exchange of the usual points and counterpoints (speed, accuracy, sarcasm, synonyms...), the discussion really gets going in the comments. Mark makes a point that may summarize why I find this stuff interesting:

Automated analysis should not be viewed as a replacement for human analysis. Rather, it is a different method that is opening up entirely new and tremendously exciting ways of analysing data.
(One of Mark's current projects, Fin-buzz, provides a hint about his meaning.)

The usual debate: a closed question
If you're looking at it from a media analysis perspective, this question comes down to quantity and quality. How much media can you analyze in a way that you will trust? The new technologies will let you analyze more media sources faster, if you accept the results. In a world bursting with new publishers, that could be a good thing, and that's where we find the usual—reminding myself to use the word now—debate.

Moving to an open-ended question
Speed and scale benefits come from the application of new tools to old questions—not a bad thing, but not terribly interesting. Coming at it from another angle, the rise of automated analysis suggests a question about the removal of obstacles: What would you do with online information if you could "read" all of it? We're seeing some early ideas; what else is it good for?

Which question are you thinking about? Is "good enough for media analysis" your standard, or does the prospect of a different set of capabilities (with new tradeoffs, yes) inspire new ideas?

Update: T.R. Fitz-Gibbon picks up the discussion on the Networked Insights blog: Social Media Analytics, Humans vs. Machines.

Photo by Narisa.



This whole debate seems too close a parallel to the debates about "expert systems" in the late 1980s and early 1990s. As with most technology solutions, any "automated" application sold to analyze human interaction, whether written or spoken, is likely an oversold feature set with brand promises that not only typically fall short but, in many instances, aren't even possible technologically.

No matter how many times the technical architect says "the solution requires both humans and automated systems," the customer will expect that to mean the humans won't need more complex skill sets. After all, aren't the systems they purchased described as "automated"? As one result, such solutions almost always fail to meet client expectations. If the "automation" proponents would simply drop the adjective from their descriptions of the tools involved, perhaps a real debate would ensue. We know the human analysis is human, but, even according to its own proponents, the "automated" analysis tools aren't automatic.


Larry, that's a great point. I won't hold my breath for products with a HASA label, though. ;-)


I listened in on Mike Moran's (Converseon) webinar with Vivaki and IBM the other day, and he spent a lot of time talking about human vs. machine.

I think he was arguing for manual characterization of messages. This is not even remotely possible with the huge volume of data we (MotiveQuest) are working with. We have had recent projects with over 40 MM unstructured text messages in them.

No question that using the same linguistic model to score sentiment in online gaming as in osteoporosis will not be accurate. "Sick" means different things in those two categories.

Also, even within a category, language is fluid over time. A year ago the word "pre" meant something entirely different in cellphones than it does now.

So what is needed are software tools (and strategists who can run them) that can accommodate this fluidity of language across time and categories. In our world, the strategist tunes the linguistic model for the project at hand. (And this is not just for sentiment, but also for word association, motivations, emotions, issues, drivers, competitive dynamics, etc.)

I guess you could call this hybrid scoring, because it is machine characterization, but with significant human input into the characterization model for every project.
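That kind of hybrid scoring can be sketched in a few lines. This is a minimal, hypothetical illustration (not any vendor's actual system): a machine applies a lexicon, but a human analyst supplies per-category overrides before each project. The lexicon values and category names are made up for the example.

```python
# Base lexicon applied by the machine; values are illustrative.
BASE_LEXICON = {"great": 1.0, "broken": -1.0, "sick": -1.0}

# Per-category overrides tuned by the human strategist. In gaming slang,
# "sick" is praise; in a health category it keeps its negative sense.
CATEGORY_OVERRIDES = {
    "gaming": {"sick": 1.0},
    "osteoporosis": {},
}

def score(message: str, category: str) -> float:
    """Sum lexicon scores for each word, letting category overrides win."""
    lexicon = {**BASE_LEXICON, **CATEGORY_OVERRIDES.get(category, {})}
    return sum(lexicon.get(word, 0.0) for word in message.lower().split())

print(score("that new level is sick", "gaming"))         # 1.0
print(score("my hip feels sick today", "osteoporosis"))  # -1.0
```

The machine does the volume; the human does the tuning. The same message scores differently depending on which category's model the analyst has configured, which is the whole point of the "sick" example above.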

Enough said - but this is not a "human" vs. "machine" problem.

MotiveQuest LLC


Good to see you on this topic! My 2 cents.

Fin-buzz is an excellent demonstration of automated sentiment. It looks like serious work.

What I understand is that there is a huge effort in dictionary building and in understanding the context of language.

I've seen examples of this working. So yes, one can build an automated sentiment system with predictive analytics technologies and a serious configuration effort.

What I seriously doubt, though, are systems that claim they can automatically rate any conversation on any topic with reasonable accuracy.



I think you're getting into an answer to that open-ended question. The follow-up question is, are the insights from the massive text analysis effort different than you would get from a manual effort on a smaller sample? I think we're talking about a research-oriented take on the data, so how do the methodology discussions go?

[Wondering if I should get controversial and speculate about a correlation between degree of automation and innovative uses of the data—nah, not tonight. :-) ]


About Nathan Gilliatt

  • Voracious learner and explorer. Analyst tracking technologies and markets in intelligence, analytics and social media. Advisor to buyers, sellers and investors. Writing my next book.
  • Principal, Social Target

