January 2012 Archives

I like blogs for developing and sharing ideas, but if you really want to see progress, you need to spend time with people, face to face. Notice which problems get them animated, and which topics bore them. Look in their eyes to see which ideas are working and which are not. Considering the unresolved questions of measurement and analytics in social media, spending some time together sounds like a great idea.

That's why I'm excited to be a part of the Social Media Analytics Summit, taking place April 17–18 in San Francisco. As conference chair, I get to present a couple of sessions, moderate a couple of panels, and generally stay in the middle of things throughout the event. Offstage, I plan to spend a lot of time listening to what people are doing and seeing how they respond to the other ideas in the room.

I also plan to have a very pleasant time with the kind of people who would go to a conference dedicated to social media analytics. Conferences with very specific topics are always good for meeting interesting people. And, you know, business opportunities have been known to emerge at these gatherings, too.

The program includes some very sharp folks (I would know, I invited some of them), talking about the burning questions, effective strategies, and practical applications of social media analytics. It's a safe bet that everyone will learn something from this group, starting with the pre-summit interview series. As always, the conversations after the sessions will probably be even better.

Psst. You want a discount?
If you read this blog—and I think you do—the Social Media Analytics Summit is worth a look. This isn't social media in the context of a larger conference; it's all ours. If you decide to attend, use the discount code NATHAN300 to save $300 on your registration. Super Early Bird pricing is good until February 17, so you have a couple of weeks to think about it before the price goes up.

See you in San Francisco.

Photo by Abhishek Chhetri.

I think I've figured out the source of the difficulty—and controversy—in some of the measurement discussions around social media. It all starts when we talk about measuring things that can't really be measured, because they can't be observed. If we called it what it is—modeling—we'd see that differences in opinion are unavoidable.

Take influence. As a concept, it's not all that hard to define, and I don't think there's a lot of disagreement on what it means. But have you ever seen a unit of influence?

What did it look like? A lot like persuasion? What does that look like?

How about reputation? Have you seen a good one lately?

How about engagement? That's all about attention, and interest, and emotion, and focus, and—well, nothing that you can actually see, even with the best instruments.

Measurement requires observation
We don't argue about the definitions of every online metric. Many of the basics—page views, unique visitors, even hits—have precise definitions, so the discussion has moved on to their relevance and the reliability of available data. The shared characteristic is that they're based on observable events. A web browser requests a set of files from a server, and the computers exchange information that can be tracked.
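To make "based on observable events" concrete, here's a minimal sketch in Python. The log entries are invented for illustration, not taken from any particular analytics product; the point is only that these metrics fall out of counting observed requests.

```python
# Toy request log: each entry represents one observable event, a browser
# asking a server for a page. Fields and values are invented for illustration.
requests = [
    {"visitor_id": "a", "path": "/post-1"},
    {"visitor_id": "a", "path": "/post-2"},
    {"visitor_id": "b", "path": "/post-1"},
    {"visitor_id": "c", "path": "/post-1"},
    {"visitor_id": "a", "path": "/post-1"},
]

# Page views and unique visitors are just counts of observed events;
# no judgment call is needed to compute them.
page_views = len(requests)
unique_visitors = len({r["visitor_id"] for r in requests})

print(page_views)       # 5
print(unique_visitors)  # 3
```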

In survey research, the survey itself provides an observable moment. You might question the validity of the questions or sample, and interpretation is open to—um—interpretation, but you do math on people's responses.

We have discrete events in social media, too. People connect to each other on social networks, they like, tag, or share things, and they publish their opinions. These are all actions that can be observed, though what they mean can be the start of a heated discussion. The frequently misleading labels can confuse the interpretation of the data, but the starting point is a set of observations.

Enter the model
With influence, reputation, and engagement, we're dealing with the abstract. None is particularly hard to define, but none can be observed directly. When you can't measure directly what you need, you look for something you can measure that relates to it somehow. You need proxy data, and that's where disagreement begins. What's the right proxy?

Models can be simple or complex, but they all have this in common: each represents the modeler's estimate of how measured characteristics relate to the desired property. Models are abstractions—equations that use measurements to derive values for characteristics which can't be observed or measured.
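To see what such an equation might look like, here's a minimal sketch in Python of a hypothetical "engagement score." The event types and weights are invented for illustration; they stand in for whatever observable proxies a modeler happens to trust.

```python
# A toy engagement model: observable events go in, a modeled score comes out.
# The weights encode the modeler's opinion about how much each observable
# event says about the unobservable thing we actually care about.
WEIGHTS = {"likes": 1.0, "shares": 3.0, "comments": 5.0, "clicks": 0.5}

def engagement_score(counts: dict) -> float:
    """Combine observed event counts into a single modeled value."""
    return sum(weight * counts.get(event, 0) for event, weight in WEIGHTS.items())

# Two posts, two scores. Change the weights and the ranking can flip:
# the label "engagement" is shorthand for the model, not the thing itself.
post_a = {"likes": 120, "shares": 10, "comments": 4, "clicks": 300}
post_b = {"likes": 40, "shares": 25, "comments": 20, "clicks": 100}

print(engagement_score(post_a))  # 320.0
print(engagement_score(post_b))  # 265.0
```

Swap in different weights or different inputs and you get a different, equally defensible number, which is exactly why two "engagement" scores from two sources rarely agree.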

A model might be based on someone's intuition or on extensive research; it may be strong or weak. But here's something else models have in common: the model is not the thing.

The map is not the territory.
—Alfred Korzybski

The reason we don't have standard metrics for such desirable commodities as influence, engagement, and reputation is simple. We can standardize measurement, because we define what is being observed. Modeling defies standardization because it seeks to measure that which cannot be observed, and in the process of defining a model, we incorporate elements that do not apply to every situation.

Modeling for a reason
Models reflect the opinion of the modeler and the objectives they support. Because apparently simple concepts might be used for different purposes by different specialists, we end up with diverse models using the same labels. In essence, we talk about the labels because they represent familiar ideas (influence et al.), but the models represent what we really care about (such as positive word of mouth, leads, and sales).

If you understand that the label is just a convenient shorthand for a model that takes too many words to describe in conversation, it's not a problem. If the model generates useful information, it's doing its job. Just don't assume that any one usage of the label is the correct usage. Modeling requires judgment, interpretation, and prioritization in context, which are incompatible with standardization.

Photo by gilhooly studio.

Revisiting 2011

When I started looking at the year's most-read posts a couple of years ago, I noticed that the list always includes a lot of older posts. So, I started a new post last year: a list of the past year's posts that didn't make the 2011 Top 10, but that I think are worth another look.

Previous years' lists
2010: Top 10 posts, Thinking through 2010
2009: Top 10 posts

Top Posts of 2011

I don't really do predictions—at least, not publicly. I do, however, find it interesting to look back and see which posts have drawn the most attention in the past year. As in previous years' lists (2010, 2009), some of the most-read posts are old ones—going all the way back to 2006 (!).

Remember these?

  1. New Dashboards Blend Analytics Sources - September 2010 (#9 in 2010)

  2. Monitoring Social Media Before You Have a Budget - May 2008 (#1 in 2009 & 2010)

  3. What Does Salesforce-Radian6 Deal Mean for Everyone Else? - March 2011

  4. Global Social Media Usage Patterns - January 2011

  5. Human vs. machine analysis - April 2007 (#4 in 2010)

  6. Visual text analysis - April 2007 (#2 in 2010)

  7. The Specialization of Social Media Analysis - March 2011

  8. Professional-Strength Social Media Aggregators - June 2010 (#8 in 2010)

  9. Text Analytics in the Cloud - February 2011

  10. Defining social media relations - November 2006

With only four of the top ten from 2011, this view always misses what I think of as the more interesting posts, which is why I choose my own list for revisiting 2011. All of which sets the stage for what's breaking out of the drafts folder next.

Happy New Year.

About Nathan Gilliatt

  • Voracious learner and explorer. Analyst tracking technologies and markets in intelligence, analytics and social media. Advisor to buyers, sellers and investors. Writing my next book.
  • Principal, Social Target
