Recently in Measurement Category

Everyone loves a chart that answers a key question, but I particularly like the ones that make you think: Why did that happen? What changed? What are we missing? What happens next?

A spike on a chart is a big ol' why, waiting to be asked.
—me, 2010

It's an old point, but a few examples came to me last week. Beyond the immediate interpretation of the numbers (e.g., big number good, small number bad), I think these patterns imply follow-up questions along the lines of "what happened here" and "why did it happen?"

  • Spike in a trend
    A sudden change means something happened. What? Why? Did the value then return to the usual range? Is the new value temporary or a new normal? Do you need to take some action as a result? The spike is the chart telling you where to look, which I suspect most people do instinctively.

  • Smooth line on a historically bumpy trend
    A bumpy trend line that grows more stable is telling you something else, but the follow-up questions are similar. Did the data source stop updating, or is the change real? Remember to watch the derivatives of your metrics, too. If the metric keeps changing but the rate becomes constant, is that real or an artifact of the data collection? What happened, why, what action in response…

  • Crossing lines
    A is now bigger than B; does it matter? Obviously, it depends on what A and B represent, but it's a good place to understand: what happened, why, what it means, how much it matters, and whether to expect it to continue. If it's a metric that people care about, expect to discuss it.
Beyond the numbers
Thinking beyond the graphs, I remembered two things from conceptual diagrams that always make me curious:

  • Empty boxes in a matrix
    If the framework makes sense, its boxes should be filled in, whether it's the consultant's standard two-by-two matrix or something much larger. An empty box may represent an impossible combination—but it could be a missed challenge or opportunity. I once found $12 million in sales in an empty box, and so empty boxes always get my attention.

  • Solid lines around a space
    A clear definition says as much about what something isn't as what it is. When the definition takes the form of a diagram—an org chart, a Venn diagram, a network graph—I wonder about what's just outside the diagram. The adjacent markets and competitors from the future; the people who are near—but not in—an organization. What does the white space represent, and what does that mean to you?
These came to me as I was getting ready to attend a lecture by Kaiser Fung (which was excellent—ask him about the properties of big data). I'm sure there are many more. Without wading into technical analysis waters, what other patterns make you stop and think?

When people ask me what I do, I usually say something about exploring the edges of the market for intelligence and analytics capabilities, starting with social media data. I also like to connect threads from separate topics and look at things from unusual perspectives. With that as a warning of sorts, let's pull some threads from new methods, old metrics, and emerging science to see what they do together. It may sound like so much theory so far, but this is all about practical analytics for management.

Thread one: A new view of social media in a customer journey framework
It started with a briefing from SDL on their new Customer Commitment Framework (CCF). I'm always interested to see people do something different with social media data, and I give bonus points for tools that provide quick and clear access to useful information.

SDL's approach is to monitor business performance at key points of customer journeys by analyzing what customers have to say in social media. They want to know what people are thinking as they progress toward a decision, whether that decision is about buying a product, telling others, or becoming an advocate for the product. CCF's analysis is always presented in the context of a customer journey, so—in theory, at least—its numbers provide a drill-down into a company's marketing and operational performance as experienced by customers.

I haven't tried CCF and its dashboard component yet, but if it works as promised, its alignment to identifiable business levers could make it a valuable analytical tool.

Thread two: Exploring possible futures with agent-based modeling
Call it simulation to avoid scaring people off, but the complexity science tool of agent-based modeling has come to market. When I saw that Icosystem had spun up a company, Concentric, to offer ABM tools for marketers, I knew I needed to learn about it.

Concentric's book, How Customers Behave, was a good start, but some of my earlier reading and the Santa Fe Institute's MOOC on complexity made the background sections somewhat redundant. One key takeaway: despite the complexity label, this stuff isn't too complicated to understand.

Where SDL looks for signals about what has happened, Concentric starts by building models of customer journeys, playing out the decisions faced by individuals in the market ("agents"). Once the model can "predict" the past, it's ready for use in simulating the effects of different strategies and tactics. Depending on your needs, their software can incorporate social media and other online data sources, or it can look broadly across media types and operational data.
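The modeling idea is easy to sketch. Here's a toy agent-based simulation of a customer journey—my own minimal illustration, not Concentric's software—in which invented transition probabilities would, in practice, be calibrated until the model reproduces historical conversion rates, and then varied to compare strategies:

```python
import random

# Toy agent-based model of a customer journey. Each agent moves
# through aware -> considering -> purchased; the transition
# probabilities are hypothetical, and in practice would be tuned
# until the model "predicts" the past.
def simulate(n_agents, p_consider, p_purchase, steps=10, seed=42):
    rng = random.Random(seed)
    agents = [0] * n_agents   # 0=aware, 1=considering, 2=purchased
    for _ in range(steps):
        for i, stage in enumerate(agents):
            if stage == 0 and rng.random() < p_consider:
                agents[i] = 1
            elif stage == 1 and rng.random() < p_purchase:
                agents[i] = 2
    return sum(1 for s in agents if s == 2) / n_agents

# Compare two hypothetical strategies by varying one probability.
baseline = simulate(10_000, p_consider=0.10, p_purchase=0.05)
promo    = simulate(10_000, p_consider=0.15, p_purchase=0.05)
print(f"baseline purchase rate: {baseline:.3f}")
print(f"promo purchase rate:    {promo:.3f}")
```

Even this toy shows the appeal: once the parameters match reality, changing one of them answers a what-if question.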

Is this what you expected?
If you put threads one and two together, you get simulations to explore possible outcomes of different strategies, and measurement of customer opinion at critical points to indicate actual performance. One looks forward to explore what may happen, and one looks at the recent past to understand what has happened.

It seems like a powerful combination to me, but let's add one more thread. What about hard data?

Thread three: Marketing analytics and the view of the process
I once worked on a project for one of the big phone companies that was concerned about customer churn in their high-speed Internet business. They were adding new customers as fast as they could, and they wanted to avoid losing existing customers. Our analysis rested on the insight that sometimes you lose the sale even when the customer wants to buy. Before it was trendy, we looked at the post-purchase customer journey and found some measurable issues.

I usually see customer journey models that assume that customer attempts to purchase virtually always succeed. If you're selling online or in retail, that's probably close to true. But do you know that it's true, or do you assume it?

For the phone companies, DSL service circa 2002 was constrained by geographic footprint, technical limitations, compatibility issues, and customer ability. Each of these added another step in the journey and another opportunity to lose the sale. Given the operational metrics they already had—order attempts, accepted orders, activations, etc.—you could track post-order performance as a series of multipliers between zero and one. A simple step-down chart would show you where you were losing customers, so you would know where to invest to improve the process.
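As a sketch of that step-down view (the funnel counts here are invented; only the metric names come from the example above):

```python
# Hypothetical post-order funnel. Each step-down multiplier is
# stage[i] / stage[i-1], a value between zero and one; the smallest
# multiplier shows where the process loses the most customers.
funnel = [
    ("order attempts",   10_000),
    ("accepted orders",   8_500),
    ("activations",       6_800),
    ("active at 90 days", 6_120),
]

for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {count / prev_count:.2f}")

end_to_end = funnel[-1][1] / funnel[0][1]
print(f"end to end: {end_to_end:.3f}")
```

Multiplying the step multipliers together gives the end-to-end rate, so any single weak step caps the whole journey.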

For a subscription-based business, recurring revenue is everything, so you need to pay attention to customer retention and anything that drives them away. This is already a long post, so let me point to a post from Keith Schacht about customer acquisition and retention, and a VOZIQ post on customer issues in telecom. The point is, your customer's experience may not end with the purchase, and your measurement of the experience shouldn't, either.

When you combine the effects of attracting new customers and losing existing customers, you end up with a survival analysis, which brings us back to complexity and the potential of agent-based modeling. The fact is, given rates of customer additions and losses combine to set a ceiling on your possible customer base, so it's crucial to understand where and why you lose them.
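The ceiling follows from simple arithmetic: with a steady flow of additions and a constant churn rate, the base converges to additions divided by the churn rate. A quick sketch with invented numbers:

```python
def steady_state_base(additions_per_month, monthly_churn_rate):
    # Growth stops when monthly losses (base * churn) equal monthly
    # additions, i.e. at base = additions / churn rate.
    return additions_per_month / monthly_churn_rate

def project(additions, churn, months, start=0.0):
    # Iterate the same dynamic month by month to watch it converge.
    base = start
    for _ in range(months):
        base = base * (1 - churn) + additions
    return base

print(steady_state_base(1_000, 0.02))    # the ceiling
print(round(project(1_000, 0.02, 600)))  # the long run approaches it
```

Halving churn doubles the ceiling, which is one reason retention work can beat acquisition work.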

Really, it all comes together
Pull these three threads together, and we start to see the potential of forcing analytics out of their comfortable silos. Agent-based models offer a tool for understanding how things might (not will) play out under different scenarios, including the possible outcomes of different marketing strategies. Analytics based on similar models of customer journeys provide a view of the recent past to test expectations and continually reevaluate the models. Combining social media data with customer data and operational metrics allows us to see a more complete picture: Where hard data is unavailable, social media indicators can fill the gaps. Where hard data is available, it provides a test of the social media indicators.

Pull the threads together, and you get a view that combines future and past; what people say and what they do; what happened and why. Sounds pretty useful to me.

Whole Lotta Influence Going On

What do you do when you disagree with a conference speaker? Do you tune out, start checking email, check the schedule for when the next session starts? Do you post snarky comments to Twitter and Facebook? Do you challenge the speaker in the Q&A time? What if the topic is well worn, and you're getting tired of hearing the same points you disagree with? Are you tired of the influence arguments yet?

While sitting through a presentation for nonprofits on communicating with online influencers, Heidi Massey got that familiar feeling, so she challenged the speaker, Justin Ware, in the Q&A. They continued the discussion after the session, and it's led to a great pair of point/counterpoint posts on using Klout scores:

I like this. Not only do they go to the effort of thinking through their arguments and writing something coherent, they do a service to everyone else by linking to each other's posts. It's a nice idea: post your position, and link with someone with a different opinion. Do you think it will catch on?

Speaking of influence, was there a memo about putting influence startups in Portland? First Tellagence, now Little Bird? If you need to identify the right people for spreading your message, the available tools are multiplying fast.

There should be a rule that if you have an argument on a Friday, you have to cite Monty Python.

Using Live Data as Eye Candy

At some point, every user of data fantasizes about an over-the-top command center (it's not just me, right?). The emergence of the social media command center concept is creating an excuse to indulge that desire for a NORAD/NASA/DOT mission control, replete with a constellation of flat screens and constantly updating charts. If you're thinking of jumping in, you'll want to read Jeremiah's lengthy post on the topic. But what if your needs—and budget—are more modest? What if you're looking for one very nice overview for a public place?

If you go out into the world as a customer, it's hard to avoid televisions in public places. The trendy business equivalent is the live dashboard that shows how things are going, from web traffic to sales to the stock price to online chatter. We're past the days of a single-column TweetDeck in a conference session; these offer tweets, pictures, metrics and more. If you want a live picture for the reception desk, team area, conference room, or trade show booth, it's now easy to put together something worth looking at.

Here are a few I find interesting:

  • Liveboard (Twingly)

    Liveboard is all about the tweets, combining live-updating metrics with sample tweets. The top-level metrics (total tweets, unique users, retweets, etc.) are animated with an analog odometer effect that serves as a sort of pulse for the display. Its charts list the top tweeters and hashtags associated with the topic, and visualizations depict volume by day and hour.

    See the live demo, and be sure to click on the screen and move it around; there's another visual off the right side of the screen (or make it fit your screen by reducing the height of the window).

  • Multitude (JamiQ)

    Multitude is a moving timeline of a Twitter search, illustrated with the images people attach to their tweets. JamiQ describes it as a wall, which would be a good use for it. The design is simple, clean, and not interactive, so it makes a reasonable backdrop or lobby display. The updates can move quickly, so it benefits from being shown on a wide screen.

    See the live demo.

  • Tickr

    Tickr combines the summary on the wall with the combined-source analytics dashboard, creating a live-updating view that can be tailored to different purposes. Load it up with sources of performance data—business, operations, or technical—and it's a constant reminder of how things are going. Point it at social media sources, and it's another candidate for the trade show display.

    The company's "try it" page includes links to multiple live examples using social media data. The site also has case studies that show the use of other data sources.

Buy, adapt, or build your own?
The wall-mounted dashboard plays a different role than the analyst's interactive view. Once configured, it's meant to run without user interaction, and a clean, no-controls interface design makes it look more like TV than computer software. As always, it pays to start with some thought to what you want to accomplish with the display (beyond scratching that desire to show off your data). Even eye candy should have a purpose.

Realistically, many dashboards might be configured for this kind of use. If you can configure the widgets on the screen, and if they update without user action, you have the raw ingredients for this kind of application. If you're using a social media analysis platform, you might be able to set up a live view of people talking about your company or event. The newer dashboards that combine social media data with other sources could be set up for this. Leftronic, for example, seems to specialize in big-screen, non-interactive display applications.

Is this something you're doing? Have you seen an unusual use of this type of display? Where do you want to see live data?

And, as always, who have I missed?


From the first time I described the three buckets of social media data, I knew that one category was different. Content and activity analysis are built on the lessons from established schools of measurement, and while we argue about the specifics, the objectives aren't so alien. The last category—people data—seems more exotic, and it's the least discussed area of measurement. What do we do with data about people, then?

What are people data?
Social media data provide information about both individuals and groups of people: who they are, who they know, what they care about, what they have to say, where they go… Have you noticed just how much information people are sharing about themselves, both intentionally and unintentionally? Collect it from various sources, and you're looking at people data.

As I mentioned in the introduction post, the boundaries between categories aren't absolute, so you could look at much of the data that goes into an analysis of people as either content or activity data. The difference comes about when we start thinking about the people as individuals or as identified groups—the focus is on the people, which is why it's useful to look at the data differently.

Analyzing data about individuals
When using the data to consider an individual, you have several basic options on how to approach the analysis. Remember to think and, not or; there's no value in deciding which approach is the right one until you have a specific objective.

  • Profiling
    Compile a detailed personal profile from multiple sources, merging multiple social account profiles with customer data and content analysis of the person's online activity. The resulting information could provide context to customer service agents or sales reps as they interact with the person.

  • Scoring
    Apply a model to rate someone's influence, authority, or relevance, which might help you prioritize efforts in blogger outreach. You might also view someone as a customer, scoring credit, lead strength, customer value, or loyalty.

  • Predicting
    Activity data linked to an individual might be useful for predicting future behavior. How good is your crystal ball?
Working with data about individuals always runs the risk of turning creepy. I'll get into the balance between privacy and the value of data another time, but be sensitive to the risks as you decide how to use information about individuals.

Analyzing data about groups
Zoom out from the individual view to think about what the data can tell us about groups of people. First, we might identify different types of groups, and then we can develop profiles that communicate why we're interested in particular groups.

  • Identifying
    Groups come in various forms, both formal and informal. The easiest to profile are organizations with formal membership (which includes employers). More casual groups might form through social network sites, discussion forums, or meetup groups. Finally, we have the extended networks of indirect connections, some of which are conveniently entered into online social networks.

    We might also find value in virtual communities implied by some characteristic, from interest in a common topic to locations, both real and virtual. How information travels in such a community could be useful to understand.

    I've had some interesting conversations on the subject of social network analysis, and how its use in social media isn't necessarily in sync with the science on social networks (in the original, not online, sense). If you understand that you're mapping something other than social relationships, though, I think there's underdeveloped value in applying network analysis to more data points.

  • Profiling
    Profiling a group is less likely to turn creepy than individual profiling, but there's still a right way to do it. First, describe how the group was identified; for some uses, that may be all the information you need—if you're developing a targeted marketing promotion, for example. Going deeper, think about what the group is interested in and where they go (online and in the real world). Who are their leaders—and what is leadership within the group? What's important to them, and what's their history?

    Before you interact with a group, make an effort to understand their norms. The unwritten rules vary by community, and what works in one setting can be precisely wrong in another. As you work to understand and interact with groups, you're dabbling in anthropology, so you might consider its methods.

Our society is producing an astounding amount of data about people, both as individuals and in groups. It's easy to cross the line into overly intrusive use of the data, but it's hard to find a common definition of where that line is. That's a topic I plan to explore in depth in the coming months.

Photo by James Cridland.

Monitoring social media. Measuring social media. Social media analytics. All of these treat social media as data, but social media generate at least three types of data: content, activity, and people. In the last post, I wrote about content data, which is the starting point for listening. This time, let's talk about activity. What are people doing that we can analyze?

What is activity data?
Activity data is just what it sounds like: data about the behaviors of people as they use social media. When we're tweeting, pinning, tagging, posting, commenting, sharing, and liking, the systems we watch are watching us back. It's like web analytics, except that social media support many more activities than most web pages, and the activity takes place on social media sites instead of companies' own web sites.

Analyzing activity data
If you're used to measurement conversations with an unstated assumption that you're talking about content data, you probably talk a lot about sentiment and topics. If you listen to web analytics folks talk about social media for a few minutes, you hear about entirely different metrics: friends, followers, fans, likes, shares, retweets, and more.

Compared to content data, activity data presents a set of harder metrics, meaning there's not much doubt about the actual numbers. They're based on observing the use of features built into the software, rather than an interpretation of someone's writing. There's little ambiguity in clicking on a Like button, for example. It's either been clicked or not. The real question is what that means.

An embarrassment of metrics
The challenge in using activity data is less about the underlying technologies and more about tying them to business objectives. We have a lot of available metrics to choose from, and to complicate things, similar-sounding metrics from different social media sites can't always be compared. Always start with the most important question ("what are you trying to accomplish?"), and be sure you understand what the metrics really represent.

With activity data, the web analytics folks have an advantage, because their existing metrics tend to be closely tied to business performance. They already measure how well their web properties generate interest, leads, and sales. It's not too much of a stretch to extend the marketing funnel to include social media properties, too.

Besides its effectiveness in leading customers directly to the e-commerce store, you might measure social media activity as evidence of customer or community connections (engagement), or think of users as an audience for your messages (reach). Some metrics may have value with minimal interpretation, such as product ratings scores. Any tactic you employ that is designed to lead to an action has the potential to be evaluated with activity data, so—again—what are you trying to do?

Lines that go up and to the right make for successful presentations, if you understand what the line represents and how it relates to the business. Activity data can give you those charts; all you have to do is pick the right metrics. And as you're considering metrics, remember the three types of social media data.

Next: Working with Social Media Data: People and Groups

Screen capture by Darren Krape.

Before you can analyze, you need data. In thinking of what you can do with social media data, I find it helpful to think about three buckets of social media data: content, activity, and people data. Let's talk about content. If you look at social media from one angle, that's what it is: lots of content. What do you do with that?

What is Content Data?
When we talk about listening and how people express their opinions, we're talking about working with content data. From the text of tweets, blog posts, and product reviews to pictures, videos, and audio recordings, content is everything that people are posting and sharing online. When people ask about sentiment, opinion, and complaints, they're asking about content.

Analyzing Content Data
Remember consumer-generated media? That was the mindset in 2006 when I started looking for companies that worked with social media data. People were empowered by these new, "Web 2.0" technologies to share their thoughts and opinions with a global audience. The companies they talked about suddenly needed to pay attention, and the existing paradigm with the closest fit was media analysis. So, much was borrowed.

The media analysis world was about understanding media coverage, when media meant professional writers and paid publications. You could count things: how many articles mentioned you, how many times were you mentioned within articles, and how did that compare with the competition. You could rate mentions as favorable or not, and you could see if your messages were picked up by journalists. There's more to it, but you get the idea.

It turns out that a lot of established media analysis techniques work for consumer-generated media, too. The challenge is that the new media sources generate a lot more content, so you need to sample the data or automate the process to keep up.

The other paradigms that usually enter discussions of content data are opinion research and the customer service queue. You can hardly turn around without running into these, "the world's largest focus group" and the new channel where customers expect a response.

Turning Content Into Usable Data
The promise of all this content is that people are sharing their thoughts with anyone who pays attention. The challenge is in turning the data into something that can be analyzed. That's where we get into coding the data—scoring it for sentiment, identifying the topics and entities (such as people or companies) discussed, rating the opinions and emotions expressed. It's hard work, especially when you consider the need to work with foreign languages.

In the case of text—posts, tweets, and the like—turning raw text into usable data is the job of text analytics. Whether they use statistical approaches that compare new texts to previously scored texts, or they parse the grammar to "read" the content, text analytics systems take text in and give coded, structured data out. From there, the processing gets easier.
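To make the in-and-out shape concrete, here's a deliberately naive lexicon-based sketch—real text analytics systems are far more sophisticated, and the word lists here are invented for illustration:

```python
# Naive lexicon-based coder: raw text in, structured record out.
POSITIVE = {"love", "great", "excellent", "recommend"}
NEGATIVE = {"hate", "broken", "terrible", "refund"}

def code_text(text):
    # Normalize: lowercase and strip trailing punctuation per word.
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    sentiment = ("positive" if pos > neg else
                 "negative" if neg > pos else "neutral")
    return {"text": text, "pos_hits": pos, "neg_hits": neg,
            "sentiment": sentiment}

print(code_text("I love this phone, great battery."))
print(code_text("Screen arrived broken. Terrible service."))
```

The output is the point: once each post is a structured record, counting, trending, and comparing become ordinary analytics.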

All content is not text, but more of it could be. Back in the professional media world, you might be able to get transcripts or closed-caption data to augment video content. Beyond that (and even deeper into the research lab than text analytics), you can find systems that extract speech from audio and video, converting it to text for further analysis. Finally, most content sources include hidden metadata, such as topic tags and author information, that adds context and clues for analysis.

There's a lot to content analysis, which is why it's a growing specialty. I've spent a lot of time blogging about it here over the years, too. But if we step back and look at the big picture, it's only one of three types of social media data.

Next: Working with Social Media Data: Activity

Photo by Michael Sauers.

In preparing for last month's Social Media Analytics Summit, I needed a talk on the emergence of the social media analytics industry—which was tricky, since I don't usually talk about social media analytics. I didn't want to set up an elimination round of buzzword sweepstakes, arguing for this usage or that. Instead, I looked for a unifying theme, which led to a new question and three categories of social media data.

I've used a disappointment setup in my presentations for a while. "What's the best tool?" "It depends." The point is to get people thinking about what they're trying to accomplish, rather than jumping on the bandwagon for a popular tool. One of the questions I've suggested is "how do you measure social media?" There's an assumption hiding in that question, which became a limitation when I tried to update my slides. I needed a better question.

What can you do with social media data?
The key was to focus on the basic building blocks of analytics: data, analytics, and application. We tend to focus on the analytics technologies and the end-user applications, but what about the data? What if we focus on social media as a source of data? Ah, there we go.

What kind of data do social media give us to work with? Looking at the various specialists working the question, I've found three basic categories: content data, activity data, and people data.

I'll go into each of these categories in the next few posts, but first, let's acknowledge that these are not rigid boundaries. Mixing data types and analytics lenses is definitely something to encourage, but if we want the data types to play together, we should understand what they are, first.

Next: Working with Social Media Data: Content

Photo by hugovk.

I think I've figured out the source of the difficulty—and controversy—in some of the measurement discussions around social media. It all starts when we talk about measuring things that can't really be measured, because they can't be observed. If we called it what it is—modeling—we'd see that differences in opinion are unavoidable.

Take influence. As a concept, it's not all that hard to define, and I don't think there's a lot of disagreement on what it means. But have you ever seen a unit of influence?

What did it look like? A lot like persuasion? What does that look like?

How about reputation? Have you seen a good one lately?

How about engagement? That's all about attention, and interest, and emotion, and focus, and—well, nothing that you can actually see, even with the best instruments.

Measurement requires observation
We don't argue about the definitions of all online metrics. Many of the basics—page views, unique visitors, even hits—have precise definitions, so the discussion moved on to their relevance and the reliability of available data. The shared characteristic is that they're based on observable events. A web browser requests a set of files from a server, and the computers exchange information that can be tracked.

In survey research, the survey itself provides an observable moment. You might question the validity of the questions or sample, and interpretation is open to—um—interpretation, but you do math on people's responses.

We have discrete events in social media, too. People connect to each other on social networks, they like, tag, or share things, and they publish their opinions. These are all actions that can be observed, though what they mean can be the start of a heated discussion. The frequently misleading labels can confuse the interpretation of the data, but the starting point is a set of observations.

Enter the model
With influence, reputation, and engagement, we're dealing with the abstract. None is particularly hard to define, but none can be observed directly. When you can't measure directly what you need, you look for something you can measure that relates to it somehow. You need proxy data, and that's where disagreement begins. What's the right proxy?

Models can be simple or complex, but they all have this in common: each represents the modeler's estimate of how measured characteristics relate to the desired property. Models are abstractions—equations that use measurements to derive values for characteristics which can't be observed or measured.

A model might be based on someone's intuition or on extensive research; it may be strong or weak. But here's something else they have in common: the model is not the thing.
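To show what such an abstraction looks like in practice, here's a hypothetical influence model. Every proxy, weight, and scaling choice below is mine, invented for illustration—which is exactly why two vendors' "influence" scores can disagree while using the same label:

```python
import math

def influence_score(followers, retweet_rate, reply_rate,
                    weights=(0.5, 0.3, 0.2)):
    # Three proxy measurements standing in for an unobservable property,
    # each scaled to roughly 0..1 before weighting.
    reach      = min(math.log10(1 + followers) / 7, 1.0)
    engagement = min(retweet_rate * 10, 1.0)
    dialogue   = min(reply_rate * 10, 1.0)
    w1, w2, w3 = weights
    return round(100 * (w1 * reach + w2 * engagement + w3 * dialogue))

print(influence_score(followers=25_000, retweet_rate=0.04, reply_rate=0.02))
```

Change the weights or swap a proxy and the ranking of the same people changes—the model, not the world.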

The map is not the territory.
—Alfred Korzybski

The reason we don't have standard metrics for such desirable commodities as influence, engagement, and reputation is simple. We can standardize measurement, because we define what is being observed. Modeling defies standardization because it seeks to measure that which cannot be observed, and in the process of defining a model, we incorporate elements that do not apply to every situation.

Modeling for a reason
Models reflect the opinion of the modeler and the objectives they support. Because apparently simple concepts might be used for different purposes by different specialists, we end up with diverse models using the same labels. In essence, we talk about the labels, because they represent familiar ideas (influence, et al), but the models represent what we really care about (such as positive word of mouth, leads, and sales).

If you understand that the label is just a convenient shorthand for a model that takes too many words to describe in conversation, it's not a problem. If the model generates useful information, it's doing its job. Just don't assume that any one usage of the label is the correct usage. Modeling requires judgment, interpretation, and prioritization in context, which are incompatible with standardization.

Photo by gilhooly studio.

As the measurement clubs start to work out their competing standardization efforts for measuring social media, the battle to define influence is flaring up in all the usual places. And while I won't attempt to settle the debate over how to measure influence, I want to point out that the topic is more interesting than whether Klout scores mean anything. A growing group of companies is experimenting with different approaches. Influence, apparently, is the new gold rush.

At Defrag this year, I saw several new companies with new variations on analyzing influence and profiling people. One startup founder described an entirely new—and promising—approach that he's about to take into alpha testing. To his credit, he preferred that I not use the influence buzzword to describe his business.

We call it influence, because that's what it's not

Dance like no one's watching. Sing like no one's listening. Tweet like no algorithm is coldly deciding your social worth.
—Chris Sacca (@sacca)

I'm not comfortable with the influence label, because it's not really what anyone measures. Influence—the real thing, not the black-box metric—isn't hard to define, but it's practically impossible to measure. So everyone uses proxy data, and the proxies vary by company.

A few years ago, I heard Barak Libai speak about the use of agent-based modeling to calculate the value of word of mouth, and I suspect that influence is essentially the same question. But I haven't heard of anybody going down that path in the commercial market. It's probably too hard for practical use. Instead, everyone uses some combination of network connections, topic analysis, and audience reaction, which—obviously—equals influence when combined with pixie dust in the correct proportions.
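For readers unfamiliar with the agent-based approach, here's a toy sketch of the idea—my illustration, not Libai's model. Each adopter tells their neighbors, who adopt with some probability, and the "value" of seeding a given person is the expected number of adoptions they trigger. The network, the probability, and the function names are all hypothetical.

```python
import random

def simulate_cascade(neighbors, seed, p=0.3, rng=None):
    """Run one word-of-mouth cascade from `seed`; return total adopters.

    Each new adopter gets one chance to convert each neighbor with
    probability p (a simple independent-cascade process).
    """
    rng = rng or random.Random()
    adopted = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in neighbors.get(node, []):
                if nb not in adopted and rng.random() < p:
                    adopted.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(adopted)

def expected_spread(neighbors, seed, p=0.3, trials=1000, rng_seed=42):
    """Average cascade size over many trials -- the seed's word-of-mouth value."""
    rng = random.Random(rng_seed)
    return sum(simulate_cascade(neighbors, seed, p, rng) for _ in range(trials)) / trials

# A tiny network: A is well connected, D is peripheral.
net = {"A": ["B", "C", "D"], "B": ["A", "C"], "C": ["A", "B"], "D": ["A"]}
print(expected_spread(net, "A"), expected_spread(net, "D"))
```

Even in this trivial network, seeding the well-connected node yields a larger expected spread—and real networks, with real behavior, are vastly messier, which is presumably why the commercial tools settle for proxies instead.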

As I started this post, I reached the chapter on influence in Duncan Watts's recent book, Everything Is Obvious: *Once You Know the Answer, and he fairly demolishes the whole idea of measuring influence. In all but the most trivial, contrived scenario, influence is just too complex. It seems the influence controversy isn't limited to the social media discussion. Even in the sociology lab, they use proxies.

If people want "influence," let's sell it to them
If we dial back the expectation that metrics represent precisely what the label says, we might find some use in the growing crop of "influence" tools. We have a selection of single-purpose tools, of course, but it's also common for these companies to provide hooks to connect into other programs. They provide a filter for finding people who have more followers, or whose words seem to lead to more action online, and so one or more of the influence proxies frequently shows up in social media tools.
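The filtering use case is simple enough to sketch. The profile fields, scores, and threshold below are hypothetical stand-ins for the kind of data these vendors expose through their APIs:

```python
# A sketch of the "filter" use case: keep on-topic accounts whose vendor
# score clears a threshold, best first. All data here is invented.

profiles = [
    {"handle": "@anna",  "score": 72, "topics": ["analytics", "python"]},
    {"handle": "@bob",   "score": 55, "topics": ["cooking"]},
    {"handle": "@carol", "score": 81, "topics": ["analytics"]},
    {"handle": "@dave",  "score": 40, "topics": ["analytics"]},
]

def shortlist(profiles, topic, min_score=50):
    """Keep accounts matching the topic and above the score threshold, best first."""
    hits = [p for p in profiles if topic in p["topics"] and p["score"] >= min_score]
    return sorted(hits, key=lambda p: p["score"], reverse=True)

print([p["handle"] for p in shortlist(profiles, "analytics")])
# -> ['@carol', '@anna']
```

Used this way, the score doesn't have to *be* influence; it only has to be a useful way to prioritize attention.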

Here's what I've seen so far. Where available, I've linked to useful information about APIs, FAQs, and how the scores are generated for each company. As always, once you start looking for more companies, you find that they're different in interesting ways.

  • Appinions
    Find and profile influencers relevant to topics defined by Boolean queries. Uses text analytics to understand statements by, and about, influencers and specific topics. (api, faq)

  • Connect.Me
    A reputation-scoring system based on individuals recommending each other. Tags link recommendations to specific topics. Connect.Me promises not to mine or sell user data, so it's not an option for developers looking for influence scores.

  • Fluencr (new Mar 2013)
    Perks for consumers, endorsements for marketers.

  • Identified
    A career-oriented marketability score based on how well Facebook profiles match what employers search for on social network sites. (how)

  • Klout
    A single-score influence metric based on social network activity. "The standard for influence," at least in the sense that it's the one everyone's arguing about. (api, faq, how)

  • Kred
    PeopleBrowsr's single-metric scoring system based on online influence and outreach. (api, how, intro)

  • Little Bird (new Oct 2012)
    Identify influencers on a topic—both established and emerging—and also followers, compare specific people's connections with the influencer set, and see what the influencers are sharing. (intro)

  • PeekYou
    A search engine for people with a single-score influence metric based on online activity. (api, faq, how)

  • PeerIndex
    Influence analysis with scores broken out by topic and activity, audience, and authority subscores. (api, faq, how)

  • PeerReach (new Mar 2013)
    Influencer scoring within broad topics, audience analysis, upstream influencer topics. (api, how)

  • PROskore
    Business-oriented reputation and experience score based on social network activity, career profiles entered on the site, and on-site engagement. (faq, how)

  • Spot Influence (now SpotRight)
    Contextual influencer identification and analysis based on reach, topicality, and impact. (api, faq, how)

  • Tellagence (new Oct 2012)
    Predict the spread of information in social networks to identify the critical members to reach. But don't call it influence, because they don't. (intro, faq, how)

  • Traackr
    Influencer search and profiling based on reach, resonance, and relevance. Traackr can also monitor and measure online activity by influencers for campaign management.

In addition to the specialists, influencer analysis and profiling are common features in social media analysis platforms. Have you seen my directory of companies in that business?

Lack of a standard never stopped companies from selling their stuff. If we're going to argue about the value of "influence," let's at least consider more of the options.

About Nathan Gilliatt

  • Voracious learner and explorer. Analyst tracking technologies and markets in intelligence, analytics and social media. Advisor to buyers, sellers and investors. Writing my next book.
  • Principal, Social Target
  • Profile
  • Highlights from the archive
