The Social Data Industry's PRISM Problem

Trust is an issue for an industry based on extracting meaning from what people share in social media. People don't have to use these services, and if they decide that their information might be used against them, they can stop. This week's revelations about US intelligence agencies monitoring social networks (among other sources) create a massive trust issue for everyone who works with social media data. What now?

(This won't be an analysis of the NSA and Prism. I'm working from the same sources you are, and we'll probably have new information by the time I finish writing, anyway.)

The world reacts to US actions
David Meyer points out a threat to cloud computing vendors as customers and governments react to the news. US-based vendors can expect special challenges selling in Europe, where privacy is more strongly protected and signs of a blowback from Prism are already appearing. In cloud computing, the trust issue relates to custody of the data—do you trust your vendor to keep your data safe and secure?—and the government version translates into a question of US-based vendors' ability to keep commitments to foreign governments.

But cloud computing is essentially just data center outsourcing. What does it mean for an industry that exists because of people's willingness to share publicly?

Access to the data is everything
The challenge to the social data industry is different. It's indirect, but potentially existential. What happens to your business as a result of the reaction to Prism? Will social networks tighten their terms of use to block data mining? Will EU safe harbor agreements create new requirements to protect user data (possibly by keeping it outside the US)? Will new legislation designed to limit government abuses include new limits on private-sector users?

Secret collection of private data by government agencies is fundamentally different from social media monitoring outside government. In business, we're working with publicly available data, which anyone can access without breaking the law or hacking a system. It's not espionage. But the facts aren't the problem.

The problem, as ever, is perception. The NSA is all over the news, and in the heated environment of a breaking story, subtle distinctions can get lost. The risk to the social data industry is that a reaction to government surveillance could become a problem for anyone doing the less intrusive type of monitoring.

How will you respond? What's your plan for minimizing the overreaction if it starts to get out of hand?

Responding as an industry
At its Big Boulder conference this week, Gnip announced the Big Boulder Initiative, an effort to start an industrywide discussion of the issues the industry faces. Trust is one of the five issues highlighted as starting points for discussion, and the other news this week highlights the wisdom of that choice.

I'd go further and ask a question I've asked before: should the companies that work with social data form an association to coordinate these discussions, codify standards, and speak for the group?

The ethics of social data
Let's go back to trust and consider the ethics of working with social data. Bob Gourley at CTOVision recently gave me a copy of Ethics of Big Data, a short e-book that lays out a process for establishing ethical limits to the use of big data. It's a worthy challenge, but I think the first step in the process—exploring an organization's values—will lose everyone. The Friedmanesque view that a business exists only to make a profit is common, which leaves only the law as a restraint on what can be done. "Be profitable" isn't the sort of value that will drive a hearty discussion of ethics.

I do think it's possible to have ethics of listening, but I don't see an existing standard that really applies. I don't see, for example, how ethical standards for social scientists, with their strict limits on personally identifiable information (PII), apply to social media monitoring in customer service. The standard for competitive intelligence boils down to "don't break the law," which appears to be the relevant limit on secret government programs, too.

Here's a starting point for discussion
I suggested a set of ethical standards for listening vendors in 2010 as a starting point, but the discussion went nowhere. Maybe it's time to try again. Comments are closed on the old post, but I'd welcome any discussion of the draft here.

The usual defense of social media monitoring in the private sector is that we're working with publicly available data, but monitoring public data can still be creepy. What's the plan for protecting public-source data mining from an overreaction to something far more invasive?

Photo by Marsmettnn Tallahassee.

