Mark Zuckerberg's first unwinnable fight

Image Credit: Brad Jonas for Pando

By Kevin Kelleher, written on November 22, 2016

From The Disruption Desk


It turns out everyone is entitled to their own facts.

Or at least they are on Facebook, which is suffering one of those rashes of bad news that afflicts it every year or so. This time the flare-up, involving fake news, is a pretty nasty one and may take some time to go away for good, if it ever goes away at all.

The controversy has raged since the presidential election, after which some accused Facebook of swinging the vote in favor of Donald Trump. Blaming Facebook as the biggest reason for the surprise outcome is overstating things, as there were many other factors at play. This allowed Mark Zuckerberg to dismiss the accusations early on. “That fake news on Facebook … influenced the election in any way, I think is a pretty crazy idea,” he said at a tech conference.

This dismissive approach worked with previous privacy controversies. Early on, Zuckerberg bet that, in time, people would be fine sharing their personal data with advertisers, and sure enough most of Facebook's users today don't care. But it soon became clear that, even if Facebook didn't swing the election, its handling of fake news did influence it; fake news was clearly a much bigger presence in its news feeds than Zuckerberg had argued. Even loyal employees felt it was a problem worth tackling.

Since then, Zuckerberg has taken ever so slightly more forceful stances on the issue, and yet one can't help thinking this is a CEO in denial about the scope and severity of what he's up against. On Nov. 12, he wrote that “We have already launched work enabling our community to flag hoaxes” right before acknowledging that many news stories “express an opinion that many will disagree with and flag as incorrect even when factual.” In other words, the flags will quickly become as confusing as news on Facebook is right now.

Last Friday evening, Zuckerberg posted another update, which now refers to the problem as “misinformation.” In my opinion, this is a better word, because fake news suggests 100% false stories, like the pope endorsing Trump. Misinformation implies an intent to deceive or mislead the public, which could involve stories based on a nugget of truth and then spun so wildly, using incendiary and meme-friendly language, that they have the same misleading effect as 100% fake stories.

But if misinformation is defined by its intent, how is Facebook going to weed it out? If the completely false stories are tricky, the equally pernicious ones that function as propaganda are exponentially more so. In his update, Zuckerberg outlined several relatively stronger measures, which, as Paul Carr pointed out, may be aimed more at appeasing German regulators than American voters.

Few of these ideas left me encouraged that they will be strong enough for the job. Take the “technical systems to detect what people will flag as false before they do it themselves.” These may involve neural networks that learn by processing countless flagged stories. If so, a fake news purveyor could evolve past them by, for example, deploying neural networks of its own to churn out misleading story after misleading story in an endless round of A/B testing, sifting the ones that sneak past Facebook's filters from the ones that don't and thereby learning what kind of misinformation still works.
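To make that cat-and-mouse dynamic concrete, here is a minimal sketch of the kind of detector described above: a text classifier trained on stories users have already flagged. Everything in it is hypothetical (the headlines, the labels, and the choice of a simple TF-IDF model standing in for whatever neural networks Facebook might actually use), but it shows why such a filter is learnable from data, and therefore probe-able and gameable from the outside.

```python
# Hypothetical sketch of a "flag predictor": a classifier trained on
# stories users have already flagged. Facebook's real systems are not
# public; this is only an illustration of the general technique.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training set: headlines plus a label derived from user flags
# (1 = flagged as false, 0 = not flagged).
headlines = [
    "Pope Francis endorses Donald Trump for president",
    "Senate passes budget resolution after late-night vote",
    "FBI agent in email leak found dead in murder-suicide",
    "Fed leaves interest rates unchanged in October meeting",
]
flagged = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a crude stand-in for the
# neural networks the article speculates about.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, flagged)

# Predict how likely a new story is to be flagged before anyone flags it.
story = ["Thousands of fraudulent ballots found in warehouse"]
prob_flagged = model.predict_proba(story)[0][1]
print(f"Probability of being flagged: {prob_flagged:.2f}")
```

The weakness is symmetric: anyone who can repeatedly submit stories and observe which ones get filtered is, in effect, collecting training data for the opposite objective.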

Similarly, third-party verification through fact checking sites could be undermined by fake fact-checking sites that, if blocked, could leave Facebook facing accusations of political bias. It was similar charges of political bias that prompted Facebook to replace the human editors curating its Trending Stories with algorithms, a move that may have made it easier for fake news stories to influence voters through Facebook's news feed.

And this gets at a simple if strange choice that Facebook needs to make: whether its users are entitled to their own facts. The news media isn't divided between left wing and right wing anymore; it's divided into those who take facts seriously and those who consider them an impediment to their ends. Insist on veracity and Facebook risks charges of bias and censorship. Include all views and it continues to let misinformation take over its news feeds.

With each new update on the issue, Zuckerberg is backing into a stance he has steadfastly avoided. “Identifying the 'truth' is complicated,” Zuckerberg said. “I believe we must be extremely cautious about becoming arbiters of truth ourselves.” But by wading into the growing debates over what is true and what is false, what is information and what is misinformation, what is banned and what is allowed, Facebook is doing exactly that.

Facebook is a social network used by 1.66 billion people. For many of them, it's a primary source of news. Facebook may not be a media company, in the classic sense, but it serves the core function of an old-school media company: as the gatekeeper of everyday information people need to know. Its algorithms are designed to sift through an endless flow of updates and posts to show them what they'd likely be interested in seeing. The end result is an experience addictive to many because it's so meme-friendly.
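As a rough illustration of what “showing them what they'd likely be interested in seeing” means in practice, consider a toy ranking function. The signals and weights below are invented for this sketch; Facebook's real feed-ranking model is proprietary and vastly more complex. The point is only that a feed optimized for predicted engagement has no term for accuracy.

```python
# Toy feed-ranking sketch. All signals and weights are hypothetical,
# invented for illustration; they are not Facebook's actual model.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    friend_affinity: float   # closeness of viewer to poster, 0..1 (assumed signal)
    engagement_rate: float   # shares and likes per view so far, 0..1 (assumed signal)
    recency: float           # 1.0 = just posted, decaying toward 0

def predicted_interest(post: Post) -> float:
    # Invented linear blend of signals. Note that nothing here measures
    # whether the story is true: a heavily shared hoax outranks a
    # lightly shared correction.
    return 0.4 * post.friend_affinity + 0.4 * post.engagement_rate + 0.2 * post.recency

def rank_feed(posts):
    # Surface the posts the model predicts the viewer most wants to see.
    return sorted(posts, key=predicted_interest, reverse=True)
```

Under a scoring rule like this, “meme-friendly” is not a side effect of the design; it is the design.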

The thing about memes, though, is they work because they're viral. And the one thing viruses are good at is mutating. Any type of viral misinformation that's banned by Facebook will mutate, evolving into something just as misleading but allowed by Facebook's filters. In a way, the history of the Internet is an epic game of whack-a-mole. Spam, porn, illicit e-commerce, fraud and misinformation have all long been a part of it, because they are the best at adapting to efforts to block them. If Zuckerberg is serious about declaring war on misinformation, he's entering a fight that may be unwinnable.

This leaves Facebook with three choices, none of them good: shut the thing down (not going to happen), endlessly manage a problem that will always mutate its way around any fix, or give in and let Zuckerberg's creation keep making the world a worse place by being complicit in future propaganda campaigns.

Zuckerberg talks a lot about making the world a better place. His philanthropic work is unquestionably directed toward that goal, but increasingly the company founded to “make the world more open and connected” isn't doing that so much. Personal networks have become silos that confirm biases and foment mistrust of the other. The social networks that helped organize protests that sparked the Middle East revolutions in 2011 now struggle to keep ISIS from organizing on their sites. The agent for good is a double agent for bad.

Technology is by nature morally agnostic. It's in the application of technology where morality comes into play. Whether a hammer or a social network, technology has always been capable of creating something good or knocking it down. The difference is it's a lot easier to wield a hammer than control a social network, especially one with 1.66 billion members. That's the most worrisome thing about Zuckerberg's response to fake news stories and misinformation in general. He doesn't seem to realize how hard the task before him is, a task that may well be impossible.
