If the Russian-bought election interference ads hadn’t been purchased by fraudulent accounts, “most of them would be allowed to run,” Facebook COO Sheryl Sandberg said this morning. “The responsibility of an open platform is to allow people to express themselves,” she said during the first of an Axios interview series with Facebook execs.
“The thing about free expression is when you allow free expression you allow free expression,” Sandberg said, noting that “we don’t check what people post” and that she doesn’t think people should want Facebook to.
The perspective maintains Facebook’s neutrality across the political spectrum and absolves it from being the truth police.
But it also means that it’s knowingly creating a platform where people can misinform each other. That raises the question of how free speech scales to user-generated content sharing networks that lack the curation and editorial oversight of traditional news distribution systems. Sandberg dodged Axios editor Mike Allen’s question about whether Facebook is a media company, and wasn’t pressed about how it accepts money for ads like other media companies.
During her talk, Sandberg also confirmed that Facebook will support congressional investigators probing election interference when they release the Russian-bought ads to the public. She said she met with Congress yesterday, that Facebook is fully cooperating, and that it will provide Congress with any content investigators want, including non-ads. Targeting information about the ads will be released to the public as well.
As for the accusation that Facebook causes filter bubbles by surrounding us with information shared by our social graph instead of a more impartial news source, Sandberg said Facebook actually broadens our perspective through exposure to our weak ties and acquaintances. She cited studies showing we see a wider view of the news through the lens of Facebook than traditional sources.
You can watch the full talk with Sandberg below:
Sandberg’s comments come alongside newly exposed information about the effectiveness of Facebook’s fight against fake news. In an email obtained by BuzzFeed, Facebook’s manager of news partnerships Jason White wrote to one of the company’s third-party fact checkers:
“Once we receive a false rating from one of our fact checking partners, we are able to reduce future impressions on Facebook by 80 percent . . . we are working to surface these hoaxes sooner. It commonly takes over 3 days, and we know most of the impressions typically happen in that initial time period.”
But while Facebook is willing to demote the News Feed prominence of a news story that’s unequivocally established as false by third parties, it still allows this content on its platform.
This all boils down to the fact that Facebook’s News Feed is sorted by engagement. Normally, low-quality content simply receives too few Likes or comments to be seen by many people. But fake news is so tantalizing in how it stokes our biases and political leanings that it breaks this system. People will click through, Like, and share this content because they agree with or are entertained by it, not because it’s high quality.
This in turn incentivizes publishers of false news hoaxes. Facebook demotes hoaxes when they’re identified, and blocks monetization and ad buys from these publishers. But these mechanics also incentivize the publishing of highly polarized opinion, exaggeration, and sensationalism. And when advertisers pay to boost the reach of fake news, its click-bait quality earns those ads a level of engagement that wins them a lower price in Facebook’s auction system.
That’s how Facebook profits from fake news and polarization, even as it vows to work harder to protect us from it. While Facebook might want to offer an open platform where it’s not the opinion police or even the truth police, it’s simultaneously earning money from some of the most malicious uses of free expression.