Facebook announced today that the company began fact-checking political photos and videos on Wednesday in an attempt to root out fake news. In a blog post, the company said the changes are part of its “ongoing election efforts.”
“By now, everyone knows the story: during the 2016 US election, foreign actors tried to undermine the integrity of the electoral process,” Guy Rosen, vice president of product management at Facebook, wrote. “Their attack included taking advantage of open online platforms — such as Facebook — to divide Americans, and to spread fear, uncertainty and doubt.” Rosen said although the clock cannot be turned back, “we are all responsible for making sure the same kind of attack [on] our democracy does not happen again.” He said Facebook is taking its role in the effort “very, very seriously.”
He outlined several steps Facebook is taking to combat the problem:
- First, combating foreign interference,
- Second, removing fake accounts,
- Third, increasing ads transparency, and
- Fourth, reducing the spread of false news.
Alex Stamos, the company’s chief security officer, added to Rosen’s remarks by attempting to define fake news. He listed as the most common issues:
- Fake identities – when an actor conceals their identity or takes on the identity of another group or individual;
- Fake audiences – using tricks to artificially expand the audience or the perception of support for a particular message;
- False facts – the assertion of false information; and
- False narratives – intentionally divisive headlines and language that exploit disagreements and sow conflict. This is the most difficult area for us, as different news outlets and consumers can have completely different opinions on what an appropriate narrative is even if they agree on the facts.
Stamos singled out “organized, professional groups” whose motivation is money. “These cover the spectrum from private but ideologically motivated groups to full-time employees of state intelligence services,” he said. “Their targets might be foreign or domestic, and while much of the public discussion has been about countries trying to influence the debate abroad, we also must be on guard for domestic manipulation using some of the same techniques.”
Facebook product manager Samidh Chakrabarti said the company has gotten better at finding and deleting fake accounts. “We’re now at the point that we block millions of fake accounts each day at the point of creation before they can do any harm,” he said. “We’ve been able to do this thanks to advances in machine learning, which have allowed us to find suspicious behaviors — without assessing the content itself.”
Another product manager, Tessa Lyons, revealed more details on how Facebook’s partnership with third-party fact-checkers will work:
- We use signals, including feedback from people on Facebook, to predict potentially false stories for fact-checkers to review.
- When fact-checkers rate a story as false, we significantly reduce its distribution in News Feed — dropping future views on average by more than 80%.
- We notify people who’ve shared the story in the past and warn people who try to share it going forward.
- For those who still come across the story in their News Feed, we show more information from fact-checkers in a Related Articles unit.
- We use the information from fact-checkers to train our machine learning model, so that we can catch more potentially false news stories and do so faster.
The company had previously announced partnerships with several left-leaning fact-checking organizations, including Snopes and PolitiFact.
Tucked in near the end of the lengthy blog post, almost as an afterthought, Lyons revealed that on Wednesday, Facebook began “fact-checking photos and videos, in addition to links.” She said, “We’re ramping up our fact-checking efforts to fight false news around elections. We’re scaling in the US and internationally, expanding beyond links to photos and videos, and increasing transparency.”
The company has been on the defensive in recent days after revelations that political data firm Cambridge Analytica sold information obtained via Facebook quizzes to third-party vendors, violating the privacy of millions of Facebook users. Facebook claims the practice violated its terms of service, but it didn’t bother to verify that Cambridge Analytica had deleted the collected information, as those terms stipulated. Lawmakers on both sides of the Atlantic are calling for more regulation of Facebook and other social media platforms in the wake of the scandal.
Follow me on Twitter @pbolyard