Facebook is in crisis mode. The tools it created to help advertisers promote their products have been hijacked by conspiracy theorists, bots, and pretty much any organization trying to advance a position. There’s nothing wrong with advancing a viewpoint using testimonials, news stories, and reviews. The problem is when it’s done using made-up stories, lies, and fake accounts. Facebook has been so focused on growing its advertising revenue that it failed badly at preventing these tools from being hijacked for nefarious purposes. Much the same could be said of YouTube and Google.
These companies keep responding with the same answer: “We’ll try to do better next time.” But next time brings even more outrage because they’re incapable of reining in what they’ve unleashed.
Last month, Facebook promoted conspiracy theories shared by users after the recent Amtrak crash, and the company said it was “going to work to fix the product.” After a Texas mass shooting last year, Google surfaced a conspiracy theory in its search results and said it would “continue to look at ways to improve.”
This past week YouTube and Facebook did it again. They each promoted conspiracy theories about David Hogg, one of the student survivors from the Florida high school shooting.
YouTube’s top trending story claimed that the student, one of the voices appealing for a ban on assault rifles, was an actor and not a student. YouTube responded that its system had “misclassified” the video because it featured footage from a reputable news broadcast, which is another way of saying the company is clueless when it comes to screening the videos on its site. It said, “We are working to improve our systems moving forward.”
The hoax was also a trending topic on Facebook, with links to stories that claimed he was not even a student, but a paid actor. A Facebook spokesman called the posts “abhorrent” and said Facebook was removing the content.
It doesn’t matter what side of the political spectrum you’re on: these sites elevate the most outrageous stories by design, left or right. In a world where conspiracy theories need visibility to take hold, these sites are great magnifiers, giving those theories more visibility than they could ever obtain through conventional media. Spreading misinformation and hoaxes has become a defining feature of these platforms because they’re magnets for the crazies of the world.
It’s not clear that these social networks can solve this issue. Take YouTube: 300 hours of video are uploaded every minute, around the clock, and five billion videos are watched each day by some 30 million visitors.
Facebook’s answer is to add 20,000 people to police postings by the end of this year. But with over two billion users, that’s one person checking posts for every 100,000 users.
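The back-of-the-envelope math is easy to check. The sketch below is illustrative arithmetic only, using the publicly stated figures of 20,000 reviewers and two billion users:

```python
# Illustrative arithmetic only; figures are the publicly stated numbers.
moderators = 20_000          # content reviewers promised by year's end
users = 2_000_000_000        # roughly two billion users

users_per_moderator = users // moderators
print(users_per_moderator)   # prints 100000 — one reviewer per 100,000 users
```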
If there is a solution, it may be artificial intelligence (AI) that can somehow automate the detection of hoaxes and conspiracy theories by examining such things as word patterns, hits, and sources. If Facebook can’t solve this, it may find users and advertisers abandoning it in droves. In fact, Unilever has threatened to pull its advertising from Facebook and Google if they don’t clean up their swamps.
Early Facebook investor and now vocal critic Roger McNamee suggests that Facebook make users pay:
Facebook’s advertising business model is hugely profitable, but the incentives are perverse. Using a variety of psychological techniques derived from propaganda and the design of gambling systems, Facebook grabs and holds user attention better than any advertising platform before it. Intensive surveillance enables customization for each of its 2.1 billion users. Algorithms maximize engagement by appealing to emotions such as fear and anger. Facebook groups intensify preexisting beliefs, increasing polarization.
Facebook could adopt a business model that does not depend on surveillance, addiction and manipulation: subscriptions. Facebook is uniquely positioned to craft the online equivalent of cable television, combining basic services with a nearly unlimited number of premium offerings that could include news, television and even movies. Facebook could be the Comcast of Cord Cutters.
AI is one area where Facebook has been greatly expanding its resources. Despite its recent failures, the company announced that it is using AI to help prevent suicide, examining users’ posts for language that expresses suicidal thoughts.
The AI software, supported by trained staff, looks for word patterns, frequencies, and other signals that might indicate a problem. As the highly technical article explains, the challenge is to identify those at risk while minimizing false positives, and then to do this across every language Facebook supports.
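To make the idea concrete, here is a deliberately simplistic sketch of phrase-based flagging. This is not Facebook’s actual system; the phrase list and matching rule are invented for illustration, and a real system would use far more sophisticated language models:

```python
# Toy sketch of word-pattern flagging. NOT Facebook's actual system:
# the phrase list and the matching rule are invented for illustration.
RISK_PHRASES = {"want to die", "end it all", "no reason to live"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any risk phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

print(flag_post("Some days I just want to die"))  # True
print(flag_post("Great game last night!"))        # False
```

Even this toy version shows the false-positive problem the article describes: naive matching would also flag jokes, song lyrics, or quoted text, which is why trained human reviewers sit behind the automated screen.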
While some might view this as creepy and others as useful, Facebook frankly has a much bigger problem to solve: eliminating the hoaxes and conspiracy theories. They’ve become so outrageous that they might drive one to drink, to madness, if not suicide. Another good reason to abandon Facebook.