Facebook Hires Team of Psychologists to Help Employees Cope with Objectionable Content
Earlier this week in a blog post, Facebook revealed some of the inner workings of the social media platform's content review process. Apparently, the job is so stressful that the company keeps a team of psychologists onsite to help their employees cope.
Facebook admits that content review on this scale has never been attempted before. "After all," Facebook said, "there has never been a platform where so many people communicate in so many different languages across so many different countries and cultures."
"We recognize the enormity of this challenge and the responsibility we have to get it right," they added.
Facebook claims that they employ 7,500 content reviewers, "a mix of full-time employees, contractors and companies we partner with," who cover every time zone and are proficient in over 50 languages.
So what criteria are they using to screen for successful reviewers? According to Facebook:
- Language proficiency is key and it lets us review content around the clock.
- We also look for people who know and understand the culture.
- We also screen for resiliency. This means that we look at a candidate’s ability to deal with violent imagery, for example.
- And of course, we meet all applicable local employment laws and requirements.
To help content reviewers cope with "graphic and objectionable" content (does that include conservative and pro-Trump posts?), Facebook employs a team of clinical psychologists who are tasked with running "resiliency programs" to support those employees.
"This group also works with our vendor partners and their dedicated resiliency teams to help build industry standards," says Facebook. The company doesn't say whether those partners include left-wing fact-checking organizations like PolitiFact and Snopes. (I'm guessing those folks have a high propensity for being triggered by conservative content, so in all likelihood they're in need of psychological help.)
Facebook says that "All content reviewers — whether full-time employees, contractors, or those employed by partner companies — have access to mental health resources, including trained professionals onsite for both individual and group counseling." And, of course, Facebook is quick to point out, all employees "have full health benefits."
The blog post explains how the fact-checking process works. Once a post is reported, it's routed to a content team "based on language or the type of violation." Facebook says they have teams that are trained in specific types of violations. If needed, those teams can "escalate it to subject matter experts on the Community Operations Escalations or the content policy teams." (So a nudity team, a violence team, and a Trump team?)
Sometimes all that's needed is a quick look at a post to determine whether it should be removed. Nudity is fairly easy to deal with: all that's needed is for a reviewer to peek at the post and delete it. Other cases are more complicated, says Facebook. "A word that’s historically been used as a racial slur might be shared as hate speech by one person but can be a form of self-empowerment if used by another," the blog post explains. "Context helps reviewers apply our standards and decide whether something should be left up or taken down." (Which I think means they look at the skin color of the Facebook user to determine whether the individual is allowed to use a racial slur.)
Facebook assures users that they're doing everything possible to eliminate bias — to take it "out of the equation entirely." They claim they audit a sample of reviewer decisions every week and say they even audit the auditors.
The reviewers come from "many backgrounds, reflect the diversity of our community, and bring a wide array of professional experiences, from veterans to former public sector workers."
Of course, what they don't say is that their employees live and work inside a liberal Silicon Valley bubble. Facebook likes to crow about "diversity," but what they really mean is that they want diversity that remains safely within their left-wing bubble: women, people of color, and LGBTQ people. Don't believe me? Check out their 2018 "Diversity Report," where they brag about the number of employees with vaginas, brown faces, and LGBQA+ or Trans+ proclivities (exhausting keeping up with the acronyms, isn't it?).
This is all well and good — it's always nice when the workforce reflects the demographics of the country — but what Facebook really needs is an ideologically diverse team of content reviewers. It's great that they've given us a window into their process, but what people really want to know is whether Facebook is doing anything to make sure they have reviewers from a variety of ideological viewpoints judging whether a post should be removed or permitted.
Facebook admits that technology can’t catch everything, "including things where context is key like hate speech and bullying," so they have to rely on content reviewers to make those calls. But Facebook refuses to define hate speech and bullying. They like to virtue signal about hate speech, but they never really get around to explaining exactly what that is. The burning question in everyone's mind is, where does Facebook draw the line? Is disagreeing with the LGBQA+ or Trans+ (exhausting) agenda a deletable offense? Is saying mean things about Hillary Clinton considered bullying? Is praising the Second Amendment considered hate speech?
Until Facebook answers those questions, users will continue to be in the dark about their content review policies and will continue to question the objectivity of the platform.
Follow me on Twitter @pbolyard