Facebook said today that it has removed “8.7 million pieces of content on Facebook that violated our child nudity or sexual exploitation of children policies” in the past quarter alone, including photos of children that the company said could be “benign” but were removed out of an abundance of caution.
The social media giant said that almost all of the content was flagged by Facebook’s detection systems before any users reported it to the company.
Antigone Davis, Facebook’s global head of safety, said the company has been employing photo-matching technology, artificial intelligence and machine learning to “proactively detect child nudity and previously unknown child exploitative content when it’s uploaded.”
Content is reported to the National Center for Missing and Exploited Children, which has fielded more than 28 million reports of online child sexual abuse from all sources on its CyberTipline since 1998.
“We know that the sexual exploitation of children is a global problem that demands a multi-faceted global solution,” said NCMEC senior vice president and COO Michelle DeLaune. “Facebook has consistently demonstrated their leadership and willingness to be a proactive leader in the fight to keep the internet safer for children.”
Along with flagging, reporting and removing pieces of content, Davis said Facebook is working “to find accounts that engage in potentially inappropriate interactions with children on Facebook so that we can remove them and prevent additional harm.”
Davis said that “to avoid even the potential for abuse, we take action on nonsexual content as well, like seemingly benign photos of children in the bath.”
The social media firm is using “specially trained teams with backgrounds in law enforcement, online safety, analytics, and forensic investigations, which review content and report findings” to NCMEC, and is helping the center develop software to better manage and prioritize the influx of reports.
The WeProtect Global Alliance to battle online sexual exploitation of children, which works with tech companies including Facebook, said in its 2018 threat assessment that “the current scale of offending has been further facilitated through the ubiquity of mobile devices, anonymous access and encryption, which has enabled child sexual abuse material (CSAM) at a previously inconceivable scale.”
“There are hidden services sites with over one million persistent profiles, where victims are re-victimised many hundreds of times a day,” the report noted. “…Increasingly, offending is taking place online and includes coercing or extorting children into producing indecent images of themselves or engaging in sexual activity via webcams, which can be captured and distributed by offenders. The nature and scale of these crimes continues to evolve rapidly in line with technology.”
“Harm may arise where vulnerable children are brought into contact with offenders via social media and other online services where they are exposed to that risk by a caregiver, as well as where the internet is used as the means of sharing CSAM or as a secure virtual meeting place for offenders to share information,” the report added. “In the past three years the level of active offender organisation, facilitated by technology, has created new, safe havens online for offenders to share, discuss and plan coordinated OCSE offences. The scale, complexity and danger of the threat has escalated. This must not be allowed to continue.”