Britain Launches Artificial Intelligence Project to Prevent ISIS Videos From Being Uploaded

Britain’s Home Office is launching a program to detect Islamic State (ISIS) videos before they are even uploaded to the Internet. Developers funded by the Home Office have announced they will share the software with any website or app in the world to put a stranglehold on the terror group’s ability to radicalize fighters and inspire attacks through online videos.


As ISIS loses the last vestiges of its territory in the Middle East, its Internet campaigns pose the greatest threat to countries across the world. So-called “lone wolf” terrorists (often truly “known wolves” already on the radar of crime-fighting agencies) inspired by the Islamic State often carry out attacks with everyday weapons like trucks.

ISIS’s Internet output decreased in October, November, and December – as battles drove militants from key strongholds in the Middle East – but has since recovered to former levels.

“Lone-wolf attacks are hard to spot with conventional surveillance — it is a difficult problem if someone is radicalised in their bedroom,” Marc Warner, chief executive of ASI Data Science, told the UK publication The Independent. “The way to fight that is to cut the propaganda off at the source.”

Warner argued that “we need to prevent all these horrible videos ever getting to the sort of people who can be influenced by them.” His organization aims to help in the process of “removing extremist content from the web.”

Online or “remote” radicalization has sparked a record number of terror arrests in Britain, a phenomenon which has reportedly made threats “acutely difficult to spot.” It wasn’t just the Ariana Grande concert in Manchester last May — there have been no fewer than 17 major ISIS-related terror attacks in the West in the past year, with high body counts in London (twice) and New York City.


ISIS propaganda videos have also been linked to the radicalization of more than 800 men, women, and children who left Britain to fight for the Islamic State in the Middle East. The artificial intelligence program would help combat this.

ASI has analyzed more than 1,300 videos released by ISIS, going back to 2014. This analysis yielded “subtle signals” which can help identify new ISIS videos before they are published. The company has used artificial intelligence to build a program that spots these videos automatically, without requiring human reviewers to do all the work.

The “advanced machine learning” system uses artificial intelligence to identify such propaganda. The project promises to be more precise than YouTube’s broad-brush efforts to keep ISIS videos off its platform. The tool can be integrated into the upload process of any video platform, empowering smaller video companies to remove harmful content the way richer companies like Facebook and YouTube do.
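ASI has not published how its classifier works, so the following is a purely illustrative sketch of what an upload-time screen of this general kind might look like. Every name, feature, weight, and threshold below is hypothetical; a real system would use a trained model and a cutoff tuned on labelled data.

```python
import math

def propaganda_score(features, weights, bias=0.0):
    """Illustrative logistic score in [0, 1] computed from numeric
    video features (e.g. visual or audio signals extracted before
    upload). The features and weights here are made up -- ASI has
    not disclosed its model."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def screen_upload(features, weights, threshold=0.9):
    """Hypothetical upload hook: return (allow, score).

    A video whose score exceeds the threshold is blocked before it
    is published; the threshold stands in for a cutoff a real
    platform would tune against its own false-positive budget."""
    score = propaganda_score(features, weights)
    return score < threshold, score
```

A platform integrating such a check would call `screen_upload` during its upload pipeline and reject (or queue for review) any video the model flags, which is the “integrated with the upload process” idea described above.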

ASI reported that ISIS supporters used 400 different platforms to spread Islamist material in 2017, with 145 used for the first time between July and December as tech giants wised up to the terror group’s methods.

“Google and Facebook can’t solve this problem alone … it’s a far wider problem,” Warner told The Independent. “We are trying to eliminate this horrific content across the entire web and not just on specific platforms.” Artificial intelligence helps enable such an ambitious project.


The software will not prevent ISIS from uploading videos to its own websites or to the encrypted messaging app Telegram, but it could make ISIS propaganda “harder to find.”

Britain’s Home Office started paying ASI to create the artificial intelligence tool in September. Since then, it has paid £600,000 to the company, which has previously used data to predict bacon sandwich sales on easyJet flights and to make buses run on time.

Researchers were hopeful the program would help combat ISIS, but warned that the impact would necessarily be limited. “It’s a step in the right direction but it won’t solve the problem,” Charlie Winter, a senior research fellow at the International Centre for the Study of Radicalisation and Political Violence (ICSR) at King’s College London, told The Independent.

“Censorship is limited to contracting the reach of this material and it will never be able to eradicate it from the internet,” Winter admitted. The new technology will not impact “the card-carrying members of ISIS,” who use Telegram, but it may “limit the opportunity of people who are curious and vulnerable to expose themselves to this kind of material.”

Home Secretary Amber Rudd announced the new technology Tuesday during meetings with tech companies in Silicon Valley, including Vimeo, Telegra.ph, and pCloud. She explained that all five ISIS attacks on British soil last year “had an online component,” many related to Internet videos.


“The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exploited to these horrific images,” Rudd said. “This Government has been taking the lead worldwide in making sure that vile terrorist content is stamped out.”

The artificial intelligence program will only target videos, and will not extend to ISIS magazines, newspapers, photo sets, or text-based propaganda. Nor will it cover material from other terrorist groups, such as al-Qaeda or Al-Shabaab. Even so, it marks a clear step forward in the war on terror.
