When the government marries its limitless appetite for social control to the seemingly limitless computing power of artificial intelligence, the result is destined to be Orwell's "thoughtcrime," a concept once confined to speculative fiction, becoming reality. Life imitates art, it seems.
Customs and Border Protection (CBP), under the umbrella of the Department of Homeland Security (DHS), has reportedly been partnering with an AI firm called Fivecast to deploy social media surveillance software that, according to the company's own marketing, detects "problematic" emotions in social media users and flags them to law enforcement for further action.
The outlet 404 Media, through FOIA requests, uncovered various Fivecast marketing documents elaborating on the software's utility for law enforcement.
Via 404 Media:
Customs and Border Protection (CBP), part of the Department of Homeland Security, has bought millions of dollars worth of software from a company that uses artificial intelligence to detect “sentiment and emotion” in online posts, according to a cache of documents…
CBP told 404 Media it is using technology to analyze open source information related to inbound and outbound travelers who the agency believes may threaten public safety, national security, or lawful trade and travel. In this case, the specific company called Fivecast also offers “AI-enabled” object recognition in images and video, and detection of “risk terms and phrases” across multiple languages, according to one of the documents.
Fivecast, according to its mission statement, is “used and trusted by leading defense, national security, law enforcement, corporate security and financial intelligence organizations around the world” and “deploys unique data collection and AI-enabled analytics to solve the most complex intelligence challenges.” It claims to work with the intelligence agencies of all five nations that comprise the so-called “Five Eyes” — the United Kingdom, United States, New Zealand, Australia, and Canada.
Among the many red flags that Fivecast claims to be able to detect with its software are the emotions of the social media user. Charts contained in the marketing materials uncovered by 404 show metrics regarding various emotions such as “sadness,” “fear,” “anger,” and “disgust” on social media over time. “One chart shows peaks of anger and disgust throughout an early 2020 timeframe of a target, for example,” 404 reports.
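For illustration only: the documents do not disclose how Fivecast's models actually work, but a crude keyword-counting sketch shows how per-emotion scores could in principle be aggregated over time into charts like the ones 404 describes. The lexicon, function names, and sample posts below are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical emotion lexicon; real systems use trained classifiers,
# not word lists, but the aggregation idea is the same.
EMOTION_LEXICON = {
    "anger":   {"furious", "outraged", "hate"},
    "disgust": {"gross", "revolting", "sickening"},
    "fear":    {"afraid", "terrified", "scared"},
    "sadness": {"sad", "grieving", "hopeless"},
}

def score_posts(posts):
    """Count emotion-keyword hits per date across (date, text) posts,
    returning {date: {emotion: hit_count}} suitable for plotting
    an 'emotion over time' chart."""
    timeline = defaultdict(lambda: defaultdict(int))
    for date, text in posts:
        words = set(text.lower().split())
        for emotion, keywords in EMOTION_LEXICON.items():
            timeline[date][emotion] += len(words & keywords)
    return timeline

# Hypothetical sample data spanning an early-2020 window.
posts = [
    ("2020-01-05", "i am furious about this"),
    ("2020-01-05", "so outraged right now"),
    ("2020-02-10", "feeling hopeless and scared"),
]
timeline = score_posts(posts)
```

Plotting each emotion's daily count would yield exactly the kind of "peaks of anger and disgust" timeline the marketing charts depict, which is why such outputs are easy to present persuasively regardless of how reliable the underlying classification is.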
Technical difficulties of AI assessing human emotion aside, this would theoretically open the door for the government to surveil and censor not just the substance of speech, but also the alleged emotion behind that speech (which could potentially, at some point, be admissible in court to impute intent or motive to defendants). It's almost impossible to overestimate the dystopian applications of this technology, which, for obvious reasons, governments around the world are eager to adopt.