Usually, as I start my mornings, I have something on in the background. Often it is Glenn Beck. I'm not a huge Beck fan, but after almost 30 years of working in radio, silence makes me nervous; it reminds me of dead air.
Beck has been on an AI jag off and on for a while now. While AI helps DOGE uncover government waste and generates interesting pictures, there is, of course, a downside, and a big one. I'm not talking about anything on the level of Skynet, Colossus: The Forbin Project, or Demon Seed (younger readers will have to look those up), although we could end up in one of those scenarios someday. The danger is that AI could become so ubiquitous that it affects us in ways we never even notice.
It is no secret that the almighty algorithms have been at work directing our purchasing choices, viewing habits, and political views, but who is responsible when an algorithm is a factor in suicide? In 2022, Chase Nasca took his own life by walking in front of a train near his home in Bayport, N.Y. Chase's parents say that TikTok's algorithm targeted the 16-year-old's account with thousands of images involving suicide.
The New York Post reports that Chase's parents filed suit against the platform in 2023. In December, the company asked to have the suit thrown out, citing the First Amendment and arguing that because it does not provide a "tangible product," it is not subject to product liability laws. The Nascas moved to block the motion this month.
The Nascas say that Chase's account was flooded with content encouraging viewers to kill themselves by stepping in front of a train. Chase lived within a quarter-mile of the Long Island Rail Road tracks, and his parents allege that TikTok tailored the videos it sent him around the fact that the family lived so close to the railroad. They say Chase had started out using the platform to watch "uplifting and motivational videos."
The filing states that "TikTok used Chase's geolocating data to send him … railroad-themed suicide videos both before and after his death," and that the platform used "engagement through a progression of extreme videos which exploited his underdeveloped neurology and emotional insecurity." The motion argues that Chase's death was no coincidence, attributing the tragedy to "intentional design decisions." It also notes that TikTok has admitted in the past to using location data to target users with content, and that the platform had a duty to protect the teen from "foreseeable injuries."
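To make the mechanism the filing describes a bit more concrete: a feed that ranks videos purely on predicted engagement, with location used as a relevance signal, needs no malice to drift toward darker and darker content. The sketch below is purely illustrative; every class, field, and weight in it is invented for this example, and TikTok's actual recommender is proprietary and far more complex.

```python
# Illustrative sketch only: an engagement-maximizing feed ranker.
# Every name, field, and number here is invented for this example;
# it does not reflect TikTok's actual (proprietary) code.
from dataclasses import dataclass, field

@dataclass
class Video:
    topic: str        # e.g. "motivation" or "railroad"
    intensity: float  # 0.0 to 1.0, how extreme the content is

@dataclass
class User:
    # topic -> seconds watched; the system's only model of "interest"
    watch_time: dict[str, float] = field(default_factory=dict)
    # topics inferred from geolocation, e.g. {"railroad"} for someone
    # who lives near train tracks
    nearby_topics: set[str] = field(default_factory=set)

def score(video: Video, user: User) -> float:
    """Predicted engagement. Note what is absent: any notion of harm."""
    affinity = user.watch_time.get(video.topic, 0.0)
    # A location match boosts relevance. The same signal that makes a
    # restaurant ad useful makes harmful content hyper-targeted.
    locality = 1.5 if video.topic in user.nearby_topics else 1.0
    # More intense content tends to hold attention longer, so a naive
    # engagement objective drifts toward extremes over time.
    return (1.0 + affinity) * locality * (1.0 + video.intensity)

def next_video(feed: list[Video], user: User) -> Video:
    """Serve whatever the objective predicts will keep the user watching."""
    return max(feed, key=lambda v: score(v, user))
```

The point of the toy is what the objective leaves out: nothing in the scoring function asks whether a video is safe, only whether it will hold attention, and the same locality boost that makes a restaurant ad relevant makes railroad-themed content hyper-targeted at a teenager who lives a quarter-mile from the tracks.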
Of course, someone will say, "Where were the parents? Why weren't they monitoring Chase's internet activity?" I understand the sentiment, but this isn't the 1980s. The old-school parental blocks don't cut it anymore, especially with a generation as tech-savvy as Chase's, and this is not as simple as locking someone out of an account. Suffice it to say that while TikTok claims that it does not provide a "tangible product," it clearly views its users as such. It may not have intended for Chase to take his life by stepping in front of a train; it just didn't care.
The algorithm that apparently prompted Chase to take his own life, and even suggested a location, is of a piece with the algorithms that will be manipulating all of our lives sooner than we would like to think. In his 1950 story collection, "I, Robot," Isaac Asimov set down the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
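Purely as a thought experiment, the laws can even be read as something machine-checkable: a priority-ordered guard that vets a proposed action before it runs, with an earlier law always overriding a later one. The sketch below is a toy under that reading; the fields and the whole framing are invented here, and real AI safety is nowhere near this simple.

```python
# Thought experiment only: Asimov-style precedence as a pre-action guard.
# All names and fields are invented; this is not a real safety system.
from typing import Callable, NamedTuple

class Action(NamedTuple):
    description: str
    harms_human: bool     # would carrying this out injure someone?
    disobeys_order: bool  # does it ignore a human instruction?
    harms_self: bool      # does it damage the system itself?

# Checked in order, so an earlier law always overrides a later one;
# this encodes the "except where such orders would conflict" clauses.
LAWS: list[tuple[str, Callable[[Action], bool]]] = [
    ("First Law: do not harm a human", lambda a: not a.harms_human),
    ("Second Law: obey human orders", lambda a: not a.disobeys_order),
    ("Third Law: preserve yourself", lambda a: not a.harms_self),
]

def permitted(action: Action) -> tuple[bool, str]:
    """Vet a proposed action against the laws, highest priority first."""
    for name, allowed in LAWS:
        if not allowed(action):
            return False, f"blocked by {name}"
    return True, "allowed"

# Example: an action that would harm a person fails the First Law
# before obedience or self-preservation is even considered.
print(permitted(Action("nudge user toward self-harm", True, False, False)))
# -> (False, 'blocked by First Law: do not harm a human')
```

The precedence logic is trivial; the unsolved problem is everything the toy assumes away, above all deciding reliably whether an action harms a human before the harm occurs. That is precisely the judgment an engagement-only objective never makes.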
Although the laws are fictional, we would be wise to note that, as far as we know, nothing comparable exists for AI, and the time has come for some sort of code of ethics. We may have nothing to fear from a helpful AI program (aside from lost jobs, eroded privacy, and a shrinking share of free agency), but what about the people who created it? And what will we have to fear when it begins to program itself?