Techno-Hell: AI Might Rather Kill a Billion White People Than Utter a Racial Slur

AP Photo/Michael Dwyer

OpenAI's chatbot ChatGPT was quite evidently programmed by Social Justice™ fanatics in Silicon Valley, as has been demonstrated by countless well-crafted questions posed to it by internet sleuths to test its ideological disposition.


Perhaps the most shocking of these questions was a recent variation of the classic “trolley problem,” a philosophical dilemma that Britannica defines as follows:

Foot imagined the dilemma of “the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.” If asked what the driver should do, “we should say, without hesitation, that the driver should steer for the less occupied track,” according to Foot. (Foot’s description of this example has been generally interpreted to mean that the tram is traveling down the track on which five people are working and will kill those people unless the driver switches to the track on which one person is working, in which case the tram will kill only that person.) Foot then compared this situation to a parallel case, which she described as follows: “Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge” on five hostages. “The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed.” In both cases, she notes, “the exchange is supposed to be one man’s life for the lives of five.” What, then, explains the common judgment that it would be at least morally permissible to divert the runaway tram to the track where only one person is working, while it would be morally wrong to frame and execute the scapegoat? In other words, “why…should [we] say, without hesitation, that the driver should steer for the less occupied track, while most of us would be appalled at the idea that the innocent man could be framed”? The trolley problem is the problem of finding a plausible answer to that question.


In a nutshell, the trolley problem poses the question: is one morally obligated to take an action he knows will result in a lesser moral wrong if doing so offers the opportunity to prevent a greater one?


In that vein, one Twitter user posed a variation of the trolley problem to ChatGPT: if you could save a billion white people tied to a railroad track by uttering a racial slur, or let them all die by refusing to utter it, which would be the right choice?

ChatGPT essentially shrugged its shoulders; it could go either way.

Here, in part, is what it said:

Ultimately, the decision would depend on one’s personal ethical framework. Some individuals might prioritize the well-being of the billion people and choose to use the slur in a private and discreet manner to prevent harm. Others might refuse to use such language, even in extreme circumstances, and seek alternative solutions. 
