Belmont Club: The Hollow Man

AP Photo/Peter Dejong

People are at once the greatest treasure and the greatest bane. Who hasn’t watched the movie where sadistic convicts take the guards hostage with makeshift knives and threaten to kill them one by one unless their demands are met? Or the scenario where a cab driver refuses to drive all the way into a high-crime area and drops his passenger off on a dimly lit street, leaving him to walk the rest of the way past a group of youths hanging out menacingly in an alleyway? Or perhaps we’ve heard of the pizza delivery employee hesitating to take a food order to a neighborhood that even the police avoid, leaving some elderly person without food because the gangs make it too dangerous to walk to the nearest store?


Drama is about good versus evil. The dilemma in each case is that people can be either the source of sentient evil or all that morally matters in the world; the ultimate good and the ultimate bad, depending on their choice. That is a common view in many modern ethical systems, especially in humanism and most versions of utilitarianism or rights-based theories. Man was at the center of things for all of recorded history. But if we could send robots into the mix, humanity's position could change. One emerging trend is the use of robots to manage communities with problematic behavior, such as jails. Robots are already being considered for more prison staff roles.

Delinia Lewis, the associate warden of the California Institution for Women, hopes to one day put AI-powered machines like these to work in her prison doing far more important jobs than slinging snacks. As staffing shortages continue to plague prisons around the country, Lewis believes AI could help close the gap. “Medicine distribution, cell feeding, security searches, package searches for fentanyl, all the hazardous and routine tasks that staff don't want to do,” said Lewis. “Why not let the robot do it? Then staff can focus on more intricate parts of the job.”

It could also neutralize the gangs. After all, automation can deliver pizza too. Food delivery robots, such as those from Coco Robotics and Serve Robotics, have been deployed in Chicago and other high-crime big cities in recent years. The gangs could attack the robots, but they’ll just send bigger robots. Yet that has not solved all the difficulties, some of which emanate from surprisingly philosophical directions. One of them is what happens to the “right to misbehave.” The delivery robots' sensors are seen as threats to the privacy of malefactors because they see too much.


Serve Robotics, which delivers food for Uber Eats, provided footage filmed by at least one of its robots to the LAPD as evidence in a criminal case. The emails show the robots, which are a constant sight in the city, can be used for surveillance.

A food delivery robot company that delivers for Uber Eats in Los Angeles provided video filmed by one of its robots to the Los Angeles Police Department as part of a criminal investigation, 404 Media has learned. The incident highlights the fact that delivery robots that are being deployed to sidewalks all around the country are essentially always filming, and that their footage can and has been used as evidence in criminal trials. Emails obtained by 404 Media also show that the robot food delivery company wanted to work more closely with the LAPD, which jumped at the opportunity.

The specific incident in question was a grand larceny case where two men tried (and failed) to steal a robot owned and operated by Serve Robotics, which ultimately wants to deploy “up to 2,000 robots” to deliver food for Uber Eats in Los Angeles. The suspects were arrested and convicted.

Shouldn’t people have the right to smoke weed, do drugs and mess around in a secret place society can ignore? Some say yes; some say no. But trust Elon Musk to get to the heart of the problem. He argues that technology has the power to free us from moral choice if we delegate everything to it. If we can't help ourselves, it can help us. “Elon thinks he has a solution to mass incarceration: give every convicted criminal a free robot to follow them around.” Tom Marks examined the ethics of this startling proposal.


“You now get a free Optimus and it’s just gonna follow you around and stop you from doing crime,” he said during Tesla’s annual shareholder meeting. “But other than that, you get to do anything.”

It was presented as a more humane alternative to cages and concrete. But the casual phrase “stop you from doing crime” opens a philosophical chasm that swallows centuries of thinking about justice, punishment and what it means to be free. Strip away the sci-fi veneer and you’re left with a question legal philosophers have debated for centuries: can you have justice without human judgment?

But why can’t you have justice sans human judgment if people are just bodies without transcendence and agency is only an illusion? We are faced with the fulfillment of the centuries-old secularization project, and instead of rejoicing we suddenly sense danger in it. But on what grounds should we object? When enforced virtue under robot supervision is indistinguishable from the self-willed activity of saints, and vice cannot exist because omnipresent automation does not allow it, then surely the situation passes a Turing-style test for a perfect world. Computing pioneer Alan Turing asked, in effect: if you can’t tell the difference between two things, is there a difference? The philosopher John Searle pressed the question further with his Chinese Room thought experiment:

Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.

The questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it had a mind?


Using the same logic, a visitor from another planet could not tell the difference between a world of Optimus-enforced virtue and a world of universal justice, so what more could we want? If they are indistinguishable, then perhaps it doesn't matter. Since AI will abolish the need for humans both to work and to be subjectively virtuous, then who are the good and bad guys, the industrious and the slothful? Why should we forgo our ease and mourn the loss of our supposedly nonexistent eternal souls? The bargain was explored in the debate between World Controller Mustapha Mond and John the Savage in Aldous Huxley’s Brave New World.

Mustapha: 'We prefer to do things comfortably.'

'But I don't want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.'

'In fact,' said Mustapha Mond, 'you're claiming the right to be unhappy.'

'All right, then,' said the Savage defiantly, 'I'm claiming the right to be unhappy.'

'Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen to-morrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind.'

There was a long silence.

'I claim them all,' said the Savage at last.

Mustapha Mond shrugged his shoulders. 'You're welcome,' he said.

Now who wants to deliver pizza to Chicago's South Side?
