AI CEO Says Artificial Intelligence Can 'Defeat Bias,' 'Improve Morality,' and Craft Perfect Laws

Ambarish Mitra, CEO of the technology company Blippar, suggested that artificial intelligence (AI) can “improve morality,” enabling people to reject “human bias” and come to a more perfect understanding of right and wrong. This, Mitra suggests, could theoretically open the door to a utopian legal system. Yet handing morality over to AI could just as easily usher in a dystopia, and Mitra’s understanding of morality helps explain why he downplays this possibility.

Noting that AI introduces new moral problems — how should a self-driving car decide which of two pedestrians on the road to run over? — the CEO also suggested that AI could help humans create a “single ‘perfect’ system of morality with which everyone agrees.”

Mitra discussed Ronald Dworkin’s 1986 book Law’s Empire, in which an imaginary, idealized jurist named Judge Hercules possesses superhuman abilities to understand the law in its fullest form. “Not only does Judge Hercules understand how to best apply the law in a specific instance, but he also understands how that application might have implications in other aspects of the law and future decisions,” Mitra noted.

While Judge Hercules will never exist, the CEO suggested, “perhaps AI and machine-learning tools can help us approach something like it.”

How would a machine construct morality? By utilizing human opinion, Mitra suggested in an article for Quartz.

“Let us assume that because morality is a derivation of humanity, a perfect moral system exists somewhere in our consciousness,” the CEO began, skimming over the complex issue of what morality actually is. “Deriving that perfect moral system should simply therefore be a matter of collecting and analyzing massive amounts of data on human opinions and conditions and producing the correct result.”

Big data could make such an analysis possible. “What if we could collect data on what each and every person thinks is the right thing to do? And what if we could track those opinions as they evolve over time and from generation to generation? What if we could collect data on what goes into moral decisions and their outcomes? With enough inputs, we could utilize AI to analyze these massive data sets—a monumental, if not Herculean, task—and drive ourselves toward a better system of morality,” Mitra triumphantly declared.

But analyzing every human opinion would not necessarily yield moral truth. Artificial intelligence would have to remove biases, for starters. The CEO suggested exactly that.

“AI could help us defeat biases in our decision-making. Biases generally exist when we only take our own considerations into account. If we were to recognize and act upon the wants, needs, and concerns of every group affected by a certain decision, we’d presumably avoid making a biased decision,” Mitra suggested.

At this point, the CEO acknowledged that AI morality opens the floodgates of social and political issues. “Consider what that might mean for handling mortgage applications or hiring decisions—or what that might mean for designing public policy, like a healthcare system, or enacting new laws,” he wrote. “Perhaps AI Hercules could even help drive us closer to Judge Hercules and make legal decisions with the highest possible level of fairness and justice.”

Mitra did acknowledge that “because this AI Hercules will be relying on human inputs, it will also be susceptible to human imperfections. Unsupervised data collection and analysis could have unintended consequences and produce a system of morality that actually represents the worst of humanity.”

The CEO brushed aside this contention, however, suggesting that such a line of thinking “tends to treat AI as an end goal.” Instead, he suggested that “we can’t rely on AI to solve our problems, but we can use it to help us solve them.”

With this caveat, Mitra jumped back into his utopian world of artificial intelligence morality. “If we could use AI to improve morality, we could program that improved moral structure output into all AI systems—a moral AI machine that effectively builds upon itself over and over again and improves and proliferates our morality capabilities. In that sense, we could eventually even have AI that monitors other AI and prevents it from acting immorally.”

The CEO concluded that “there is hope for using AI to improve our moral decision-making and our overall approach to important, worldly issues. AI could make a big difference when it comes to how society makes and justifies decisions.”

Numerous cracks emerge throughout Mitra’s rosy analysis. It is by no means clear what he meant by distinguishing between “relying on AI to solve our problems” and “using it to help us solve them.”

Morality, by definition, provides an “ought” to direct the “is.” If artificial intelligence aids in moral decision-making, humans would actually be relying on it to make decisions for them. Either the AI has the agency to make a moral decision, in which case humans are relying on it rather than merely being helped by it, or it lacks that agency and can only provide data on how people think about morality.

Just as morality represents a leap from “is” to “ought,” the CEO’s analysis takes a tremendous leap of its own. He suggests that big data can compile how all people think about good and evil and, from that data, produce a master code to govern collective morality. But human opinion, even in the aggregate, is not the ultimate arbiter of right and wrong.

Furthermore, not all human moral biases result from an individual’s personal conflicts. As C.S. Lewis noted, each “age” has its particular moral blind spots. By elevating one virtue above all others (freedom, for example), an age can come to encourage vices (abortion, divorces that harm children, suicides that devastate families) precisely because it is so fixated on that one virtue.

Artificial intelligence may be able to compile the ever-elusive “general will,” but that does not invest the aggregate opinion of humanity with ultimate moral authority.

Finally, Mitra made the astonishing confession that for his AI to work, one must “assume” first that “morality is a derivation of humanity” (as opposed to something above humans) and that “a perfect moral system exists somewhere in our consciousness.”

Both of these assumptions should be rejected on purely secular grounds. Morality is a (nearly) universal human phenomenon, and humans do not actually believe that it is a matter of opinion. If Bob hits Joe in the face for no reason, Joe does not say this merely offends him, but that what Bob did was wrong — objectively wrong in a manner that transcends opinion.

Perhaps humanity suffers under a delusion of ultimate morality, but if so that delusion is remarkably universal and difficult to shake off. There is good reason to suspect morality is far more than a “derivation of humanity.”

More importantly, however, if morality were a matter of opinion, then a “perfect moral system” would be a contradiction in terms, and it certainly could not be found in the collective human consciousness. Mitra’s very desire for such a system suggests that he believes in an ultimate morality that has a claim over every human being.

Furthermore, human beings have the experience not only of moral intuition but also of moral evil. People do not always follow their consciences. If the moral AI were possible, should it factor the moral intuitions of Adolf Hitler into its calculations? If some “perfect moral system” resides in the collective human consciousness, how could Mitra justify excluding the people widely considered evil?

Ultimately, the existence of morality provides one of the strongest arguments for the existence of God. It should come as no surprise that Mitra’s devotion to AI morality comes mere months after the first church of AI was registered with the IRS.
