AI CEO Says Artificial Intelligence Can 'Defeat Bias,' 'Improve Morality,' and Craft Perfect Laws
Ambarish Mitra, CEO of the technology company Blippar, suggested that artificial intelligence (AI) can "improve morality," enabling people to reject "human bias" and arrive at a more perfect understanding of right and wrong. This, Mitra suggests, could theoretically open the door to a utopian legal system. Yet handing morality over to AI could just as easily launch a dystopia, and Mitra's understanding of morality helps explain why he is blind to that possibility.
Noting that AI introduces new moral problems — how should a self-driving car decide which of two pedestrians on the road to run over? — the CEO also suggested that AI could help humans create a "single 'perfect' system of morality with which everyone agrees."
Mitra discussed Ronald Dworkin's 1986 book Law's Empire, in which an imaginary, idealized jurist named Judge Hercules possesses superhuman abilities to understand the law in its fullest form. "Not only does Judge Hercules understand how to best apply the law in a specific instance, but he also understands how that application might have implications in other aspects of the law and future decisions," Mitra noted.
While Judge Hercules will never exist, the CEO suggested, "perhaps AI and machine-learning tools can help us approach something like it."
How would a machine construct morality? By aggregating human opinion, Mitra suggested in an article for Quartz.
"Let us assume that because morality is a derivation of humanity, a perfect moral system exists somewhere in our consciousness," the CEO began, skimming over the complex issue of what morality actually is. "Deriving that perfect moral system should simply therefore be a matter of collecting and analyzing massive amounts of data on human opinions and conditions and producing the correct result."
Big data could make such an analysis possible. "What if we could collect data on what each and every person thinks is the right thing to do? And what if we could track those opinions as they evolve over time and from generation to generation? What if we could collect data on what goes into moral decisions and their outcomes? With enough inputs, we could utilize AI to analyze these massive data sets—a monumental, if not Herculean, task—and drive ourselves toward a better system of morality," Mitra triumphantly declared.
But analyzing every human opinion would not necessarily yield moral truth. Artificial intelligence would have to remove biases, for starters. The CEO suggested exactly that.
"AI could help us defeat biases in our decision-making. Biases generally exist when we only take our own considerations into account. If we were to recognize and act upon the wants, needs, and concerns of every group affected by a certain decision, we’d presumably avoid making a biased decision," Mitra suggested.