Via The Register:
Microsoft has eliminated its entire team responsible for ensuring the ethical use of AI software at a time when the Windows giant is ramping up its use of machine learning technology.
The decision to ditch the ethics and society team within its artificial intelligence organization is part of the 10,000 job cuts Microsoft announced in January, which will continue rolling through the IT titan into next year.
The hit to this particular unit may remove some guardrails meant to ensure Microsoft’s products that integrate machine learning features meet the mega-corp’s standards for ethical use of AI. And it comes as discussion rages about the effects of controversial artificial intelligence models on society at large. [emphasis added]
As the principal backer of OpenAI and its ChatGPT, the company is at the forefront of AI development:
Microsoft is investing billions of dollars into OpenAI – a startup whose products include Dall-E2 for generating images, GPT for text (OpenAI this week introduced its latest iteration, GPT-4), and Codex for developers.
Frankly, it’s amazing they ever bothered to create an ethics team in the first place.
But let’s not give them undue credit. The whole thing was a clear exercise in public relations, not a genuine commitment to understanding the potentially negative implications of their AI development project.
As its disbandment shows, the AI ethics team was ceremonial and disposable. Still, it was unexpected that the company bothered to erect even that façade of responsibility.
This is how Microsoft summarizes its approach to ethics compliance vis-à-vis its AI:
The pace at which artificial intelligence (AI) is advancing is remarkable. As we look out at the next few years for this field, one thing is clear: AI will be celebrated for its benefits but also scrutinized and, to some degree, feared. It remains our belief that, for AI to benefit everyone, it must be developed and used in ways which warrant people’s trust. Microsoft’s approach, which is based on our AI principles, is focused on proactively establishing guardrails for AI systems so that we can make sure that their risks are anticipated and mitigated, and their benefits are maximized.
Of course, Microsoft’s mission is not “to benefit everyone” but to maximize profit for its shareholders as well as to achieve whatever corporate-state social control objectives are set in private, outside of public scrutiny.
Were an ethical constraint to come into conflict with those objectives, Microsoft would readily abandon it in favor of more profit and social control.