Microsoft Disbands AI Ethics Team: A Concerning Decision for Responsible AI Development


The elimination of a crucial AI ethics team raises questions about Microsoft's commitment to responsible AI innovation.

On March 14, 2023, it was reported that Microsoft had disbanded a key AI ethics team during a recent layoff of 10,000 employees. This team was responsible for ensuring Microsoft's AI products were designed with safeguards to minimize potential social harms. According to former employees, the ethics and society team played a critical role in Microsoft's strategy to mitigate risks associated with incorporating OpenAI technology into its products.

The responsible innovation toolkit and GPT-4's release

Before it was disbanded, the team had developed a "responsible innovation toolkit" to help Microsoft engineers anticipate potential AI harms and implement strategies to reduce those risks. News of the team's dissolution came just as OpenAI released its most powerful AI model yet, GPT-4, which now helps power Bing search.


Microsoft's response and continued commitment to responsible AI

In response to the report, Microsoft stated that they remain "committed to developing AI products and experiences safely and responsibly." The company highlighted its Office of Responsible AI, which has seen increased investment and growth over the past six years, as well as its other responsible AI working groups, the Aether Committee and Responsible AI Strategy in Engineering.

Experts criticize Microsoft's decision

Emily Bender, a University of Washington expert in computational linguistics and ethical issues in natural language processing, joined other critics in denouncing Microsoft's decision to dissolve the ethics and society team. Bender described the decision as "short-sighted" and "damning" given the importance and difficulty of AI ethics work.

The origins and decline of the ethics and society team

Microsoft first started focusing on responsible AI teams in 2017. By 2020, this effort included the 30-member ethics and society team. However, as competition with Google intensified, Microsoft began transferring most of the team's members to specific product teams, leaving only seven people to implement their ambitious plans. According to former employees, the remaining team struggled with the workload, and Microsoft did not always act on their recommendations.

The need for external pressure and regulation

Bender argues that self-regulation is insufficient and that external pressure is necessary for tech companies to invest in responsible AI teams. She advocates for regulators to step in and establish transparency, as well as protections for users. Without regulation, users risk adopting popular tools without understanding the potential harms associated with them.

The future of responsible AI and user caution

With the disbandment of the ethics and society team, the responsibility for ensuring responsible AI development at Microsoft now lies with the Office of Responsible AI. However, Bender suggests that users should be cautious when using AI for sensitive applications, such as medical advice, legal advice, or psychotherapy, until proper regulations are in place.

Microsoft's decision to dissolve its key AI ethics team raises concerns about the future of responsible AI development. Experts like Emily Bender argue that self-regulation is not enough and that external pressure and regulation are necessary to ensure the ethical use of AI technologies.

This website is currently testing OpenAI's GPT-4 for content creation and for a coding application that will enable residents of New London to vote and file public documents, using a custom blockchain implementation aimed at immutably storing complaint forms and facilitating safe voting and local decision making on a new web3 platform. The idea of using blockchain for this purpose is not new. The examples on this site show content created with the new AI release; see the April 2023 example of GPT-4's poem about a mysterious police hero with two sides.
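The "immutably storing" idea rests on a simple property of hash chains: each block commits to the hash of its predecessor, so altering any stored record breaks every later link. The sketch below is a minimal illustration of that principle only, not the site's actual implementation; the names `RecordChain` and `append_record` are hypothetical.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class RecordChain:
    """Append-only chain of records; each block commits to its predecessor."""

    def __init__(self):
        # Genesis block anchors the chain with a fixed previous hash.
        genesis = {"index": 0, "prev_hash": "0" * 64, "record": None, "timestamp": 0}
        self.blocks = [genesis]

    def append_record(self, record: dict) -> dict:
        prev = self.blocks[-1]
        block = {
            "index": prev["index"] + 1,
            "prev_hash": block_hash(prev),  # link to predecessor
            "record": record,
            "timestamp": time.time(),
        }
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute each link; tampering with any earlier block breaks the chain."""
        for prev, cur in zip(self.blocks, self.blocks[1:]):
            if cur["prev_hash"] != block_hash(prev):
                return False
        return True
```

Note that a lone hash chain only makes tampering detectable, not impossible; real deployments add replication or consensus among multiple parties so no single operator can silently rewrite history.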
