
OpenAI’s AGI readiness team has been dissolved



OpenAI has once again trimmed its safety-focused operations, dissolving its AGI Readiness team — a group dedicated to preparing for so-called artificial general intelligence. Miles Brundage, a senior advisor on the team, broke the news in a Substack post on Wednesday that also confirmed his departure from the company.

SEE ALSO: OpenAI funding round values company at $157 billion

In the post, Brundage suggested that his exit reflects a desire for greater independence as he continues to explore the rapidly evolving AI landscape.


“I decided that I want to impact and influence AI’s development from outside the industry rather than inside,” Brundage wrote, adding, “I’ve done much of what I set out to do at OpenAI.”


Brundage went on to express broader concerns, stating, “Neither OpenAI nor any other frontier lab is ready, and the world is also not ready.” According to Brundage, this isn’t a lone sentiment — many within OpenAI’s senior ranks share these reservations. As for the AGI Readiness team, former members are set to be reassigned across other divisions within OpenAI.

SEE ALSO: Sam Altman steps down as head of OpenAI’s safety group


A company spokesperson told CNBC that OpenAI supports Brundage’s decision to move on. Still, the timing is rough for OpenAI, which has been navigating an exodus of senior leadership at a moment when stability is key. Although it was able to snag a top AI researcher from Microsoft, that addition doesn’t fill the many recent gaps in OpenAI’s upper ranks.

The leadership shifts and team dissolutions aren’t helping to quell mounting concerns around OpenAI’s push toward AGI, particularly since its controversial announcement of a plan to become a fully for-profit company after beginning its life as a nonprofit.

Back in May, the company disbanded its Superalignment team — a group tasked with pioneering “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” Around the same time, OpenAI also reassigned its top AI safety leader, raising eyebrows within the AI ethics community and beyond.
