In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future artificial intelligence systems that could be powerful enough to cause human extinction. Less than a year later, the team is gone.
OpenAI told Bloomberg that the company is “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of tweets from Jan Leike, one of the team’s leaders who recently resigned, revealed internal tensions between the safety team and the larger company.
In a statement posted on X on Friday, Leike said the Superalignment team had been struggling to get the resources it needed for its research. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to Engadget’s request for comment.
Leike’s departure earlier this week came just hours after OpenAI chief scientist Ilya Sutskever announced he was leaving the company. Sutskever was not only one of the leaders of the Superalignment team, but also a co-founder of the company. His move came six months after he participated in the decision to fire CEO Sam Altman over concerns that Altman had not been “consistently candid” with the board. Altman’s brief ouster sparked a revolt within the company, with nearly 800 employees signing a letter threatening to resign if Altman was not reinstated. Five days later, Altman returned as OpenAI’s CEO after Sutskever had signed the letter and said he regretted his actions.
When OpenAI announced the creation of the Superalignment team, it said it would dedicate 20 percent of its compute over the next four years to the problem of controlling future powerful artificial intelligence systems. “[Getting] this right is critical to achieve our mission,” the company wrote at the time. But on Friday Leike described a different reality: “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached a “breaking point” with OpenAI’s leadership over disagreements about the company’s core priorities.
The Superalignment team has seen other departures in recent months. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.
OpenAI told Bloomberg that its future safety efforts will be led by another co-founder, John Schulman, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4, one of OpenAI’s flagship large language models, will succeed Sutskever as chief scientist.
Superalignment was not the only team at OpenAI focused on AI safety. In October, the company established a new “Preparedness” team to address potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.
Updated, May 17, 2024, 3:28 pm ET: In response to a request for comment on Leike’s accusations, an OpenAI spokesperson directed Engadget to a tweet from Sam Altman saying that he would have something to say in the coming days.
This article contains affiliate links; if you click on such links and make a purchase, we may earn a commission.