Sam Altman, CEO of OpenAI, has stepped down from his role on the internal Safety and Security Committee, a group established in May to oversee critical safety decisions related to OpenAI’s projects.
OpenAI announced this in a recent blog post, highlighting that the committee will now function as an independent oversight board.
The newly independent body will be chaired by Zico Kolter, a professor from Carnegie Mellon, and will include notable figures such as Quora CEO Adam D’Angelo, retired US Army General Paul Nakasone, and former Sony executive Nicole Seligman — all of whom already serve on OpenAI’s board of directors.
The committee’s role is crucial for reviewing the safety of OpenAI’s models and ensuring any security concerns are addressed before their release. It was noted that the group had already conducted a safety review of OpenAI’s latest model, o1, after Altman had stepped down.
The committee will continue to receive regular updates from OpenAI’s safety and security teams and will retain the authority to delay the release of AI models if safety risks remain unaddressed.
Altman’s departure from the committee comes after heightened scrutiny from US lawmakers. Five senators had previously raised concerns about OpenAI’s safety policies in a letter addressed to Altman.
Additionally, a significant number of staff members focused on AI’s long-term risks have left the company, and some ex-researchers have publicly criticised Altman for opposing stricter AI regulations that might conflict with OpenAI’s commercial interests.
This criticism aligns with the company’s growing investment in federal lobbying efforts. OpenAI’s lobbying spend for the first half of 2024 reached $800,000, compared to $260,000 for all of 2023. Furthermore, Altman has joined the Department of Homeland Security’s AI Safety and Security Board, a role that involves providing guidance on AI’s development and deployment within US critical infrastructure.
Despite Altman’s removal from the Safety and Security Committee, there are concerns that the group may still be reluctant to take actions that could significantly affect OpenAI’s commercial ambitions. In a May statement, the company emphasised its intention to address “valid criticisms,” although such judgments may remain subjective.
Some former board members, including Helen Toner and Tasha McCauley, have voiced doubts about OpenAI’s ability to self-regulate, citing the pressure of profit-driven incentives.
These concerns arise as OpenAI reportedly seeks to raise more than $6.5 billion in funding, which could value the company at over $150 billion.
There are rumours that OpenAI might abandon its hybrid nonprofit structure in favour of a more traditional corporate approach, which would allow for greater investor returns but could further distance the company from its founding mission of developing AI that benefits all of humanity.