Vishal Narayan

IAEA-like body must regulate AI as it becomes bigger: OpenAI's Sam Altman

OpenAI CEO Sam Altman

A global body on the scale and design of the IAEA will be needed to regulate AI research once it becomes too big, OpenAI CEO Sam Altman said in a recent podcast with Bill Gates.

Altman spoke about the threats of AI, its strengths, his surprise at the success of ChatGPT-4, and much more in the latest episode of Gates' Unconfuse Me podcast.

Asked what he thinks would be "constructive" in terms of AI regulation, Altman said the world would need something corresponding to the International Atomic Energy Agency, which regulates nuclear research and stockpiling of weapons, since any oversight in the field is meant to have a global impact.

"It would be very easy to put way too much regulation on this space, and you can look at lots of examples of where that's happened before. But also, if we are right, and we may turn out not to be, but if we are right and this technology goes as far as we think it's going to go, it will impact society," Altman said.

He added, "Geopolitical balance of power, so many things, that for these still hypothetical but future extraordinarily powerful systems, not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socializing the idea of a global regulatory body that looks at those super powerful systems, because they do have such global impact. And one model we talk about is something like the IAEA. For nuclear energy, we decided the same thing: this needs a global agency of some sort, because of the potential for global impact."

On a lighter note, Altman, one of the founders of OpenAI, in which Microsoft is a major investor, said the app he uses the most on his phone is Slack, not ChatGPT. "I'm on Slack all day. Incredible."

On Gates' question about how AI could change blue-collar jobs, especially with robots, Altman said OpenAI has begun investing in some robotics companies. He said it was an undertaking that had been kept on the back burner while his company solved the cognitive side of such innovations.

"We started robots too early, and so we had to put that project on hold. It was hard for the wrong reasons. It wasn't helping us make progress with the difficult parts of the ML research, and we were dealing with bad simulators and breaking tendons and things like that.

"And also we realized more and more over time that what we really first needed was intelligence and cognition, and then we could figure out how to adapt it to physicality. It was easier to start with that, given the way we've built these language models, but we have always planned to come back to it. We've started investing a little bit in robotics companies," said Altman.
