Ethereum creator Vitalik Buterin warns against rushing into ‘very risky’ superintelligent AI
Vitalik Buterin, the mastermind behind Ethereum, has urged a more cautious approach to AI research, calling the current frenzy “very risky.” Responding to a critique of OpenAI and its leadership by Ryan Selkis, CEO of crypto intelligence firm Messari, the creator of the world’s second most influential blockchain laid out his views on AI alignment – the fundamental principles that should guide the technology’s development.
“Superintelligent AI is very risky, and we should not rush into it, and we should fight against those who try,” Buterin said. “No $7 trillion server farms, please.”
Superintelligent AI is a theoretical form of artificial intelligence that outperforms human intelligence in virtually every domain. While many view artificial general intelligence (AGI) as the full realization of the emerging technology’s potential, superintelligent models would be the step beyond. Even though today’s cutting-edge AI systems have not yet reached these thresholds, advances in machine learning, neural networks, and related technologies keep coming, stirring both enthusiasm and worry.
After Selkis tweeted that “AGI is too important for us to deify another smooth-talking narcissist,” Buterin emphasized the importance of a diverse AI ecosystem to avoid a world where the immense value of AI is owned and controlled by very few people.
“A robust ecosystem of open models running on consumer hardware [is] an important hedge to protect against a future where the value captured by AI is hyper-concentrated and most human thoughts are read and mediated by a few central servers controlled by a few people,” he said. “Such models are also much less risky in terms of catastrophic risk than corporate and military megalomania.”
The creator of Ethereum has followed the AI scene closely, recently praising the open-source Llama3 model. He has also suggested that OpenAI’s GPT-4o multimodal LLM could have passed the Turing test, after a study argued that human responses were indistinguishable from those generated by the AI.
Buterin also addressed the categorization of AI models into “small” and “large” groups, with an emphasis on regulating “large” models as a reasonable priority. He expressed concern, however, that many current proposals could ultimately result in everything being classified as “large.”
Buterin’s remarks come amid heated debate around AI alignment and the resignations of key figures from OpenAI’s superalignment research team. Ilya Sutskever and Jan Leike left the company, with Leike accusing OpenAI CEO Sam Altman of prioritizing “shiny products” over responsible AI development.
It was separately revealed that OpenAI has strict non-disclosure agreements (NDAs) that prevent its employees from discussing the company after they leave.
High-level, long-term debates about superintelligence are becoming increasingly urgent, with experts expressing concern while making widely varying recommendations.
Paul Christiano, who previously led the language model alignment team at OpenAI, went on to found the Alignment Research Center, a nonprofit dedicated to aligning AI and machine learning systems with “human interests.” As reported by Decrypt, Christiano has suggested there could be “a 50/50 chance of catastrophe soon after we have human-level systems.”
On the other hand, Yann LeCun, chief AI scientist at Meta, believes that such a catastrophic scenario is highly improbable. In April 2023, he said in a tweet that the “hard takeoff” scenario is “utterly impossible.” LeCun argued that near-term developments in AI will significantly shape its long-term trajectory.
Buterin, for his part, considers himself a centrist. In a 2023 post – which he reaffirmed today – he acknowledged that “it seems very difficult to have a ‘friendly’ world dominated by superintelligent AI where humans are anything other than pets,” but also argued that “it often really happens that version N of our civilization’s technology has a problem and version N+1 fixes it. However, this does not happen automatically and requires intentional human effort.” In other words, if superintelligence became a problem, humans would probably find a way to fix it.
The departure of OpenAI’s more cautious alignment researchers and the company’s shifting approach to safety policy have raised broader, mainstream concerns about the lack of attention to ethical AI development among leading AI startups. Indeed, Google, Meta, and Microsoft have also reportedly disbanded the teams responsible for ensuring that AI is developed safely.
Edited by Ryan Ozawa.