Vitalik Buterin Raises Alarms: ‘Superintelligent AI Is Very Risky’

TL;DR

  • Vitalik Buterin warns about the risks of artificial superintelligence and advocates a cautious approach.
  • Promotes the development of open models that run on consumer hardware as a protection measure.
  • Expresses concerns about the categorization and regulation of AI models in current proposals.

Vitalik Buterin, co-founder of Ethereum, has shared his concerns about the risks associated with superintelligent artificial intelligence (AI), amid recent leadership changes at OpenAI.

Buterin maintains that the development of superintelligent AI is extremely risky and that its progress should not be rushed.

He calls for resisting initiatives that seek to rapidly push this technology forward, especially those that require massive infrastructure, such as $7 trillion worth of data centers.

In this context, Vitalik Buterin highlights the importance of creating a robust ecosystem of open AI models that can run on consumer hardware.

According to him, such models are a crucial safeguard against a future in which the value generated by AI is concentrated in a few companies and most human thought is read and mediated by central servers controlled by a handful of people.

Furthermore, he argues that open models operating on consumer hardware have a much lower risk of causing catastrophes compared to scenarios where AI is under the control of corporations or military entities.

Vitalik Buterin also addresses the question of AI regulation, suggesting that a distinction between “small” and “large” models might be reasonable in principle.

He proposes exempting small models and regulating large ones, noting that a model with 405 billion parameters is considerably larger than consumer hardware can handle (by comparison, 70 billion parameters are manageable; he himself runs models of that size).
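To give a rough sense of why that threshold matters, here is a minimal back-of-the-envelope sketch (our own illustration, not Buterin's figures) of the memory needed just to store a model's weights at different precisions; the precision levels and the "weights only" simplification are assumptions for illustration.

```python
# Back-of-the-envelope estimate of the memory needed to hold model weights.
# Assumptions (illustrative only): weights dominate memory use; activations
# and KV cache are ignored; "4-bit" means 0.5 bytes per parameter.

def weight_memory_gib(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GiB required just to store the weights."""
    return params_billions * 1e9 * bytes_per_param / (1024 ** 3)

for params in (70, 405):
    for label, bytes_per_param in (("fp16", 2.0), ("4-bit", 0.5)):
        print(f"{params}B @ {label}: ~{weight_memory_gib(params, bytes_per_param):.0f} GiB")

# Output (approx.):
#   70B  @ fp16:  ~130 GiB    405B @ fp16:  ~754 GiB
#   70B  @ 4-bit: ~33 GiB     405B @ 4-bit: ~189 GiB
# A heavily quantized 70B model can fit on a high-end consumer machine;
# a 405B model cannot.
```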

However, he expresses concern that many current proposals tend to eventually classify all models as “large”, which could lead to overregulation and inhibition of the development of accessible and safe technologies.

The importance of decentralization and balanced regulation according to Vitalik Buterin

Vitalik Buterin emphasizes that the development of open AI models not only democratizes access to this technology, but also significantly reduces the risks associated with its misuse.

Allowing AI models to run on consumer hardware disperses power and prevents a small number of entities from controlling most of the technology and the data it processes.

This approach could prevent dystopian scenarios where AI is used to monitor and control human thinking on a large scale.

Decentralizing AI is also a strategy to mitigate the “doom risk” that can arise when the technology is monopolized by large corporations or military institutions.

History has shown that the concentration of technological power in the hands of a few can lead to abuses and unintended consequences.

Therefore, fostering an open and accessible AI development community is essential to ensure the safe and ethical use of this technology.

Regarding regulation, Vitalik Buterin insists that it must be flexible and adaptive.

Rigidly classifying all models as “large” could stifle innovation and make it difficult for independent developers and small businesses to contribute to the AI field.

Balanced regulation should allow leeway for smaller, more experimental projects, while ensuring that really large, potentially dangerous models are subject to strict oversight.
