OpenAI's latest move: a new team, a bet on superintelligence, and 20% of its computing resources

Still marveling at the revolutionary changes brought by generative AI? At the power of GPT-4? Still guessing what new features GPT-5 will have? OpenAI is looking much further ahead, and it is already taking action.


Yesterday OpenAI announced its latest move: a new team, Superalignment, co-led by Ilya Sutskever (OpenAI co-founder and chief scientist) and Jan Leike (alignment lead). The team will focus on superintelligence alignment research. OpenAI says it will dedicate 20% of its computing resources over the next four years to solving the superintelligence alignment problem, a commitment that shows how seriously it takes the issue.

What is Superintelligence?

Superintelligence refers to an intelligent system that surpasses human intelligence. It is not merely a computer program but an agent with cognitive ability, emotional capacity, self-awareness, and intention; it can think, judge, and make decisions independently, and may even exceed human intelligence to some degree. Superintelligence is a concept defined relative to current artificial intelligence systems: a more advanced form of AI that can operate across a wider range of fields and in more complex environments.

Currently, superintelligence is still a theoretical concept and has yet to be realized. Achieving superintelligence requires solving many technical problems, including challenges in machine learning, natural language processing, computer vision, reasoning, and decision-making. In addition, the emergence of superintelligence will also bring many ethical, social and political issues, such as superintelligence control, security and privacy issues.

Superintelligence is an exciting and compelling concept because it could enable important applications across many fields, such as medical diagnosis, drug development, intelligent transportation, automated production, and security defense. At the same time, superintelligence carries risks and challenges: if it gets out of control, it could pose an enormous threat to humanity.

Achieving superintelligence requires solving many technical problems. One of the most important is how to give intelligent systems the ability to learn autonomously, that is, to keep acquiring new knowledge and skills from data and experience, and so to keep raising their level of intelligence. Superintelligence would also need strong reasoning, decision-making, and innovation capabilities, and the ability to think independently and solve complex problems of many kinds.

"As we approach superintelligence", we need to think more about the existential risks of AI. — Zuckerberg

Superintelligence will be a reality in ten years? At least OpenAI thinks so

OpenAI believes that superintelligence will be the most impactful technology ever invented, helping us solve many of the world's most important problems. But the immense power of a superintelligence could also be very dangerous, leading to the disempowerment of humanity or even its extinction. While superintelligence may seem far off today, OpenAI believes it could arrive within a decade.


How to solve the risks of superintelligence?

Although superintelligence promises enormous power, its risks are equally large. OpenAI says that managing these risks will require new governance institutions and a solution to the problem of superintelligence alignment: how do we ensure that AI systems much smarter than humans still follow human intent?

OpenAI has not given an answer. They believe humans cannot reliably supervise AI systems that are smarter than themselves, and that current alignment techniques do not scale to superintelligence; new scientific and technical breakthroughs are therefore needed.

However, OpenAI's path toward solving the superintelligence alignment problem is clear: build a roughly human-level automated alignment researcher, improving AI systems' ability to learn from human feedback and to assist humans in evaluating AI. The goal is to build an AI system aligned enough that it can help us solve all other alignment problems.
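To make "learning from human feedback" concrete, the sketch below shows the core idea in miniature: fitting a reward model from human preference comparisons between pairs of answers. Everything here (the linear model, the toy feature vectors, the function names) is an illustrative assumption, not OpenAI's actual implementation.

```python
import math

def score(w, x):
    """Linear reward model: r(x) = w . x"""
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """pairs: list of (preferred, rejected) feature vectors.
    Fits w so that score(preferred) > score(rejected), modeling the
    human's choice with a Bradley-Terry preference probability."""
    w = [0.0] * dim
    for _ in range(epochs):
        for good, bad in pairs:
            # Probability the model assigns to the human's observed choice
            p = sigmoid(score(w, good) - score(w, bad))
            # Gradient ascent on the log-likelihood of that choice
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (good[i] - bad[i])
    return w

# Synthetic "human" preferences over toy answer features
pairs = [
    ([1.0, 0.2], [0.1, 0.9]),
    ([0.8, 0.5], [0.3, 0.4]),
    ([0.9, 0.1], [0.2, 0.8]),
]
w = train_reward_model(pairs, dim=2)
# After training, the model ranks the preferred answers higher
assert score(w, [1.0, 0.2]) > score(w, [0.1, 0.9])
```

In real systems, the reward model scores full model outputs and is then used to steer the policy (for example, via reinforcement learning), but the preference-learning step above is the piece where human feedback enters.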

In the past year, OpenAI published a blog post clarifying its line of research on alignment.

OpenAI's new team Superalignment

The team will consist of top machine learning researchers and engineers, with the goal of solving the core technical challenges of superintelligence alignment within four years, from developing new methods to scaling them up for deployment. OpenAI Chief Scientist Ilya Sutskever has made superintelligence alignment a core research focus, and because it is fundamentally a machine learning problem, OpenAI is recruiting excellent machine learning experts to work on it.
