Manus achieves SOTA results on the GAIA benchmark, sparking debate over AI's development path

Manus has posted state-of-the-art results on the GAIA benchmark, outperforming other models in its tier. In practice, this means it can handle complex tasks end to end, such as a multinational business negotiation spanning contract-term analysis, strategy formulation, and proposal generation. Its advantages lie in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning: it can break a large task into hundreds of executable subtasks, process multiple types of data in parallel, and use reinforcement learning to steadily improve decision efficiency and reduce error rates.
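
Manus's internals are not public, so the following is only a hypothetical sketch of what "dynamic goal decomposition" can look like in code: a task tree whose composite nodes are planned into subtasks and whose leaves carry executable actions. The task names and the `execute` helper are invented for illustration and do not describe Manus's actual architecture.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Task:
    """A node in a goal-decomposition tree: either directly executable or split into subtasks."""
    name: str
    action: Optional[Callable[[], str]] = None   # leaf tasks carry an executable action
    subtasks: List["Task"] = field(default_factory=list)

def execute(task: Task, depth: int = 0) -> List[str]:
    """Depth-first execution: run leaf actions, recurse into composite tasks."""
    indent = "  " * depth
    if task.action is not None:
        result = task.action()
        print(f"{indent}[done] {task.name}: {result}")
        return [result]
    print(f"{indent}[plan] {task.name} -> {len(task.subtasks)} subtasks")
    results: List[str] = []
    for sub in task.subtasks:
        results.extend(execute(sub, depth + 1))
    return results

# Hypothetical negotiation workflow decomposed into subtasks
negotiation = Task("multinational negotiation", subtasks=[
    Task("analyze contract terms", action=lambda: "key clauses extracted"),
    Task("formulate strategy", subtasks=[
        Task("assess counterparty position", action=lambda: "position profile built"),
        Task("set concession limits", action=lambda: "limits defined"),
    ]),
    Task("generate proposal", action=lambda: "draft proposal ready"),
])

if __name__ == "__main__":
    execute(negotiation)
```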

Manus's breakthrough has reignited debate in the AI community over the field's future path: should development lean toward a single dominant Artificial General Intelligence (AGI), or toward collaboration among Multi-Agent Systems (MAS)?

Manus's design leaves room for both possibilities:

  1. AGI path: keep raising the intelligence of a single agent until it approaches human-level comprehensive decision-making.

  2. MAS path: act as a super-coordinator, directing a large number of vertical-domain agents to work in concert.

This discussion touches the core issue of AI development: how to strike a balance between efficiency and safety. As an individual intelligence approaches AGI, the opacity of its decision-making process becomes a growing risk; multi-agent collaboration can spread that risk, but communication delays may cause it to miss critical decision windows.

Manus's progress also highlights risks inherent to AI development. In medical scenarios, it needs real-time access to sensitive patient data; in financial negotiations, it may touch undisclosed corporate information. There are also algorithmic-bias concerns, such as unfair salary suggestions for particular groups in recruitment negotiations, or higher misjudgment rates for emerging-industry clauses in legal contract review. A further risk is adversarial attack: hackers could try to skew Manus's judgment during a negotiation by injecting crafted audio signals.

These challenges highlight a key issue: the smarter the AI system, the broader its potential attack surface.

In the Web3 space, security has always been a core concern. Out of that concern, several security and cryptographic approaches have emerged:

  1. Zero Trust Security Model: every access request must be strictly authenticated and authorized.

  2. Decentralized Identity (DID): gives users verifiable digital identities that do not depend on a centralized registry.

  3. Fully Homomorphic Encryption (FHE): allows computation on encrypted data without ever decrypting it.

Among them, fully homomorphic encryption is regarded as a powerful tool for addressing security issues in the AI era. It allows computations to be performed on encrypted data, providing new possibilities for privacy protection.
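
The core property here is that arithmetic on ciphertexts carries through to the underlying plaintexts. The toy sketch below illustrates that property with the Paillier cryptosystem, which is only additively homomorphic (not fully homomorphic) and uses deliberately tiny, insecure parameters; production FHE relies on schemes such as BFV or CKKS in libraries like Microsoft SEAL. All numbers and variable names are illustrative only.

```python
import math
import random

# Toy Paillier keypair with small fixed primes (insecure; for illustration only).
p, q = 293, 433
n = p * q                      # public modulus
n_sq = n * n
g = n + 1                      # standard generator choice for Paillier
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # private key component lambda

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)  # private key component mu

def encrypt(m: int) -> int:
    """Encrypt a plaintext integer m (0 <= m < n)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the underlying plaintexts.
a, b = 1500, 2700
c_sum = (encrypt(a) * encrypt(b)) % n_sq
assert decrypt(c_sum) == a + b   # the party doing the computation never saw 1500 or 2700
print("decrypted sum:", decrypt(c_sum))
```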

In addressing AI safety challenges, FHE can play a role on multiple levels:

  • Data layer: all information a user inputs is processed in encrypted form, so even the AI system itself cannot read the original data.

  • Algorithm layer: model training runs "under encryption" via FHE, so the AI's decision-making process is not exposed.

  • Collaboration layer: communication between agents uses threshold encryption, so a single compromised node cannot leak the global picture (a simplified sketch of the threshold idea follows this list).
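
To make the "no single point of leakage" idea concrete, here is a minimal sketch of Shamir secret sharing, the threshold primitive that threshold encryption schemes typically build on: a key is split among several agents, any `threshold` of them can reconstruct it, and fewer shares reveal nothing. This is an illustrative toy with hypothetical parameters, not how Manus or any specific system actually coordinates its agents.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for this toy demo

def split_secret(secret: int, threshold: int, num_shares: int):
    """Split `secret` into `num_shares` shares; any `threshold` of them reconstruct it."""
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, num_shares + 1):
        y = 0
        for power, c in enumerate(coeffs):
            y = (y + c * pow(x, power, PRIME)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Example: a shared key split among 5 agents, any 3 of which can recover it.
key = 123456789
shares = split_secret(key, threshold=3, num_shares=5)
print("a single share reveals nothing useful:", shares[0])
print("three shares recover the key:", reconstruct(shares[:3]) == key)
```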

Web3 security technology may still feel remote to the average user, but its importance cannot be ignored. In such a challenging field, only by continuously strengthening defenses can one avoid becoming the next victim.

As AI technology edges closer to human-level intelligence, non-traditional defense systems become increasingly important. FHE not only addresses today's security problems but also lays the groundwork for the coming era of strong AI. On the road to AGI, FHE has shifted from an option to a necessity for survival.

Manus offers a glimpse of the dawn of AGI, and AI security deserves just as much thought.
