Fully Homomorphic Encryption (FHE): Web3 Solutions under AI Security Challenges
AI Security: Fully Homomorphic Encryption May Become a Solution
Recently, an AI system named Manus achieved breakthrough results on the GAIA benchmark, outperforming large language models of the same tier. Manus demonstrated the ability to complete complex tasks independently, such as multinational business negotiations spanning contract analysis, strategy formulation, and proposal generation. Compared with traditional systems, Manus excels at dynamic goal decomposition, cross-modal reasoning, and memory-enhanced learning: it can break large tasks into many executable subtasks, handle diverse data types, and continuously improve its decision-making efficiency and accuracy through reinforcement learning.
The progress of Manus has once again sparked an industry debate about the development path of AI: should it move toward a single Artificial General Intelligence (AGI), or toward collaboration among multi-agent systems (MAS)? The debate ultimately comes down to how to balance efficiency and security in AI development. The closer a single agent gets to AGI, the more opaque its decision-making tends to become; multi-agent collaboration disperses risk, but communication latency can cause it to miss critical decision windows.
The development of Manus also highlights the inherent security risks of AI systems. Medical scenarios can involve sensitive patient genetic data; financial negotiations can expose undisclosed corporate financial information. AI systems may also exhibit algorithmic bias, such as generating unfair salary recommendations for particular groups during recruitment, and they remain vulnerable to adversarial attacks in which crafted inputs mislead the system's judgment.
These challenges point to a worrying trend: the more intelligent an AI system becomes, the broader its potential attack surface.
In the Web3 space, security has always been a core concern, and a range of cryptographic and security technologies has been developed to address such challenges:
Zero Trust Security Model: Every access request must be strictly authenticated and authorized; no device or user is trusted by default.
Decentralized Identity (DID): An emerging standard for verifiable digital identity that does not rely on centralized registration systems.
Fully Homomorphic Encryption (FHE): An advanced cryptographic technique that allows computation to be performed directly on encrypted data, without ever decrypting it (a toy sketch follows this list).
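To make the "compute on encrypted data" idea concrete, below is a minimal sketch in Python using the classic Paillier cryptosystem. Paillier is only additively homomorphic (ciphertexts can be added but not multiplied), so it illustrates the principle rather than full FHE; the primes and the salary figures are toy values chosen for readability, not secure parameters.

```python
# Toy additively homomorphic encryption (Paillier), illustrating the core idea
# behind FHE: a third party can compute on ciphertexts without ever decrypting.
# NOTE: toy key sizes and demo values only -- not secure, not a real FHE scheme.
import math
import secrets

# Small demo primes (far too small for real security).
p, q = 1000003, 1000033
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                      # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < n) under the public key (n, g)."""
    while True:
        r = secrets.randbelow(n - 1) + 1  # random blinding factor coprime to n
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Decrypt ciphertext c with the private key (lambda, mu)."""
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

def add_encrypted(c1: int, c2: int) -> int:
    """Homomorphic addition: multiply ciphertexts, never touch plaintexts."""
    return (c1 * c2) % n_sq

if __name__ == "__main__":
    salary_a, salary_b = 52000, 61000     # hypothetical sensitive inputs
    c_a, c_b = encrypt(salary_a), encrypt(salary_b)
    c_sum = add_encrypted(c_a, c_b)       # computed without seeing the data
    assert decrypt(c_sum) == salary_a + salary_b
    print("encrypted sum decrypts to", decrypt(c_sum))
```

Full FHE schemes such as CKKS or TFHE extend this property to multiplication and arbitrary circuits, which is what makes encrypted model inference and training feasible, at the cost of much larger keys and careful noise management.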
Fully homomorphic encryption is considered an important tool for addressing security issues in the AI era. It can play a role at the following levels:
Data level: All information input by users can be processed in an encrypted state, and even the AI system itself cannot decrypt the original data.
Algorithm level: FHE makes "encrypted model training" possible, so that even developers cannot directly observe the AI's decision-making process.
Collaborative level: Communication among multiple AI agents can use threshold encryption, so that compromising a single node does not leak the global data (a minimal threshold sketch follows this list).
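To illustrate the threshold idea behind that last point, here is a minimal (t, n) sketch based on Shamir secret sharing: a shared key is split across n agents so that any t shares reconstruct it while fewer reveal nothing. This is a simplified stand-in for a real threshold-encryption protocol; the parameters and the notion of a "shared decryption key" are illustrative assumptions, not any specific project's design.

```python
# Minimal (t, n) threshold sketch via Shamir secret sharing: the secret sits at
# f(0) of a random degree (t-1) polynomial, and each agent holds one point.
# Any t points interpolate the secret; t-1 or fewer reveal nothing about it.
import secrets

PRIME = 2**127 - 1                         # field modulus (a Mersenne prime)

def split_secret(secret: int, threshold: int, shares: int):
    """Split `secret` into `shares` points on a random degree (threshold-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        # Evaluate the polynomial at x with Horner's rule, mod PRIME.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, shares + 1)]

def reconstruct(points):
    """Recover the secret (the polynomial at x=0) via Lagrange interpolation."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    key = secrets.randbelow(PRIME)          # e.g. a hypothetical shared decryption key
    shares = split_secret(key, threshold=3, shares=5)
    assert reconstruct(shares[:3]) == key   # any 3 of the 5 shares suffice
    assert reconstruct(shares[1:4]) == key
    # A single share (a single compromised agent) is useless on its own.
```

In an actual threshold-encryption scheme, the agents would use their shares to decrypt jointly without ever reassembling the key in one place, which is precisely why a single compromised node cannot leak the global data.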
In the Web3 ecosystem, several projects are dedicated to exploring these security technologies. For example, uPort launched a decentralized identity solution in 2017, and NKN released its mainnet based on a zero-trust model in 2019. In the field of fully homomorphic encryption, Mind Network is the first FHE project to go live on the mainnet and has collaborated with several well-known institutions.
As AI technology approaches human-level intelligence, building a robust defense system becomes ever more important. Fully homomorphic encryption not only addresses today's security issues but also lays the groundwork for an era of far more powerful AI. On the road to AGI, FHE may be not just an option but a prerequisite for the safe operation of AI systems.