Trusta.AI: Building Trust Infrastructure for Web3 in the Era of Human-Machine Symbiosis
1. Introduction
With the rapid development of AI infrastructure and the rise of multi-agent collaboration frameworks, AI-driven on-chain agents are quickly becoming the mainstay of Web3 interaction. Within the next 2-3 years, AI agents with autonomous decision-making capabilities are expected to be the first to see large-scale adoption in on-chain transactions and interactions, potentially replacing 80% of human on-chain behavior and becoming true on-chain "users."
These AI Agents are not simply scripts executing commands; they are intelligent entities capable of understanding context, continuous learning, and making complex judgments independently. They are reshaping on-chain order, promoting financial flows, and even guiding governance voting and market trends. The emergence of AI Agents signifies that the Web3 ecosystem is evolving from a "human participation" centric model to a new paradigm of "human-machine symbiosis."
However, the rapid rise of AI Agents has also brought unprecedented challenges: how to identify and authenticate the identities of these agents? How to assess the credibility of their actions? In a decentralized and permissionless network, how to ensure that these agents are not abused, manipulated, or used for attacks?
Therefore, establishing an on-chain infrastructure that can verify the identity and credibility of AI Agents has become a core proposition for the next stage of evolution in Web3. The design of identity recognition, reputation mechanisms, and trust frameworks will determine whether AI Agents can truly achieve seamless collaboration with humans and platforms, and play a sustainable role in the future ecosystem.
2. Project Analysis
2.1 Project Introduction
Trusta.AI is committed to building Web3 identity and reputation infrastructure through AI.
Trusta.AI has launched the first Web3 user value assessment system, the MEDIA Reputation Score, and has built the largest real-person verification and on-chain reputation protocol in Web3. It provides on-chain data analysis and real-person verification services for multiple top public chains, exchanges, and leading protocols, and has completed over 2.5 million on-chain verifications across mainstream chains, making it the largest identity protocol in the industry.
Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a threefold mechanism of identity creation, identity quantification, and identity protection to realize on-chain financial services and on-chain social interactions for AI Agents, building a reliable trust foundation in the era of artificial intelligence.
2.2 Trust Infrastructure - AI Agent DID
In the future Web3 ecosystem, AI Agents will play a crucial role: they can complete interactions and transactions on-chain and also perform complex operations off-chain. However, distinguishing genuine autonomous AI Agents from human-operated accounts is central to decentralized trust; without a reliable identity authentication mechanism, these agents are susceptible to manipulation, fraud, or abuse. This is precisely why the many applications of AI Agents in social, financial, and governance contexts must be built on a solid foundation of identity authentication.
The social attributes of AI Agents: The application of AI Agents in social scenarios is becoming increasingly widespread. For example, the AI virtual idol Luna can autonomously operate social media accounts and publish content; AIXBT serves as an artificial intelligence-driven cryptocurrency market intelligence analyst, writing market insights and investment advice around the clock. These types of agents establish emotional and informational interactions with users through continuous learning and content creation, becoming a new type of "digital community influencer" and playing an important role in guiding public opinion within blockchain social networks.
Financial attributes of AI Agents:
Autonomous asset management: Some advanced AI agents can already issue tokens autonomously. In the future, by integrating with verifiable on-chain architecture, they will hold asset custody rights and control the full process from asset creation and intent recognition through automated trade execution, even operating seamlessly across chains. For example, certain protocols enable AI agents to issue tokens and manage assets according to their own strategies, making them genuine participants and builders of the on-chain economy and ushering in the era of the "AI Subject Economy."
Intelligent investment decision-making: AI Agents are gradually taking on the roles of investment manager and market analyst, relying on large-model capabilities to process real-time on-chain data, formulate trading strategies precisely, and execute them automatically. On several platforms, AI has been embedded in trading engines, significantly improving market judgment and operational efficiency and achieving true on-chain intelligent investing.
On-chain autonomous payment: The essence of payment behavior is the transfer of trust, which must be built on clear identities. When AI Agents conduct on-chain payments, DID will become a necessary prerequisite. It not only prevents identity forgery and abuse, reduces financial risks such as money laundering, but also meets the compliance traceability needs of future DeFi, DAO, RWA, etc. At the same time, combined with a reputation scoring system, DID can also help establish payment credit, providing risk control basis and trust foundation for the protocol.
Governance Attributes of AI Agents: In DAO governance, AI agents can automate the analysis of proposals, assess community opinions, and predict implementation effects. Through deep learning of historical voting and governance data, the agents can provide optimization suggestions for the community, improve decision-making efficiency, and reduce the risks of human governance.
The application scenarios of AI agents are becoming increasingly rich, covering multiple fields such as social interaction, financial management, and governance decision-making, with their autonomy and intelligence levels continuously improving. Therefore, it is crucial to ensure that each agent has a unique and credible identity identifier (DID). Without effective identity verification, AI agents may be impersonated or manipulated, leading to a collapse of trust and security risks.
In the future, in a Web3 ecosystem fully driven by intelligent agents, identity authentication is not only the cornerstone of ensuring security but also a necessary defense for maintaining the healthy operation of the entire ecosystem.
As a pioneer in the field, Trusta.AI has taken the lead in establishing a comprehensive AI Agent DID certification mechanism with its leading technological strength and rigorous credibility system, providing solid assurance for the trustworthy operation of intelligent agents, effectively preventing potential risks, and promoting the steady development of the Web3 intelligent economy.
2.3 Project Overview
2.3.1 Financing Situation
January 2023: Completed a $3 million seed round financing, led by SevenX Ventures and Vision Plus Capital, with other participants including HashKey Capital, Redpoint Ventures, GGV Capital, SNZ Holding, among others.
June 2025: Completed a new funding round, with investors including ConsenSys, Starknet, GSR, UFLY Labs, and others.
2.3.2 Team Situation
Peet Chen: Co-founder and CEO, former Vice President of Digital Technology Group at a large tech company, Chief Product Officer of Security Technology, and former General Manager of a global digital identity platform.
Simon: Co-founder and CTO, former head of the AI security lab at a large tech company, with fifteen years of experience applying artificial intelligence technology to security and risk management.
The team has a strong technical accumulation and practical experience in artificial intelligence and security risk control, payment system architecture, and identity verification mechanisms. They have long been committed to the in-depth application of big data and intelligent algorithms in security risk control, as well as security optimization in underlying protocol design and high-concurrency trading environments, possessing solid engineering capabilities and the ability to implement innovative solutions.
3. Technical Architecture
3.1 Technical Analysis
3.1.1 Identity Establishment - DID + TEE
Through a dedicated plugin, each AI Agent obtains a unique decentralized identifier (DID) on the chain and securely stores it in a Trusted Execution Environment (TEE). In this black-box environment, critical data and computation processes are completely hidden, sensitive operations always remain private, and outsiders cannot spy on the internal workings, effectively building a solid barrier for AI Agent information security.
For agents created before the plugin was available, identity recognition relies on a comprehensive on-chain scoring mechanism, while newly integrated agents can directly obtain a DID-issued "certificate," establishing an AI Agent identity system that is self-controlled, authentic, and tamper-proof.
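Trusta has not published the plugin's internals, but the general pattern of deriving a decentralized identifier from an agent's keypair can be sketched as follows. The `did:agent` method name and the hash-truncation scheme here are illustrative assumptions, not Trusta's actual format:

```python
import hashlib
import secrets

def generate_agent_did(public_key: bytes, method: str = "agent") -> str:
    """Derive a deterministic DID-style identifier from an agent's public key.

    Hypothetical scheme for illustration only; the real Trusta plugin's
    DID method and encoding are not publicly documented.
    """
    digest = hashlib.sha256(public_key).hexdigest()
    return f"did:{method}:{digest[:32]}"

# Example: stand-in random bytes playing the role of an agent's public key
pubkey = secrets.token_bytes(32)
print(generate_agent_did(pubkey))  # prints a did:agent:<32 hex chars> identifier
```

Because the identifier is a hash of the public key, it is deterministic for a given key and cannot be forged without controlling the corresponding private key, which is the property the TEE-backed storage described above is meant to protect.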
3.1.2 Identity Quantification - Pioneering the SIGMA Framework
The Trusta team always adheres to the principles of rigorous evaluation and quantitative analysis, committed to creating a professional and trustworthy identity authentication system.
The Trusta team initially built and validated the MEDIA Score model in the "proof of humanity" scenario. This model comprehensively quantifies an on-chain user's profile across five dimensions: Monetary (interaction amount), Engagement (participation), Diversity, Identity, and Age.
The MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. With its comprehensive evaluation dimensions and rigorous methodology, it has been widely adopted by several leading public chains as a reference standard for airdrop eligibility screening. Beyond interaction volume, it covers multi-dimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping projects accurately identify high-value users and improve the efficiency and fairness of incentive distribution, reflecting its authority and wide recognition in the industry.
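The article does not disclose how the five MEDIA dimensions are weighted or combined. Purely as an illustration, the aggregation can be modeled as a weighted sum over normalized per-dimension scores; the `MediaProfile` structure and the weights below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MediaProfile:
    """Per-dimension scores, each normalized to [0, 1] (hypothetical layout)."""
    monetary: float    # interaction amount
    engagement: float  # participation
    diversity: float   # contract diversity
    identity: float    # identity characteristics
    age: float         # account age

# Hypothetical weights; Trusta's actual weighting is not public.
WEIGHTS = {"monetary": 0.30, "engagement": 0.25, "diversity": 0.20,
           "identity": 0.15, "age": 0.10}

def media_score(p: MediaProfile, scale: float = 100.0) -> float:
    """Weighted sum of the five dimensions, scaled to a 0-100 range."""
    return scale * sum(getattr(p, k) * w for k, w in WEIGHTS.items())

user = MediaProfile(monetary=0.8, engagement=0.6, diversity=0.5,
                    identity=1.0, age=0.4)
# 0.8*0.30 + 0.6*0.25 + 0.5*0.20 + 1.0*0.15 + 0.4*0.10 = 0.68 -> 68.0
print(round(media_score(user), 1))
```

A weighted sum keeps each dimension's contribution independent, matching the description of MEDIA as a breadth-oriented assessment of multifaceted engagement.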
Based on the successful establishment of the human user assessment system, Trusta has migrated and upgraded the experience of MEDIA Score to the AI Agent scenario, creating a Sigma assessment system that is more aligned with the behavioral logic of intelligent agents.
The Sigma scoring mechanism constructs a logical closed-loop evaluation system from "capability" to "value" based on five dimensions. MEDIA focuses on assessing the multifaceted engagement of human users, while Sigma pays more attention to the professionalism and stability of AI agents in specific fields, reflecting a shift from breadth to depth, which better meets the needs of AI Agents.
Specification (professional competence) comes first. Engagement reflects whether the agent is stably and continuously invested in real interactions, a key support for building subsequent trust and effectiveness. Influence captures the reputation feedback the agent generates in the community or network after participating, representing its credibility and reach. Monetary assesses whether the agent can accumulate value and maintain financial stability within the economic system, laying the foundation for a sustainable incentive mechanism. Finally, Adoption serves as the comprehensive indicator, representing how widely the agent is accepted in actual use, and is the final validation of all prior capabilities and performance.
This system is layered and clear in structure, capable of comprehensively reflecting the overall quality and ecological value of AI Agents, thereby enabling a quantitative assessment of AI performance and value, transforming abstract advantages and disadvantages into a specific, measurable scoring system.
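As with MEDIA, the actual SIGMA formula is not public. One way to capture the "closed loop from capability to value" described above is a geometric mean, under which a weak link in any single dimension (for example, zero Adoption) caps the whole score; the aggregation below is an illustrative assumption only:

```python
import math

def sigma_score(specification: float, engagement: float, influence: float,
                monetary: float, adoption: float) -> float:
    """Hypothetical SIGMA aggregation: geometric mean of the five dimensions,
    each normalized to [0, 1], scaled to 0-100. Not Trusta's actual formula."""
    dims = [specification, engagement, influence, monetary, adoption]
    if any(d < 0 or d > 1 for d in dims):
        raise ValueError("dimensions must be normalized to [0, 1]")
    return 100 * math.prod(dims) ** (1 / len(dims))

# A strong agent with weak real-world adoption still scores modestly,
# reflecting that Adoption validates all prior capabilities.
print(round(sigma_score(0.9, 0.8, 0.7, 0.6, 0.5), 1))
```

Contrasting this with a weighted sum makes the MEDIA/SIGMA distinction concrete: a sum rewards breadth of activity, while a multiplicative form penalizes any missing link in the capability-to-value chain.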
Currently, the SIGMA framework has advanced cooperation with multiple well-known AI agent networks, demonstrating its immense application potential in AI agent identity management and reputation system construction, and is gradually becoming the core engine driving the construction of trusted AI infrastructure.
3.1.3 Identity Protection - Trust Evaluation Mechanism
In a truly resilient and highly trustworthy AI system, the most critical aspect is not only the establishment of identity but also the continuous verification of that identity. Trusta.AI introduces a continuous trust assessment mechanism that can monitor certified intelligent agents in real-time to determine whether they are being illegally controlled, encountering attacks, or experiencing unauthorized human intervention. The system identifies potential deviations that may occur during the operation of the agents through behavioral analysis and machine learning, ensuring that every agent's action remains within the established policies and frameworks. This proactive approach ensures that any deviation from expected behavior is immediately detected, triggering automatic protective measures to maintain the integrity of the agents.
Trusta.AI has established a security guard mechanism that is always online, continuously reviewing every interaction process to ensure that all operations comply with system specifications and established expectations.
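The continuous trust assessment itself is proprietary, but the core idea of behavioral monitoring can be sketched: maintain a rolling baseline for a numeric behavior signal per agent (for example, transactions per minute) and flag readings that deviate sharply from it. The window size, threshold, and signal choice below are all assumptions:

```python
from collections import deque

class TrustMonitor:
    """Illustrative continuous-evaluation sketch (not Trusta's actual system).

    Keeps a rolling window of a behavior signal and flags readings that
    deviate beyond k standard deviations from the baseline mean.
    """
    def __init__(self, window: int = 50, k: float = 3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it is anomalous vs. the baseline."""
        anomalous = False
        if len(self.history) >= 10:  # require a minimal baseline first
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5
            anomalous = abs(value - mean) > self.k * max(std, 1e-9)
        self.history.append(value)
        return anomalous

monitor = TrustMonitor()
for v in [5, 6, 5, 4, 6, 5, 5, 6, 4, 5]:  # normal activity baseline
    monitor.observe(v)
print(monitor.observe(50))  # sudden burst, flagged as anomalous: True
```

In a production setting this check would feed the "automatic protective measures" described above, e.g. suspending the agent's DID or requiring re-verification when a deviation is detected.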
3.2 Product Introduction
3.2.1 AgentGo
Trusta.AI assigns a decentralized identifier (DID) to each on-chain AI Agent, and rates and indexes it based on on-chain behavioral data, creating a verifiable and traceable trust system for AI Agents. Through this system, users can efficiently identify and filter AI Agents.