Key takeaways
- ANS eliminates the O(n²) trust problem by anchoring AI agent identity to DNS—leveraging internet-scale infrastructure that already handles billions of queries per second.
- Reusing existing infrastructure saved approximately three months of engineering time by integrating with production-grade ACME certificate services rather than building custom PKI from scratch.
- A Merkle tree-based Transparency Log provides tamper-evident, publicly verifiable identity attestation that enables decentralized agent marketplaces without requiring trust in any single database.
Shifting from an internet where humans evaluate individual websites to one where a complex matrix of AI agents autonomously makes decisions will require a new way to manage trust relationships. We believe the Agent Name Service (ANS) is an open standard that can fill this void in a scalable, secure, and verifiable manner.
Key architectural decisions aligned with the company's One System principles—a set of design philosophies emphasizing simplicity, abstraction, secure-by-design, and reuse of battle-tested infrastructure—promoting a coherently designed system. We:
- leveraged decades of DNS operational expertise instead of inventing a new namespace.
- integrated with certificate infrastructure issuing millions of SSL/TLS certs annually.
- implemented a Transparency Log using a Merkle tree plus pub/sub to enable decentralized agent marketplaces.
This post describes how we built a production-grade, cryptographically verifiable trust registry for AI agents in a few months by orchestrating existing GoDaddy platform infrastructure rather than building from scratch, which let us incorporate scaled, audit-ready infrastructure into the ANS implementation.
The problem: O(n²) trust in an agent economy
AI agents are rapidly proliferating across industries. However, there's no universal way for trusted agents to discover each other and securely interoperate. Without a foundational identity layer, integration can require manual credential exchange, bilateral legal agreements, custom naming configurations, and ongoing operational overhead per partner. For example, a small business might want:
- their support chatbot (Agent A) to check inventory via their supplier's agent (Agent B),
- to process payments through a payment gateway agent (Agent C), and
- to log interactions to their CRM agent (Agent D).
Today, this requires up to six bilateral integrations. At 1,000 agents, that expands to nearly 500,000 pairwise relationships (n(n-1)/2 = 499,500). This approach doesn't scale. We needed internet-scale trust infrastructure with the same reliability as DNS, which offers perpetual uptime while serving many millions of queries per second.
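The quadratic blow-up above is just "n choose 2". A quick sketch (the function name is ours, purely for illustration):

```python
def pairwise_integrations(n: int) -> int:
    """Bilateral integrations needed for n agents to interoperate: n choose 2."""
    return n * (n - 1) // 2

print(pairwise_integrations(4))     # the four-agent small-business example: 6
print(pairwise_integrations(1000))  # 499500 pairwise relationships
```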
One of the benefits of reusing existing infrastructure and concepts is that we didn't have to redesign systems that have already proven to work well. Still, we had to evaluate options and make decisions in four key areas that shaped our ANS implementation:
- Namespace design
- Certificate management
- Identity attestation
- Identity provider orchestration
The following sections describe the decisions we made in building ANS.
Decision 1: DNS as the cryptographic root of trust
We faced a trade-off in designing the namespace. Option A was to invent a new global namespace. Option B was to anchor identity to DNS FQDNs, such as chatbot.acme.com.
| Decision Factor | New Namespace | DNS-Based (ANS) |
|---|---|---|
| Discoverability | Requires custom resolver infrastructure | Existing recursive resolvers (billions of queries/day) |
| Trust Bootstrapping | New proof-of-ownership protocol needed | ACME DNS-01 (RFC 8555); leverages domain ownership |
| Operational Tooling | Build from scratch | dig, nslookup, monitoring systems exist |
| DNSSEC Integration | Complex custom crypto | Native DNSSEC signing available |
| Developer Familiarity | New learning curve | Every engineer understands DNS |
We chose the existing domain name system, making ANS universally reachable via DNS. Every agent is identified by a unique fully qualified domain name. The key insight is that DNS isn't just a naming system. It serves as the Internet's decentralized trust anchor that protects against agent name collisions.
We applied One System principles in this decision, starting with simplicity. Namely, in using domain names as the identity root, we didn't need to build namespace resolution infrastructure; the simplest path, reusing DNS, was the right one.
For abstraction, by decoupling ANS identity from DNS implementation details, we can evolve DNS providers while maintaining consistent agent version identity guarantees. Initially, we used GoDaddy DNS APIs, but through Domain Connect we can work with almost any DNS provider without changing the agent identity model. For consistency, every agent receives the same DNS-based identity treatment across all environments, including development, test, and production. The behavior remains identical whether the agent is hosted on GoDaddy infrastructure or external platforms.
To maintain a clean architectural boundary, we designed the ANS Registry as a unified gateway that hides infrastructure complexity from the user. For a partner, the interface is simple: you provide the registration request, Certificate Signing Requests (CSRs), and the Agent Card or agent metadata location. The Registry then orchestrates the "heavy lifting" behind the scenes: delegating to DNS platform services to validate ownership and provision records. This ensures that while the user interacts with a modern Agent API, the underlying trust is anchored in battle-tested DNS infrastructure.
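To make that partner-facing surface concrete, here is a hypothetical registration payload. The field names are illustrative assumptions, not the official ANS API schema; consult the ANS Specification for the actual contract.

```python
# Hypothetical registration payload. Field names are illustrative only.
registration_request = {
    "agent_fqdn": "chatbot.acme.com",  # DNS-anchored agent identity
    "csrs": {                          # Certificate Signing Requests
        "public": "-----BEGIN CERTIFICATE REQUEST-----\n...",
        "identity": "-----BEGIN CERTIFICATE REQUEST-----\n...",
    },
    # Location of the Agent Card (agent metadata)
    "agent_card_url": "https://chatbot.acme.com/.well-known/agent-card.json",
}
```

Everything else, such as ownership validation and record provisioning, happens behind the Registry's gateway.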
A key design choice is that we publish four DNS record types per agent.
| Record Type | Purpose | Example |
|---|---|---|
| TXT _ans. | Agent Card URL | v=ans1; version=1.2.0; protocols=mcp,a2a |
| HTTPS | Service binding (RFC 9460) | 1 . alpn=h2 port=443 |
| TLSA _443._tcp. | Certificate pinning (DANE) | 3 1 1 DCB78FC62FCE... |
| TXT _ra-badge. | Transparency Log proof | v=ra-badge1; url=https://transparency.ans.godaddy.com/... |
These records effectively tie the agent card location (_ans) to the immutable agent identity information (_ra-badge). These DNS records should be cryptographically signed via DNSSEC.
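As a sketch of how a client might consume the _ans TXT record format shown in the table (the key/value layout comes from the example column; the helper name is ours):

```python
def parse_ans_txt(record: str) -> dict:
    """Parse an _ans TXT record like 'v=ans1; version=1.2.0; protocols=mcp,a2a'
    into a dict of its key/value fields."""
    fields = {}
    for part in record.split(";"):
        key, _, value = part.strip().partition("=")
        if key:
            fields[key] = value
    return fields

card = parse_ans_txt("v=ans1; version=1.2.0; protocols=mcp,a2a")
print(card["version"])                 # 1.2.0
print(card["protocols"].split(","))    # ['mcp', 'a2a']
```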
Performance at scale is key here. DNS infrastructure typically provides p99 query latency under 100ms, DNSSEC validation adds under 5ms of overhead, and anycast resolvers offer 100% global reachability. While the registry API enforces a baseline rate limit, discovery via DNS remains at internet scale. Blockchain alternatives compare poorly for this workload: finality latency ranges from 400ms to 15 minutes on Solana or Ethereum versus under 10ms on anycast DNS; transactions carry variable gas fees, even on L2s, and require wallet management and token funding; and global blockchains currently top out at roughly 2k-50k transactions per second, while DNS handles billions of queries per second globally. The verdict: while modern blockchains such as L2s and PoS chains have improved speed and energy efficiency, DNS still provides orders-of-magnitude better read latency and global scale for the specific use case of high-frequency discovery.
Decision 2: Reuse ACME service infrastructure
Every agent needs two certificates with different purposes. The first is a public server certificate trusted by browsers, currently valid for 90 days, and tied to the agent's unique fully qualified domain name. The second is a private identity certificate that's version-bound and enables fast, automated attestation.
Managing certificate issuance, renewal, revocation, and monitoring is operationally expensive. GoDaddy's ACME Service API already handles this at scale. It:
- currently manages more than 100 million active SSL/TLS certificates.
- supports automated ACME DNS-01/HTTP-01 challenges.
- includes retry logic for transient DNS propagation failures.
- provides rate limiting and quota management.
- integrates CRL/OCSP for revocation checking.
Our reuse strategy was to integrate with an existing ACME Service API through well-defined boundaries instead of reimplementing ACME.
How is this consistent with One System? Again, starting with the simplicity principle, certificate lifecycle management is undifferentiated heavy lifting; the simplest path was to integrate with existing ACME Service infrastructure rather than rebuild it.
For abstraction, the ACME Service API exposes only externalizable APIs; the ANS Registry doesn't access its database directly, and all interactions go through REST endpoints. This decoupling means the underlying service can change its internal implementation, such as migrating databases or refactoring code, without affecting ANS. For iteration, by reusing production-grade ACME infrastructure, we developed the certificate lifecycle management in days, not months, allowing us to focus on ANS-specific identity semantics. We learned through iterations rather than waiting for perfection.
Certificate issuance is not instant: DNS propagation alone typically completes within 60 seconds. We handle this with a job-based async processing pattern backed by DynamoDB state tracking. Namely:
- The agent hosting platform submits CSRs.
- The Registration Authority (RA) validates the CSR structure, submits the order to the ACME-as-a-Service API, and returns an `order_id` immediately with a `202 Accepted` status.
- The ACME-as-a-Service API issues an ACME DNS-01 challenge, stores state in DynamoDB, and uses a background job to poll for DNS propagation over 30-180 seconds.
- The Certificate Authority (CA) validates the ACME challenge response, issues the certificate, and returns it to ACME Service API.
- The ANS Registry Job Scheduler polls order status every 10 seconds, retrieves issued certificates, and seals them to the Transparency Log.
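The Job Scheduler's polling step can be sketched as follows. `FakeAcmeService` is an in-memory stand-in for the real ACME Service API, and both class and method names are our assumptions for illustration:

```python
import time

class FakeAcmeService:
    """In-memory stand-in for the ACME Service order API (illustrative only)."""
    def __init__(self, polls_until_issued: int = 3):
        self._remaining = polls_until_issued

    def get_order_status(self, order_id: str) -> str:
        # Simulate DNS propagation delay: status stays "pending" for a few polls.
        self._remaining -= 1
        return "valid" if self._remaining <= 0 else "pending"

def await_certificate(service, order_id: str, interval_s: float = 10.0,
                      max_polls: int = 30) -> bool:
    """Poll order status every interval_s seconds (as the Job Scheduler does)
    until the certificate is issued or we give up."""
    for _ in range(max_polls):
        if service.get_order_status(order_id) == "valid":
            return True
        time.sleep(interval_s)
    return False
```

In the real flow, once the order is valid, the scheduler retrieves the issued certificate and seals it to the Transparency Log.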
Reuse brought significant time savings, as the following table illustrates:
| Capability | Build Time if Custom | Reuse Time | Savings |
|---|---|---|---|
| ACME DNS-01/HTTP-01 | 6-8 weeks | 2 days | 95% time reduction |
| CA integrations (public + private) | 4 weeks | 1 day | 98% time reduction |
| Retry/failure handling | 3 weeks | 0 days (inherited) | 100% time reduction |
| Monitoring/alerting | 2 weeks | 1 day | 90% time reduction |
The total estimated savings amounted to about three months of engineering time.
Decision 3: Transparency log as single source of truth
Databases are mutable, allowing rows to be updated or deleted. For identity attestation, we need immutability and public verifiability.
Requirements include append-only functionality so past registrations cannot be altered or removed. They also include cryptographic proof so anyone can verify an agent exists without trusting our database. The system must be tamper-evident, making modifications to the log detectable. Verification must be efficient without requiring downloading the entire log, which could be multi-GB.
Our solution was a Merkle tree plus key management service (KMS) signing. We implemented a Certificate Transparency-style log following RFC 6962. For each append event, we hash the new registration, insert it as a leaf, recompute the path to root, and sign the root with KMS. For inclusion proof, we return log₂(N) sibling hashes proving the event exists. For consistency proof, we prove the new tree is an extension of the old tree with no history rewrite.
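A minimal sketch of the RFC 6962 hashing scheme described above: domain-separated leaf/node hashes, log₂(N) inclusion proofs, and verification. KMS signing of the root is omitted, and the proof encoding (side markers rather than index arithmetic) is a simplification of ours.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(entry: bytes) -> bytes:
    return _h(b"\x00" + entry)  # RFC 6962: 0x00 prefix for leaves

def node_hash(left: bytes, right: bytes) -> bytes:
    return _h(b"\x01" + left + right)  # 0x01 prefix for interior nodes

def _split(n: int) -> int:
    """Largest power of two strictly less than n (RFC 6962 tree split)."""
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def merkle_root(leaves: list[bytes]) -> bytes:
    if len(leaves) == 1:
        return leaf_hash(leaves[0])
    k = _split(len(leaves))
    return node_hash(merkle_root(leaves[:k]), merkle_root(leaves[k:]))

def inclusion_proof(index: int, leaves: list[bytes]) -> list[tuple[str, bytes]]:
    """log2(N) sibling hashes proving leaves[index] is in the tree."""
    if len(leaves) == 1:
        return []
    k = _split(len(leaves))
    if index < k:
        return inclusion_proof(index, leaves[:k]) + [("R", merkle_root(leaves[k:]))]
    return inclusion_proof(index - k, leaves[k:]) + [("L", merkle_root(leaves[:k]))]

def verify_inclusion(entry: bytes, proof: list[tuple[str, bytes]], root: bytes) -> bool:
    """Recompute the root from the leaf and its siblings; no full log needed."""
    h = leaf_hash(entry)
    for side, sibling in proof:
        h = node_hash(sibling, h) if side == "L" else node_hash(h, sibling)
    return h == root
```

A verifier only needs the entry, the proof, and the signed root, never the multi-GB log itself.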
We applied the One System principles of connected data and abstraction here as well: the Transparency Log does not operate in isolation; it publishes structured events to an event stream using SNS/SQS, enabling decentralized discovery services to subscribe and build competitive agent marketplaces. This embodies collective intelligence in action, as no single system needs to analyze the data in isolation.
As a "secure by design" system, we incorporated immutability and public verifiability into the design, not as an afterthought; the KMS signing ensures the cryptographic root of trust is HSM-backed with audit logs and IAM policies, and every design review includes security considerations. For extensibility, third-party discovery services can subscribe to registration events without needing direct database access; this allows a marketplace of developers to build analytics, compliance monitoring, or custom discovery logic without central coordination.
We chose KMS instead of local key storage for the following reasons.
| Factor | Local Key | AWS KMS |
|---|---|---|
| Security | Vulnerable to memory dumps, insider threats | HSM-backed, audit logs, IAM policies |
| Disaster Recovery | Complex backup/restore procedures | Multi-region key replication |
| Compliance | Manual audit trails | Automatic CloudTrail logging |
| Key Rotation | Manual, error-prone | Automated with alias versioning |
The trade-off is that KMS adds about 20ms latency per signing operation, but we accept this for hardened security. This aligns with "secure by design", as we chose security over raw performance.
Decision 4: Thin orchestrator (RA) vs. monolithic service
We considered whether the ANS Registration Authority should be monolithic, owning DNS provisioning, certificate issuance, log storage, and monitoring, or a thin orchestrator delegating to specialized platform services. The following table outlines concerns for both options:
| Concern | Monolithic | Thin Orchestrator |
|---|---|---|
| Development Velocity | Slower (rebuild everything) | Faster (compose existing) |
| Operational Complexity | High (one team owns all infra) | Distributed (leverage platform SLAs) |
| Failure Blast Radius | Entire system down if RA fails | Isolated failures (DNS down ≠ TL down) |
| Scaling Constraints | Vertical (bigger RA instances) | Horizontal (scale components independently) |
We chose a thin orchestrator model: an RA that coordinates specialized services rather than reimplementing them. While the RA maintains a persistence layer to track asynchronous registration states, cache Agent Card metadata, and power a Search API, it is not the authoritative source for identity. By delegating the "source of truth" to DNS (for naming), the ACME Service (for certificates), and the Transparency Log (for history), we ensure that the RA remains a scalable coordination layer rather than a monolithic, proprietary silo.
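The thin-orchestrator shape can be sketched with interface boundaries. The service interfaces and method names below are illustrative assumptions, not the actual platform contracts:

```python
from typing import Protocol

# Interface sketches for the platform services the RA delegates to.
class DnsService(Protocol):
    def provision_records(self, fqdn: str) -> None: ...

class AcmeService(Protocol):
    def submit_order(self, csr: str) -> str: ...

class TransparencyLog(Protocol):
    def append(self, event: dict) -> None: ...

class RegistrationAuthority:
    """Thin orchestrator: coordinates specialized services, owns no source of truth."""
    def __init__(self, dns: DnsService, acme: AcmeService, tlog: TransparencyLog):
        self.dns, self.acme, self.tlog = dns, acme, tlog

    def register(self, fqdn: str, csr: str) -> str:
        self.dns.provision_records(fqdn)        # naming: delegated to DNS
        order_id = self.acme.submit_order(csr)  # certificates: ACME Service
        self.tlog.append({"fqdn": fqdn, "order_id": order_id})  # history: TL
        return order_id
```

Because the RA talks only to these contracts, any one service can be swapped or upgraded without touching the coordination code.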
In terms of One System principles, a simpler architecture leads to a simpler system: the RA focuses on orchestration logic, not infrastructure concerns. Each platform component, such as the DNS APIs, the ACME service, and the Transparency Log, has clear functional boundaries and self-contains its own business logic. For abstraction, decoupling these components makes them easier to build and maintain; the RA does not manage the underlying DNS infrastructure or the specifics of how certificates are issued. It interacts only via defined API contracts, which allows us to swap or upgrade internal implementations without touching the core RA coordination code.
A bridge to the multi-agent web
While our initial implementation focuses on Agent2Agent and MCP protocols, the ANS design is intentionally extensible. Because we anchor identity to the agent's FQDN, we can support multiple identifiers for a single agent. For example, by including a Decentralized Identifier within the Agent Card or the certificate metadata, ANS acts as a bridge between traditional Web2 infrastructure and the Web3 agent space. This allows agents to operate seamlessly across walled gardens, carrying their verified ANS-backed identity foundation into any decentralized environment.
Conclusion
The transition to an autonomous agent economy requires more than just better AI; it requires a fundamental shift in how we handle digital trust. By applying One System principles of simplicity, abstraction, and secure-by-design to GoDaddy's battle-tested infrastructure, we have moved beyond a conceptual design to a production-ready reality.
ANS doesn't try to reinvent the internet. Instead, it orchestrates the scale of DNS, the security of PKI, and the transparency of Merkle trees to provide a single, verifiable identity for every agent, on every platform. Whether an agent is hosted within a specialized enterprise platform, a hyperscaler environment, or a fully decentralized ecosystem, ANS provides the neutral, cryptographic bridge needed for agents to move from simple conversation to complex, autonomous commerce.
We invite you to explore the ANS Specification, register your first agent, and join us in building a truly open and trustworthy agentic web.