AI Agents are all the rage in 2025, and a new framework gets released every week. At their core, these frameworks share a common foundation of essential concepts and powerful abstractions. Everything else is an exercise in idiosyncrasy by the framework developer. Starting from the first principles of any concept establishes a strong mental model, enables productive technical discussions, and engenders sound implementations. First-principles thinking also gives maximum flexibility to experiment with the eventual solution. In this post, I outline the design principles behind our approach to building AI agents, interesting outcomes, and future potential.
Background
AI Agents need no detailed introduction because of their ubiquity. Briefly, AI Agents are LLM-centric applications that delegate execution logic and tool selection to the underlying LLM. In oversimplified terms, an LLM, equipped with tools, is invoked in a loop until an objective or an exit criterion is satisfied. Anthropic's blog on Building Effective Agents provides valuable insights and detailed information on agent building, and I recommend it to anyone interested in this topic.
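To make that loop concrete, here is a minimal sketch of the pattern. `call_llm` and `execute_tool` are hypothetical stand-ins for a model provider's chat API and for your own tool dispatcher; the real exit criterion will depend on the agent.

```python
def run_agent(objective: str, tools: list[dict], max_turns: int = 10) -> str:
    """Minimal agent loop: call the LLM with tools until it stops requesting them."""
    messages = [{"role": "user", "content": objective}]
    for _ in range(max_turns):
        # call_llm is a hypothetical wrapper around a model provider's chat API
        response = call_llm(messages=messages, tools=tools)
        if not response.tool_calls:
            # No tool requested: the LLM considers the objective (or exit criterion) satisfied
            return response.content
        messages.append({"role": "assistant", "tool_calls": response.tool_calls})
        for tool_call in response.tool_calls:
            # execute_tool is a hypothetical dispatcher mapping a tool name to real code
            result = execute_tool(tool_call.name, tool_call.arguments)
            messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result})
    return "Stopped: maximum number of turns exceeded."
```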
Core principles
Our approach is based on the premise that LLMs will become increasingly capable and efficient at tool usage. It rests on two principles:
- tool centricity
- decoupling decision from delivery
We deliberately left out implementation and communication-protocol aspects, as we believe they will naturally follow from these two principles. Moreover, the bulk of the heavy lifting is done by the LLM, and it pays to keep the scaffolding around it as light as possible. This meant that we spent more time educating engineers and evangelizing the principles than we would have if we had chosen one of the publicly available agent development frameworks. Finally, the GoDaddy developer ecosystem spans multiple programming languages, whereas most frameworks are written in Python. Focusing on concepts rather than a particular programming language made it easier for engineers to learn and adapt.
Tool centricity
LLMs have become increasingly good at tool calling, so it's only natural to leverage this capability. Being tool-centric means placing emphasis on tool definitions and their granularity. It goes beyond treating tools as atomic functions: tools are the gateway through which agents interact with software systems. This realization is evident in the rapid rise in popularity and adoption of the Model Context Protocol (MCP).
This principle was made viable by two key features offered by LLM providers:
Structured outputs - One of the biggest unlocks of the last 18 months has been the "structured output" or "function calling" feature, which allows applications to receive reliable, schema-conforming outputs from LLMs. Most of the initial use of this feature was to map a structured output to a simple function call in code before returning the output to a human. This has since evolved: most model providers now allow, or even encourage, applications to return the output of a tool call back to the LLM so it can reflect on the next steps.
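As a rough illustration (not our production schema), a tool definition in the JSON-Schema style accepted by most providers pairs a name and description with a typed argument schema; `save_preference` below is a made-up example.

```python
# A made-up tool definition in the JSON-Schema style used by most model providers.
save_preference_tool = {
    "name": "save_preference",
    "description": "Persist a user preference so future sessions can use it.",
    "parameters": {
        "type": "object",
        "properties": {
            "key": {"type": "string", "description": "Preference name, e.g. 'language'"},
            "value": {"type": "string", "description": "Preference value, e.g. 'en-US'"},
        },
        "required": ["key", "value"],
    },
}
```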
Multiple tool calls - Model providers support multiple tool calls, where an LLM returns more than one tool call in a single response. Close inspection reveals that this happens when the LLM considers the underlying actions to be parallel or independent in nature. For example, calling a memory tool to save a user preference and calling a knowledge base tool to retrieve some other information can be done in parallel.
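A sketch of how such a response might be handled: independent tool calls can be dispatched concurrently and their results returned to the LLM together. This assumes the same hypothetical `execute_tool` dispatcher as in the earlier loop sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(tool_calls):
    """Execute independent tool calls in parallel and collect their results."""
    with ThreadPoolExecutor() as pool:
        futures = {
            tc.id: pool.submit(execute_tool, tc.name, tc.arguments) for tc in tool_calls
        }
    # One "tool" message per call, so the LLM can reflect on all results at once
    return [
        {"role": "tool", "tool_call_id": call_id, "content": future.result()}
        for call_id, future in futures.items()
    ]
```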
As a result of these unlocks, this principle extends to treating every action - including user communication - as a tool call.
Decoupling decision from delivery
The primary purpose of tools is to enable agents to interact with the various systems that implement business logic. A natural side-effect of being tool-centric is the encapsulation of complex business logic within tools. This decouples the agent from the vagaries of business logic and lets it focus solely on deciding which tool to call next. The decoupling principle dictates that the agent should only care about the "what", leaving the "how" to the tool.
This principle stems from two realizations:
Determinism belongs in a tool - A common challenge engineers face when they first build AI Agents is balancing the determinism of outcomes with the dynamism that agents offer. But the determinism the business requires mostly lies in orchestrating granular workflows. These can be encapsulated within a tool, and every time the agent chooses to execute that tool, the tool guarantees determinism.
Complexity belongs in a tool - Another key insight is that complex business logic and workflows should be encapsulated within tools, not spread across the agent's decision-making process. This not only makes the agent's code simpler and more maintainable, but also allows business logic to evolve independently of the agent's architecture. When business requirements change, only the relevant tools need to be updated, while the agent's core decision-making remains stable.
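To illustrate both points, consider a hypothetical `issue_refund` tool: the agent only decides whether to call it, while the fixed, ordered steps inside (validation, policy lookup, payment call, audit) stay deterministic and can evolve without touching the agent. The internal clients below are placeholders.

```python
def issue_refund(order_id: str, reason: str) -> str:
    """Hypothetical tool: the agent decides *whether* to refund; the tool owns *how*."""
    order = orders_api.get(order_id)            # hypothetical internal client
    if order.status != "paid":
        return f"Refund rejected: order {order_id} is not in a refundable state."
    amount = refund_policy.amount_for(order)    # deterministic business rule
    payment_gateway.refund(order.payment_id, amount)
    audit_log.record("refund", order_id=order_id, amount=amount, reason=reason)
    return f"Refunded {amount} for order {order_id}."
```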
Learnings
Having implemented these principles in practice, we've gathered several key insights that validate and extend our initial approach. The two most important are:
Everything is a tool - The fundamental realization was that almost any action can be formulated as a tool at the right granularity. Take user interaction as an example. This is typically formulated as a "text response" from the LLM and kept separate from a "tool response". Instead, we defined a UserInteractionTool that supports actions such as inform, confirm, request, and respond. The modality of user interaction (the "how") is now decoupled from the decision to invoke a particular action (the "what").
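A minimal sketch of the idea (not our exact implementation) might look like this, with the delivery channel resolved inside the tool rather than by the agent:

```python
class UserInteractionTool:
    """Sketch: the agent picks the action (the "what"); the tool picks the channel (the "how")."""

    def __init__(self, channel):
        self.channel = channel  # e.g. a chat widget, SMS gateway, or email client (placeholder)

    def inform(self, message: str) -> None:
        self.channel.send(message)                # one-way notification

    def confirm(self, question: str) -> bool:
        return self.channel.ask_yes_no(question)  # yes/no confirmation

    def request(self, prompt: str) -> str:
        return self.channel.ask(prompt)           # ask the user for input

    def respond(self, answer: str) -> None:
        self.channel.send(answer)                 # answer a user question
```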
LLM verbosity is a boon - The nature of tool calling allows engineers to specify arbitrary schemas that the LLM dutifully adheres to when responding. We took advantage of this by inserting a "reason" field in our tool definitions. As a result, we get not only the tool call but also a justification for the decision. This is a great way to understand why the LLM chose a particular tool, which in turn makes debugging agent loops and tuning prompts much easier.
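Concretely, this amounts to one extra required property in each tool's argument schema; the hypothetical `lookup_order` tool below shows the shape.

```python
# Every tool schema carries a required "reason" field so the LLM justifies its choice.
lookup_order_tool = {
    "name": "lookup_order",
    "description": "Fetch the details of an order by its identifier.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "reason": {
                "type": "string",
                "description": "Why this tool was chosen for the current step.",
            },
        },
        "required": ["order_id", "reason"],
    },
}
```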
Conclusion
Our initial experiments with AI agents at GoDaddy have shown promising results with this first-principles approach. The tool-centric design and the decoupling of decisions from delivery are emerging as valuable patterns for creating flexible systems that can adapt to both business needs and LLM capabilities.
As LLMs continue to evolve and new frameworks emerge, these core principles will remain relevant. They provide a solid foundation for building AI agents that are not just powerful, but also maintainable and scalable. The future of AI agents lies not in choosing the right framework, but in understanding and applying these fundamental principles to create systems that can adapt and grow with the technology.