AI Agent Debugger is a visual debugging tool for AI Agent developers. Unlike debugging approaches that focus solely on model input and output, AI Agent Debugger extends the debugging scope to the complete Agent execution process. It clearly displays each round of dialogue, every model call, MCP tool call, custom Skill execution, and the final output, helping developers observe the Agent's operational chain and quickly locate issues in prompts, model configurations, tool calls, or business logic.

AI Agent Debugger is applicable to the following scenarios:

- Debugging AI large model tool call chains, and troubleshooting tool parameters, execution results, or exception causes
- Comparing the performance of different models executing the same task, and evaluating key metrics such as response time, Token consumption, and cost
- Verifying whether MCP Server integration with AI large models meets expectations
- Iteratively optimizing system prompts and observing the impact of different configurations on execution results
It is recommended to use the latest Apidog client to experience the full functionality of AI Agent Debugger.
Create New Agent Debug Session#
Navigate to AI Agent Debugger from the top tab bar in Apidog. The upper section of the page is used for configuring the model and run status:

- Select the model provider on the left, such as OpenAI or Anthropic.
- Select the model in the middle, such as gpt-5.5.
- After selecting the provider and model, the corresponding Base URL will be automatically matched, such as https://api.openai.com/v1; no manual entry is required.
- Click Run to start debugging.
Configure the Agent's input content in the Prompts tab. The page is divided into two input areas:

- System Prompt: Defines the Agent's role, goals, constraints, and tool usage rules; it is part of the Agent configuration (see the example after this list).
- User Prompt: Holds the test input for this session, such as "What's Apidog?".
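For illustration, a minimal system prompt for a documentation-answering Agent might look like the following. The wording here is an example, not a required format; adapt the role, constraints, and tool rules to your own task:

```
You are a documentation assistant for Apidog.
- Answer questions about Apidog features concisely.
- When you need up-to-date page content, call the web_fetch tool.
- If a question is unrelated to Apidog, say so instead of guessing.
```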
After completing the configuration, click Run in the upper right corner to start debugging. If you wish to automatically clear the input box after sending, you can check Clear after Send.

In the Tools tab, you can select the tools available for the Agent to call during runtime. The number on the tab indicates the current number of available or configured tools. Tools are divided into two categories: built-in tools and MCP Servers.

AI Agent Debugger provides commonly used built-in tools for AI large models to read files, search content, execute commands, or fetch web content.

| Tool | Description |
|---|---|
| bash | Execute commands in a persistent Shell session |
| web_fetch | Fetch web content and convert it to Markdown, text, or HTML |
| read | Read text, image, or PDF files |
| edit | Perform precise string replacement on files |
| write | Create or overwrite files |
| grep | Search file content using regular expressions |
| glob | Find files using glob patterns |
| kill_shell | Reset the current Shell session |
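To make this concrete, a built-in tool call issued by the model during a run might conceptually carry arguments like the sketch below. The field names are illustrative assumptions, not Apidog's documented schema:

```ts
// Hypothetical shape of a built-in tool call; the argument names used by
// AI Agent Debugger's actual trace format may differ.
const toolCall = {
  tool: "grep",
  input: {
    pattern: "TODO|FIXME", // regular expression to search for
    path: "./src",         // directory to search within
  },
};
```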
You can enable or disable individual tools as needed. When disabled, the Agent will be unable to call that tool during runtime.

If you need the Agent to call external systems or custom capabilities, you can add MCP Servers in the Tools tab. AI Agent Debugger supports the following MCP connection methods (a configuration sketch follows the list):

- STDIO: Launch a local MCP Server process
- HTTP: Connect to an MCP Server that supports Streamable HTTP
- SSE: Connect to an MCP Server based on Server-Sent Events
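As orientation, the three connection methods roughly correspond to configurations like the sketch below. The object shape and server names are assumptions made for explanation; in Apidog you fill in these values through the Tools tab UI rather than a config file:

```ts
// Illustrative only: hypothetical MCP Server entries, one per connection
// method. Field names are assumptions, not Apidog's actual format.
const mcpServers = {
  // STDIO: launch a local MCP Server process
  filesystem: {
    transport: "stdio",
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"],
  },
  // HTTP: connect to a server that supports Streamable HTTP
  internalTools: {
    transport: "http",
    url: "https://mcp.example.com/mcp",
    headers: { Authorization: "Bearer <your-token>" }, // for servers requiring auth
  },
  // SSE: connect to a Server-Sent Events endpoint
  legacyServer: {
    transport: "sse",
    url: "https://mcp.example.com/sse",
  },
};
```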
For MCP Servers requiring authentication, you can configure request Headers or complete authorization using OAuth 2.0. After a successful connection, you can select the tools to expose to the Agent from the tool list.

In the Skills tab, you can configure reusable Skills for the Agent. The number on the tab indicates the current number of loaded skills. Skills are applicable to the following scenarios:

- Providing fixed workflows within a project for the Agent
- Reusing operation specifications for common tasks
- Reducing repetitive long text descriptions in system prompts
During Agent runtime, relevant Skills will be read as needed based on the task, giving the Agent more complete operation guidance.

Configure the authentication information required by model services or MCP services in the Authentication tab.

In the Settings tab, you can configure model runtime parameters such as Temperature, Max Tokens, and Top P. Different model providers may support different parameters; refer to the parameters actually supported by your model provider. The sketch below shows how these settings map onto a typical chat API request.
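The following snippet uses the OpenAI Node SDK purely as a point of reference; Apidog performs the equivalent call for you, and other providers may name these parameters differently:

```ts
import OpenAI from "openai";

// Rough equivalent of the model and Settings tab values as an API request.
// Apidog issues this call internally; the snippet is for orientation only.
const client = new OpenAI({
  baseURL: "https://api.openai.com/v1", // auto-matched from the provider
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-5.5", // the model selected in the top bar
  temperature: 0.7, // Temperature: sampling randomness
  max_tokens: 4096, // Max Tokens: upper bound on output length
  top_p: 1,         // Top P: nucleus sampling cutoff
  messages: [
    { role: "system", content: "You are a documentation assistant for Apidog." },
    { role: "user", content: "What's Apidog?" },
  ],
});

console.log(response.choices[0].message.content);
```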
View Session List#

Each time you click Run, a new session record is generated on the left. The session list displays summary information for that run, such as:

- Number of dialogue rounds
- Number of execution steps
- Time consumed, Token consumption, and estimated cost

For example, a session entry might read:

Session 3
1 turn · 1 step · 10s · 3.1k tokens · $0.02
gpt-5.5
You can click different sessions on the left to view the corresponding turns and call traces.

View Turns#
The Turns panel in the middle is used to display multi-round dialogues in the current session. When a session contains multiple user inputs, each round is displayed as an independent dialogue round. After clicking a dialogue round, you can view the corresponding call process on the right.

View Traces#
The Traces panel on the right is used to display the Agent's complete execution process. Call traces are displayed in execution order, showing:

- User prompts and system prompts
- The model's thinking process (if supported by the model)
- MCP tool calls and custom Skill executions
- Tool input parameters, execution results, time consumed, and error messages
- The AI large model's final output
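To make the trace contents concrete, a single recorded tool-call step might carry information along these lines. The field names are hypothetical, chosen for illustration rather than taken from Apidog's actual trace format:

```ts
// Hypothetical trace entry for one tool-call step; field names are
// illustrative assumptions, not Apidog's real schema.
const traceStep = {
  type: "tool_call",
  tool: "web_fetch",
  input: { url: "https://apidog.com", format: "markdown" },
  output: "# Apidog\n...", // execution result returned to the model
  durationMs: 842,         // time consumed by this step
  error: null,             // populated when the call fails
};
```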
When tool calls fail or the model returns exceptions, you can locate the specific step in the call traces and view its input parameters and returned content, which facilitates troubleshooting.

You can run the same task against different models using the same prompt and tool configuration, then compare model performance through the session list. Session summaries display key metrics such as response time, Token consumption, and estimated cost, helping you evaluate the trade-offs between models in terms of effectiveness, performance, and cost. For example, you can compare (a cost sketch follows the list):

- Whether the number of execution steps differs for the same task under different models
- Which model selects tools more accurately
- Which model has a lower response time
- Which model has more controllable Token consumption and cost
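The estimated cost shown in a session summary is essentially token counts multiplied by per-token prices. A back-of-the-envelope check, using hypothetical prices (real prices vary by provider and model), might look like this:

```ts
// Placeholder prices for illustration; substitute your provider's real rates.
const PRICE_PER_M_INPUT = 2.5; // USD per 1M input tokens (hypothetical)
const PRICE_PER_M_OUTPUT = 10; // USD per 1M output tokens (hypothetical)

function estimatedCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * PRICE_PER_M_INPUT
       + (outputTokens / 1e6) * PRICE_PER_M_OUTPUT;
}

// e.g. a run that consumed 2,400 input tokens and 700 output tokens:
console.log(estimatedCostUSD(2400, 700).toFixed(4)); // "0.0130"
```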
FAQ#
Why doesn't the Agent call a tool?#

Please check the following configurations:

1. Whether the tool has been enabled in the Tools tab.
2. Whether the system prompt clearly describes the usage scenarios for the tool.
3. Whether the MCP Server is successfully connected and the target tool is not disabled.
4. Whether there are model thinking processes or tool call records in the call traces.
5. Whether the currently used AI large model supports tool calls.
Why did a tool call fail?#

You can view failed tool calls in the call traces, focusing on input parameters, output results, and error messages. Common causes include:

- MCP Server not connected, or the connection was dropped
- Parameter format does not meet the tool's requirements
- Incorrect OAuth, API Key, or Header authentication configuration
- Local STDIO service startup command unavailable
What can be evaluated by running the same task multiple times?#
Agents are non-deterministic systems: the same prompt may produce different execution paths under different models, parameters, or tool configurations. It is recommended to run the task multiple times and compare sessions, observing execution steps, call results, time consumed, Token consumption, and final output to identify the most suitable configuration.