Apidog Docs

AI Agent Debugger

AI Agent Debugger is a visual debugging tool for AI Agent developers.
Unlike approaches that inspect only a model's input and output, AI Agent Debugger covers the complete Agent execution process. It clearly displays each round of dialogue, every model call, every MCP tool call, every custom Skill execution, and the final output, helping developers trace the Agent's operational chain and quickly locate issues in prompts, model configuration, tool calls, or business logic.
AI Agent Debugger suits the following scenarios:
Debugging a model's tool-call chain: troubleshooting tool parameters, execution results, or the cause of an exception
Comparing how different models perform the same task, evaluating key metrics such as response time, token consumption, and cost
Verifying that an MCP Server integrated with a model behaves as expected
Iteratively optimizing system prompts and observing how different configurations affect execution results
It is recommended to use the latest Apidog client to experience the full functionality of AI Agent Debugger.

Create New Agent Debug Session#

Navigate to AI Agent Debugger from the top tab bar in Apidog.
The upper section of the page is used for configuring the model and run status:
Select the model provider on the left, such as OpenAI or Anthropic.
Select the model in the middle, such as gpt-5.5.
After you select a provider and model, the corresponding Base URL (such as https://api.openai.com/v1) is matched automatically; no manual entry is required.
Click Run to start debugging.

Configure Prompts#

Configure the Agent's input content in the Prompts tab.
The page is divided into two input areas:
System Prompt: defines the Agent's role, goals, constraints, and tool usage rules; this is part of the Agent's configuration
User Prompt: the test input for this session, such as "What's Apidog?"
After completing the configuration, click Run in the upper right corner to start debugging.
To clear the input box automatically after sending, check Clear after Send.
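Under the hood, a system prompt and a user prompt correspond to the standard chat-message roles that most model providers accept. A minimal sketch (the prompt text is illustrative, not Apidog's internal representation):

```python
# Illustrative sketch: how a system prompt and user prompt are typically
# passed to a chat-completion style model API.
messages = [
    {
        # Defines the Agent's role, goals, constraints, and tool rules.
        "role": "system",
        "content": "You are a helpful assistant. Answer questions about Apidog.",
    },
    {
        # The test input for this session.
        "role": "user",
        "content": "What's Apidog?",
    },
]

def render(messages):
    """Join messages into a readable transcript for inspection."""
    return "\n".join(f"[{m['role']}] {m['content']}" for m in messages)
```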

Configure Tools#

In the Tools tab, you can select the tools available for the Agent to call during runtime. The number on the tab indicates the current number of available or configured tools.
Tools are divided into two categories:

Built-in Tools#

AI Agent Debugger provides commonly used built-in tools that let the model read files, search content, execute commands, and fetch web content.
Tool | Description
bash | Execute commands in a persistent Shell session
web_fetch | Fetch web content and convert it to Markdown, text, or HTML
read | Read text, image, or PDF files
edit | Perform precise string replacement on files
write | Create or overwrite files
grep | Search file content using regular expressions
glob | Find files using glob patterns
kill_shell | Reset the current Shell session
You can enable or disable individual tools as needed. When disabled, the Agent will be unable to call that tool during runtime.
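Apidog's internal tool definitions are not documented here, but tools like these are conventionally exposed to a model in a function-calling schema, and disabling a tool amounts to omitting it from the list sent with each request. A hypothetical sketch:

```python
# Hypothetical sketch of filtering built-in tools before a model call.
# Tool names mirror the table above; the schemas are illustrative only.
BUILTIN_TOOLS = {
    "bash": "Execute commands in a persistent Shell session",
    "web_fetch": "Fetch web content and convert it to Markdown, text, or HTML",
    "read": "Read text, image, or PDF files",
    "grep": "Search file content using regular expressions",
}

def tools_for_model(enabled):
    """Build a function-calling style tool list from the enabled set.
    Disabled tools are omitted, so the model cannot call them."""
    return [
        {
            "type": "function",
            "function": {
                "name": name,
                "description": desc,
                "parameters": {"type": "object", "properties": {}},
            },
        }
        for name, desc in BUILTIN_TOOLS.items()
        if name in enabled
    ]
```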

MCP Tools#

If you need the Agent to call external systems or custom capabilities, you can add MCP Servers in the Tools tab.
AI Agent Debugger supports the following MCP connection methods:
STDIO: Launch a local MCP Server process
HTTP: Connect to an MCP Server that supports Streamable HTTP
SSE: Connect to an MCP Server based on Server-Sent Events
For MCP Servers requiring authentication, you can configure request Headers or complete authorization using OAuth 2.0. After successful connection, you can select the tools to expose to the Agent from the tool list.
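The three connection methods correspond to configuration shapes roughly like the following; the field names follow common MCP client conventions and are assumptions, not Apidog's actual schema, and the server names, command, and URLs are placeholders:

```python
# Assumed, illustrative MCP server configurations for the three
# connection methods described above.
mcp_servers = {
    "local-tools": {            # STDIO: launch a local MCP Server process
        "transport": "stdio",
        "command": "npx",
        "args": ["-y", "some-mcp-server"],   # hypothetical package name
    },
    "remote-http": {            # HTTP: Streamable HTTP endpoint
        "transport": "http",
        "url": "https://example.com/mcp",
        "headers": {"Authorization": "Bearer <token>"},  # auth via Headers
    },
    "remote-sse": {             # SSE: Server-Sent Events endpoint
        "transport": "sse",
        "url": "https://example.com/mcp/sse",
    },
}
```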

Configure Skills#

In the Skills tab, you can configure reusable Skills for the Agent. The number on the tab indicates the current number of loaded skills.
Skills are applicable to the following scenarios:
Providing fixed workflows within a project for the Agent
Reusing operation specifications for common tasks
Reducing repetitive long text descriptions in system prompts
While the Agent runs, it reads relevant Skills on demand based on the task at hand, giving it more complete operational guidance.

Configure Authentication and Model Parameters#

Configure authentication information required by model services or MCP services in the Authentication tab.
In the Settings tab, you can configure model runtime parameters such as Temperature, Max Tokens, and Top P. Different model providers support different parameters; refer to what your provider actually supports.

View Session List#

Each time you click Run, a new session record will be generated on the left.
The session list displays summary information for that run, such as:
Number of dialogue rounds
Number of execution steps
Response time
Token consumption
Estimated cost
Model used
For example:
Session 3
1 turn · 1 step · 10s · 3.1k tokens · $0.02
gpt-5.5
You can click different sessions on the left to view the corresponding turns and call traces.
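The token and cost figures in a summary line come down to simple arithmetic; the per-token prices below are hypothetical placeholders, not real provider pricing:

```python
def estimate_cost(input_tokens, output_tokens,
                  price_in_per_1k=0.005, price_out_per_1k=0.015):
    """Estimate a run's cost from token counts.
    Prices are hypothetical placeholders in USD per 1,000 tokens."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

def summary_line(turns, steps, seconds, tokens, cost, model):
    """Format a session summary like the example shown above."""
    return (f"{turns} turn · {steps} step · {seconds}s · "
            f"{tokens / 1000:.1f}k tokens · ${cost:.2f}\n{model}")
```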

View Turns#

The Turns panel in the middle is used to display multi-round dialogues in the current session.
When a session contains multiple user inputs, each input is displayed as an independent dialogue round. Click a round to view its call process on the right.

View Traces#

The Traces panel on the right is used to display the Agent's complete execution process.
Call traces are displayed in execution order, showing:
User prompts and system prompts
Every model call
Model thinking process (if supported by the model)
MCP tool calls and custom Skill executions
Tool input parameters, execution results, time consumed, and error messages
The model's final output
When a tool call fails or the model returns an error, you can locate the specific step in the call trace and inspect its input parameters and returned content to troubleshoot the issue.

Compare Model Performance#

You can run the same task with the same prompt and tool configuration across different models, then compare their performance in the session list.
Session summaries display key metrics such as response time, token consumption, and estimated cost, helping you weigh the trade-offs between models in effectiveness, performance, and cost.
For example, you can compare:
Whether the number of execution steps differs for the same task under different models
Which model can select tools more accurately
Which model has lower response time
Which model has more controllable Token consumption and cost

FAQ#

The Agent did not call the expected tool. How do I troubleshoot?#

Check the following:
1. Whether the tool is enabled in the Tools tab.
2. Whether the system prompt clearly describes when to use the tool.
3. Whether the MCP Server is connected successfully and the target tool is not disabled.
4. Whether the call trace contains a model thinking process or tool call records.
5. Whether the current model supports tool calling.

What should I do when an MCP tool call fails?#

Inspect the failed tool call in the call trace, paying particular attention to the input parameters, output, and error message. Common causes include:
The MCP Server is not connected, or the connection was dropped
The parameter format does not meet the tool's requirements
The OAuth, API Key, or Header authentication configuration is incorrect
The local STDIO service's startup command is unavailable

What can be evaluated by running the same task multiple times?#

Agents are non-deterministic systems: the same prompt can produce different execution paths with different models, parameters, or tool configurations. Run the task several times and compare sessions, observing execution steps, call results, time consumed, token consumption, and final output to identify the most suitable configuration.
Modified at 2026-05-14 11:13:49