Apidog Docs

Orchestrate test suite

After creating a test suite, you need to add test content. Apidog provides flexible "Static" and "Dynamic" modes to meet different test management needs.

Importing Test Content#

On the test suite details page, under the Orchestration tab, click + Add Endpoint Test Case or + Add Test Scenario. In the popup selection window, you can switch between Static and Dynamic mode.

1. Static Mode#

Static mode is used to precisely specify the test items to be executed.
🎯 Core Logic
The system records the IDs of the specific test cases you select. Even if new test cases are added to the source category, the execution scope of this suite will not change, ensuring that test results stay controllable.
πŸš€ Best Use Scenarios
Bug Fix Verification (Hotfix): Select the 3-5 test cases most closely related to the bug to form a "verification package" that quickly verifies the fix without wasting time on unrelated cases.
Core Business Stabilization (Core Path): For extremely core, stable flows such as "order-payment", where you don't want monitoring alerts triggered because a newcomer accidentally added an incomplete test case.
Old Version Compatibility Testing: Select a batch of old endpoint test cases specifically for verifying compatibility with older client versions.
⚠️ Maintenance Characteristics
High Maintenance Cost: Any new cases that should be included in this specialized test must be added manually.

2. Dynamic Mode#

Dynamic mode is used to automatically filter test items to be executed through rules.
🎯 Core Logic
The system saves "Filter Rules" (Scope & Filter). Each time it runs, the system scans the entire project in real time and includes all the latest cases that meet the criteria into the execution plan.
πŸš€ Best Use Scenarios
Module-Level Regression Test: Set the "Trading Center" folder as the source folder. Testers only need to write new cases in that folder, and the suite will automatically include them when it runs.
Smoke Test: Create a dynamic suite with the rule Priority = P0. Run it before each release to automatically cover all key cases marked as P0.
Version Iteration Verification: Use the tag feature and set the rule to Tag = v2.5.0. After development is complete, run this suite to verify all new features in this version.
⚠️ Maintenance Characteristics
Zero Maintenance Cost: Once the rules are configured, the suite itself needs no further maintenance; you only maintain the case attributes (location, tags, priority).
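The two selection behaviors can be sketched as functions over the project's test cases. This is a minimal illustration, not Apidog's internal model; `TestCase` and the sample data are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    id: str
    folder: str
    priority: str
    tags: set = field(default_factory=set)

# Hypothetical project inventory; names are illustrative only.
project = [
    TestCase("TC-1", "Trading Center", "P0", {"v2.5.0"}),
    TestCase("TC-2", "Trading Center", "P1"),
    TestCase("TC-3", "User Center", "P0"),
]

def resolve_static(suite_ids, cases):
    """Static mode: the suite stores concrete case IDs, so cases added
    later to the source folder never change its execution scope."""
    return [c for c in cases if c.id in suite_ids]

def resolve_dynamic(rule, cases):
    """Dynamic mode: the suite stores a filter rule and re-scans the
    project on every run, picking up newly added matching cases."""
    return [c for c in cases if rule(c)]

smoke_rule = lambda c: c.priority == "P0"   # e.g. the rule "Priority = P0"

# A new P0 case appears later; only the dynamic suite picks it up.
project.append(TestCase("TC-4", "Trading Center", "P0"))
assert [c.id for c in resolve_static({"TC-1"}, project)] == ["TC-1"]
assert [c.id for c in resolve_dynamic(smoke_rule, project)] == ["TC-1", "TC-3", "TC-4"]
```

This is the trade-off the two modes make: static scope is frozen at selection time, dynamic scope is recomputed at run time.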

Adjust Execution Order#

Imported content will be displayed in a list, and you can drag the list items to adjust the execution order.
For items added "statically", you can use Edit to delete test cases individually or delete the entire group.
For groups added "dynamically", you can only delete the entire group or edit filter criteria, and cannot delete individual items within the group.

Advanced Config#

On the right side of the test suite design page, you can expand Advanced Config to have more granular control over how the test suite runs.

Environment#

Definition: By default, the run environment already set for the test suite is inherited. If an environment is specified here, that environment's configuration takes priority during execution.
Use Case: Suitable for scenarios where you need to reuse the same set of test steps in different environments.

Test Data#

Used to specify whether to use test data during execution.
No Test Data: Test steps execute only once; data-driven testing is not performed.
Use Test Data: Test steps run multiple times based on the test data; commonly used for parameterized testing.
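As a rough sketch of the difference (hypothetical Python; `run_steps` stands in for executing the suite's steps once, and the CSV data is invented):

```python
import csv
import io

# Hypothetical CSV test data for a parameterized (data-driven) run.
test_data = io.StringIO("username,expected_status\nalice,200\nbob,403\n")
rows = list(csv.DictReader(test_data))

def run_steps(params=None):
    # Placeholder for executing the suite's test steps one time.
    return f"ran with {params}"

runs_without_data = [run_steps()]               # "No Test Data": one pass
runs_with_data = [run_steps(r) for r in rows]   # "Use Test Data": one pass per row

assert len(runs_without_data) == 1
assert len(runs_with_data) == 2
```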

On Error#

Configure how the test should handle errors, which can include assertion failures, data format validation failures, endpoint request exceptions, server errors, etc.
Ignore: Continue executing subsequent steps when an error occurs, without interrupting the current run.
Continue: When an error occurs, skip the remaining steps of the current round and directly enter the next execution round.
End execution: Immediately terminate subsequent steps once an error occurs.
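A minimal model (not Apidog internals) of how the three policies interact with execution rounds, assuming each step simply passes or fails:

```python
def run_suite(steps, iterations, on_error):
    """Model of the three On Error policies. `steps` is a list of
    callables returning True (pass) or False (fail)."""
    executed = []
    for round_no in range(1, iterations + 1):
        for i, step in enumerate(steps):
            ok = step()
            executed.append((round_no, i, ok))
            if not ok:
                if on_error == "ignore":
                    continue          # keep running the remaining steps
                if on_error == "continue":
                    break             # skip the rest of this round, start the next
                if on_error == "end":
                    return executed   # terminate the whole run
    return executed

steps = [lambda: True, lambda: False, lambda: True]  # second step always fails

assert len(run_suite(steps, 2, "ignore")) == 6    # every step, every round
assert len(run_suite(steps, 2, "continue")) == 4  # 2 steps per round
assert len(run_suite(steps, 2, "end")) == 2       # stops at the first failure
```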

Iterations#

Definition: The number of times each thread loops through all steps.
Use Case: Commonly used for stability verification or simple stress testing scenarios.

Delay#

Definition: Set how many milliseconds (ms) to wait after each test step completes before executing the next step.
Use Case: Prevent triggering rate-limiting or circuit breaker mechanisms on the target server due to high request frequency, ensuring smooth test execution.

Save Request/Responses#

Definition: Control whether the test report includes detailed data of requests and responses (such as Header, Body, etc.).
Options:
All: Save complete details of all steps regardless of pass/fail. This produces a large data volume and is suitable for deep debugging.
Only Failed: Save details only for steps that failed during execution. Recommended: it saves storage space and makes it quick to identify failure causes.
Do Not Save: Do not save any details; record only pass/fail status and duration.

Environment/Global Variable Values#

Environment/global variable values specify which actual values are used for the environment and global variables in this test scenario. There are two choices; detailed information can be viewed here. When choosing to use variable values saved in the Runner, you will be asked to further select the variable scope to use.
This scope helps users separate variables according to actual needs, avoiding situations where one scheduled task run changes a variable and causes other tasks to fail. After selecting the scope, you can also view the variable values within that scope through the entry that appears in the product interface.
Share only in the current test scenario
  • Read/Write: In the currently specified Runner, this test scenario has a dedicated file that persistently stores its environment/global variables. Only the current test scenario can read and write variables in this file.
  • Description: The smallest variable scope, with minimal impact. Suitable when the results of the previous run of this test scenario need to be used in the next run.
Share across all test scenarios in the current scheduled task
  • Read/Write: In the currently specified Runner, the scheduled task has a file that stores environment/global variables usable across all of its test scenarios. All test scenarios in the current scheduled task can read and write variables in this file.
  • Description: The recommended variable scope, with moderate impact. Suitable when data needs to be shared between different test scenarios within the same scheduled task.
Share across all scheduled tasks in the current scheduled task folder
  • Read/Write: In the currently specified Runner, the scheduled task folder has a file that stores environment/global variables usable across all of its scheduled tasks and test scenarios. All test scenarios in all scheduled tasks within the current folder can read and write variables in this file.
  • Description: The largest variable scope, with the most significant impact: running one scheduled task may modify a variable value and cause other scheduled tasks to fail. Suitable when data needs to be shared across multiple tasks in the same folder.
Note: Variable files for test scenarios, tasks, and task folders are all saved in the Runner container path /opt/runner/variables.
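The sharing behavior can be modeled as each scope mapping to its own persistent file (illustrative Python; the actual file names inside the Runner container are not documented, only the base path /opt/runner/variables is):

```python
import os

BASE = "/opt/runner/variables"  # path stated in the docs above

def variable_file(scope, folder_id, task_id, scenario_id):
    """Illustrative mapping from scope to a persistent variable file.
    This only models which runs share state, not Apidog's real naming."""
    if scope == "scenario":
        return os.path.join(BASE, f"scenario-{scenario_id}.json")
    if scope == "task":
        return os.path.join(BASE, f"task-{task_id}.json")
    if scope == "folder":
        return os.path.join(BASE, f"folder-{folder_id}.json")
    raise ValueError(f"unknown scope: {scope}")

# Two scenarios in the same scheduled task share a file only at
# "task" scope or wider.
a = variable_file("task", "f1", "t1", "s1")
b = variable_file("task", "f1", "t1", "s2")
assert a == b
assert variable_file("scenario", "f1", "t1", "s1") != variable_file("scenario", "f1", "t1", "s2")
```

The wider the scope, the more runs read and write the same file, which is exactly why the folder scope carries the greatest risk of one task's changes breaking another.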
Modified atΒ 2026-01-13 10:04:10