AI can quickly generate a large number of test cases based on your current API specs. These cases help verify the functionality, compliance, stability, and security of a single endpoint. You can also manage test cases by grouping and type.

On any endpoint documentation page, switch to the **Test Cases** tab. There you'll find the **Generate with AI** button. Click it to start.

## Selecting Test Case Categories
When you click **Generate with AI**, a settings panel will slide out on the right. Here you can choose the types of test cases to generate, such as positive, negative, boundary, security, and more.
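To make the categories concrete, here is a hypothetical illustration (the endpoint, fields, and case names are invented for this example, not actual generated output). For a `POST /users` endpoint that accepts an email and an age between 0 and 120, the categories might translate into cases like these:

```python
# Hypothetical endpoint: POST /users accepting {"email": str, "age": int in [0, 120]}.
# One illustrative idea per category the panel offers:
test_case_ideas = [
    {"category": "positive", "name": "Create user with valid email and age",
     "body": {"email": "ada@example.com", "age": 30}},
    {"category": "negative", "name": "Reject malformed email",
     "body": {"email": "not-an-email", "age": 30}},
    {"category": "boundary", "name": "Accept age at the upper bound",
     "body": {"email": "ada@example.com", "age": 120}},
    {"category": "security", "name": "Reject SQL injection in the email field",
     "body": {"email": "' OR 1=1 --", "age": 30}},
]

for case in test_case_ideas:
    print(f"[{case['category']}] {case['name']}")
```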
## Configuring Credentials

If the endpoint requires credentials, the configuration will reference those credentials. You can modify the credential values as needed. Keys are encrypted locally before being sent to the LLM provider and automatically decrypted after the test cases are generated. This ensures both quick validation of credentials and information security.
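The exact mechanism is internal to the product, but as a rough mental model (an assumption for illustration, not the actual implementation), the flow resembles symmetric encryption where the key never leaves your machine:

```python
from cryptography.fernet import Fernet

# The encryption key is generated and kept locally; it is never sent to the LLM provider.
local_key = Fernet.generate_key()
cipher = Fernet(local_key)

api_token = "example-credential-value"           # the real credential stays local in plain form
ciphertext = cipher.encrypt(api_token.encode())  # only this opaque value accompanies the request

# ... test cases are generated remotely against the encrypted placeholder ...

restored = cipher.decrypt(ciphertext).decode()   # decrypted locally once results come back
assert restored == api_token
```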
## Adding More Requirements

Before generating, you can provide additional requirements in the text box at the bottom to improve accuracy. In the lower-left corner, you can configure how many test cases to generate (up to 80 at once). In the lower-right corner, you can switch the large language model and provider.

## Generating Test Cases
Click **Generate**, and AI will start creating test cases based on the API specs and your configuration. Once complete, you can click on a specific test case to view its request parameters, rename it, or adjust its category.

- Click **Run** to check whether the test case's actual response matches expectations (see the sketch after this list).
- Click **Accept** to save the test case under the **Test Cases** tab in your documentation.
- Click **Discard** to remove test cases you don't need.

You can also select multiple test cases at once to run bulk actions.
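Conceptually, running a case means replaying its stored request and comparing the actual response with the expected outcome. The sketch below uses a hypothetical generated case and endpoint purely for illustration:

```python
import requests

# Hypothetical generated test case: a negative case for POST /users.
test_case = {
    "name": "Reject user with missing email",
    "request": {
        "method": "POST",
        "url": "https://api.example.com/users",
        "json": {"age": 30},  # email intentionally omitted
    },
    "expected_status": 422,
}

response = requests.request(
    test_case["request"]["method"],
    test_case["request"]["url"],
    json=test_case["request"]["json"],
)

# The case matches expectations if the actual response agrees with the expected one.
assert response.status_code == test_case["expected_status"], (
    f"{test_case['name']}: expected {test_case['expected_status']}, got {response.status_code}"
)
```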
Pro Tip: Run multiple generation tasks at the same time with different AI models, providers, and configurations. This makes it easy to compare results, validate outputs, and quickly adopt the best test cases.