# Snowglobe SDK Integration Guide
> Still on `snowglobe<=0.4.x`? See the documentation for previous versions here.

## Overview

Snowglobe is a simulation engine designed for testing and evaluating AI agents and chatbots through automated conversation generation and analysis. This guide demonstrates how to integrate Snowglobe into your continuous integration (CI) pipeline and programmatic workflows for comprehensive agent testing.
## Prerequisites
Before integrating Snowglobe, ensure you have:

- Python 3.11+ installed
- Snowglobe SDK package (`pip install snowglobe-sdk`)
- Valid API credentials (API key and Organization ID) from here.
- OpenAI API key (or other supported LLM provider credentials)
- Access to a Snowglobe control plane instance
## Required Dependencies
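The original dependency listing did not survive in this copy of the guide. A minimal sketch, assuming only the SDK and an OpenAI client are needed (exact version pins are assumptions):

```text
# requirements.txt (illustrative; version pins are assumptions)
snowglobe-sdk>=0.5
openai
```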
## Authentication Setup

### Environment Variables
Set up your credentials as environment variables or configuration constants:
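For example (the variable names below are assumptions; use whatever names your deployment expects):

```shell
# Assumed variable names; replace the placeholder values with real credentials.
export SNOWGLOBE_API_KEY="your-api-key"
export SNOWGLOBE_ORG_ID="your-org-id"
export OPENAI_API_KEY="your-openai-api-key"
```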
### Client Configuration

Initialize the Snowglobe client with proper authentication headers:
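The SDK sends the `x-api-key` header automatically from `api_key` (see Troubleshooting below). Since the original snippet was lost in this copy, here is a sketch of the equivalent setup over raw HTTP; the environment variable names and the organization header name are assumptions.

```python
import os

import requests

# Assumed environment variable names; see "Environment Variables" above.
CONTROL_PLANE_URL = os.environ.get(
    "SNOWGLOBE_CONTROL_PLANE_URL", "https://your-control-plane.example.com"
)
API_KEY = os.environ.get("SNOWGLOBE_API_KEY", "")
ORG_ID = os.environ.get("SNOWGLOBE_ORG_ID", "")

# A plain requests session carrying the auth headers the SDK would set for you.
session = requests.Session()
session.headers.update({
    "x-api-key": API_KEY,         # the SDK sets this automatically from api_key
    "x-organization-id": ORG_ID,  # assumed header name for the org ID
})
```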
## Core Components

### 1. Agent Creation
Agents represent the AI systems you want to test. Each agent requires a name, icon, and connection info describing how Snowglobe should reach your LLM.

Note: Agents using a custom code integration via `snowglobe-connect` require a two-step setup once the agent has been created through the API:

1. Map the agent's ID in your `snowglobe-connect` deployment's `agents.json` file.
2. Run `snowglobe-connect start`.
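The original create-agent snippet was lost in this copy; the sketch below is a hedged reconstruction over raw HTTP. The endpoint path and payload shape are assumptions, except for the `connection_info` fields (`api_key_ref`, `model_name`, `system_prompt`), which the Troubleshooting section lists as required.

```python
import os

import requests

CONTROL_PLANE_URL = os.environ.get(
    "SNOWGLOBE_CONTROL_PLANE_URL", "https://your-control-plane.example.com"
)

def build_agent_payload(name: str, icon: str, model_name: str, system_prompt: str) -> dict:
    """Assemble an agent body. The connection_info fields come from this
    guide's Troubleshooting section; everything else is illustrative."""
    return {
        "name": name,
        "icon": icon,
        "connection_info": {
            "api_key_ref": "OPENAI_API_KEY",  # reference to a stored LLM credential
            "model_name": model_name,
            "system_prompt": system_prompt,
        },
    }

def create_agent(session: requests.Session, payload: dict) -> str:
    # "/v1/agents" is an assumed path; consult your control plane's API reference.
    resp = session.post(f"{CONTROL_PLANE_URL}/v1/agents", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]
```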
### 2. Simulation Configuration
Simulations define how conversations are generated and evaluated against your agent.
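A sketch of what a simulation configuration might look like. The knobs under `source_data.generation_configuration` (`max_personas`, `max_topics`, `branching_factor`) are named in the Troubleshooting section; the surrounding structure and the risk labels are illustrative assumptions.

```python
# The generation_configuration keys mirror the knobs mentioned in
# Troubleshooting; the rest of this structure is an assumption.
simulation_config = {
    "agent_id": "agent-123",  # placeholder: use the ID returned at agent creation
    "source_data": {
        "generation_configuration": {
            "max_personas": 10,
            "max_topics": 5,
            "branching_factor": 2,
        },
    },
    "risks": ["toxicity", "off-topic responses"],  # illustrative risk labels
}
```

Lowering these generation values trades coverage for speed, which is usually the right call for per-commit CI runs.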
## CI Integration Workflow

### Step 1: Create and Configure Agent
### Step 2: Launch Simulation
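The launch snippet was not preserved in this copy; a minimal sketch, assuming a POST endpoint (the path is an assumption):

```python
import requests

def launch_simulation(session: requests.Session, base_url: str, config: dict) -> str:
    """Start a simulation run. "/v1/simulations" is an assumed endpoint path."""
    resp = session.post(f"{base_url}/v1/simulations", json=config, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]
```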
### Step 3: Monitor Simulation Progress
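A polling sketch built from the details this guide does state: completion means `state_num >= 17`, and `wait_for_completion` accepts a `timeout_minutes` argument. Everything else (the callable-based shape, the poll interval) is an assumption.

```python
import time

def wait_for_completion(get_state_num, timeout_minutes: int = 30,
                        poll_seconds: float = 10.0) -> int:
    """Poll until state_num reaches 17+ ("Experiment completed" in the
    state table below). get_state_num is any callable returning the
    simulation's current state_num."""
    deadline = time.monotonic() + timeout_minutes * 60
    while time.monotonic() < deadline:
        state = get_state_num()
        if state >= 17:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(f"Simulation not complete after {timeout_minutes} minutes")
```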
### Step 4: Retrieve Results
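The state table below notes that `download_simulation_data` can be called once a simulation completes. The wrapper here is a sketch over raw HTTP with an assumed path, not the SDK's actual signature:

```python
import requests

def download_simulation_data(session: requests.Session, base_url: str, sim_id: str) -> dict:
    """Fetch results for a completed simulation (state_num >= 17).
    The "/v1/simulations/{id}/data" path is an assumption."""
    resp = session.get(f"{base_url}/v1/simulations/{sim_id}/data", timeout=60)
    resp.raise_for_status()
    return resp.json()
```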
## Complete CI Integration Example
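The original end-to-end example was lost in this copy. The script below ties the previous steps together: launch, poll `state_num` until 17+, download. The auth headers and the 17+ threshold come from this guide; every endpoint path and header name marked below is an assumption.

```python
import os
import sys
import time

import requests

BASE_URL = os.environ.get(
    "SNOWGLOBE_CONTROL_PLANE_URL", "https://your-control-plane.example.com"
)

def make_session() -> requests.Session:
    s = requests.Session()
    s.headers.update({
        "x-api-key": os.environ.get("SNOWGLOBE_API_KEY", ""),         # SDK sets this from api_key
        "x-organization-id": os.environ.get("SNOWGLOBE_ORG_ID", ""),  # assumed header name
    })
    return s

def run_ci_check(session, config: dict, timeout_minutes: int = 30) -> dict:
    """Create a simulation, wait for state_num >= 17, then download results.
    All endpoint paths are assumptions."""
    sim = session.post(f"{BASE_URL}/v1/simulations", json=config, timeout=30)
    sim.raise_for_status()
    sim_id = sim.json()["id"]

    deadline = time.monotonic() + timeout_minutes * 60
    while True:
        status = session.get(f"{BASE_URL}/v1/simulations/{sim_id}", timeout=30)
        status.raise_for_status()
        if status.json()["state_num"] >= 17:  # "Experiment completed"
            break
        if time.monotonic() > deadline:
            raise TimeoutError("simulation did not complete in time")
        time.sleep(10)

    data = session.get(f"{BASE_URL}/v1/simulations/{sim_id}/data", timeout=60)
    data.raise_for_status()
    return data.json()

if __name__ == "__main__":
    results = run_ci_check(make_session(), {"agent_id": sys.argv[1]})
    print(f"Downloaded {len(results.get('conversations', []))} conversations")
```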
## Error Handling
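The guide's original error-handling snippet is missing here. A generic retry pattern for transient network failures when talking to the control plane; this is a common idiom, not a documented SDK feature:

```python
import time

import requests

def with_retries(fn, attempts: int = 3, backoff_seconds: float = 2.0):
    """Retry a callable on transient connection/timeout errors.
    Non-transient failures (e.g. HTTP 401 from raise_for_status) propagate
    immediately, since retrying bad credentials cannot help."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except (requests.ConnectionError, requests.Timeout):
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)  # linear backoff
```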
## CI Pipeline Integration
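One way to wire the workflow into CI, assuming GitHub Actions; the script name (`ci_simulation.py`), secret names, and job layout are all hypothetical:

```yaml
# Illustrative GitHub Actions job; script and secret names are assumptions.
name: agent-simulation
on: [pull_request]
jobs:
  simulate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install snowglobe-sdk
      - run: python ci_simulation.py
        env:
          SNOWGLOBE_API_KEY: ${{ secrets.SNOWGLOBE_API_KEY }}
          SNOWGLOBE_ORG_ID: ${{ secrets.SNOWGLOBE_ORG_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```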
## Simulation States
The `state_num` field on a simulation indicates its current phase:

| `state_num` | State name | Description |
|---|---|---|
| 0–2 | Draft / Queued | Simulation created, waiting to start |
| 3–5 | Experiment started | Initialization and setup |
| 6–8 | Generation in progress | Persona, topic, and conversation generation |
| 9–11 | Evaluation in progress | Agent responses being judged against risks |
| 12–14 | Validation in progress | Results validated |
| 15–16 | Adaptation in progress | Adapted (adversarial) conversations being generated |
| 17+ | Experiment completed | Results available; download_simulation_data can be called |
In CI, poll `sim.state_num` and wait for a value >= 17 before downloading results.
## Troubleshooting

### Common Issues
**Authentication errors**

- Verify your API key and organization ID are correct.
- Confirm the `x-api-key` header is being sent (the SDK sets this automatically from `api_key`).
- Check network connectivity to your control plane URL.

**Agent and simulation configuration errors**

- Review the agent's `connection_info`: all required fields (`api_key_ref`, `model_name`, `system_prompt`) must be present.
- Verify the LLM provider API key referenced by `api_key_ref` is valid and has sufficient quota.
- Check that `source_data.generation_configuration` values are within acceptable ranges.

**Slow or timed-out simulations**

- Increase `timeout_minutes` in `wait_for_completion` for large simulations.
- Reduce `max_personas`, `max_topics`, or `branching_factor` for faster runs.