Agent simulation is in beta. If you don’t have access yet, email admin@guardrailsai.com to join.
Prerequisites
Before you begin, make sure you have:
- A registered agent in Snowglobe with a name and description
- The Snowglobe Connect SDK installed and authenticated (see authentication)
- A chatbot wrapper file created via `snowglobe-connect init` (see chatbot initialization)
Agent simulation requires Snowglobe Connect SDK v0.6.0 or later, so update your installation before continuing.
Step 1. Create your agent in Snowglobe
In the Snowglobe UI, register your agent if you haven’t already. The agent object in Snowglobe is a living connection to your agent: you use it to tell Snowglobe what your agent does, pass useful data, and wrap your connection to the live agent. All you need to start is:
- Agent name: a label to identify your agent in the UI
- Agent description: tells Snowglobe what kinds of conversations and tools to simulate. See writing a chatbot description for guidance on making this effective.
Optional: provide historical data
You can upload historical conversation data to help Snowglobe generate scenarios closer to your real-world usage patterns.

Historical data is not automatically integrated into tool workflows. If you’d like to enable distribution matching with your historical data, contact us at admin@guardrailsai.com. The team needs to do some tuning to align tools and user messages with your data distribution.
Step 2. Set up the SDK
If you haven’t already, install the SDK and initialize your connection. During `snowglobe-connect init`, you’ll choose between a sync, async, or socket-based connection. For agent simulation, sync or async connections are recommended; socket-based connections make parallelism harder to achieve locally, which can slow down simulations.
The init command creates a wrapper file (shown in the terminal output) with a completion or acompletion function. This is the entry point Snowglobe calls when it sends a simulated message to your agent. It receives a request object with a message history and returns an output object wrapping a string response. For details on how this works, see chatbot initialization.
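For illustration, a minimal sync wrapper might look like the sketch below. The request object's attribute names and the dict return shape are assumptions; the file generated by `snowglobe-connect init` defines the exact types, so treat it as the source of truth.

```python
from types import SimpleNamespace

def completion(request):
    """Entry point Snowglobe calls with each simulated message.

    Assumes `request.messages` is an OpenAI-style message history
    (a list of {"role": ..., "content": ...} dicts).
    """
    last_user_message = request.messages[-1]["content"]
    # Call your real agent here; this sketch just echoes the input.
    reply = f"You said: {last_user_message}"
    # Snowglobe expects an output object wrapping a string response.
    return {"response": reply}

# Stand-in invocation showing the expected shapes:
request = SimpleNamespace(messages=[{"role": "user", "content": "Hi there"}])
print(completion(request)["response"])  # -> You said: Hi there
```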
Step 3. Register your tools
Add two imports to your wrapper file: `register_tools` (used in this step) and the `@snowglobe_tool` decorator (used in Step 4).
The register_tools function tells Snowglobe about your agent’s tools: their names, parameters, return types, and example inputs/outputs. Call it at the top level of your script so it runs when the process starts.
It takes two arguments:
- A list of tool definitions (OpenAI-style function schemas with some additions)
- Your agent ID (found in the Snowglobe UI)
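Putting those two arguments together, a registration call might look like the sketch below. The import path and the exact nesting of the Snowglobe-specific `returns` and `examples` fields are assumptions; the tool schema format section that follows describes the required fields.

```python
# Hypothetical import path -- check the SDK for the real module name.
# from snowglobe_connect import register_tools

AGENT_ID = "YOUR_AGENT_ID"  # copy this from the Snowglobe UI

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the status of an order by its ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "description": "Order identifier"}
                },
                "required": ["order_id"],
            },
            # Snowglobe-specific additions (not part of the OpenAI spec):
            "returns": {
                "type": "object",
                "properties": {
                    "status": {"type": "string", "description": "Current order state"}
                },
            },
            "examples": [
                {"input": {"order_id": "A1234"},
                 "output": {"status": "out for delivery"}}
            ],
        },
    }
]

# Call at the top level of your wrapper so it runs at process start:
# register_tools(tools, AGENT_ID)
```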
Tool schema format
Tool definitions follow the OpenAI function calling format with a few required additions:
- `returns`: an object describing the tool’s output schema (types, properties, descriptions)
- `examples`: at least one example with `input` and `output` showing realistic usage
The `returns` field and `examples` array are required by Snowglobe, even though they aren’t part of the standard OpenAI tool spec. Without them, Snowglobe can’t generate accurate mock responses for your tools.
Step 4. Decorate your tool functions
Add the `@snowglobe_tool` decorator to every function in your code that serves a registered tool and needs mocking during simulations.
The decorator only takes effect inside the `snowglobe-connect start` process; your production code is unaffected.
You don’t need to decorate every tool. Stateless tools that work fine with any input (like a weather API) can stay undecorated. See agent simulation concepts for guidance on which tools to mock.
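To illustrate the decoration pattern, the sketch below uses a no-op stand-in for `snowglobe_tool` so it runs on its own; in your wrapper you would import the real decorator from the SDK, which intercepts calls only inside the `snowglobe-connect start` process.

```python
# Stand-in for the real SDK decorator so this sketch is runnable.
# In your wrapper, import it instead (exact path is an assumption):
# from snowglobe_connect import snowglobe_tool
def snowglobe_tool(func):
    return func

@snowglobe_tool
def get_order_status(order_id: str) -> dict:
    # Your real implementation. During simulations, Snowglobe intercepts
    # this call and returns a generated mock response instead.
    return {"order_id": order_id, "status": "baking"}

print(get_order_status("A1234"))  # behaves normally outside simulation
```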
Step 5. Test your setup
Before running a full simulation, verify everything is wired up correctly.
Step 6. Start the connect process
Run the connect process with `snowglobe-connect start` from the directory where your wrapper file lives. The running process:
- Polls for new simulation prompts generated by Snowglobe (every 2 seconds)
- Executes those prompts against your agent code
- Collects batches of responses and sends them back to Snowglobe
- Intercepts tool calls from decorated functions and routes them to Snowglobe’s mock endpoint for response generation
- Publishes a heartbeat back to Snowglobe so the UI knows your agent is connected
The heartbeat must succeed for simulations to run. If the Snowglobe UI shows your agent as disconnected, check that `snowglobe-connect start` is running and that your network allows outbound connections.
Change the port
If port 8000 is already in use, you can override it with environment variables before starting.
Step 7. Launch the simulation
In the Snowglobe UI:
- Navigate to your agent
- Verify the connection status widget under the agent name shows connected
- Click Simulate with Tools (Beta)
- Configure your simulation parameters:
  - Number of personas: how many distinct simulated users to create
  - Number of conversations: total conversations to generate
  - Max conversation length: the upper bound on turns per conversation
- Click Start Simulation
If you don’t see the Simulate with Tools (Beta) option, your account may not have it enabled yet. Email admin@guardrailsai.com and we’ll turn it on for you.
Keep the `snowglobe-connect start` process running for the duration of the simulation.
Performance tuning
By default, Snowglobe runs 5 conversations concurrently. You can increase throughput with two environment variables:

| Variable | Default | Description |
|---|---|---|
| `COMPLETION_BATCH_SIZE` | 5 | Number of conversations processed in parallel |
| `COMPLETIONS_PER_SECOND` | 120 per minute | Rate limit for completion requests |
Tips for tuning
- Polling cadence: the client polls every 2 seconds, so you may not see `COMPLETION_BATCH_SIZE` fully saturate unless the client has been stopped for a minute and enough pending turns have queued up. In most cases, it’s better to stay ahead of the backpressure with smaller batches.
- Rate limiting: `COMPLETIONS_PER_SECOND` defaults to 120 per 1-minute period. A batch size of 15 can hit this limit if your agent responds quickly. If you’re not worried about rate limits from your LLM provider, set this value high; otherwise, keep it conservative. Other teams have run into quota exhaustion when this was set too aggressively.
- Connection type: sync and async connections are recommended for agent simulation. Socket-based connections make parallelism harder to achieve locally, so simulations will take longer.
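As a sketch, the tuning variables can be exported in the shell before starting the connect process. The values here are illustrative, not recommendations; tune them against your provider's limits.

```shell
# Illustrative values -- tune against your LLM provider's rate limits.
export COMPLETION_BATCH_SIZE=10        # conversations processed in parallel
export COMPLETIONS_PER_SECOND=240      # completions allowed per 1-minute period
# then, from the wrapper file's directory:
# snowglobe-connect start
```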
Full example
Below is a complete wrapper file for a pizza restaurant support agent with two tools. You can copy this as a starting point and adapt it to your agent.
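One way to put the pieces together is sketched below. The import path, the request/response types, and the exact schema nesting are assumptions, and the SDK calls are commented out so the sketch runs standalone; check the file generated by `snowglobe-connect init` for the real signatures.

```python
from types import SimpleNamespace

# Hypothetical import path -- check the SDK for the real module name.
# from snowglobe_connect import register_tools, snowglobe_tool

AGENT_ID = "YOUR_AGENT_ID"  # from the Snowglobe UI

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up a pizza order by its ID.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
            # Snowglobe-specific additions:
            "returns": {
                "type": "object",
                "properties": {"status": {"type": "string"}},
            },
            "examples": [
                {"input": {"order_id": "A1234"},
                 "output": {"status": "out for delivery"}}
            ],
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_menu_item",
            "description": "Fetch price and details for a menu item.",
            "parameters": {
                "type": "object",
                "properties": {"item": {"type": "string"}},
                "required": ["item"],
            },
            "returns": {
                "type": "object",
                "properties": {"price": {"type": "number"}},
            },
            "examples": [
                {"input": {"item": "margherita"}, "output": {"price": 11.5}}
            ],
        },
    },
]

# Top-level registration so it runs when the process starts:
# register_tools(TOOLS, AGENT_ID)

# @snowglobe_tool  # mocked by Snowglobe during simulations
def get_order_status(order_id: str) -> dict:
    return {"status": "baking"}  # stand-in for a real order lookup

# @snowglobe_tool
def get_menu_item(item: str) -> dict:
    return {"price": 11.5}  # stand-in for a real menu lookup

def completion(request) -> dict:
    """Entry point Snowglobe calls with each simulated message."""
    last = request.messages[-1]["content"]
    # A real agent would route this to an LLM that can call the tools above.
    return {"response": f"Thanks for contacting Pizza Support! You asked: {last}"}

# Stand-in invocation:
req = SimpleNamespace(messages=[{"role": "user", "content": "Where is my pizza?"}])
print(completion(req)["response"])
```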
What’s next
Once your simulation completes, you can:
- Review conversations in the spatial view. Click any dot to see the full multi-turn conversation, including tool calls and responses
- Run metrics on the results to evaluate your agent’s performance across tool interactions
- Iterate by adjusting your tool schemas or agent code, and run again
Agent simulation is in active development. We’d love your feedback. Email us at admin@guardrailsai.com with what’s working, what’s not, and what you’d like to see next.