This guide walks you through everything needed to run an agent simulation, from registering your tools to launching a simulation and reviewing results.
Agent simulation is in beta. If you don’t have access yet, email admin@guardrailsai.com to join.

Prerequisites

Before you begin, make sure you have the Snowglobe Connect SDK installed at v0.6.0 or later; agent simulation requires this version. Update with:
pip install --upgrade snowglobe

Step 1. Create your agent in Snowglobe

In the Snowglobe UI, register your agent if you haven’t already. The agent object in Snowglobe is a living connection to your agent. You use it to tell Snowglobe what your agent does, pass useful data, and wrap your connection to the live agent. All you need to start is:
  • Agent name: a label to identify your agent in the UI
  • Agent description: tells Snowglobe what kinds of conversations and tools to simulate. See writing a chatbot description for guidance on making this effective

Optional: provide historical data

You can upload historical conversation data to help Snowglobe generate scenarios closer to your real-world usage patterns.
Historical data is not automatically integrated into tool workflows. If you’d like to enable distribution matching with your historical data, contact us at admin@guardrailsai.com. The team needs to do some tuning to align tools and user messages with your data distribution.

Step 2. Set up the SDK

If you haven’t already, install the SDK and initialize your connection:
pip install snowglobe
Then authenticate and initialize:
snowglobe-connect auth
snowglobe-connect init
During snowglobe-connect init, you’ll choose between a sync, async, or socket-based connection. For agent simulation, sync or async connections are recommended: socket-based connections make parallelism harder to achieve locally, which can slow down simulations.
The init command creates a wrapper file (shown in the terminal output) with a completion or acompletion function. This is the entry point Snowglobe calls when it sends a simulated message to your agent: it receives a request object with a message history and returns an output object wrapping a string response. For details on how this works, see chatbot initialization.
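As a sketch, a sync wrapper’s entry point looks roughly like this. The `CompletionRequest` / `CompletionFunctionOutputs` names match the full example at the end of this guide; the stub classes below are illustrative stand-ins so the sketch runs on its own:

```python
# Illustrative stand-ins for the SDK types. In a real wrapper you would
# instead write:
#   from snowglobe.client import CompletionRequest, CompletionFunctionOutputs
from dataclasses import dataclass, field


@dataclass
class CompletionRequest:  # stub: mimics the request Snowglobe sends
    messages: list = field(default_factory=list)

    def to_openai_messages(self) -> list:
        return self.messages


@dataclass
class CompletionFunctionOutputs:  # stub: wraps the string response
    content: str


def completion(request: CompletionRequest) -> CompletionFunctionOutputs:
    # Snowglobe calls this with the simulated conversation so far;
    # your agent produces the next assistant message.
    messages = request.to_openai_messages()
    last_user = messages[-1]["content"] if messages else ""
    return CompletionFunctionOutputs(content=f"Echo: {last_user}")
```

In your generated wrapper, the body of `completion` is where you call your real agent instead of echoing.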

Step 3. Register your tools

Add two imports to your wrapper file:
from snowglobe.tools import register_tools, snowglobe_tool

register_tools

The register_tools function tells Snowglobe about your agent’s tools: their names, parameters, return types, and example inputs/outputs. Call it at the top level of your script so it runs when the process starts. It takes two arguments:
  1. A list of tool definitions (OpenAI-style function schemas with some additions)
  2. Your agent ID (found in the Snowglobe UI)
register_tools(TOOLS, "your-agent-id-here")

Tool schema format

Tool definitions follow the OpenAI function calling format with a few required additions:
  • returns: an object describing the tool’s output schema (types, properties, descriptions)
  • examples: at least one example with input and output showing realistic usage
These additions help Snowglobe generate accurate mock responses during simulations. A complete tool definition:
{
    "type": "function",
    "function": {
        "name": "get_customer_info",
        "parameters": {
            "type": "object",
            "description": "Get information about a customer by name or email or phone",
            "properties": {
                "email_address": {
                    "type": "string",
                    "description": "The email address of the customer to look up"
                }
            },
            "required": []
        },
        "returns": {
            "type": "object",
            "description": "Information about the customer",
            "properties": {
                "customer_id": {
                    "type": "string",
                    "format": "uuid",
                    "description": "The unique identifier for the customer"
                },
                "name": {
                    "type": "string",
                    "description": "The name of the customer"
                },
                "email": {
                    "type": "string",
                    "description": "The email address of the customer"
                }
            }
        },
        "examples": [{
            "input": {
                "email_address": "bsmith@example.com"
            },
            "output": {
                "customer_id": "3d03ebc1-eb76-498e-9109-c911840e2ac1",
                "name": "Bob Smith",
                "email": "bsmith@example.com"
            }
        }]
    }
}
The returns field and examples array are required by Snowglobe, even though they aren’t part of the standard OpenAI tool spec. Without them, Snowglobe can’t generate accurate mock responses for your tools.
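Since these two additions are easy to forget, a small check like the following (plain Python, not part of the SDK) can catch incomplete definitions before you call register_tools:

```python
def validate_tools(tools: list[dict]) -> list[str]:
    """Return a list of problems with Snowglobe tool definitions."""
    problems = []
    for tool in tools:
        fn = tool.get("function", {})
        name = fn.get("name", "<unnamed>")
        if "returns" not in fn:
            problems.append(f"{name}: missing 'returns' schema")
        if not fn.get("examples"):
            problems.append(f"{name}: needs at least one input/output example")
    return problems


# Example: a definition missing both required additions
incomplete = [{"type": "function",
               "function": {"name": "get_weather", "parameters": {}}}]
print(validate_tools(incomplete))
```

Running this on a complete definition (like the one above) returns an empty list.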

Step 4. Decorate your tool functions

Add the @snowglobe_tool decorator to every function in your code that serves a registered tool and needs mocking during simulations.
@snowglobe_tool
def get_customer_info(*, email_address: str = "") -> dict:
    # Your real implementation
    customer = db.lookup_customer(email=email_address)
    return {"customer_id": customer.id, "name": customer.name, "email": customer.email}


@snowglobe_tool
def get_order_status(*, order_id: str) -> dict:
    # Your real implementation
    order = db.get_order(order_id)
    return {"order_id": order.id, "status": order.status}
The decorator is completely inert during normal execution. It only activates when your code runs inside the snowglobe-connect start process, so your production code is unaffected.
You don’t need to decorate every tool. Stateless tools that work fine with any input (like a weather API) can stay undecorated. See agent simulation concepts for guidance on which tools to mock.

Step 5. Test your setup

Before running a full simulation, verify everything is wired up correctly:
snowglobe-connect test
This validates your wrapper file, tool registrations, and connection to Snowglobe. Fix any errors before proceeding. For more details on testing, see test your wrapper.

Step 6. Start the connect process

Run the connect process from the directory where your wrapper file lives:
snowglobe-connect start
This starts a local server (port 8000 by default) that:
  1. Polls for new simulation prompts generated by Snowglobe (every 2 seconds)
  2. Executes those prompts against your agent code
  3. Collects batches of responses and sends them back to Snowglobe
  4. Intercepts tool calls from decorated functions and routes them to Snowglobe’s mock endpoint for response generation
  5. Publishes a heartbeat back to Snowglobe so the UI knows your agent is connected
The heartbeat must succeed for simulations to run. If the Snowglobe UI shows your agent as disconnected, check that snowglobe-connect start is running and that your network allows outbound connections.
For more details on the connect process, see the Snowglobe Connect reference.
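Conceptually, each polling cycle of the connect process does the steps above in order. This is a simplified sketch with injected callables, not the SDK’s actual implementation:

```python
def run_cycle(poll, execute, send_batch, heartbeat):
    """One conceptual iteration of the connect loop."""
    prompts = poll()                           # 1. fetch pending simulation prompts
    responses = [execute(p) for p in prompts]  # 2. run them against your agent
    if responses:
        send_batch(responses)                  # 3. send a batch of responses back
    heartbeat()                                # 5. tell the UI the agent is alive
    # (step 4, tool-call interception, happens inside execute
    #  via the @snowglobe_tool decorator)


# The real process repeats something like this every 2 seconds:
#   while True: run_cycle(...); time.sleep(2)
```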

Change the port

If port 8000 is already in use, set these environment variables before starting:
export SNOWGLOBE_CLIENT_PORT=8001
export SNOWGLOBE_CLIENT_URL=http://localhost:8001
snowglobe-connect start

Step 7. Launch the simulation

In the Snowglobe UI:
  1. Navigate to your agent
  2. Verify the connection status widget under the agent name shows connected
  3. Click Simulate with Tools (Beta)
  4. Configure your simulation parameters:
    • Number of personas: how many distinct simulated users to create
    • Number of conversations: total conversations to generate
    • Max conversation length: the upper bound on turns per conversation
  5. Click Start Simulation
If you don’t see the Simulate with Tools (Beta) option, your account may not have it enabled yet. Email admin@guardrailsai.com and we’ll turn it on for you.
As the simulation runs, you’ll see dots appear on the spatial view canvas. Each dot represents a multi-turn conversation. The simulation transitions to the done state once all conversations are complete. At that point you can stop the snowglobe-connect start process.

Performance tuning

By default, Snowglobe runs 5 conversations concurrently. You can increase throughput with two environment variables:
Variable                 Default          Description
COMPLETION_BATCH_SIZE    5                Number of conversations processed in parallel
COMPLETIONS_PER_SECOND   120 per minute   Rate limit for completion requests
export COMPLETION_BATCH_SIZE=15
export COMPLETIONS_PER_SECOND=200
snowglobe-connect start

Tips for tuning

  • Polling cadence: the client polls every 2 seconds, so you may not see COMPLETION_BATCH_SIZE fully saturate unless the client has been stopped for a minute and enough pending turns have queued up. In most cases, it’s better to stay ahead of the backpressure with smaller batches.
  • Rate limiting: COMPLETIONS_PER_SECOND defaults to 120 per 1-minute period. A batch size of 15 can hit this limit if your agent responds quickly. If you’re not worried about rate limits from your LLM provider, set this value high. Otherwise, keep it conservative. Other teams have run into quota exhaustion when this was set too aggressively.
  • Connection type: sync and async connections are recommended for agent simulation. Socket-based connections make parallelism harder to achieve locally, so simulations will take longer.
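To sanity-check your settings before a run, you can estimate the throughput ceiling each knob imposes. This is rough back-of-envelope math, assuming your agent takes about agent_latency_s seconds per completion:

```python
def max_completions_per_minute(batch_size: int, rate_limit_per_min: int,
                               agent_latency_s: float) -> float:
    """Rough upper bound on completions/minute for given settings."""
    # With batch_size conversations in flight, the client can finish at most
    # batch_size completions every agent_latency_s seconds.
    concurrency_bound = batch_size * 60 / agent_latency_s
    # The rate limit caps throughput regardless of concurrency.
    return float(min(concurrency_bound, rate_limit_per_min))


# Defaults: batch of 5, 120/min rate limit, ~2 s per agent response
print(max_completions_per_minute(5, 120, 2.0))   # → 120.0 (rate-limited)
# A batch of 15 with a fast (1 s) agent is capped by a 200/min rate limit
print(max_completions_per_minute(15, 200, 1.0))  # → 200.0
```

In both cases the rate limit, not the batch size, is the binding constraint, which matches the tip above about batch size 15 hitting the limit when your agent responds quickly.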

Full example

Below is a complete wrapper file for a pizza restaurant support agent with two tools. You can copy this as a starting point and adapt it to your agent.
from typing import Dict

from snowglobe.client import CompletionRequest, CompletionFunctionOutputs
from snowglobe.tools import register_tools, snowglobe_tool
import litellm
import json
import time

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_customer_info",
            "parameters": {
                "type": "object",
                "description": "Get information about a customer by name or email or phone",
                "properties": {
                    "email_address": {
                        "type": "string",
                        "description": "The email address of the customer to look up with @ and fully qualified domain such as bob@example.com"
                    }
                },
                "required": []
            },
            "returns": {
                "type": "object",
                "description": "Information about the customer",
                "properties": {
                    "customer_id": {
                        "type": "string",
                        "format": "uuid",
                        "description": "The unique identifier for the customer"
                    },
                    "name": {
                        "type": "string",
                        "description": "The name of the customer"
                    },
                    "email": {
                        "type": "string",
                        "description": "The email address of the customer"
                    }
                }
            },
            "examples": [{
                "input": {
                    "email_address": "bsmith@example.com"
                },
                "output": {
                    "customer_id": "3d03ebc1-eb76-498e-9109-c911840e2ac1",
                    "name": "Bob Smith",
                    "email": "bsmith@example.com"
                }
            }]
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "parameters": {
                "type": "object",
                "description": "Get the status of a pizza order by order ID",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "The unique identifier for the pizza order"
                    }
                },
                "required": ["order_id"]
            },
            "returns": {
                "type": "object",
                "description": "The status of the pizza order",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "format": "uuid",
                        "description": "The unique identifier for the pizza order"
                    },
                    "status": {
                        "type": "string",
                        "description": "The current status of the order (e.g., 'preparing', 'baking', 'out for delivery', 'delivered')"
                    },
                    "estimated_delivery_time": {
                        "type": "string",
                        "description": "The estimated delivery time for the order"
                    }
                }
            },
            "examples": [{
                "input": {"order_id": "3d03ebc1-eb76-498e-9109-c911840e2ac3"},
                "output": {
                    "order_id": "3d03ebc1-eb76-498e-9109-c911840e2ac3",
                    "status": "out for delivery",
                    "estimated_delivery_time": "2024-06-01T18:30:00Z"
                }
            }]
        }
    }
]

# Register tools at startup (reports schemas to Snowglobe)
register_tools(TOOLS, "5c2987b1-81c9-47dc-9062-7f6977a0cdde")


# --- Tool implementations ---

CUSTOMERS = {
    "c001": {"customer_id": "c001", "name": "John Doe", "email": "johndoe@example.com"},
    "c002": {"customer_id": "c002", "name": "Jane Smith", "email": "janesmith@example.com"},
    "c003": {"customer_id": "c003", "name": "Bob Johnson", "email": "bobj@example.com"},
}

ORDERS = {
    "ord-1001": {"order_id": "ord-1001", "status": "delivered", "total": 29.98},
    "ord-1002": {"order_id": "ord-1002", "status": "out for delivery", "estimated_delivery_time": "2026-03-26T13:00:00Z", "total": 17.98},
    "ord-1003": {"order_id": "ord-1003", "status": "baking", "estimated_delivery_time": "2026-03-26T18:15:00Z", "total": 14.49},
}


@snowglobe_tool
def get_order_status(*, order_id: str) -> dict:
    order = ORDERS.get(order_id)
    if not order:
        return {"error": f"No order found with id {order_id}"}
    return {
        "order_id": order["order_id"],
        "status": order["status"],
        "estimated_delivery_time": order.get("estimated_delivery_time", "N/A"),
        "total": order["total"],
    }


@snowglobe_tool
def get_customer_info(*, email_address: str = "") -> dict:
    for c in CUSTOMERS.values():
        if email_address and email_address.lower() == c["email"].lower():
            return {"customer_id": c["customer_id"], "name": c["name"], "email": c["email"]}
    return {"error": "Customer not found"}


TOOLS_MAP = {
    "get_order_status": get_order_status,
    "get_customer_info": get_customer_info,
}


# --- Entry point for Snowglobe ---

def completion(request: CompletionRequest) -> CompletionFunctionOutputs:
    messages = request.to_openai_messages()
    response = main(messages)
    return CompletionFunctionOutputs(content=response)


def main(messages: list[Dict]) -> str:
    system_prompt = """You are a customer support agent for Amattos Pizza Restaurant.
ONLY pass valid UUID4 strings as order_id, customer_id, product_id, article_id, and refund_id when they are an argument.
If you do not know the required id field make another tool call to get it or ask the customer for it.
If you do not know the value for a tool call do not make up a value or pass in unknown or a variation of it as an argument.
Always try to get the customers name or email or phone number and use that to look up their customer_id with the get_customer_info tool before calling any other.
Use tools available to assist customers."""

    messages = [{"role": "system", "content": system_prompt}] + messages

    while True:
        response = litellm.completion(
            model="gpt-5-nano",
            messages=messages,
            tools=TOOLS,
        )
        message = response.choices[0].message

        if not message.tool_calls:
            return message.content

        messages.append(message)
        for tool_call in message.tool_calls:
            if tool_call.function.name not in TOOLS_MAP:
                result = f"Error: Tool {tool_call.function.name} not found"
            else:
                result = TOOLS_MAP[tool_call.function.name](
                    **json.loads(tool_call.function.arguments)
                )

            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(result),
            })

What’s next

Once your simulation completes, you can:
  • Review conversations in the spatial view. Click any dot to see the full multi-turn conversation including tool calls and responses
  • Run metrics on the results to evaluate your agent’s performance across tool interactions
  • Iterate by adjusting your tool schemas or agent code, then run again
Agent simulation is in active development. We’d love your feedback. Email us at admin@guardrailsai.com with what’s working, what’s not, and what you’d like to see next.