Galtea’s tracing feature captures every operation your agent performs—tool calls, retrieval operations, LLM invocations—with minimal code changes. This tutorial shows you how to instrument your agent and collect traces.
For detailed information about trace properties, node types, and hierarchy, see the Trace concept page.

1. The @trace Decorator

Add the @trace decorator to any function you want to track. It automatically captures the function's name, inputs, outputs, timing, errors, and the parent-child relationships between nested traced calls.
from galtea import trace, NodeType

@trace(name="my_operation", node_type=NodeType.TOOL)
def my_function(query: str) -> str:
    return "result"

2. Collect Traces

Method 1: Manual Collection

For full control over the trace lifecycle:
from galtea import Galtea, trace, NodeType

galtea = Galtea(api_key="YOUR_API_KEY")

# Define traced functions
@trace(name="search_docs", node_type=NodeType.RETRIEVER)
def search(query: str) -> list[dict]:
    return [{"id": "doc_1", "content": "..."}]

@trace(name="generate_response", node_type=NodeType.LLM)
def generate(context: list, query: str) -> str:
    return "Based on the context..."

@trace(name="main")
def run_agent(query: str) -> str:
    docs = search(query)
    return generate(docs, query)

# Setup
session = galtea.sessions.create(version_id=version.id)  # assumes a version created earlier

# 1. Start trace collection
galtea.traces.start_collection_context()

# 2. Run your logic
user_input = "What's the price?"
response = run_agent(user_input)

# 3. Create inference result
inference_result = galtea.inference_results.create(
    session_id=session.id,
    input=user_input,
    output=response
)

# 4. (Optional) Preview traces before saving
traces = galtea.traces.get_all_from_context()
for t in traces:
    print(f"{t['name']}: {t['latency_ms']:.2f}ms")
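Since the preview returns plain dicts, you can do more than print them. For example, totaling latency per trace name makes slow operations easy to spot. This is a generic sketch that assumes only the `name` and `latency_ms` keys shown above (the `latency_by_name` helper is illustrative, not part of the SDK):

```python
from collections import defaultdict

def latency_by_name(traces):
    """Sum latency_ms per trace name to highlight slow operations."""
    totals = defaultdict(float)
    for t in traces:
        totals[t["name"]] += t["latency_ms"]
    return dict(totals)

# Example with the dict shape printed above:
traces = [
    {"name": "search_docs", "latency_ms": 12.5},
    {"name": "generate_response", "latency_ms": 830.0},
    {"name": "search_docs", "latency_ms": 9.25},
]
latency_by_name(traces)
# → {"search_docs": 21.75, "generate_response": 830.0}
```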

# 5. Save traces and clean up
galtea.traces.save_context(
    session_id=session.id,
    inference_result_id=inference_result.id
)

Method 2: Automatic Collection

Use inference_results.generate() for hands-free trace management. This requires implementing the Agent abstract class:
from galtea import Galtea, Agent, AgentInput, AgentResponse, trace, NodeType

galtea = Galtea(api_key="YOUR_API_KEY")

class MyAgent(Agent):
    @trace(name="search_docs", node_type=NodeType.RETRIEVER)
    def search(self, query: str) -> list[dict]:
        return [{"id": "doc_1", "content": "..."}]
    
    @trace(name="generate_response", node_type=NodeType.LLM)
    def generate(self, context: list, query: str) -> str:
        return "Based on the context..."
    
    @trace(name="main", node_type=NodeType.CHAIN)
    def call(self, input: AgentInput) -> AgentResponse:
        query = input.last_user_message_str()
        docs = self.search(query)
        response = self.generate(docs, query)
        return AgentResponse(content=response, retrieval_context=str(docs))

# Setup
session = galtea.sessions.create(version_id=version.id)  # assumes a version created earlier
agent = MyAgent()

# Everything is handled automatically
inference_result = galtea.inference_results.generate(
    agent=agent,
    session=session,
    user_input="What's the price?"
)
# Traces are collected, saved, and context is cleaned up automatically

Method 3: Conversation Simulator

When using the Conversation Simulator, tracing works out of the box. Simply decorate your agent methods with @trace and run:
result = galtea.simulator.simulate(
    session_id=session.id,
    agent=agent,
    max_turns=10,
    log_inference_results=True  # Traces are saved automatically per turn
)

3. Advanced: Custom Context Management

For advanced use cases, the trace service exposes additional methods:
# Add a trace manually to the current context
galtea.traces.add_to_context({
    "name": "custom_operation",
    "node_type": "CUSTOM",
    "input_data": {"key": "value"},
    "output_data": {"result": "data"},
    "latency_ms": 50.0
})

# Clear context without saving (e.g., on error)
galtea.traces.clear_context()
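add_to_context is useful for operations you cannot decorate, such as a call into third-party or legacy code: time the call yourself and append a trace dict in the shape shown above. A hedged sketch (`build_manual_trace` is an illustrative helper, not part of the SDK):

```python
import time

def build_manual_trace(name, node_type, fn, *args, **kwargs):
    """Call fn, time it, and return (result, trace_dict), where trace_dict
    matches the shape accepted by galtea.traces.add_to_context()."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    trace = {
        "name": name,
        "node_type": node_type,
        "input_data": {"args": args, "kwargs": kwargs},
        "output_data": {"result": result},
        "latency_ms": (time.perf_counter() - start) * 1000,
    }
    return result, trace

# Usage (inside an active collection context); legacy_client is hypothetical:
# result, t = build_manual_trace("legacy_lookup", "CUSTOM", legacy_client.lookup, "SKU-1")
# galtea.traces.add_to_context(t)
```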
See the Trace Service API reference for all available methods.

Summary

| Method                   | Control    | Best For                              |
|--------------------------|------------|---------------------------------------|
| Manual                   | Full       | Debugging, testing, custom workflows  |
| Automatic (`generate()`) | Simplified | Production single-turn inference      |
| Simulator                | Built-in   | Multi-turn conversation testing       |

Next Steps