For detailed information about trace properties, node types, and hierarchy, see the Trace concept page.
Setup
There are two primary ways to set up tracing in your agent. Choose the option that fits your needs.
a) The @trace Decorator
Add the @trace decorator to any function you want to track. It automatically captures: name, inputs, outputs, timing, errors, and parent-child relationships.
b) The start_trace Context Manager
For fine-grained control over specific code blocks, use start_trace.
span.update() method lets you add output, metadata, or change the type after execution.
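A rough self-contained sketch of how such a context manager and `span.update()` fit together; the `Span` fields and `start_trace` signature below are assumptions for illustration, not the SDK's actual API:

```python
import contextlib
import time

class Span:
    """Illustrative span object yielded by start_trace."""
    def __init__(self, name, type="span"):
        self.name, self.type = name, type
        self.output = None
        self.metadata = {}
        self.duration = 0.0

    def update(self, output=None, metadata=None, type=None):
        # Attach results after the traced work has run
        if output is not None:
            self.output = output
        if metadata:
            self.metadata.update(metadata)
        if type is not None:
            self.type = type

@contextlib.contextmanager
def start_trace(name, **kwargs):
    span = Span(name, **kwargs)
    start = time.perf_counter()
    try:
        yield span
    finally:
        span.duration = time.perf_counter() - start

with start_trace("rerank") as span:
    ranked = sorted([3, 1, 2])
    span.update(output=ranked,
                metadata={"model": "hypothetical-reranker"},
                type="tool")
```

The block gives you a handle on the span while the work runs, so results that only exist after execution can still be attached to it.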
Both @trace and start_trace automatically capture parent-child relationships between operations when they are nested inside each other, giving you a full hierarchical view of your agent's behavior.
Collection
Traces are built locally. To send them to Galtea, you need to associate them with an inference_result_id. There are two approaches:
a) Automatic Collection
Use inference_results.generate() or simulator.simulate() for hands-free trace management. These methods automatically:
- Set the trace context (with the appropriate setup)
- Execute your agent
- Flush all collected traces to Galtea
- Clean up the context
To use automatic collection, implement the Agent abstract class and decorate your methods with @trace:
Single-Turn with generate()
When using generate(), the trace context is automatically set for the entire duration of the agent’s execution. Just call generate() with your agent and session:
Multi-Turn with simulate()
When using the Conversation Simulator, tracing works out-of-the-box. Decorate your agent methods with @trace and run:
b) Manual Collection
If you're using Direct Inference (where Galtea calls your endpoint), you can pass {{ inference_result_id }} in the input template and use set_context in your endpoint handler to collect traces automatically. See Collecting Traces During Direct Inference for the full walkthrough.
Use set_context() and clear_context() to manually manage the trace lifecycle:
clear_context(token, flush=True) automatically flushes all pending traces for the inference result before clearing. Set flush=False if you want to discard traces without sending them.
Remote Agent Tracing
When your agent runs on a remote server (e.g., deployed as a FastAPI service), OpenTelemetry's thread-local context does not cross the HTTP boundary. The remote server cannot discover the inference_result_id to correlate traces.
To solve this, AgentInput includes an inference_result_id field that is automatically populated during generate() and simulate() calls. Forward this ID to your remote server so it can attach traces to the same inference result.
Agent / Client Side
In your agent function, read input_data.inference_result_id and send it alongside the request payload:
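A sketch of the client side. `AgentInput` carrying an `inference_result_id` field is described above; the `AgentInput` definition, payload shape, and `send` helper below are hypothetical stand-ins for your HTTP client:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentInput:
    """Illustrative stand-in for the SDK's AgentInput."""
    user_message: str
    inference_result_id: Optional[str] = None  # populated by generate()/simulate()

def call_remote_agent(input_data: AgentInput, send=None):
    payload = {
        "message": input_data.user_message,
        # Forward the ID so the remote server can attach traces to it
        "inference_result_id": input_data.inference_result_id,
    }
    send = send or (lambda p: p)  # stand-in for an HTTP POST to your service
    return send(payload)

resp = call_remote_agent(AgentInput("hello", inference_result_id="ir-7"))
```

Whether the ID travels in the JSON body or a header is up to you; the only requirement is that the remote handler can read it back out.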
Remote Server Side
On the remote server, use set_context() and clear_context() with the received inference_result_id:
The remote server must have the Galtea SDK installed (pip install galtea) to use set_context() and clear_context().