What is a Trace?
A trace in Galtea represents a single operation or function call that occurs during an AI agent's execution. Traces capture the internal workings of your agent, such as tool calls, retrieval operations, chain orchestrations, and LLM invocations, providing deep visibility into how your agent processes requests. Traces are always linked to an inference result, so you can understand not just what your agent responded with, but how it arrived at that response. Every trace must belong to a specific inference result.
Why Use Traces?
Debugging
Identify exactly where and why your agent failed or produced unexpected results.
Performance Optimization
Pinpoint slow operations with latency tracking at every step.
Compliance & Auditing
Maintain a complete audit trail of all operations for regulatory requirements.
Cost Analysis
Understand which operations consume the most resources.
Trace Hierarchy
Traces support parent-child relationships, allowing you to visualize the complete execution flow of your agent. When a traced function calls another traced function, the hierarchy is captured automatically (see the sketch after this list). Each trace in the hierarchy carries:
- id: Unique identifier for the trace
- parent_trace_id: Reference to the parent trace (null for root traces)
- name: The operation name
- type: Classification of the operation (TraceType)
- description: Human-readable description of what the operation does
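As a minimal sketch of how the hierarchy forms (the `galtea` import path is an assumption; the @trace decorator and the automatic parent-child capture are what this page describes), a traced function that calls another traced function produces a child trace whose parent_trace_id points at the caller's trace:

```python
# Minimal sketch. The `galtea` import path is an assumption; the @trace
# decorator and automatic parent-child capture are described on this page.
from galtea import trace


@trace
def retrieve_documents(query: str) -> list[str]:
    # When called from answer_question, this becomes a child trace whose
    # parent_trace_id points at the answer_question trace.
    return ["passage about refunds", "passage about shipping"]


@trace
def answer_question(query: str) -> str:
    # Root trace for this request: parent_trace_id is null.
    context = retrieve_documents(query)
    return f"Answered using {len(context)} passages."


answer_question("What is the refund policy?")
```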
Trace Types
Traces are classified by type to help you understand the nature of each operation and debug issues more effectively (a brief usage sketch follows the table).
| Type | Definition | Why This Matters for Tracing |
|---|---|---|
| SPAN | Generic durations of work in a trace. | Default type for general operations that don’t fit other categories. Useful for grouping related work. |
| GENERATION | AI model generations including prompts, token usage, and costs. | This is where cost (tokens) and latency come from. Clearly see these operations and identify expensive calls and bottlenecks. |
| EVENT | Discrete point-in-time events. | Capture important moments without duration, like user interactions or state changes. |
| AGENT | Agent that orchestrates flow and uses tools with LLM guidance. | High-level orchestration nodes that coordinate multiple operations and make decisions. |
| TOOL | Tool/function calls (e.g., external APIs, calculations). | Deterministic or external calls where inputs, outputs, and side effects determine correctness. |
| CHAIN | Links between different application steps. | Composite orchestration nodes that run multiple internal steps and pass data between stages. |
| RETRIEVER | Data retrieval steps (vector store, database). | Operations that fetch contextual data which directly affect prompt relevance and the context window. |
| EVALUATOR | Functions that assess LLM outputs. | Operations that evaluate quality, safety, or correctness of generated content. |
| EMBEDDING | Embedding model calls. | Vector embedding operations for semantic search or similarity. |
| GUARDRAIL | Components that protect against malicious content. | Safety checks that filter or validate inputs/outputs. |
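The sketch below shows how a pipeline's steps might be classified. Passing the type as a decorator argument is an assumption, not a documented signature; only the type values themselves come from the table above.

```python
# Sketch only: `type=` as a decorator argument is an assumption; check the
# SDK reference for the exact parameter name. The type values come from the
# table above.
from galtea import trace


@trace(type="RETRIEVER")
def fetch_context(query: str) -> list[str]:
    return ["relevant passage"]  # e.g., a vector-store lookup


@trace(type="GENERATION")
def generate_answer(query: str, context: list[str]) -> str:
    return "drafted answer"  # the LLM call: tokens, cost, and latency attach here


@trace(type="AGENT")
def handle_request(query: str) -> str:
    # High-level orchestration node coordinating the steps above.
    return generate_answer(query, fetch_context(query))
```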
The @trace Decorator
The @trace decorator automatically captures function inputs, outputs, timing, errors, and parent-child relationships.
Syntax Options
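The decorator can be applied bare or with keyword arguments. A sketch of both forms follows; the `galtea` import path is an assumption, and include_docstring is the only parameter confirmed elsewhere on this page (see Trace Properties).

```python
from galtea import trace  # import path is an assumption


# Bare form: captures inputs, outputs, timing, and errors automatically.
@trace
def summarize(text: str) -> str:
    return text[:100]


# Parameterized form: include_docstring=True reuses the docstring as the
# trace description (see Trace Properties below). Other keyword arguments,
# if any, are not covered here.
@trace(include_docstring=True)
def rerank(candidates: list[str]) -> list[str]:
    """Order retrieved passages by relevance."""
    return sorted(candidates)
```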
Error Tracking
The decorator automatically captures exceptions. When an error occurs, the trace records the following (see the sketch after this list):
- The error message in the error field
- The execution time until the error
- The input data that caused the error
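A minimal sketch (the import path is an assumption): when a traced function raises, the exception still propagates to the caller, while the trace records the failure details listed above.

```python
from galtea import trace  # import path is an assumption


@trace
def parse_amount(raw: str) -> float:
    # A bad input raises here; the trace records the error message, the
    # elapsed time up to the failure, and the input that caused it.
    return float(raw)


try:
    parse_amount("twelve")  # the ValueError is still raised to the caller
except ValueError:
    pass  # ...but the failed call now shows up in the trace timeline
```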
Viewing Trace Hierarchy
After collecting traces, you can visualize the execution hierarchy in the Dashboard or reconstruct it programmatically from the collected trace data, for example:
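The sketch below assumes `traces` is a list of dicts exposing the documented id, parent_trace_id, and name fields; how you fetch them for an inference result is covered by the Trace Service reference linked below.

```python
# Sketch: `traces` is assumed to be a list of dicts with the documented
# id, parent_trace_id, and name fields; fetching them is SDK-specific
# (see the Trace Service reference).
from collections import defaultdict


def print_hierarchy(traces: list[dict]) -> None:
    """Print traces as an indented tree using their parent_trace_id links."""
    children = defaultdict(list)
    for t in traces:
        children[t["parent_trace_id"]].append(t)

    def walk(parent_id, depth: int) -> None:
        for t in children[parent_id]:
            print("  " * depth + t["name"])
            walk(t["id"], depth + 1)

    walk(None, 0)  # root traces have parent_trace_id = null


print_hierarchy([
    {"id": "t1", "parent_trace_id": None, "name": "answer_question"},
    {"id": "t2", "parent_trace_id": "t1", "name": "retrieve_documents"},
])
```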
SDK Integration
Tracing Tutorial
Step-by-step guide to instrumenting your agent and collecting traces.
Trace Service
Manage and collect traces for your AI agent operations using the SDK.
Trace Properties
Every trace exposes the following properties (a hypothetical example record follows the list):
- The inference result this trace belongs to. Every trace must be linked to an inference result.
- The name of the traced operation (e.g., function name).
- The type of operation: SPAN, GENERATION, EVENT, AGENT, TOOL, CHAIN, RETRIEVER, EVALUATOR, EMBEDDING, or GUARDRAIL.
- A human-readable description of the operation. Can be set manually via start_trace(description=...) or automatically from function docstrings using @trace(include_docstring=True). Maximum size: 32KB.
- The ID of the parent trace for hierarchical relationships.
- The input parameters passed to the operation. Maximum size: 128KB.
- The result returned by the operation. Maximum size: 128KB.
- Error message if the operation failed.
- The execution time of the operation in milliseconds.
- ISO 8601 timestamp when the operation started.
- ISO 8601 timestamp when the operation completed.
- Additional custom metadata about the trace. Maximum size: 128KB.
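To tie the properties together, here is a hypothetical trace record. The key names are illustrative assumptions (the SDK's exact schema may differ); each entry corresponds to a property described above.

```python
# Hypothetical example record; key names are assumptions for illustration only.
example_trace = {
    "id": "trc_123",                        # unique trace identifier
    "parent_trace_id": None,                # null for a root trace
    "inference_result_id": "inf_456",       # every trace belongs to one
    "name": "retrieve_documents",
    "type": "RETRIEVER",
    "description": "Fetch context passages from the vector store.",
    "input": {"query": "What is the refund policy?"},
    "output": ["passage about refunds"],
    "error": None,
    "latency_ms": 182,                      # execution time in milliseconds
    "started_at": "2024-01-01T12:00:00Z",   # ISO 8601
    "completed_at": "2024-01-01T12:00:00.182Z",
    "metadata": {"retriever": "faiss"},
}
```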
Best Practices
Use meaningful trace names
Choose descriptive names that clearly indicate the operation being traced:
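A sketch of the idea (the `galtea` import path is an assumption):

```python
from galtea import trace  # import path is an assumption


# Good: the function name doubles as a descriptive trace name.
@trace
def retrieve_customer_order_history(customer_id: str) -> list[dict]:
    return []


# Harder to work with: "step2" tells you nothing when scanning a trace tree.
@trace
def step2(x):
    return x
```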
Trace at meaningful boundaries
Trace operations that represent logical units of work, not every single function:
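For example (a sketch; the import path is an assumption): trace the pipeline stage, not every small helper it calls.

```python
from galtea import trace  # import path is an assumption


def normalize_whitespace(text: str) -> str:
    # Tiny helper: not worth its own trace.
    return " ".join(text.split())


@trace
def build_prompt(query: str, passages: list[str]) -> str:
    # One logical unit of work: a single trace covers the whole prompt assembly.
    cleaned = [normalize_whitespace(p) for p in passages]
    return f"Question: {query}\nContext:\n" + "\n".join(cleaned)


build_prompt("What is the refund policy?", ["Refunds  are issued within 14 days."])
```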
Select appropriate node types
Classify operations correctly to enable better filtering and analysis in the dashboard.
Keep input/output data reasonable
The decorator captures function arguments automatically. Consider what’s useful for debugging:
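One way to stay within the documented 128KB input/output limits is to pass a trimmed preview across the traced boundary. This is a sketch: the decorator is documented, but the truncation helper below is not part of the SDK and the import path is an assumption.

```python
from galtea import trace  # import path is an assumption

MAX_CHARS = 2_000  # comfortably under the documented 128KB input/output limits


def preview(text: str, limit: int = MAX_CHARS) -> str:
    """Truncate large payloads so captured inputs stay small but debuggable."""
    return text if len(text) <= limit else text[:limit] + "... [truncated]"


@trace
def summarize(document_preview: str) -> str:
    # The traced boundary receives a preview, not the full multi-megabyte
    # document, so the automatically captured input stays reasonable.
    return document_preview[:200]


summarize(preview("a very long document " * 10_000))
```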