Overview
If you already instrument your LLM app with Langfuse using @observe decorators, you can send the same traces to Galtea for evaluation by swapping a single import. Under the hood, one span per function call flows to both Langfuse and Galtea simultaneously.
Real-Time Dual Export
Every span flows to both Langfuse and Galtea simultaneously — no polling, no extra API credentials needed.
Transparent to Langfuse
Nothing changes in Langfuse. Your dashboard, trace IDs, alerts, and URLs are completely unaffected.
Any Init Order
Initialize Galtea or Langfuse first — both orders work. The libraries detect each other automatically.
Selective Export
Galtea only exports traces when you explicitly link an inference_result_id; otherwise, it does nothing.

Worried about impact on your Langfuse setup? See the Migration Guide for a detailed breakdown of what changes and what doesn’t.
Setup
1. Install
Requires Langfuse v3.0.0+ — v2.x is not supported.
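For example, with pip (the PyPI package name galtea is an assumption here; use the name given in the Galtea docs):

```shell
# Install Langfuse v3+ alongside the Galtea SDK
pip install "langfuse>=3.0.0" galtea
```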
2. Initialize the Galtea client
Initialization order with Langfuse doesn’t matter. To get your API key, go to the settings page on the Galtea platform.
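A minimal initialization sketch, assuming the client is constructed as Galtea(api_key=...) — verify the import path and parameter names against the SDK reference:

```python
# Assumed client construction; check the Galtea SDK docs for exact parameters
from galtea import Galtea

galtea = Galtea(api_key="YOUR_GALTEA_API_KEY")  # key from the platform settings page
```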
3. Swap the import
Replace your Langfuse observe import with the Galtea wrapper. The decorator API is identical: all @observe parameters (name, as_type, etc.) work the same way.
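For illustration, the swap might look like this (the Galtea module path shown is an assumption, not the confirmed import; take the exact path from the SDK docs):

```python
# Before -- the decorator comes straight from Langfuse:
# from langfuse import observe

# After -- same decorator name, imported from the Galtea wrapper
# (illustrative path; confirm against the Galtea SDK):
from galtea import observe
```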
Instrumenting Your Agent
Decorate your agent functions with @observe, exactly as you would with Langfuse. Nested decorators create a parent-child trace hierarchy automatically.
When my_agent("Hello", inference_result_id=ir_id) runs, Galtea receives a 3-level trace:
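As a runnable sketch of that hierarchy, observe is stubbed below as a pass-through decorator so the structure executes without either SDK installed; with the real wrapper import, the same nesting produces the parent-child trace described above:

```python
import functools

def observe(as_type="span"):
    """Stand-in for the Galtea wrapper's @observe decorator (pass-through)."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return deco

@observe(as_type="generation")
def call_llm(prompt):
    return f"echo: {prompt}"           # placeholder for a real LLM call

@observe(as_type="tool")
def lookup(query):
    return f"results for {query}"      # placeholder tool call

@observe(as_type="agent")
def my_agent(message):
    context = lookup(message)                  # child span: TOOL
    return call_llm(f"{message} | {context}")  # child span: GENERATION

print(my_agent("Hello"))
```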
Observation types
Use the as_type parameter on @observe to set the observation type. Each type maps to a Galtea trace type automatically:
| Langfuse as_type | Galtea trace type | Description |
|---|---|---|
| span (default) | SPAN | Generic unit of work |
| generation | GENERATION | LLM call with token usage |
| agent | AGENT | Agent orchestrating tools |
| tool | TOOL | Tool/function call |
| retriever | RETRIEVER | Vector DB or search query |
| chain | CHAIN | Link between steps |
| evaluator | EVALUATOR | Output quality assessment |
| embedding | EMBEDDING | Embedding model call |
| guardrail | GUARDRAIL | Content safety check |
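Conceptually, the table behaves like a lookup with a SPAN fallback; this is an illustrative sketch, not the SDK’s actual code:

```python
# Illustrative as_type -> Galtea trace type lookup; unknown types fall back to SPAN
AS_TYPE_TO_GALTEA = {
    "span": "SPAN",
    "generation": "GENERATION",
    "agent": "AGENT",
    "tool": "TOOL",
    "retriever": "RETRIEVER",
    "chain": "CHAIN",
    "evaluator": "EVALUATOR",
    "embedding": "EMBEDDING",
    "guardrail": "GUARDRAIL",
}

def to_galtea_type(as_type: str) -> str:
    return AS_TYPE_TO_GALTEA.get(as_type, "SPAN")
```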
Unknown Langfuse observation types are automatically mapped to SPAN in Galtea. Your traces are never dropped.

Context manager API
If you use Langfuse’s start_as_current_observation context manager instead of the @observe decorator, Galtea provides an equivalent wrapper:
An @observe-decorated function can be called inside a start_as_current_observation block (or vice versa), and the parent-child hierarchy is preserved.
Only the root start_as_current_observation call needs the Galtea wrapper. Child calls on yielded spans (e.g., root.start_as_current_observation(...)) are native Langfuse calls; no change needed.

Linking Traces to an Inference Result
Galtea only exports trace data when an inference_result_id is explicitly linked. Without it, Galtea does nothing: no data is sent, no spans are modified. There are three ways to link traces:
Using SDK methods (recommended)
When using generate() or simulate(), the SDK manages inference_result_id automatically, with zero extra code:
Passing inference_result_id as a kwarg
Pass inference_result_id to the outermost @observe-decorated function. The wrapper manages the trace context automatically:
The inference_result_id kwarg is consumed by the wrapper; it never reaches your function’s parameters.
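The consumption behavior can be sketched with a simplified stand-in decorator (observe_stub below is hypothetical; the real wrapper additionally links the trace context to the id):

```python
import functools

def observe_stub(fn):
    """Simplified stand-in showing how the wrapper consumes the kwarg."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        inference_result_id = kwargs.pop("inference_result_id", None)  # consumed here
        # the real wrapper would link the Galtea trace context to this id
        return fn(*args, **kwargs)
    return wrapper

@observe_stub
def my_agent(message):
    # note: no inference_result_id parameter needed in the signature
    return message.upper()

print(my_agent("hello", inference_result_id="ir-123"))  # prints HELLO
```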
Manual set_context / clear_context
For full control, manage the context lifecycle yourself:
How It Works
Both Galtea and Langfuse use OpenTelemetry internally. When both are initialized, they share the same tracing infrastructure: each span created by @observe flows to both Langfuse Cloud and the Galtea API. Galtea only processes spans that have an inference_result_id linked; everything else is ignored. Langfuse observation attributes (type, input, output, metadata) are automatically mapped to their Galtea equivalents.