
Overview

If you already instrument your LLM app with Langfuse — whether using @observe decorators or the LangChain CallbackHandler — you can send the same traces to Galtea for evaluation by swapping a single import. Under the hood, one span per function call flows to both Langfuse and Galtea simultaneously.

Real-Time Dual Export

Every span flows to both Langfuse and Galtea simultaneously — no polling, no extra API credentials needed.

Transparent to Langfuse

Nothing changes in Langfuse. Your dashboard, trace IDs, alerts, and URLs are completely unaffected.

Any Init Order

Initialize Galtea or Langfuse first — both orders work. The libraries detect each other automatically.

Selective Export

Galtea only exports traces when you explicitly link an inference_result_id. Otherwise, it does nothing.
Worried about impact on your Langfuse setup? See the Integration Guide for a detailed breakdown of what changes and what doesn’t.

Setup

1. Install

```bash
pip install 'galtea[langfuse]'
```
Requires Langfuse v3.0.0+ — v2.x is not supported.
If you use Langfuse’s LangChain CallbackHandler, install with LangChain support:
```bash
pip install 'galtea[langfuse-langchain]'
```

2. Initialize the Galtea client

Initialization order with Langfuse doesn’t matter.
```python
import galtea

client = galtea.Galtea(api_key="YOUR_API_KEY")
```
To get your API key, go to the settings page on the Galtea platform.

3. Swap the import

Replace your Langfuse observe import with the Galtea wrapper. The decorator API is identical — all @observe parameters (name, as_type, etc.) work the same way.
```python
# Before:
# from langfuse import observe

# After:
from galtea.integrations.langfuse import observe
```

Instrumenting Your Agent

Decorate your agent functions with @observe, exactly as you would with Langfuse. Nested decorators create a parent-child trace hierarchy automatically.
```python
from galtea.integrations.langfuse import observe


@observe(name="retrieve")
def retrieve(query: str) -> list[str]:
    # Your retrieval logic (vector DB, search, etc.)
    return ["relevant document 1", "relevant document 2"]


@observe(name="generate")
def generate(query: str, context: list[str]) -> str:
    # Your LLM call
    return "Generated response based on context"


@observe(name="my-agent")
def my_agent(user_input: str) -> str:
    context = retrieve(user_input)
    return generate(user_input, context)
```
When my_agent("Hello", inference_result_id=ir_id) runs, Galtea receives a 3-level trace:
```text
my-agent (root)
├── retrieve
└── generate
```

Observation types

Use the as_type parameter on @observe to set the observation type. Each type maps to a Galtea trace type automatically:
```python
from galtea.integrations.langfuse import observe


@observe(name="my-retriever", as_type="retriever")
def search_docs(query: str) -> list[str]:
    return ["doc1", "doc2"]


@observe(name="my-llm-call", as_type="generation")
def call_llm(prompt: str) -> str:
    return "LLM response"


@observe(name="my-tool", as_type="tool")
def call_api(endpoint: str) -> dict:
    return {"status": "ok"}


@observe(name="my-agent", as_type="agent")
def agent(user_input: str) -> str:
    docs = search_docs(user_input)
    api_result = call_api("/check")
    return call_llm(f"Context: {docs}, API: {api_result}, Question: {user_input}")
```
| Langfuse `as_type` | Galtea trace type | Description |
| --- | --- | --- |
| `span` (default) | `SPAN` | Generic unit of work |
| `generation` | `GENERATION` | LLM call with token usage |
| `agent` | `AGENT` | Agent orchestrating tools |
| `tool` | `TOOL` | Tool/function call |
| `retriever` | `RETRIEVER` | Vector DB or search query |
| `chain` | `CHAIN` | Link between steps |
| `evaluator` | `EVALUATOR` | Output quality assessment |
| `embedding` | `EMBEDDING` | Embedding model call |
| `guardrail` | `GUARDRAIL` | Content safety check |
Unknown Langfuse observation types are automatically mapped to SPAN in Galtea. Your traces are never dropped.
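The fallback behavior described above can be pictured as a plain dictionary lookup. This is an illustrative sketch mirroring the documented mapping, not Galtea's internal code:

```python
# Illustrative sketch of the as_type -> Galtea trace type mapping above.
# NOT Galtea's actual implementation; it only mirrors the documented behavior.
GALTEA_TRACE_TYPES = {
    "span": "SPAN",
    "generation": "GENERATION",
    "agent": "AGENT",
    "tool": "TOOL",
    "retriever": "RETRIEVER",
    "chain": "CHAIN",
    "evaluator": "EVALUATOR",
    "embedding": "EMBEDDING",
    "guardrail": "GUARDRAIL",
}


def map_observation_type(as_type: str) -> str:
    # Unknown observation types fall back to SPAN, so traces are never dropped.
    return GALTEA_TRACE_TYPES.get(as_type, "SPAN")


print(map_observation_type("generation"))        # GENERATION
print(map_observation_type("some-future-type"))  # SPAN
```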

Context manager API

If you use Langfuse’s start_as_current_observation context manager instead of the @observe decorator, Galtea provides an equivalent wrapper:
```python
from galtea.integrations.langfuse import start_as_current_observation

# Create spans using context managers instead of decorators
with start_as_current_observation(
    name="process-query",
    as_type="span",
    inference_result_id="inferenceResult_abc123",
) as root_span:
    # All child spans (decorator or context manager) are children of root_span
    docs = search_docs("user query")

    with start_as_current_observation(name="generate-response", as_type="generation") as gen:
        response = "Generated response"
        gen.update(output=response, model="gpt-4")

    root_span.update(output=response)
```
Both APIs can be mixed freely — an @observe-decorated function can be called inside a start_as_current_observation block (or vice versa) and the parent-child hierarchy is preserved.
Only the root start_as_current_observation call needs the Galtea wrapper. Child calls on yielded spans (e.g., root.start_as_current_observation(...)) are native Langfuse — no change needed.
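As a minimal sketch of that mixing (the decorated function and the inference_result_id value here are placeholders for illustration):

```python
from galtea.integrations.langfuse import observe, start_as_current_observation


@observe(name="summarize", as_type="generation")
def summarize(docs: list[str]) -> str:
    return "Summary of " + ", ".join(docs)


# A decorated function called inside a context-manager span becomes its child:
with start_as_current_observation(
    name="mixed-root",
    as_type="span",
    inference_result_id="inferenceResult_abc123",  # placeholder ID
) as root:
    summary = summarize(["doc1", "doc2"])  # child span of mixed-root
    root.update(output=summary)
```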

LangChain CallbackHandler

If you use Langfuse’s CallbackHandler for LangChain tracing, Galtea provides an equivalent wrapper:
```python
# Before:
# from langfuse.langchain import CallbackHandler

# After:
from galtea.integrations.langfuse import CallbackHandler
```
Your handler initialization stays the same — create it once at app startup and pass it to any LangChain .invoke(), .batch(), or .stream() call. To link traces to Galtea, call set_inference_result_id before each invocation:
```python
from galtea.integrations.langfuse import CallbackHandler

handler = CallbackHandler()  # at app init; no inference_result_id yet

# Per request:
handler.set_inference_result_id("inferenceResult_abc123")
# chain.invoke({"input": "query"}, config={"callbacks": [handler]})
# Context is automatically cleared when the chain finishes.
```
The handler automatically manages set_context / clear_context around LangChain callback lifecycles — no context managers or manual cleanup needed. You can also pass inference_result_id directly in the constructor if you prefer to create a handler per request.
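For the per-request style, the constructor form looks like this (a short sketch; the ID is a placeholder):

```python
from galtea.integrations.langfuse import CallbackHandler

# Alternative: bind the ID at construction time, one handler per request.
handler = CallbackHandler(inference_result_id="inferenceResult_abc123")
# chain.invoke({"input": "query"}, config={"callbacks": [handler]})
```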
The CallbackHandler requires langchain to be installed. Install it with: pip install langchain
All three APIs can be mixed freely — for example, an @observe-decorated function can pass a CallbackHandler to a LangChain chain inside it, and the parent-child trace hierarchy is preserved.

Linking Traces to an Inference Result

Galtea only exports trace data when an inference_result_id is explicitly linked. Without it, Galtea does nothing: no data is sent and no spans are modified. There are three ways to link traces.

Automatic with generate() or simulate()

When you use generate() or simulate(), the SDK manages the inference_result_id for you, so no extra code is needed:

```python
# With generate(), the SDK manages the context internally:
result = client.inference_results.generate(agent=my_agent, session=session)

# With simulate(), each turn gets its own inference result and traces:
result = client.simulator.simulate(session_id=session.id, agent=my_agent)
```

Passing inference_result_id as a kwarg

Pass inference_result_id to the outermost @observe-decorated function. The wrapper manages the trace context automatically:
```python
# The wrapper handles set_context/clear_context automatically:
result = my_agent("What is gestational diabetes?", inference_result_id="inferenceResult_abc123")
```
The inference_result_id kwarg is consumed by the wrapper — it does not reach your function’s parameters.

Manual set_context / clear_context

For full control, manage the context lifecycle yourself:
```python
from galtea.utils.tracing import clear_context, set_context

token = set_context(inference_result_id="inferenceResult_abc123")
try:
    result = my_agent("What is gestational diabetes?")
finally:
    clear_context(token)
```
set_context must be called outside the @observe-decorated call, before it runs. If it is called inside the decorated function, the outermost span is missed.
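If you use this pattern in several places, the try/finally dance can be folded into a small context manager. Note that galtea_trace is a hypothetical helper name for illustration, not part of the SDK:

```python
from contextlib import contextmanager

from galtea.utils.tracing import clear_context, set_context


@contextmanager
def galtea_trace(inference_result_id: str):
    # Hypothetical convenience wrapper around set_context/clear_context.
    token = set_context(inference_result_id=inference_result_id)
    try:
        yield
    finally:
        clear_context(token)


# Usage:
# with galtea_trace("inferenceResult_abc123"):
#     result = my_agent("What is gestational diabetes?")
```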

How It Works

Both Galtea and Langfuse use OpenTelemetry internally. When both are initialized, they share the same tracing infrastructure — each span created by @observe flows to both Langfuse cloud and the Galtea API. Galtea only processes spans that have an inference_result_id linked; everything else is ignored. Langfuse observation attributes (type, input, output, metadata) are automatically mapped to their Galtea equivalents.