Returns evaluable metrics annotated with whether they can be satisfied by the version's product metadata and conversation endpoint connection output mapping. Trace-dependent and runtime-only parameters are out of scope and treated as always satisfied.
API key authorization. Pass your API key in the Authorization header as a Bearer token. Both the new (gsk_...) and legacy (gsk-...) key prefixes are accepted, e.g. Authorization: Bearer gsk_... or Authorization: Bearer gsk-....
Version ID to evaluate compatibility against
Maximum number of metrics to return (default 100)
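A request carrying the parameters above might be constructed as follows. This is a sketch: the host, endpoint path, parameter names (versionId, limit), and API key are placeholders inferred from the documented fields, not confirmed values.

```python
import urllib.parse
import urllib.request

# Placeholder values -- the real host and endpoint path are not specified here.
BASE_URL = "https://api.galtea.ai"  # hypothetical host
VERSION_ID = "version_123"          # placeholder version ID
API_KEY = "gsk_your_key_here"       # placeholder API key

# Query parameters: the version to check compatibility against,
# and the maximum number of metrics to return (default 100).
params = urllib.parse.urlencode({"versionId": VERSION_ID, "limit": 100})
url = f"{BASE_URL}/metrics/compatibility?{params}"  # hypothetical path

request = urllib.request.Request(
    url,
    headers={"Authorization": f"Bearer {API_KEY}"},  # Bearer token auth
)
# urllib.request.urlopen(request) would send the request; omitted here.
```

Sending the request is left out so the sketch stays side-effect free; swap in your own base URL and key before calling urlopen.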
List of metrics with compatibility annotations
"metric_123"
"Accuracy"
Whether the metric can be satisfied by the resolved product metadata and conversation endpoint connection output mapping.
One of: SELF_HOSTED, FULL_PROMPT, PARTIAL_PROMPT, HUMAN_EVALUATION, GEVAL, DEEPEVAL, DETERMINISTIC. Example: "FULL_PROMPT"
"Measures the accuracy of responses"
Human-readable explanation of why the metric is incompatible. Non-null if and only if isCompatible is false.
"Missing required parameters: product_capabilities, retrieval_context"