Creates an optimized copy of a metric by sending its human-annotated evaluations to the metrics-generator service, which uses the gap between AI and human scores to produce a better judge prompt. Only FULL_PROMPT and PARTIAL_PROMPT metrics owned by an organization can be optimized. Requires at least 20 evaluations with both an AI score and a human annotation. Returns a placeholder metric with isBeingOptimized=true. See Metrics.
API key authorization. Pass your API key in the Authorization header as a Bearer token. Both new (gsk_*) and legacy (gsk-*) API keys are accepted, e.g. Authorization: Bearer gsk_... or Authorization: Bearer gsk-....
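A minimal sketch of building the optimize request with the Bearer authorization header described above. The base URL and the /metrics/{id}/optimize path are assumptions for illustration, not taken from this reference; sending the request is omitted.

```python
import urllib.request

# Assumed base URL -- replace with the real API host.
BASE_URL = "https://api.example.com/v1"

def build_optimize_request(metric_id: str, api_key: str) -> urllib.request.Request:
    """Build the POST request that asks the metrics-generator to optimize a metric.

    The path below is a hypothetical example; consult the endpoint's actual URL.
    """
    req = urllib.request.Request(
        f"{BASE_URL}/metrics/{metric_id}/optimize",
        method="POST",
    )
    # Both key formats are accepted: new "gsk_..." and legacy "gsk-...".
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

req = build_optimize_request("metric_123", "gsk_example")
```

On success the service responds with a placeholder metric whose isBeingOptimized field is true, which you can poll until optimization completes.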
Source Metric ID
Optimized metric placeholder created successfully
"metric_123"
"org_123"
"user_123"
"Accuracy"
Ordered list of inference-result fields the evaluator needs (e.g. input, actualOutput, expectedOutput, retrievalContext). Determines which data the evaluation engine extracts from each inference result.
["input", "actualOutput", "expectedOutput"]
Allowed values: SELF_HOSTED, FULL_PROMPT, PARTIAL_PROMPT, HUMAN_EVALUATION, GEVAL, DEEPEVAL, DETERMINISTIC
"FULL_PROMPT"
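The evaluationParams list above determines which fields the evaluation engine extracts from each inference result. A sketch of that selection, assuming inference results are plain dictionaries; the extraction logic here is illustrative, not the engine's actual implementation.

```python
def extract_eval_fields(inference_result: dict, evaluation_params: list[str]) -> dict:
    """Keep only the fields the evaluator declared, preserving the declared order."""
    return {p: inference_result[p] for p in evaluation_params if p in inference_result}

# Hypothetical inference result with the common fields named in the docs.
result = {
    "input": "What is 2+2?",
    "actualOutput": "4",
    "expectedOutput": "4",
    "retrievalContext": ["math basics"],
}

extract_eval_fields(result, ["input", "actualOutput", "expectedOutput"])
# → {"input": "What is 2+2?", "actualOutput": "4", "expectedOutput": "4"}
```

Fields not listed in evaluationParams (here, retrievalContext) are simply not passed to the evaluator.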
"Evaluate the accuracy of the response"
["accuracy", "quality"]
"Measures the accuracy of responses"
"https://docs.example.com/metrics/accuracy"
"GPT-4"
When true, evaluationParams are injected at the top level of the evaluator prompt instead of nested inside the conversation context.
Whether the metric is currently being optimized.
["spec_123"]
["ug_123"]