Returns the count of evaluations that have both an AI score and a human-provided score (annotation). For FULL_PROMPT/PARTIAL_PROMPT metrics, this means the AI judged the evaluation and a human later reviewed it. For HUMAN_EVALUATION metrics, both fields hold the same human-provided value (for analytics compatibility).
API key authorization. Pass your API key in the Authorization header as a Bearer token. Both new keys (prefixed gsk_) and legacy keys (prefixed gsk-) are accepted, e.g. Authorization: Bearer gsk_... or Authorization: Bearer gsk-....
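As a minimal sketch of the header format described above (the helper name and placeholder key are illustrative, not part of the API):

```python
def auth_header(api_key: str) -> dict:
    """Build the Authorization header; works for both gsk_ and gsk- keys."""
    return {"Authorization": f"Bearer {api_key}"}

# Example with a placeholder key:
headers = auth_header("gsk_your_key_here")
# headers == {"Authorization": "Bearer gsk_your_key_here"}
```

Pass the resulting dict as the request headers in your HTTP client of choice.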
Metric ID
Human annotation count retrieved successfully