GET /metrics/{id}

Get metric by ID
curl --request GET \
  --url https://api.galtea.ai/metrics/{id} \
  --header 'Authorization: Bearer <token>'
{
  "id": "metric_123",
  "organizationId": "org_123",
  "userId": "user_123",
  "name": "Accuracy",
  "evaluationParams": [
    "input",
    "actualOutput",
    "expectedOutput"
  ],
  "source": "FULL_PROMPT",
  "judgePrompt": "Evaluate the accuracy of the response",
  "tags": [
    "accuracy",
    "quality"
  ],
  "description": "Measures the accuracy of responses",
  "documentationUrl": "https://docs.example.com/metrics/accuracy",
  "evaluatorModelName": "GPT-4",
  "areEvalParamsTop": true,
  "specificationIds": [
    "spec_123"
  ],
  "createdAt": "2023-11-07T05:31:56Z",
  "legacyAt": "2023-11-07T05:31:56Z",
  "disabledAt": "2023-11-07T05:31:56Z"
}
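The curl request above can be sketched in Python with only the standard library. The base URL and header come from this page; the function names and the idea of reading the token from an environment variable are illustrative, not part of the Galtea SDK.

```python
import json
import urllib.request

BASE_URL = "https://api.galtea.ai"

def build_metric_request(metric_id: str, token: str) -> urllib.request.Request:
    """Build the GET /metrics/{id} request with Bearer authorization."""
    return urllib.request.Request(
        url=f"{BASE_URL}/metrics/{metric_id}",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

def get_metric(metric_id: str, token: str) -> dict:
    """Fetch a metric by ID and decode the JSON response body."""
    with urllib.request.urlopen(build_metric_request(metric_id, token)) as resp:
        return json.load(resp)
```

Either a new-style (gsk_...) or legacy (gsk-...) key can be passed as the token.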

Authorizations

Authorization
string
header
required

API key authorization. Pass your API key in the Authorization header as a Bearer token. Both new (gsk_...) and legacy (gsk-...) API keys are accepted, e.g. Authorization: Bearer gsk_... or Authorization: Bearer gsk-....

Path Parameters

id
string
required

Metric ID

Response

Metric retrieved successfully

id
string
Example:

"metric_123"

organizationId
string | null
Example:

"org_123"

userId
string | null
Example:

"user_123"

name
string
Example:

"Accuracy"

evaluationParams
string[]

Ordered list of inference-result fields the evaluator needs (e.g. input, actualOutput, expectedOutput, retrievalContext). Determines which data the evaluation engine extracts from each inference result.

Example:
["input", "actualOutput", "expectedOutput"]
source
enum<string> | null
Available options:
SELF_HOSTED,
FULL_PROMPT,
PARTIAL_PROMPT,
HUMAN_EVALUATION,
GEVAL,
DEEPEVAL,
DETERMINISTIC
Example:

"FULL_PROMPT"

judgePrompt
string | null
Example:

"Evaluate the accuracy of the response"

tags
string[]
Example:
["accuracy", "quality"]
description
string | null
Example:

"Measures the accuracy of responses"

documentationUrl
string | null
Example:

"https://docs.example.com/metrics/accuracy"

evaluatorModelName
string | null
Example:

"GPT-4"

areEvalParamsTop
boolean | null

When true, evaluationParams are injected at the top level of the evaluator prompt instead of nested inside the conversation context.
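A sketch of what this flag implies for prompt assembly. The template, tag name, and function below are purely illustrative assumptions; only the top-level-versus-nested distinction comes from this page.

```python
def assemble_judge_prompt(judge_prompt: str, params: dict, are_eval_params_top: bool) -> str:
    """Illustrative only: inject evaluation params either at the top level
    of the evaluator prompt or nested inside a conversation-context section."""
    rendered = "\n".join(f"{key}: {value}" for key, value in params.items())
    if are_eval_params_top:
        # Params lead the prompt at the top level.
        return f"{rendered}\n\n{judge_prompt}"
    # Params nested inside the conversation context.
    return f"{judge_prompt}\n\n<conversation_context>\n{rendered}\n</conversation_context>"
```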

specificationIds
string[]
Example:
["spec_123"]
createdAt
string<date-time>

Timestamp when the metric was created.

legacyAt
string<date-time> | null

Timestamp when the metric was marked as legacy, if applicable.

disabledAt
string<date-time> | null

Timestamp when the metric was disabled, if applicable.