GET /metrics

Get metrics
curl --request GET \
  --url https://api.galtea.ai/metrics \
  --header 'Authorization: Bearer <token>'
[
  {
    "id": "metric_123",
    "organizationId": "org_123",
    "userId": "user_123",
    "name": "Accuracy",
    "evaluationParams": [
      "input",
      "actualOutput",
      "expectedOutput"
    ],
    "source": "FULL_PROMPT",
    "judgePrompt": "Evaluate the accuracy of the response",
    "tags": [
      "accuracy",
      "quality"
    ],
    "description": "Measures the accuracy of responses",
    "documentationUrl": "https://docs.example.com/metrics/accuracy",
    "evaluatorModelName": "GPT-4",
    "areEvalParamsTop": true,
    "specificationIds": [
      "spec_123"
    ],
    "createdAt": "2023-11-07T05:31:56Z",
    "legacyAt": "2023-11-07T05:31:56Z",
    "disabledAt": "2023-11-07T05:31:56Z"
  }
]

Authorizations

Authorization
string
header
required

API key authorization. Pass your API key in the Authorization header as a Bearer token. Both new (gsk_*) and legacy (gsk-*) API keys are accepted, e.g. Authorization: Bearer gsk_... or Authorization: Bearer gsk-....
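As a minimal sketch of the authorization scheme above, the snippet below builds the Bearer header in Python. The GALTEA_API_KEY environment variable name is a placeholder chosen for illustration, not something the docs prescribe.

```python
import os

# Read the API key from the environment (placeholder variable name).
# Both gsk_* and legacy gsk-* keys are accepted by the API.
api_key = os.environ.get("GALTEA_API_KEY", "gsk_example")
headers = {"Authorization": f"Bearer {api_key}"}
```

The same header works for every endpoint on the API, not just GET /metrics.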

Query Parameters

ids
string[]

Filter by metric IDs

organizationIds
string[]

Filter by organization IDs

productIds
string[]

Filter by product IDs

names
string[]

Filter by metric names (exact match, multiple)

name
string

Filter by metric name (partial match)

description
string

Filter by metric description (partial match)

tags
string[]

Filter by tags

sources
enum<string>[]

Filter by metric sources

Available options:
SELF_HOSTED,
FULL_PROMPT,
PARTIAL_PROMPT,
HUMAN_EVALUATION,
GEVAL,
DEEPEVAL,
DETERMINISTIC

specificationIds
string[]

Filter by specification IDs

userGroupIds
string[]

Filter by user group IDs

fromCreatedAt
string<date-time>

Filter metrics created at or after this timestamp (ISO 8601 format)

toCreatedAt
string<date-time>

Filter metrics created at or before this timestamp (ISO 8601 format)

includeDefaultEntities
boolean

Include default/predefined metrics

includeOwnEntities
boolean

Include metrics owned by the user's organization

includeLegacy
boolean

Include legacy/deprecated metrics

suitableForMonitoring
boolean

Filter metrics suitable for monitoring

sort
string[]

Sort instructions (field and direction pairs)

limit
integer

Maximum number of results

offset
integer

Number of results to skip
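A sketch of how the filter, sort, and pagination parameters above might be combined into a request URL. Repeating the key for array parameters (e.g. sources=A&sources=B) is an assumption about the serialization format; the API may instead expect comma-separated values.

```python
from urllib.parse import urlencode

# Example filter: non-legacy FULL_PROMPT and GEVAL metrics created in 2023
# or later, newest first, first page of 50.
params = {
    "sources": ["FULL_PROMPT", "GEVAL"],
    "fromCreatedAt": "2023-01-01T00:00:00Z",  # ISO 8601, as the docs specify
    "includeLegacy": "false",
    "sort": ["createdAt,desc"],  # field,direction pair (assumed format)
    "limit": 50,
    "offset": 0,
}

# doseq=True expands list values into repeated query keys.
query = urlencode(params, doseq=True)
url = f"https://api.galtea.ai/metrics?{query}"
```

Send the resulting URL with the same Authorization header shown in the curl example.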

Response

Metrics retrieved successfully

id
string
Example:

"metric_123"

organizationId
string | null
Example:

"org_123"

userId
string | null
Example:

"user_123"

name
string
Example:

"Accuracy"

evaluationParams
string[]

Ordered list of inference-result fields the evaluator needs (e.g. input, actualOutput, expectedOutput, retrievalContext). Determines which data the evaluation engine extracts from each inference result.

Example:

["input", "actualOutput", "expectedOutput"]

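To make the description above concrete, here is an illustrative helper that pulls only the declared fields, in the declared order, from an inference result. The function name and the inference-result shape are hypothetical, invented for this sketch; they are not part of the API.

```python
def extract_eval_params(inference_result, evaluation_params):
    """Keep only the fields this metric's evaluationParams declares, in order."""
    return {p: inference_result.get(p) for p in evaluation_params}

# Hypothetical inference result with one extra field the metric does not need.
result = {
    "input": "What is 2+2?",
    "actualOutput": "4",
    "expectedOutput": "4",
    "retrievalContext": ["arithmetic basics"],
}
subset = extract_eval_params(result, ["input", "actualOutput", "expectedOutput"])
```

Fields not listed in evaluationParams (here, retrievalContext) are simply ignored by the extraction.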
source
enum<string> | null
Available options:
SELF_HOSTED,
FULL_PROMPT,
PARTIAL_PROMPT,
HUMAN_EVALUATION,
GEVAL,
DEEPEVAL,
DETERMINISTIC
Example:

"FULL_PROMPT"

judgePrompt
string | null
Example:

"Evaluate the accuracy of the response"

tags
string[]
Example:

["accuracy", "quality"]

description
string | null
Example:

"Measures the accuracy of responses"

documentationUrl
string | null
Example:

"https://docs.example.com/metrics/accuracy"

evaluatorModelName
string | null
Example:

"GPT-4"

areEvalParamsTop
boolean | null

When true, evaluationParams are injected at the top level of the evaluator prompt instead of nested inside the conversation context.

specificationIds
string[]
Example:

["spec_123"]

createdAt
string<date-time>

Timestamp when the metric was created.

legacyAt
string<date-time> | null

Timestamp when the metric was marked as legacy; null if the metric is not legacy.

disabledAt
string<date-time> | null

Timestamp when the metric was disabled; null if the metric is active.
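Since legacyAt and disabledAt are nullable timestamps, a client can use them to keep only currently active metrics from the response. The helper below is an illustrative sketch, not part of any SDK.

```python
def active_metrics(metrics):
    # A metric is active when it has neither been marked legacy
    # (legacyAt set) nor disabled (disabledAt set).
    return [
        m for m in metrics
        if m.get("legacyAt") is None and m.get("disabledAt") is None
    ]

# Sample payload mirroring the response shape documented above.
sample = [
    {"id": "metric_123", "legacyAt": "2023-11-07T05:31:56Z", "disabledAt": None},
    {"id": "metric_456", "legacyAt": None, "disabledAt": None},
]
active = active_metrics(sample)
```

Alternatively, pass includeLegacy=false as a query parameter to have the server exclude legacy metrics for you.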