What is a Model?

A model in Galtea represents a Large Language Model (LLM) configuration with associated cost information. This allows the platform to track, estimate, and report on the costs of using different LLMs across your products and evaluations.
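For example, assuming an illustrative call that sends 1,000 prompt tokens and receives 500 response tokens to a model priced at $0.00001 per input token and $0.00003 per output token (the example rates listed under Model Properties below), the estimated cost is 1,000 × $0.00001 + 500 × $0.00003 = $0.025. The token counts here are hypothetical and only meant to show how the per-token rates translate into a dollar figure.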

Models are organization-wide and can be referenced across multiple products to ensure consistent cost tracking.

You can view and manage your models on the Galtea dashboard.

Use Cases for Models

Models in Galtea are primarily used for:

Cost Tracking

Monitor spending on AI models across different products and versions

Budget Planning

Project future costs based on token usage patterns

Cost Optimization

Compare costs between different model providers and configurations

Financial Reporting

Generate accurate reports on LLM usage expenses

Model Properties

When creating a model in Galtea, you’ll need to provide the following information:

Name (Text, required)

The name of the model. This should be descriptive and indicate the provider and version. Example: “GPT-4 Turbo” or “Claude 3 Haiku”

Cost per Input Token (Number)

The cost in dollars per input token. This is the rate charged by the provider for tokens in your prompts. Example: 0.00001 (representing $0.00001 per token)

Cost per Output Token (Number)

The cost in dollars per output token. This is the rate charged by the provider for tokens in the model’s responses. Example: 0.00003 (representing $0.00003 per token)

Cost per Cache Read Input Token (Number)

The cost in dollars per cached input token. Some providers offer reduced rates for cached requests. Example: 0.000005 (representing $0.000005 per token)

Tokenizer Provider (Text)

The provider of the tokenizer used by the model. This is important for accurate token counting and cost estimation. Only a limited set of tokenizer providers is currently supported.

Source (Text, required)

The source of the model’s pricing information. This can be a URL to the model’s pricing page or documentation. For instance: https://openai.com/api/pricing/
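To illustrate how these properties combine, the sketch below estimates the cost of a single LLM call from its token counts. The `ModelPricing` container and the `estimate_call_cost` helper are hypothetical and not part of the Galtea SDK; the per-token rates mirror the example values above.

```python
from dataclasses import dataclass


@dataclass
class ModelPricing:
    """Hypothetical container mirroring the model properties above."""
    name: str
    cost_per_input_token: float                    # e.g. 0.00001 ($ per prompt token)
    cost_per_output_token: float                   # e.g. 0.00003 ($ per response token)
    cost_per_cache_read_input_token: float = 0.0   # reduced rate for cached prompt tokens


def estimate_call_cost(pricing: ModelPricing,
                       input_tokens: int,
                       output_tokens: int,
                       cached_input_tokens: int = 0) -> float:
    """Estimate the dollar cost of one LLM call from its token counts."""
    uncached_input = input_tokens - cached_input_tokens
    return (uncached_input * pricing.cost_per_input_token
            + cached_input_tokens * pricing.cost_per_cache_read_input_token
            + output_tokens * pricing.cost_per_output_token)


gpt4_turbo = ModelPricing(
    name="GPT-4 Turbo",
    cost_per_input_token=0.00001,
    cost_per_output_token=0.00003,
    cost_per_cache_read_input_token=0.000005,
)

# A call with 1,200 prompt tokens (400 of them served from cache) and 300 completion tokens.
print(f"${estimate_call_cost(gpt4_turbo, 1200, 300, cached_input_tokens=400):.4f}")  # $0.0190
```

The platform performs equivalent calculations when tracking and reporting costs; the sketch only shows how the per-token rates relate to a final dollar amount.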

SDK Integration