Evaluation Parameters
To compute the `knowledge_retention` metric, the following parameters are required in every turn of the conversation:

- `input`: The user message in the conversation.
- `actual_output`: The LLM-generated response to the user message.
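To make the per-turn shape concrete, here is a minimal sketch of a conversation structured around these two parameters. The `Turn` class and the sample messages are illustrative, not part of the Galtea API:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    """One turn of a conversation. The field names mirror the metric's
    required parameters; the class itself is a hypothetical container."""
    input: str          # the user message
    actual_output: str  # the LLM-generated response

conversation = [
    Turn(input="My order number is 4821.",
         actual_output="Thanks, I've noted order 4821."),
    Turn(input="When will it arrive?",
         actual_output="Order 4821 is scheduled to arrive on Friday."),
]
```

Every turn supplies both parameters, so the metric can compare later responses against knowledge established earlier in the conversation.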
How Is It Calculated?
The `knowledge_retention` score is based on a two-step LLM-driven process:

- Knowledge Extraction: An LLM is used to extract key facts, assertions, and informational snippets from the previous conversational turns.
- Retention Evaluation: For each subsequent LLM response, the system assesses whether the model has retained the relevant knowledge or has exhibited information attrition (e.g., contradictions, omissions, or inconsistencies).
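The two steps above can be sketched as follows. In the real metric both steps are performed by an LLM judge; this toy version substitutes trivial string handling for the LLM calls (each user message stands in for one extracted fact, and a substring check stands in for the retention judgment) purely to make the control flow concrete:

```python
def extract_knowledge(turns):
    """Step 1 - knowledge extraction. A real implementation prompts an
    LLM to pull facts from earlier turns; here each user message is
    treated as a single fact (a deliberate simplification)."""
    return [t["input"] for t in turns]

def retention_score(turns):
    """Step 2 - retention evaluation. For each response after the first,
    check whether previously established facts survive. The LLM judgment
    is replaced by a naive check on the fact's last token."""
    violations, checks = 0, 0
    for i in range(1, len(turns)):
        for fact in extract_knowledge(turns[:i]):
            key = fact.rstrip("?.").split()[-1]  # e.g. an order number
            checks += 1
            if key not in turns[i]["actual_output"]:
                violations += 1  # information attrition detected
    return 1.0 if checks == 0 else 1 - violations / checks

turns = [
    {"input": "My order number is 4821.",
     "actual_output": "Thanks, I've noted order 4821."},
    {"input": "When will it arrive?",
     "actual_output": "Order 4821 is scheduled for Friday."},
]
retention_score(turns)  # 1.0: the order number is retained
```

A response that dropped the order number (say, "It arrives on Friday.") would count as a violation and lower the score, which is the kind of attrition the second step is designed to catch.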
This metric was incorporated into the Galtea platform from the open-source library deepeval; for more information, you can also visit their documentation.