Evaluation Parameters
To compute the text_match metric, the following parameters must be provided:
- actual_output: The generated text from the model.
- expected_output: The target or reference text to compare against.
How Is It Calculated?
The metric uses a fuzzy string matching algorithm (based on edit distance) to compute the similarity ratio between the actual and expected outputs. If the similarity ratio exceeds 85%, the output is considered a match; otherwise, it is marked as a non-match.
Interpretation of Scores
- 1.0 – Texts are considered a match (similarity > 85%).
- 0.0 – Texts are not considered a match (similarity ≤ 85%).
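The thresholded scoring above can be sketched with Python's standard-library difflib, which computes a ratio-based similarity. Note this is an illustrative sketch, not the metric's actual implementation: the exact edit-distance algorithm is not specified here, and the function name and `threshold` parameter below are assumptions.

```python
from difflib import SequenceMatcher


def text_match(actual_output: str, expected_output: str,
               threshold: float = 0.85) -> float:
    """Return 1.0 if the similarity ratio exceeds the threshold, else 0.0.

    SequenceMatcher.ratio() is 2*M/T, where M is the number of matched
    characters and T is the total length of both strings.
    """
    ratio = SequenceMatcher(None, actual_output, expected_output).ratio()
    return 1.0 if ratio > threshold else 0.0


# Identical texts score a perfect ratio, so the metric returns 1.0;
# unrelated texts fall below the 85% threshold and return 0.0.
print(text_match("The cat sat on the mat.", "The cat sat on the mat."))
print(text_match("The cat sat on the mat.", "Quarterly revenue grew 4%."))
```

Because the output is binarized at the threshold, small wording differences that keep the ratio above 85% still count as a full match, while anything below it scores zero.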
Suggested Test Case Types
Use Text Match when evaluating:
- Simple equivalence checks where light paraphrasing or minor differences are allowed.
- Rule-based or heuristic outputs where exact matches aren’t expected but alignment is necessary.
- Pass/fail QA checks for generated text.