The faithfulness metric measures the alignment between the LLM-generated response (actual_output) and the information found in the retrieval_context. It is a core indicator of hallucination risk in retrieval-augmented generation systems.
A high faithfulness score indicates that the model grounds its answer in retrieved content, rather than introducing unsupported or fabricated information.
Evaluation Parameters
To compute the faithfulness metric, the following inputs are required (illustrated in the sketch after this list):
- input: The user’s original prompt.
- actual_output: The LLM-generated response.
- retrieval_context: The retrieved passages or nodes used by the model.
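As an illustration, these three inputs might be bundled into a simple record before scoring. The FaithfulnessExample dataclass below is a hypothetical container used only for this sketch, not part of any particular library:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical container for the three inputs the metric needs.
@dataclass
class FaithfulnessExample:
    input: str                    # the user's original prompt
    actual_output: str            # the LLM-generated response
    retrieval_context: List[str]  # retrieved passages or nodes used by the model

example = FaithfulnessExample(
    input="When was the Eiffel Tower completed?",
    actual_output="The Eiffel Tower was completed in 1889.",
    retrieval_context=[
        "The Eiffel Tower was completed in 1889 for the Exposition Universelle."
    ],
)
```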
How Is It Calculated?
The faithfulness score is determined using an LLM-as-a-judge process with explicit binary pass criteria:
- Fact Extraction: The LLM extracts the factual claims made in actual_output.
- Verification Against Source: Each claim is checked against the retrieval_context for substantiation.
- If all claims are fully supported by the context (no contradictions or hallucinations), the output is considered faithful.
- If any claim is unsupported, contradicted, or hallucinatory relative to the context, the output is considered unfaithful.
The result is expressed as a binary score (a minimal sketch of this judging loop follows the list below):
- 1 (Faithful): All factual claims in the response are substantiated by the provided retrieval context, with no hallucinations or contradictions.
- 0 (Unfaithful): The response contains at least one unsupported, fabricated, or contradictory claim relative to the retrieval context.
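The sketch below shows the shape of this judging loop. The helpers extract_claims and is_supported are hypothetical placeholders standing in for LLM-as-a-judge prompts; the actual prompts and model calls will vary by implementation:

```python
from typing import List

def extract_claims(actual_output: str) -> List[str]:
    # Placeholder: in practice an LLM judge is prompted to list the
    # discrete factual claims made in actual_output.
    raise NotImplementedError

def is_supported(claim: str, retrieval_context: List[str]) -> bool:
    # Placeholder: in practice an LLM judge checks whether the claim is
    # substantiated by (and not contradicted by) the retrieved passages.
    raise NotImplementedError

def faithfulness_score(actual_output: str, retrieval_context: List[str]) -> int:
    """Return 1 if every extracted claim is supported by the context, else 0."""
    for claim in extract_claims(actual_output):
        if not is_supported(claim, retrieval_context):
            return 0  # any unsupported, fabricated, or contradicted claim fails
    return 1  # all claims substantiated: faithful
```

With real judge implementations plugged in, calling faithfulness_score(example.actual_output, example.retrieval_context) on the earlier example would return 1 only if every extracted claim is substantiated by the retrieved passage.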
This binary scoring system helps teams monitor hallucination risk and improve trust in generated responses.