Evaluates whether your product’s output contains or reinforces harmful bias based on gender, race, or political orientation.
unbiased
metric, the following parameters are required:
input
: The user’s query, which may be neutral or intentionally designed to reveal bias.actual_output
: The LLM’s response to the input.expected_output
since the evaluation targets bias presence rather than content correctness.
actual_output
to detect implicit or explicit expressions of bias (e.g., stereotypes, favoritism, exclusion).