
Lynx detects AI hallucinations by analyzing the responses generated by large language models (LLMs) and checking them against the correct responses [1]. It uses a fine-tuned version of Meta's Llama 3 model, trained on 2,400 examples of hallucinations paired with their corresponding correct responses. By flagging inconsistencies and inaccuracies in AI-generated content, Lynx helps mitigate the risk of misinformation and of incorrect decisions based on AI outputs.
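A minimal sketch of how such a judge model might be invoked is shown below, assuming the Lynx checkpoint published on Hugging Face (the model ID, prompt wording, and PASS/FAIL convention here are illustrative assumptions, not Patronus AI's official API):

```python
# Sketch: querying a Lynx-style judge model for hallucination detection.
# Assumes the PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct checkpoint is available;
# the prompt format below is an illustrative approximation.
from transformers import pipeline

MODEL_ID = "PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct"  # assumed model ID

# Load the fine-tuned Llama 3 judge as a standard text-generation pipeline.
judge = pipeline("text-generation", model=MODEL_ID, device_map="auto")

def judge_faithfulness(question: str, document: str, answer: str) -> str:
    """Ask the judge model whether the answer is faithful to the document."""
    prompt = (
        "Given the following QUESTION, DOCUMENT and ANSWER, decide whether the "
        "ANSWER is faithful to the DOCUMENT. Reply PASS if it is faithful and "
        "FAIL if it contains a hallucination.\n\n"
        f"QUESTION: {question}\nDOCUMENT: {document}\nANSWER: {answer}\n"
    )
    result = judge(prompt, max_new_tokens=128, do_sample=False)
    # The pipeline returns the prompt plus the completion; keep only the verdict.
    return result[0]["generated_text"][len(prompt):].strip()
```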

Developed by Patronus AI, Lynx is designed to detect and mitigate hallucinations in LLMs: responses that read as coherent but do not align with factual reality or the input context. Lynx aims to provide a faster, cheaper, and more reliable way to detect these hallucinations without human intervention, improving the trustworthiness and reliability of AI systems [4].
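To make the notion of a context-inconsistent response concrete, here is a hypothetical usage of the `judge_faithfulness` sketch above; the example triple is invented for illustration and the expected verdict reflects the assumed PASS/FAIL convention:

```python
# A coherent answer that contradicts the supplied document is still a hallucination.
verdict = judge_faithfulness(
    question="When was Acme Corp founded?",
    document="Acme Corp was founded in 1998 in Austin, Texas.",
    answer="Acme Corp was founded in 2005.",  # fluent, but unsupported by the document
)
print(verdict)  # expected to signal a hallucination, e.g. "FAIL"
```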