Introducing the Factual Consistency Score: Your AI Evaluator
The Factual Consistency Score is Vectara's integration of the hallucination detection model into its generative AI platform
Location: Online

Recently, Vectara introduced an evolved version of its popular open-source Hughes Hallucination Evaluation Model (HHEM), which detects the level of hallucination in popular LLMs and in the responses those systems generate. It produces a calibrated score that helps developers evaluate hallucinations automatically. Customers can use the new feature to measure and improve response quality, and the Factual Consistency Score (FCS) can also be shown to end users of RAG applications as a visual cue of response quality.
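To illustrate how a calibrated score can drive automatic evaluation, the sketch below buckets factual-consistency scores into coarse quality labels. The thresholds and the `label_response` function are illustrative assumptions, not Vectara's published guidance; HHEM-style scores range from 0 (likely hallucinated) to 1 (factually consistent).

```python
# Illustrative sketch: using a calibrated factual-consistency score in [0, 1]
# to triage generated responses. Thresholds are hypothetical, not Vectara's.

def label_response(fcs: float) -> str:
    """Map a factual-consistency score to a coarse quality label."""
    if not 0.0 <= fcs <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if fcs >= 0.8:
        return "consistent"       # response well grounded in retrieved facts
    if fcs >= 0.5:
        return "review"           # borderline; surface to a human reviewer
    return "likely_hallucinated"  # flag or suppress the response

# Example: triage a batch of scored responses before showing them to users
scored = [("resp-a", 0.93), ("resp-b", 0.61), ("resp-c", 0.12)]
flags = {rid: label_response(score) for rid, score in scored}
```

Because the score is calibrated, a single threshold behaves consistently across queries, which is what makes this kind of automatic gating practical.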
In this webinar replay, Machine Learning Team Lead Forrest Bao shows the new Factual Consistency Score both in the Vectara UI and through the API.
In this replay, we give you a first look at the Factual Consistency Score powered by HHEM v2, covering:
- How FCS evaluates responses for hallucinations
- Examples of the FCS in the Vectara UI and API
- Thoughts on how admins can configure FCS to meet their end users' requirements
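For readers curious about the API side ahead of the replay, here is a hedged sketch of what a query request carrying a factual-consistency option might look like. The endpoint path, field names, and flag below are assumptions for illustration only; consult Vectara's API reference for the actual request schema.

```python
import json

# Hypothetical sketch of a RAG query that asks the platform to score the
# generated summary for factual consistency. The endpoint URL and every
# field name here are assumptions, not Vectara's documented schema.
API_URL = "https://api.vectara.io/v2/query"  # assumed endpoint

def build_query(question: str, corpus_key: str) -> dict:
    """Build a request payload that (hypothetically) enables FCS."""
    return {
        "query": question,
        "search": {"corpora": [{"corpus_key": corpus_key}]},
        "generation": {
            # assumed flag: request an HHEM-based score for the summary
            "enable_factual_consistency_score": True,
        },
    }

payload = build_query("What is our refund policy?", "support-docs")
body = json.dumps(payload)  # would be POSTed with an API-key header
```

In a real application, the response would carry the generated summary alongside its score, which the UI can surface to end users as the visual cue described above.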