Vectara for evaluation
Vectara’s mission is to drive truth and relevance in generative AI.
Validate with citations
Many black-box AI tools produce erroneous answers, forcing users to fact-check generated output. Vectara gives you full control and transparency. Citations show how answers were derived and link directly to source documents. Vectara also lets you decide how results are displayed and produced.
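As an illustration, the sketch below queries a corpus over HTTP and prints the cited source passages returned alongside the generated answer. The endpoint path, request shape, and response field names here are assumptions for illustration only; consult Vectara's API reference for the exact contract.

```python
import requests

API_KEY = "zqt_..."     # hypothetical Vectara query API key
CORPUS_KEY = "my-docs"  # hypothetical corpus to search

# Endpoint and payload shape are assumptions modeled on a REST-style
# query API; verify field names against Vectara's API reference.
response = requests.post(
    "https://api.vectara.io/v2/query",
    headers={"x-api-key": API_KEY},
    json={
        "query": "What is our refund policy?",
        "search": {"corpora": [{"corpus_key": CORPUS_KEY}]},
        "generation": {"max_used_search_results": 5},
    },
    timeout=30,
)
response.raise_for_status()
data = response.json()

print(data["summary"])                 # the generated answer
for result in data["search_results"]:  # the passages the answer cites
    print(result["document_id"], "->", result["text"])
```

Because each search result links back to a source document, the generated answer can be verified against the passages it was derived from rather than taken on faith.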
Factual consistency score with every answer
The Factual Consistency Score is the production-ready version of Vectara’s popular open-source Hughes Hallucination Evaluation Model (HHEM).
HHEM detects the level of hallucination in the output of popular LLMs, and Vectara has integrated this capability into the core platform so that every generated response is scored.
The Factual Consistency Score helps developers evaluate hallucinations automatically so users can measure and improve response quality.
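For example, an application might gate responses on the score. The sketch below assumes the query response carries a top-level factual_consistency_score field next to the summary (the exact field name and location are assumptions; check the API reference) and falls back to showing raw citations when the score is low.

```python
SCORE_THRESHOLD = 0.5  # illustrative cutoff; tune per application

def render_answer(data: dict) -> str:
    """Return the generated summary only if its Factual Consistency
    Score is high enough; otherwise surface the cited sources."""
    # A top-level `factual_consistency_score` (0.0 = likely
    # hallucinated, 1.0 = consistent with the sources) is an
    # assumption for illustration; verify against the API reference.
    score = data.get("factual_consistency_score", 0.0)
    if score >= SCORE_THRESHOLD:
        return data["summary"]
    # Low score: fall back to the retrieved passages rather than
    # risk surfacing a hallucinated summary.
    passages = (r["text"] for r in data["search_results"])
    return "Low-confidence summary withheld; top sources:\n" + "\n".join(passages)
```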
Leading open source evaluation innovation with HHEM
As of October 2024, the HHEM model has seen over 2.5 million downloads.
All AI systems hallucinate. Even top-performing models produce measurable levels of hallucination. Vectara is leading the industry in hallucination mitigation with our widely adopted Hughes Hallucination Evaluation Model (HHEM version 2).
The HHEM model powers our Factual Consistency Score, and Vectara periodically re-evaluates the LLMs it supports to ensure best-in-class hallucination detection.
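Because HHEM is open source, you can also score premise/hypothesis pairs yourself. The sketch below follows the usage pattern published on the model's Hugging Face card (vectara/hallucination_evaluation_model); check the card for the current interface, as it may change between releases.

```python
from transformers import AutoModelForSequenceClassification

# Load the open-source HHEM checkpoint from Hugging Face.
# trust_remote_code is required because the checkpoint ships a
# custom prediction head; review the remote code before enabling it.
model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Each pair is (source passage, generated claim). Scores near 1.0
# mean the claim is supported by the source; near 0.0 mean it is not.
pairs = [
    ("The capital of France is Paris.", "Paris is France's capital."),
    ("The capital of France is Paris.", "Berlin is France's capital."),
]
scores = model.predict(pairs)  # predict() per the model card's usage
print(scores)  # expect a high score for the first pair, low for the second
```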
Enterprise controls for policy and security
Vectara is a serverless RAG-as-a-service (RAGaaS): you never manage infrastructure, security policies, or performance tuning.
Vectara complies with regulatory environments that disallow training of public models on corporate, personal, or government data.
We provide client-configurable data retention: users can discard the original documents and text ingested into the platform and retain full control over data access.
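As one concrete illustration of that control, a document that has already been indexed can be deleted outright. The endpoint path below is an assumption modeled on a REST-style resource API; verify the exact path and required permissions against Vectara's API reference.

```python
import requests

API_KEY = "zqt_..."      # hypothetical API key with index permissions
CORPUS_KEY = "my-docs"   # hypothetical corpus
DOC_ID = "handbook.pdf"  # hypothetical document ID

# Endpoint shape is an assumption for illustration; check the API
# reference for the exact path and authorization requirements.
resp = requests.delete(
    f"https://api.vectara.io/v2/corpora/{CORPUS_KEY}/documents/{DOC_ID}",
    headers={"x-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
print("Document deleted.")
```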