Vectara Blog

All posts

Visibility and Control: Vectara’s Admin Center
Product Updates

The new Vectara Admin Center provides a centralized, intuitive interface for on-premise and VPC customers to manage their Vectara deployments. It gives DevOps and IT admin teams the visibility and control they need to prevent RAG sprawl overhead.

Matt Gonzales

Introducing Enterprise Deep Research
Tools

Deep Research for the Enterprise is poised to become the killer app of generative AI, unlocking a powerful way for organizations to explore nuanced, data-rich questions for decision making by tapping directly into their own document repositories.

Ofer Mendelevitch, David Oplatka

Is Your RAG Consistent?
Evaluation

In highly regulated enterprise environments, predictability is non-negotiable: the same question to your RAG application should return consistent answers every time. With Open-RAG-Eval’s new Consistency-Adjusted Index, you can now precisely quantify that level of reliability.

Vishal Naik, Ofer Mendelevitch

Open Evaluation

The Open Evaluation website makes it easier to spot patterns in an Open RAG Eval evaluation report and take effective steps to improve your RAG system.

CJ Cenizal, Donna Dong

Trusted AI: the important role of guardian agents
Vectara

AI agents have the power to transform enterprise workflows, but without proper safeguards they can easily go off course. In this blog, we explore the growing need for Guardian Agents like Vectara's Hallucination Correction Agent.

Eva Nahari

HCMBench: an evaluation toolkit for hallucination correction models
Product Updates

HCMBench is Vectara’s open-source evaluation toolkit designed to rigorously test and compare hallucination correction models. With modular pipelines, diverse datasets, and multi-level evaluation metrics, it gives developers a powerful, standardized way to measure and improve the accuracy of RAG system outputs.

Rogger Luo