Vectara

Blog - Research
Is Semantic Chunking worth the computational cost?

Discover why fixed-size chunking often outperforms semantic chunking in real-world RAG systems. This study explores the trade-offs between simplicity, computational efficiency, and retrieval accuracy, challenging the assumption that semantic chunking is always the superior choice.

Renyi Qu, Forrest Bao
Correcting Hallucinations in Large Language Models

In this blog post, we share the results of our initial experiments aimed at correcting hallucinations generated by Large Language Models (LLMs). Our focus is on the open-book setting, which encompasses tasks such as summarization and Retrieval-Augmented Generation (RAG).

Utkarsh Jain, Suleman Kazi, Ofer Mendelevitch
Top Large Language Models (LLMs)

The top large language models of Summer 2024, with recommendations for when to use each based on needs such as API access, tunability, or full hosting.

Suleman Kazi, Amin Ahmad, Vivek Sourabh, Rogger Luo, Parth Vashisht, Abhilasha Lodha
Generative AI for Legal Teams

Learn how legal teams can use Generative AI to safely increase efficiency, decrease costs, and improve the effectiveness of paralegal research.

Sean Anderson