Vectara

Vectara with LlamaIndex

Integrate the full power of end-to-end RAG in your LlamaIndex application, reducing complexity and supporting enterprise scale.


Details

As a fully managed enterprise RAG solution, Vectara provides a complete platform for end-to-end RAG development, including text extraction and chunking, a state-of-the-art embedding model, a cross-lingual reranker, our own high-performance vector database, and a generative LLM.

Using Vectara in your LlamaIndex application means you can focus on your application logic and integrate additional features without worrying about security, data privacy, or scaling.

The integration supports:

  • Vectara is implemented in LlamaIndex as a “Managed Service”, providing a clean abstraction that fits naturally into LlamaIndex applications.
  • Data can be ingested directly as text or uploaded as files. The Vectara platform takes care of text extraction where needed (e.g. for PDF or Word files), chunking, embedding, and storage in the vector database. There is no need to choose an embedding model or set up a vector database instance; all that hard work is performed by the Vectara platform (see the ingestion sketch after this list).
  • You can use the VectaraRetriever for simple semantic search, or build a full RAG pipeline with as_query_engine(), which fully supports Vectara’s Boomerang model, hybrid search, reranking, and of course streaming (see the retrieval sketch below).
  • Vectara supports chat, where session history is stored and managed by Vectara. Calling as_chat_engine() turns your LlamaIndex application into a virtual assistant with just one line of code (see the chat sketch below).
  • Use LlamaParse as a pre-processing step with ease and ingest the resulting documents directly into Vectara (see the LlamaParse sketch below).
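
The sketch below shows a minimal setup and ingestion flow, assuming the llama-index-indices-managed-vectara package is installed and that credentials are supplied through the integration's standard environment variables; the file name is a placeholder.

```python
import os

from llama_index.core import Document
from llama_index.indices.managed.vectara import VectaraIndex

# The integration reads Vectara credentials from these environment
# variables (they can also be passed to the VectaraIndex constructor).
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR_CUSTOMER_ID>"
os.environ["VECTARA_CORPUS_ID"] = "<YOUR_CORPUS_ID>"
os.environ["VECTARA_API_KEY"] = "<YOUR_API_KEY>"

# Ingest raw text: chunking, embedding, and storage all happen
# on the Vectara platform.
docs = [
    Document(text="Vectara is an end-to-end managed RAG platform."),
    Document(text="LlamaIndex is a framework for building LLM applications."),
]
index = VectaraIndex.from_documents(docs)

# Or upload a file directly; text extraction happens server-side.
# "report.pdf" is a placeholder path for this sketch.
index.insert_file("report.pdf")
```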
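
For retrieval and querying, a sketch along these lines applies; similarity_top_k, summary_enabled, and streaming are assumed parameter names based on the integration's documented query options.

```python
from llama_index.indices.managed.vectara import VectaraIndex

# Connects to an existing corpus using the environment credentials.
index = VectaraIndex()

# Plain semantic search: returns matching nodes, no generation step.
retriever = index.as_retriever(similarity_top_k=5)
for node in retriever.retrieve("What does Vectara provide?"):
    print(node.score, node.node.get_content()[:80])

# Full RAG pipeline: retrieval plus Vectara's generative summarization,
# with the answer streamed back as it is produced.
query_engine = index.as_query_engine(
    similarity_top_k=5,
    summary_enabled=True,
    streaming=True,
)
response = query_engine.query("What does Vectara provide?")
response.print_response_stream()
```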
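
Chat works the same way; this sketch assumes as_chat_engine() with default settings, with the conversation history kept server-side by Vectara.

```python
from llama_index.indices.managed.vectara import VectaraIndex

index = VectaraIndex()

# Session history is created, stored, and managed on the Vectara side.
chat_engine = index.as_chat_engine()

print(chat_engine.chat("What is Vectara?").response)
# The follow-up question is resolved against the stored session history.
print(chat_engine.chat("And how does it handle chunking?").response)
```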
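
A LlamaParse pre-processing step can feed straight into the same ingestion path; this sketch assumes the llama-parse package is installed with a LLAMA_CLOUD_API_KEY in the environment, and "complex_layout.pdf" is a placeholder file name.

```python
from llama_parse import LlamaParse
from llama_index.indices.managed.vectara import VectaraIndex

# Parse a document with LlamaParse; it returns LlamaIndex Document
# objects, which Vectara can ingest like any other documents.
parser = LlamaParse(result_type="text")
documents = parser.load_data("complex_layout.pdf")

index = VectaraIndex.from_documents(documents)
```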

Get started with Vectara

Vectara is the shortest path from question to answer, delivering true business value fast.