
Hacker Guide

Introduction

Welcome Hackers! The purpose of this guide is to give you a quick start, and to serve as a go-to reference that helps you maximize your time during the hackathon.

As we all know, time flies during a hackathon!

Vectara Overview

Vectara is the trusted GenAI platform. It’s designed to make it easy for you to build and deploy GenAI applications that can generate text-based answers using your particular data (this type of application flow is also known as RAG or retrieval-augmented-generation). You just ingest your data and then build apps using the Query and Summarization APIs. It’s that simple.

Vectara provides RAG-as-a-service. What does that mean?

It means we take care of all the heavy lifting during both data ingest (text extraction, chunking, embedding and storing in the vector store) as well as during query and retrieval (embedding of the query, neural or hybrid search, reranking and calling the LLM) – so you don’t have to do all that. Instead, you can focus your efforts on building your application.
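To make the query-and-retrieval half of that flow concrete, here is a minimal sketch of a RAG query payload. The field names (query, numResults, corpusKey, summary) follow Vectara’s v1 Query API JSON, but treat the exact names and values as assumptions to verify in the API playground:

```python
import json

def build_rag_query(text, customer_id, corpus_id, num_results=10):
    """Build a query payload that retrieves matching chunks and asks
    Vectara to generate a summarized answer from them. Field names are
    a sketch of the v1 Query API JSON; confirm them in the playground."""
    return {
        "query": [{
            "query": text,
            "numResults": num_results,
            "corpusKey": [{
                "customerId": customer_id,
                "corpusId": corpus_id,
            }],
            # Requesting a summary is what turns plain retrieval into RAG:
            # the platform feeds the top results to an LLM for you.
            "summary": [{
                "maxSummarizedResults": 5,
                "responseLang": "en",
            }],
        }]
    }

payload = build_rag_query("What is our refund policy?", 123456, 1)
print(json.dumps(payload, indent=2))
```

You would POST this body to the Query endpoint with your API key in the request headers; without the "summary" block, the same request behaves as pure semantic search.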

Use-cases include:

  • Question Answering: provide direct answers to specific questions based on the data.
  • Conversational AI / chat: chatbots that can hold a full conversation with a human, with back-and-forth exchanges.
  • Semantic (Neural) Search: build applications powered by fast and powerful semantic search that finds documents by matching to the user intent.

Getting started with Vectara

You can get to know Vectara’s features in 5 minutes:

  • Sign up for a free Vectara account.
  • Log in and take the 5-minute walk-through.

We have more resources to help you build apps with Vectara:

  • The Quick Start guide shows you how to use the Vectara Console.
  • Read the API recipes for common patterns with our APIs.
  • Our API playground shows you how Vectara’s API requests and responses are structured.
  • See “Additional resources”, below, for a comprehensive list.

General guidelines:

  • Vectara provides a generous free plan (ideal for hackathons) with up to 50MB of text and 15K monthly queries.
  • Use the Indexing API to ingest text data into your Vectara corpus or the File Upload API to upload files such as PDF, PPT or DOC.
  • Use the Search API to run queries against the ingested data. A query can be search-only, include a summary, or be used for multi-turn chat.
  • The Console provides a unified view of information about your Vectara account, corpora and their characteristics. You can also run example queries directly from the console for quick experimentation.
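For the ingest side of the guidelines above, here is a sketch of an Indexing API request body for plain text. The structure (customerId, corpusId, document with sections) follows the v1 REST JSON, but the exact field names are assumptions to check against the docs:

```python
import json

def build_index_request(customer_id, corpus_id, doc_id, title, texts):
    """Build an ingest payload for the Indexing API. Each entry in
    `texts` becomes a document section; Vectara handles chunking,
    embedding, and storage in the vector store server-side."""
    return {
        "customerId": customer_id,
        "corpusId": corpus_id,
        "document": {
            "documentId": doc_id,
            "title": title,
            "section": [{"text": t} for t in texts],
        },
    }

req = build_index_request(
    123456, 1, "doc-001", "Refund policy",
    ["Refunds are issued within 30 days of purchase."],
)
print(json.dumps(req, indent=2))
```

For binary files (PDF, PPT, DOC) you would use the File Upload API instead, which extracts the text for you before indexing.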

Common Questions

What should I do if I need to go over the free plan limits?

Vectara’s free plan provides a pretty generous offering of up to 50MB of text (this is raw text, not the original file size) and 15K queries a month. This should be sufficient for many use-cases, including in a hackathon setting.

To get additional capacity, you can add a credit card to the account and buy additional bundles, or transition to our Scale plan.

Should I use RAG and not fine-tuning?

Our experience shows that “Fine-tuning is for Form, and RAG is for facts” as discussed here and here.

What is the Boomerang embedding model?

Boomerang is the name of Vectara’s newest embedding model. This model encodes text from your data as “vector embeddings” and powers the high-performance retrieval process that is part of the RAG pipeline.

Read more about Boomerang, and the importance of using a good retrieval model for getting best results from RAG.

Where is my customer ID / corpus ID / API key?

  • Here’s some info on how to find your customer ID.
  • To get a corpus ID, view a corpus in the Console. The corpus ID is at the top of the screen, e.g. “Corpus ID: XXX”.
  • To create API Keys – follow this guide.

What is HHEM?

HHEMv2, also known as FCS (Factual Consistency Score), is integrated into the Vectara platform and can be returned automatically with every query request.
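In practice you read the score out of the query response alongside the generated summary. The exact field path below is an assumption based on the v1 response JSON (a factualConsistency object attached to the summary); confirm it in the API playground before relying on it:

```python
def get_fcs(response):
    """Pull the Factual Consistency Score (HHEM v2) out of a query
    response, if present. The field path is an assumption about the
    v1 response shape; verify it against a real response."""
    try:
        return response["responseSet"][0]["summary"][0]["factualConsistency"]["score"]
    except (KeyError, IndexError, TypeError):
        return None

# Mocked response fragment for illustration (not real API output):
mock = {
    "responseSet": [{
        "summary": [{
            "text": "Refunds are issued within 30 days.",
            "factualConsistency": {"score": 0.93},
        }],
    }],
}
print(get_fcs(mock))
```

A score close to 1.0 suggests the summary is well grounded in the retrieved results; low scores are a signal to surface a warning or fall back to showing raw search results.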

Can I stream the LLM response?

Yes, Vectara now supports streaming outputs.

See our blog post announcing streamed query, and check out the playground or our docs.
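A streamed response typically arrives as a sequence of small chunks you assemble client-side. As a sketch, assuming the stream is newline-delimited JSON with a text field per chunk (an assumption — check the actual wire format in the playground or docs):

```python
import json

def iter_stream_chunks(lines):
    """Parse a streamed query response, assuming newline-delimited
    JSON chunks. Adapt the parsing to the actual wire format shown
    in the API playground."""
    for line in lines:
        if line.strip():
            yield json.loads(line)

# Mocked stream lines for illustration (e.g. from
# requests.post(..., stream=True).iter_lines()):
mock_stream = ['{"text": "Vectara "}', '{"text": "streams."}']
answer = "".join(chunk["text"] for chunk in iter_stream_chunks(mock_stream))
print(answer)
```

Rendering chunks as they arrive makes the app feel much more responsive than waiting for the full summary, which matters a lot for chat-style demos.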

Additional resources

API docs https://docs.vectara.com/docs/
API playground https://docs.vectara.com/docs/rest-api/
Getting help Join our Discord server or forums if you have questions.
If you have any feedback for us, we would be glad to hear it – please let us know in the forums or our Discord channel.
Open source
  • Vectara-ingest helps with data ingestion – crawling data sources and indexing them into Vectara.
  • Vectara-answer is a user interface for question answering – demonstrates a UI concept.
  • React-search is a React package that allows you to integrate Vectara semantic search in any React app with just a few lines of code.
  • React-chatbot is a React package that allows you to integrate Vectara Chat in any React app with just a few lines of code.
  • Create-UI is a fast way to generate a Vectara-powered sample codebase for a range of user interfaces.
Sample code We have quite a few sample applications hackers can take a look at here.
Additionally, we published some sample code in these Jupyter notebooks.
Blog posts
Youtube
Flowise + Vectara Tutorial
LangFlow + Vectara Tutorial
Ask LangChain Docs video
More here

 

Integrations LangChain: https://blog.langchain.dev/langchain-vectara-better-together/
LlamaIndex: https://vectara.com/llamaindex-vectara/
Startup program https://vectara.com/startups