Vectara RAG Hackathon
Introduction
RAG, or Retrieval Augmented Generation, is the most exciting recent development for building GenAI applications for the enterprise, so in November we organized a hackathon challenging participants to create innovative solutions using RAG.
What is RAG?
Simply put, RAG allows you to apply an LLM like ChatGPT to your private or custom data and build question-answering, chatbot, or other conversational AI applications that are far superior to earlier approaches.
If you’re relatively new to the space of LLMs and RAG, you might find our LLM glossary useful as a quick read.
We wrote previously about the architecture of RAG and highlighted a few of the key decisions one has to make such as proper chunking and careful design of the retrieval engine.
In other words, RAG seems easy, but at enterprise scale it's anything but. That's why Vectara puts all the components needed for RAG behind our managed service: you focus on your application, and we manage the rest.
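To make concrete what that managed service abstracts away, here is a deliberately simplified sketch of the core retrieve-then-generate loop, using a toy bag-of-words similarity in place of real embeddings and a vector index; it is illustrative only, not how Vectara implements RAG:

```python
# A minimal, self-contained sketch of the RAG loop: chunk documents, retrieve the chunks
# most similar to the query, then build a grounded prompt for an LLM. Production systems
# replace the toy similarity below with learned embeddings, a vector index, reranking, etc.
from collections import Counter
import math

def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size chunking by word count; real systems chunk far more carefully."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def similarity(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity standing in for an embedding model."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query."""
    return sorted(chunks, key=lambda c: similarity(query, c), reverse=True)[:k]

docs = ["Refunds are available within 30 days of purchase for unused items ..."]  # your data
chunks = [c for doc in docs for c in chunk(doc)]
query = "What is the refund policy?"
context = "\n".join(retrieve(query, chunks))
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
# `prompt` is then sent to an LLM, which generates an answer grounded in the retrieved text.
```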
Our co-sponsor Anyscale provided hackathon participants with access to Anyscale Endpoints, which offers easy access to inference with open-source LLMs like Llama 2, so participants had all the tools they needed to get going.
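If you have not used Anyscale Endpoints before, here is a minimal sketch of what inference with an open-source model looks like through its OpenAI-compatible chat completions API; the model name and environment variable are illustrative placeholders, so check Anyscale's documentation for current details:

```python
# Hypothetical example: call an open-source LLM hosted on Anyscale Endpoints via the
# OpenAI-compatible API. ANYSCALE_API_KEY and the model name are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",   # Anyscale Endpoints base URL
    api_key=os.environ["ANYSCALE_API_KEY"],             # your Anyscale API key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-2-70b-chat-hf",  # one of the hosted open-source models
    messages=[{"role": "user", "content": "In one sentence, what is retrieval augmented generation?"}],
)
print(response.choices[0].message.content)
```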
The Hackathon
The hackathon, which we named “RAG: LLMs with your data”, took place on Nov 3-9, 2023. It had 2000+ registrants from all over the world.
We started with an exciting kickoff stream that included a detailed introduction to the Vectara platform and a full walkthrough of our API, an introduction to Anyscale, and a panel discussion on RAG with participants from Vectara, Anyscale, and LlamaIndex.
A couple of days into the hackathon, CJ, Vectara's head of front-end engineering, gave an amazing talk about the importance of user experience in RAG. You don't want to miss that.
Forrest from Vectara's ML team then guided us through Vectara's "Hughes Hallucination Evaluation Model" (HEM), a first-of-its-kind model for measuring hallucinations (you can see more coverage in the NYTimes and in our blog post).
All in all, hackathon participants had access to RAG tutorials, expert talks, and mentors to help guide them through the process, and we are happy to make these resources available to a broader audience.
The Winners
There were 32 submitted team projects in total, all of them impressive demonstrations of the power of RAG, focused on specific vertical use cases like Healthcare, Legal, Financial Services, and Customer Support.
But ultimately we had to choose one overall winner and a winner for each category. Here are the winners:
Best Overall: Tinnitus Health Care. The team designed and implemented a platform that gives users personalized insights to help them understand, manage, and communicate the intricacies of tinnitus, fostering a supportive community and supporting patients on their journey with this often untreated condition.
Best Healthcare app: Brain MRI AI. This app improves the diagnosis of brain tumors by combining Vectara's semantic search with the webkitSpeechRecognition API to deliver accurate, voice-activated support. Radiologists dictate their observations, which are then matched against an extensive database of cases and relevant literature, dramatically accelerating the diagnostic process.
Best Legal app: Law IO Buddy. This app extends an existing legal-assistance tool, which already answers questions from legal documents, with a legal chatbot powered by Vectara that can answer legal questions from its database of constitutional and other legal documents.
Best Financial Services app: Ticker Analyst. This tool is designed for financial analysts, savvy investors, and academic researchers, and aims to simplify the extraction and analysis of complex financial data. Given a company ticker, it crawls the EDGAR database for the relevant filings, ingests them into a Vectara corpus, and then answers user queries through the RAG pipeline.
Best Customer Support app: QnA Boosted RAG with Vectara. An innovative approach to building a customer-support system: the team ingested not only the existing support content into Vectara but also augmented it with synthetic question/answer pairs designed to match the queries users actually ask.
A big congratulations to all the winners, and kudos to all the teams for their efforts in creating wonderful RAG-based applications.
I invite you to see our winners stream where each team presents their project in a lot more detail.
Conclusions
RAG is powering secure, scalable LLM-based generative AI applications, and in this hackathon teams got to learn more about RAG: from how to use Vectara's API to tips and best practices for RAG user experience, and everything in between.
For those who missed the hackathon or could not participate this time, I invite you to explore the Vectara platform today:
- Sign up for a free Vectara account.
- Follow the quickstart guide to create a corpus and an API key.
- Ingest your data into the corpus using Vectara’s Indexing API or use the open source vectara-ingest project.
- Integrate querying into your application using Vectara's Search API, or use the open source vectara-answer project for a quick start (see the API sketch after this list).
- Join the Vectara developer community on our forums and Discord server.
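To give a feel for the ingestion and query steps above, here is a minimal sketch of indexing a document and querying it through Vectara's REST API with Python's requests library. The customer ID, corpus ID, API key, and document are placeholders, and the request bodies only approximate the schemas, so treat the API docs linked above as the authoritative reference:

```python
# Rough sketch: index one document into a Vectara corpus, then query it.
# All IDs and the API key below are placeholders; see Vectara's API docs for exact schemas.
import requests

CUSTOMER_ID = 1234567890   # your Vectara customer ID (placeholder)
CORPUS_ID = 1              # the corpus created in the quickstart (placeholder)
API_KEY = "zqt_..."        # an API key with indexing and query permissions (placeholder)

headers = {
    "customer-id": str(CUSTOMER_ID),
    "x-api-key": API_KEY,
    "Content-Type": "application/json",
}

# Index a simple document into the corpus.
index_payload = {
    "customerId": CUSTOMER_ID,
    "corpusId": CORPUS_ID,
    "document": {
        "documentId": "doc-001",
        "title": "Example document",
        "section": [{"text": "Vectara provides a managed platform for retrieval augmented generation."}],
    },
}
requests.post("https://api.vectara.io/v1/index", json=index_payload, headers=headers).raise_for_status()

# Query the corpus.
query_payload = {
    "query": [{
        "query": "What does Vectara provide?",
        "numResults": 5,
        "corpusKey": [{"customerId": CUSTOMER_ID, "corpusId": CORPUS_ID}],
    }]
}
resp = requests.post("https://api.vectara.io/v1/query", json=query_payload, headers=headers)
resp.raise_for_status()
print(resp.json())
```

From here, the vectara-ingest and vectara-answer projects wrap these same APIs with crawling, chunking, and a ready-made UI, so you rarely need to write the raw calls by hand.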
Want to participate in our next hackathon?
Follow our LinkedIn and Twitter/X accounts, or sign up for our newsletter to stay informed on upcoming events.