Developing Applications with Retrieval Augmented Generation
How developers can build GenAI conversational search applications with Vectara’s Retrieval Augmented Generation
We just announced Vectara’s GenAI conversational search platform, allowing developers to use “Retrieval Augmented Generation,” which Vectara calls “Grounded Generation,” to reduce hallucinations and build fast and scalable LLM-based conversational applications with their own data.
To experience the power of GenAI conversational search, you can try our AskNews sample application.
I am excited to share a few technical resources that we’ve created to help you build your conversational search applications with Vectara.
Vectara documentation
With the launch of the summarization feature, we’ve improved our documentation and added detailed guides on using Summarization and Hybrid Search via the API.
Our API Playground has also been updated, so you can now try your queries with summarization and hybrid search directly from there.
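To make the API discussion concrete, here is a hedged sketch of building a query request that combines hybrid search and summarization. The field names (`corpusKey`, `lexicalInterpolationConfig`, `summary`, and so on) follow my reading of Vectara’s v1 REST API and may differ from the current schema; the customer and corpus IDs are placeholders. Check docs.vectara.io for the authoritative reference.

```python
import json

def build_query(question: str, customer_id: int, corpus_id: int,
                num_results: int = 10, lambda_val: float = 0.025) -> dict:
    """Build the JSON body for a Vectara query request (assumed v1 schema)."""
    return {
        "query": [{
            "query": question,
            "numResults": num_results,
            "corpusKey": [{
                # Placeholder IDs; use your own account and corpus values.
                "customerId": customer_id,
                "corpusId": corpus_id,
                # Hybrid search: lambda blends exact-term (lexical) matching
                # with neural retrieval; 0.0 means pure neural retrieval.
                "lexicalInterpolationConfig": {"lambda": lambda_val},
            }],
            # Ask the platform to summarize the top results, grounding the
            # generated answer in the retrieved text.
            "summary": [{
                "maxSummarizedResults": 5,
                "responseLang": "en",
            }],
        }]
    }

payload = build_query("What happened in the news today?", 12345, 1)
print(json.dumps(payload, indent=2))
```

In a real application you would POST this body to the Vectara query endpoint with your authentication headers; the exact endpoint URL and header names are in the API documentation linked above.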
Open source repositories
We recently launched vectara-ingest, an open source community-supported repository that serves as a building block to crawl various data sources and ingest the content into Vectara.
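Crawls in vectara-ingest are driven by a YAML configuration file that names the target corpus and the crawler to run. The fragment below is an illustrative sketch only; the key names and crawler types are assumptions on my part, so consult the example configs in the repository for the actual schema.

```yaml
# Hypothetical vectara-ingest crawl configuration (field names are
# assumptions; see the repository's config examples for the real schema).
vectara:
  customer_id: 12345        # placeholder account ID
  corpus_id: 1              # placeholder corpus to ingest into

crawling:
  crawler_type: website     # which crawler module to run

website_crawler:
  urls:
    - https://example.com   # placeholder site to crawl
```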
Yesterday we also launched vectara-answer, an open-source repository with sample code for creating conversational search applications. It can serve as a building block for a quick prototype, or even as a baseline for your full application deployment.
The Vectara developer community
GenAI is transformational, and the ability to develop innovative new conversational search applications truly depends on collaboration and community.
Today we are formally launching our new community Discord server, and we invite all of you to join. The Discord server is a great place to have real-time conversations, ask questions, report bugs, and share your best practices in building conversational search applications. Please feel free to ping any one of us at Vectara on the Discord server and say hello.
Our existing Vectara forums remain active and monitored, providing an additional avenue for collaboration that is open and visible to anyone.
Finally, in case you missed them, here are some recent blog posts that you might find useful:
- Why do LLMs hallucinate
- Announcing vectara-ingest
- Vectara’s API playground
- An overview of Large Language Models