Meet Vectara: powerful, free LLM-powered search
We’re in a transformative moment in computer science. AI breakthroughs like GPT-3, Copilot, DALL-E, and Stable Diffusion are changing how computers understand, interpret, and interact with human language. These large language models have already shown they can author text, write code, create art, and remodel knowledge in ingenious new ways.
Yet these breakthroughs aren’t evenly distributed. Sophisticated AI technologies like neural networks are underpinning services we use every day, but they’re not accessible to the majority of developers who could use them to enhance user experience in their own applications.
This is a huge problem to solve, and I’m privileged to lead a brilliant team at Vectara that is taking on this challenge with our first product, which we’re making freely available for production workloads today. As the incredible potential and wide range of applications for neural networks are coming into view, we’re starting our journey with an offering that overcomes the limitations and difficulties of keyword search.
We’ve built a novel LLM-powered search product that is simple for companies to integrate into their websites and applications. The Vectara platform empowers organizations to easily and quickly embed the most advanced natural language processing available into their apps and site search, delivering incredibly relevant results with seamless ease of use for developers.
Vectara’s LLM-powered search is the most effective way to achieve semantic search. The IT industry has long tried to emulate semantic understanding with keyword search algorithms that haven’t fundamentally changed in decades, and the results remain underwhelming. In contrast, neural search is an unqualified, sea-change breakthrough for search. As an experienced purveyor of breakthrough technologies (see Cloudera and “Big Data”), I am convinced that nearly every search application will be powered by native neural search within the next five years.
Breakthrough LLM-powered search-as-a-service
Search is essential for navigating the world’s knowledge, whether combing through technical materials or seeking guidance on life’s big questions. However people ask – with shorthand, typos, or by voice – they want fast, accurate results. Yet the algorithms that power most search applications rely on keywords, leading to narrow results that often miss the intent behind the question.
Very few search experiences today can answer queries based on contextual semantic understanding. The technology that makes this feat possible, which uses large language models and vector similarity as a first principle in information retrieval, has been expensive and difficult to deploy without extensive machine learning expertise. With our free, easy-to-use neural search-as-a-service platform, Vectara overcomes both of these barriers and opens the door to unprecedented improvements in search relevance.
With this initial offering, we aim to solve the issues that have plagued keyword search engines for over 30 years. These algorithms require constant gardening, layering sophisticated language rules just to emulate even the most basic levels of semantic relevance. We also recognize that users increasingly demand answers delivered in their language of choice and with the appropriate cultural or business context. By leveraging the power of AI to identify implied meanings, Vectara makes it possible for every user to be heard and understood.
Unparalleled search performance for all
Our platform applies deep vector encoding to all indexed content and all issued queries, enabling a true meaning-to-meaning response that retrieves relevant information based on a neural-powered understanding of intent and context. Yet even the most advanced models are only useful if they can be easily applied to real-world challenges and integrated into existing workflows. That’s why we built our developer-first platform with powerful yet easy-to-use APIs, giving any organization access to cutting-edge AI without the high cost or uncertainty associated with other solutions.
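To make the meaning-to-meaning idea concrete, here is a minimal sketch using the open-source sentence-transformers library as a stand-in encoder (Vectara’s own models are proprietary and not shown here): documents and the query are encoded into one vector space, and results are ranked by vector similarity rather than keyword overlap.

```python
# A minimal sketch of meaning-to-meaning retrieval using an open-source
# encoder as a stand-in for Vectara's models; illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

documents = [
    "Our return policy allows refunds within 30 days.",
    "Shipping typically takes three to five business days.",
]
query = "Can I get my money back?"

# Encode documents and query into the same vector ("meaning") space.
doc_vecs = model.encode(documents, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, a dot product is cosine similarity, so the
# refund document wins despite sharing no keywords with the query.
scores = doc_vecs @ query_vec
print(documents[int(np.argmax(scores))])
```

Notice that nothing in this sketch mentions synonyms or stemming rules: the encoder’s understanding of meaning does that work implicitly.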
Vectara delivers a complete search pipeline, from extraction and indexing to retrieval, reranking, and calibration. Because the platform is a composable neural data processing service, developers can easily integrate Vectara into their stack to quickly embed or enhance site and app search, product discovery, customer support, user navigation, and much more. With our platform, searches are more accurate and less frustrating, chatbots get smarter, product recommendations become more personalized, and users take more delight in engaging with content and applications.
Simple to integrate
Vectara’s neural search-as-a-service is one of the most powerful applications of large language models available today. Unlike other search-as-a-service offerings, however, it requires no initial or continual tuning or training. Vectara uses large language models to analyze data as-is, without the continuous investment that keyword platforms demand to maintain language configurations: stop words, synonyms, acronyms, emojis, spelling corrections, part-of-whole relationships, and lemmatization and stemming rules.
We engineered Vectara so that even the smallest teams can run a highly available, auto-scaling, secure, enterprise-grade service. We carefully monitor system latency, and our API-first platform lets you query and index from anywhere. No specialized search engineering or AI/ML knowledge is necessary to integrate this advanced neural search into your site or applications. Of course, Vectara’s APIs also allow opinionated search developers to customize behavior at a deeper level if desired.
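As a hedged illustration of what an API-first integration can look like, the sketch below issues a search request over HTTPS with Python’s requests library. The endpoint path, header names, and request/response fields are assumptions patterned on a typical REST search API, not a specification; Vectara’s API documentation is the source of truth.

```python
# Illustrative only: the endpoint, headers, and JSON schema below are
# assumptions for the sake of example; consult Vectara's API docs for
# the actual contract.
import requests

API_KEY = "your-api-key"          # placeholder credential
CUSTOMER_ID = "your-customer-id"  # placeholder account ID

resp = requests.post(
    "https://api.vectara.io/v1/query",   # assumed endpoint path
    headers={"x-api-key": API_KEY, "customer-id": CUSTOMER_ID},
    json={
        "query": [{
            "query": "how do I rotate an expired access key?",
            "numResults": 10,                 # assumed field names
            "corpusKey": [{"corpusId": 1}],   # which corpus to search
        }]
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # ranked, snippet-extracted results
```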
A complete ML search pipeline
Vectara provides a low-latency, highly fault-tolerant platform that delivers a complete yet composable ML search pipeline, from extraction, encoding, and indexing to retrieval and reranking. Vectara automatically routes data across the relevant backend microservices to provide supporting functionality, from snippet extraction to calibration, behind a single, unified API. The platform is self-optimizing: it automatically fine-tunes the neural network hyperparameters and the vector-matching subsystems to configure the tradeoffs between precision, recall, throughput, and latency for each indexed data set.
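The retrieve-then-rerank pattern at the heart of such a pipeline can be sketched with open-source stand-ins (Vectara’s internal models are not public): a fast bi-encoder narrows the corpus to a handful of candidates, then a slower but more precise cross-encoder reorders them. This is a minimal sketch of the general two-stage technique, not Vectara’s implementation.

```python
# Retrieve-then-rerank with open-source stand-ins; not Vectara's
# implementation, just the general two-stage pattern.
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder

bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

corpus = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first of each month.",
    "Two-factor authentication can be enabled under security.",
]
query = "I forgot my password"

# Stage 1: cheap vector retrieval over the whole corpus.
doc_vecs = bi_encoder.encode(corpus, normalize_embeddings=True)
q_vec = bi_encoder.encode([query], normalize_embeddings=True)[0]
candidates = np.argsort(doc_vecs @ q_vec)[::-1][:2]   # keep top-2

# Stage 2: precise rerank of just those candidates with a cross-encoder,
# which reads query and document together instead of comparing vectors.
scores = reranker.predict([(query, corpus[i]) for i in candidates])
best = candidates[int(np.argmax(scores))]
print(corpus[best])
```

The design tradeoff is the one the paragraph above describes: the first stage buys throughput and low latency, the second buys precision on the short list where it matters.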
Cross-language search
The Vectara platform operates across languages to deliver a search experience that is truly language-agnostic. Search in one language and get the correct result, even if it’s in another language. The Vectara LLM-powered search platform does this by mapping the input query from language space to a meaning space, then matching meaning-to-meaning to return accurate results, irrespective of language. Most solutions on the market today emulate this capability by first translating all content and queries to English, which carries a whole slew of drawbacks in cost, speed, and result quality. Vectara supports cross-language search for the most widely used written languages on the planet, including Chinese, Arabic, Spanish, German, Portuguese, Japanese, Korean, French, Italian, Dutch, Polish, Thai, Turkish, Russian, and English.
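As a small, hedged illustration of the underlying idea, the sketch below uses an open-source multilingual encoder (a stand-in; Vectara ships its own models) to match an English query against documents written in Spanish and German purely in meaning space, with no translation step.

```python
# Cross-language matching with an open-source multilingual encoder as a
# stand-in for Vectara's models: the English query and non-English
# documents land in the same meaning space, so no translation is needed.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

docs = [
    "El pedido se puede devolver dentro de los 30 días.",  # Spanish: returns
    "Die Lieferung dauert drei bis fünf Werktage.",        # German: shipping
]
query = "What is the return policy?"

doc_vecs = model.encode(docs, normalize_embeddings=True)
q_vec = model.encode([query], normalize_embeddings=True)[0]
print(docs[int(np.argmax(doc_vecs @ q_vec))])  # → the Spanish returns doc
```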
Try Vectara today
As an API-first company, we know the value of integrating a product into your stack to discover how it works. It’s why we’re launching Vectara with a generous free tier that offers any user up to 15,000 queries per month. For more intensive workloads, our Scale tier provides a complete search pipeline with deeper enterprise features capable of running on the largest websites in the world.
We’re thrilled to begin this journey and spread the power of neural search to organizations of any size. Get started today and connect with us.