Product Updates

Introducing custom text encoders on Vectara

We’re excited to announce custom text encoders, empowering users to integrate their preferred text embedding models directly into Vectara. This enhancement offers greater flexibility and control over the text processing pipeline, accommodating various project requirements.


Seamless integration

With this feature, you can now create and manage custom text encoders within Vectara. By specifying details such as the encoder's name, description, output dimensions, endpoint URL, model name, and authentication parameters, you can tailor the text embedding process to align with your specific needs. This customization ensures that your data is processed using models and configurations that best fit your application's context.

Important benefits

  1. Flexibility: Choose the most suitable text embedding models for your use case, whether they are hosted on-premises or through external providers
  2. Consistency: Maintain uniformity in text processing across different applications by standardizing on a single encoder
  3. Control: Manage and update your encoders as needed, ensuring that your text embeddings remain aligned with evolving project requirements
  4. Security: Leverage vetted models that have already gone through compliance and security evaluations, maintaining your company’s security posture without needing to validate new models
  5. Explainability: Use models that your team already understands and trusts, ensuring predictable and interpretable outputs—a crucial factor in maintaining trustworthy AI as the landscape evolves

Integrating an OpenAI API-compatible encoder

To integrate an OpenAI API-compatible encoder, you can use the Create an encoder API by sending a POST request to /v2/encoders with the necessary parameters. This includes the encoder's type, name, description, output dimensions, endpoint URL, model name, and authentication details. Once configured, Vectara will utilize your specified encoder for text embedding tasks.

  1. type: set to openai-compatible to indicate compatibility with OpenAI-based embedding APIs
  2. name: provides a user-defined label for the encoder configuration
  3. description: adds metadata about the encoder
  4. uri: specifies the API endpoint where encoding requests are sent, such as https://api.openai.com/v1/embeddings
  5. model: specifies the exact model name, such as text-embedding-ada-002
  6. auth: defines authentication details, supporting either Bearer token authentication or custom header-based authentication

Example request

This example request creates an encoder that can be used with the Create corpus API (via encoder_name).
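The original code sample is not reproduced here, but based on the fields described above, a Create encoder request might look like the following Python sketch. The payload field names mirror this post's descriptions, and the API key, encoder label, and token values are placeholders; check the Vectara API reference for the exact schema.

```python
import json
import urllib.request


def build_encoder_payload() -> dict:
    """Assemble the encoder configuration from the fields described above.

    Field names follow this post's descriptions; the label, description,
    and auth token values are hypothetical placeholders.
    """
    return {
        "type": "openai-compatible",
        "name": "my-openai-encoder",
        "description": "OpenAI ada-002 embeddings for our search app",
        "output_dimensions": 1536,  # embedding size of text-embedding-ada-002
        "uri": "https://api.openai.com/v1/embeddings",
        "model": "text-embedding-ada-002",
        # Bearer-token authentication; custom header-based auth is also supported.
        "auth": {"type": "bearer", "token": "YOUR_OPENAI_API_KEY"},
    }


def create_encoder(vectara_api_key: str) -> dict:
    """POST the configuration to /v2/encoders and return the parsed response."""
    req = urllib.request.Request(
        "https://api.vectara.io/v2/encoders",
        data=json.dumps(build_encoder_payload()).encode(),
        headers={
            "x-api-key": vectara_api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The payload is built in a separate helper so the configuration can be inspected or versioned independently of the HTTP call.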

Example response

The response includes details about the newly created encoder.
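Whatever the exact response schema, the encoder's name is what you reference when creating a corpus, as noted above. A minimal sketch, assuming the Create corpus API lives at /v2/corpora and that "key" is the corpus identifier field (both assumptions; only encoder_name is named in this post):

```python
import json
import urllib.request


def build_corpus_payload(corpus_key: str, encoder_name: str) -> dict:
    # "key" is a hypothetical field name for the corpus identifier;
    # encoder_name is the field this post describes for selecting the encoder.
    return {"key": corpus_key, "encoder_name": encoder_name}


def create_corpus(vectara_api_key: str, corpus_key: str, encoder_name: str) -> dict:
    """Create a corpus that embeds its documents with the named custom encoder."""
    req = urllib.request.Request(
        "https://api.vectara.io/v2/corpora",  # assumed Create corpus endpoint
        data=json.dumps(build_corpus_payload(corpus_key, encoder_name)).encode(),
        headers={
            "x-api-key": vectara_api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```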

Conclusion

With expanded encoder management capabilities, Vectara makes it simple to integrate custom text embedding models. This feature ensures that your text processing aligns with your specific requirements, giving you greater control and flexibility within the Vectara platform.

For the latest documentation on creating an encoder and how to use it, see the Vectara documentation.

As always, we’d love to hear your feedback! Connect with us in our community forums or on our Discord. If you’d like to see what Vectara can offer for retrieval augmented generation in your application or website, sign up for an account!
