
Generative AI Adoption Roadblocks, Pit Stops, Crashes, and Destinations


When building something new and testing the art of the possible, crashes are bound to happen. Not just for Ferraris, but for all new technology, and especially for Generative AI, given its tendency to produce confident results even when it doesn't know the correct answer, otherwise known as an "LLM hallucination." This blog will navigate you through text generation and how you can embed generative AI into your products and services to harness its transformative power, while avoiding the roadblocks, pit stops, and crashes along the way.


Choosing different routes can define your journey's experience and outcome. Opting for the long, difficult path with numerous roadblocks, pit stops, and crashes can build resilience and offer profound lessons, although it demands more time and effort. Conversely, selecting the fast, easy, and most efficient route ensures a smoother, quicker arrival at your destination, allowing you to conserve energy and resources for what lies ahead (building the applications for your staff and customers).

Adopting generative AI can follow three main routes: fine-tuning, do-it-yourself retrieval-augmented generation (RAG), and RAG as a service (RAGaaS). 

  1. Fine-tuning involves customizing pre-trained models to align with specific tasks or industries. 
  2. Do-it-yourself RAG involves integrating external knowledge sources through complicated, GenAI-specific processes that you build and operate yourself.
  3. RAGaaS offers a managed, streamlined solution for integrating external data into AI models, eliminating the need for extensive technical expertise and development time.

From Science Fiction to Business Reality

Generative AI, particularly in text generation, has come a long way from its early days in the realm of science fiction. Initially, it was limited to basic pattern recognition and rule-based systems. However, advances in neural networks and natural language processing (NLP) have propelled generative AI to new heights. Think of it as the transition from the first clunky, hand-built cars (circa 1880s to 1910s) to today's sleek cars assembled by robots in fully automated factories, with a century of innovation driving the process.


Generative AI, like the invention of the car, revolutionizes the landscape by transforming the way we approach everyday tasks and solve complex problems with unprecedented efficiency. Just as cars enabled faster, more reliable transportation, generative AI automates and enhances creative and analytical processes, making tasks like writing, designing, and data analysis more accessible and powerful. Cars reshaped how we get around and design cities; GenAI is reshaping how we generate and interact with content. Both innovations, through their respective breakthroughs, have redefined productivity and opened up new horizons for technological advancement.

Adoption Roadblocks

Things you'll come across on the road that force you to stop and turn around

IBM's Global AI Adoption Index sheds some light on the major barriers to entry for GenAI. Data privacy (57%) and trust and transparency (43%) concerns are the biggest inhibitors of generative AI, and 35% also say that a lack of skills for implementation is a big inhibitor. But it's no wonder these are such big concerns.


Data & Privacy Concerns:

Incidents like the prankster tricking a GM chatbot into agreeing to sell a $76,000 Chevy Tahoe for $1 highlight the vulnerabilities in AI systems. Companies must navigate complex data privacy regulations and ensure that their AI models are secure and compliant.

Microsoft's 2024 Work Trend Index found that 78% of AI users are bringing their own AI tools to work (BYOAI), and it's even more common at small and medium-sized companies (80%).

Or even worse, a company's brand and reputation can be easily damaged. For example, Adobe didn't train its AI on customers' data, but it updated its terms of service agreement to be vaguer on the topic, and that alone was enough to make its users very, very unhappy.

Trust and Transparency:

Generative AI models can produce what Forrester refers to as “coherent nonsense.” AI can sometimes generate plausible-sounding but incorrect or nonsensical answers, leading to misinformation. 

The industry refers to these as hallucinations, and there are a lot of them, even from the most prominent companies in the space: Google's AI says to put glue on pizza and eat rocks, Perplexity is a Bullshit Machine, and Microsoft's AI tells businesses to break the law.

In McKinsey's Global Survey on AI, nearly one-quarter of organizations report already experiencing negative consequences from generative AI inaccuracy. For instance, Air Canada lost a legal case after its AI chatbot provided misleading information to customers, a pharmacist review study found that nearly 75% of ChatGPT's responses to medication questions were incomplete or wrong, and 56% of surveyed companies emphasized the risk of biases and hallucinations affecting the quality of AI outputs.

Imagine your car lying to you: you have 10 miles of gas left, but the gauge says 100. Or even worse, your GPS takes you to the wrong destination, or the dashboard says the oil is full while you've been driving for months on empty, damaging your engine. Not to mention that if you're experimenting with open-source LLMs, you won't have any roadside assistance to get you home.

Skills Gap for Implementation:


We don't need any stats to back this up: general availability for GenAI arrived less than two years ago (ChatGPT launched on Nov 30, 2022, and ChatGPT Enterprise on Aug 28, 2023). And that was just the beginning; new models are coming out every two weeks on average (image below). Even teams of data scientists with decades of expertise can't keep up.

A Chronological Overview of Large Language Models (LLMs)

Forrester's 2024 report underscores the critical need for specialized skills in navigating the complexities of generative AI, noting that enterprise AI leaders cite the lack of technical skills in their organizations as the single greatest roadblock to gaining the benefits they're looking for from GenAI. For software engineers and data science teams to become proficient in the new world of generative AI will take time, just as someone who builds cars would need time to transition to building airplanes, trains, or boats.

Pit Stops

Things on the road that you'll gain something from, but that will cost you (a snack, or gas to go even farther).

Fine-tuning

Just like customizing a car to suit your brand, fine-tuning an AI model can enhance its performance for specific tasks. But you're not the carmaker, which makes it challenging to build a winning car. Wells Fargo didn't make a race car; they just paid to put their wrapper around it.


OpenAI's own survey puts fine-tuning performance at 81.7% model accuracy. Imagine if your car only worked 81.7% of the time; I think we'd all be looking to buy a new one.

Your team will learn the intricacies of further training models, and you could get to a point where the model understands your domain better, but you'll be taking on more roadblocks and potholes compared to other techniques.

DIY RAG / Orchestration

AI workflow orchestration, from offerings like LangChain, AWS Bedrock, or LlamaIndex, provides a set of abstractions that help you write DIY LLM programs while hiding some of the underlying details. However, these tools often introduce additional complexity and require significant technical expertise.
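
To make concrete what those abstractions hide, here is a minimal sketch of the steps a DIY RAG pipeline has to own. It is written in plain Python with hypothetical components (embed, vector_store, reranker, llm) standing in for whichever embedding model, vector database, reranker, and LLM you wire together; it illustrates the moving parts, not any particular framework's API.

```python
# Hypothetical components you choose, host, and maintain yourself:
#   embed(text) -> vector, vector_store (.add / .search), reranker, llm.generate(prompt)

def ingest(documents, chunker, embed, vector_store):
    """Chunk and embed every document, then load it into the vector database."""
    for doc in documents:
        for chunk in chunker(doc):
            vector_store.add(vector=embed(chunk), text=chunk)

def answer(question, embed, vector_store, reranker, llm, top_k=10, keep=5):
    """Retrieve, rerank, assemble a grounded prompt, and generate an answer."""
    candidates = vector_store.search(vector=embed(question), limit=top_k)
    best = reranker(question, candidates)[:keep]

    context = "\n".join(hit.text for hit in best)
    prompt = (
        "Use only the context below to answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # Guardrails, evaluation, caching, access control, and scaling are still on you.
    return llm.generate(prompt)
```

Every parameter in that sketch is a product decision and an operational commitment, which is why these frameworks help you write the code but can't remove the expertise needed to run it well.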


Building with an orchestration platform is like buying all the parts and assembling the car yourself. Would you rather have to put the car together or have a running car?

Complicated Blueprint on Car Parts

Vector Databases

Vector databases are to data what gas tanks are to gas: they hold your raw data in a transformed format (embeddings) that computers can search and process efficiently. A critical component of every RAG pipeline is the retrieval step, which needs to produce the most relevant facts to the LLM so that it can respond properly to the user query. A vector database enables this kind of retrieval in the form of semantic search.

While necessary to GenAI, vector databases are only one component of the RAG pipeline. A car can’t run with only a gas tank, it needs an engine, transmission, battery, and so on.
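
For the curious, here is a minimal sketch of the semantic-search step a vector database performs, using NumPy and an embedding function you supply (the `embed` callable is an assumption standing in for a real embedding model): documents and the query are mapped to vectors, and the documents closest to the query by cosine similarity are returned as the facts handed to the LLM.

```python
from typing import Callable
import numpy as np

def semantic_search(
    query: str,
    docs: list[str],
    embed: Callable[[str], np.ndarray],  # any embedding model: text -> vector
    top_k: int = 3,
) -> list[str]:
    """Return the top_k documents most semantically similar to the query."""
    doc_vectors = np.stack([embed(d) for d in docs])   # shape: (n_docs, dim)
    q = embed(query)                                   # shape: (dim,)

    # Cosine similarity between the query vector and every document vector.
    sims = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9
    )

    top = np.argsort(-sims)[:top_k]
    return [docs[i] for i in top]
```

A production vector database does the same thing at scale, using approximate nearest-neighbor indexes so retrieval stays fast across millions of vectors.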


Time to Production

Despite initial enthusiasm, many companies struggle with prolonged timelines to achieve tangible results from their generative AI projects. Approximately 90% of enterprise pilots for generative AI will not move into production environments. The roadblocks and pit stops above are the reasons for this.


Latency

Analogous to the frustration of waiting 5 minutes for GPS directions while driving, latency issues in AI processing can significantly impact real-time applications, compromising user experience and operational efficiency. 

Getting a pilot running with good performance is one thing; trying to scale it to production is another. Kissmetrics found that 47% of customers expect a website to load in two seconds or less, and 40% of users will abandon a website that takes more than three seconds to load.

Exorbitant Costs

Unfortunately, the financial benefits of implemented projects have also been dismal: According to recent findings, 42% of companies have yet to realize substantial economic gains from their investments in generative AI initiatives.

More specifically for fine-tuning, analysts and technologists estimate that the critical process of training a large language model could cost more than $4 million. More advanced language models could cost over "the high-single-digit millions" to train, said Rowan Curran, a Forrester analyst who focuses on AI and machine learning. OpenAI and Nvidia are the Standard Oil of our generation.

Application Vendors

Another detour on the road to GenAI adoption is purchasing a pre-built application like Jasper, Glean, or Writer. These products can quickly deliver short-term value but face more challenges in achieving long-term benefits.

This is akin to renting a car: while you gain short-term value, you pay for it in other ways (higher insurance, cost to the vendor, mileage restrictions). It also lacks the long-term value that ownership brings and may face limitations in customization.


Crashes

Things that can cause damage; you'll want to buckle up

Performance

Fine-tuning LLMs is like customizing a car’s engine for specific performance needs, ensuring it operates optimally for particular tasks. This process tailors the model to understand specialized contexts, much like tuning a car for better handling on specific terrains. However, fine-tuning can be time-consuming and resource-intensive, akin to repeatedly visiting a mechanic for adjustments. 

Even advanced teams like Bloomberg, which built its own domain-specific BloombergGPT model, have shown that specialized models can sometimes perform worse than expected compared to general-purpose models like GPT-4. These performance inconsistencies can lead to unexpected results and setbacks in AI deployment. And they have to repeat the process for every new model that comes out, which means a new round of GPU and labor costs in the "high-single-digit millions" to train.

Fine-tuning has its place, but the models aren't yet at the point where we can trust them to provide accurate answers. Aidan Gomez, one of the co-authors of the Transformer paper that underpins GenAI, said: "It's important to remember that we're not at that end state already. There are very obvious applications where the tech isn't ready. We shouldn't be letting these models prescribe drugs to people without human oversight for example. One day it might be ready. At some point, you might have a model that has read all of humanity's knowledge about medicine, and you're actually going to trust it more than you trust a human doctor who's only been able to, given the limited time that humans have, read a subset. I view that as a very possible future. Today, in the reality that exists, I really hope that no one is taking medical advice from these models and that a human is still in the loop. You have to be conscious of the limitations that exist."

Leaders of the Pack


The IndyCar world was just rocked by a scandal in which the Penske team cheated by programming their car systems to provide extra horsepower, ultimately getting caught and stripped of their wins. Generative AI is facing a similar scenario with its leaders of the pack: OpenAI, Microsoft, Google, and Amazon. Do you really want to follow behind them and cause more of a pile-up?

OpenAI

OpenAI and Microsoft are facing a lawsuit from the NY Times for copyright infringement, while George R.R. Martin and 16 other authors are also suing OpenAI over ChatGPT. Scarlett Johansson has criticized OpenAI for imitating her voice without consent, and there are reports of Samsung employees leaking sensitive data via ChatGPT. Lawyers have been fined for filing cases based on bogus law generated by ChatGPT, and a multilingual AI health assistant powered by GPT-3.5 faced criticism for distributing misinformation, such as fake clinic details. Italy temporarily banned ChatGPT, stating its developers did not have a legal basis to justify the storage and collection of users' personal data to train the site's algorithms.

Additionally, current and former OpenAI employees have warned that the company is not doing enough to control the dangers of AI. Senior safety researcher Jan Leike and former researcher Leopold Aschenbrenner have criticized the company for prioritizing products over safety, with Aschenbrenner allegedly fired for raising safety concerns. Former board members have accused OpenAI CEO Sam Altman of lying.

Furthermore, OpenAI and Anthropic are stealing web content, using automated scraping and crawling tools that either ignore or circumvent established web rules. OpenAI also disregarded YouTube's terms and conditions for copyrighted work and trained its models on content it was not authorized to use.

Microsoft

Microsoft's Bing AI has been criticized for producing creepy and harmful conversations with users, and Congress has banned staff use of Microsoft's AI Copilot. Additionally, Microsoft delayed its Recall rollout due to privacy concerns. Controversies also arose from a Microsoft-powered bot in NYC, which made problematic statements about workers' tips and income discrimination by landlords. Furthermore, Bing's search bot provided incorrect answers to basic election questions.

Google

Google has faced several challenges with its AI technology, including a $271 million fine over a GenAI copyright issue. The company is working “around the clock” to fix bias issues in its AI tool, yet its new search answers are reportedly worse. Additionally, Google has scaled back its AI search plans after the summary feature advised people to eat glue. Further issues include the Google AI chatbot providing incorrect answers in an ad about the James Webb Telescope and another instance where its AI gave wrong information in a promo video.

Amazon

Amazon is actively gathering extensive data from GitHub to accelerate the training of its upcoming AI model. By encouraging employees to share GitHub accounts, Amazon aims to scrape the high-quality code data necessary for developing advanced AI capabilities. This approach has sparked ethical concerns and may raise legal challenges similar to those faced by other tech giants utilizing open-source platforms for AI development.

These organizations are going to extreme lengths to get their hands on data, and this desperation speaks volumes about how they are approaching the rollout of this new technology.

Bad Content / Bad Experiences

The biggest crash is the negative consequence this new technology can have on customer experiences and on perceptions of organizations that veer off course. We only scratched the surface with the examples above (Bloomberg, GM, Adobe, Air Canada); these issues are prevalent across all industries.

The media industry is grappling with the spread of AI-powered content filled with inaccuracies. AI's reliability is under scrutiny, with legal models hallucinating in one out of six queries. While AI tools have the potential to assist doctors, they are not immune to making mistakes. One chatbot mistakenly directed a satisfied customer to a suicide prevention site. Additionally, there is a significant risk associated with AI hallucinating financial reports, raising concerns about the dependability of AI in critical applications.

Despite generative AI’s current issues, there’s light at the end of the tunnel. Like the automotive industry, you have the freedom to select a vehicle that is safer to drive, and best fits your needs for performance and reliability.

Reaching Your Destinations

Fulfillment of your journey’s purpose and the beginning of new adventures (Better Experiences & ROI)

The Market

McKinsey Global Institute estimates that generative AI will add between $2.6 trillion and $4.4 trillion in annual value to the global economy, increasing the economic impact of AI as a whole by 15 to 40%. For comparison, the global automotive manufacturing market was worth $2.6 trillion in 2023.

Goldman Sachs predicts a 7% (nearly $7 trillion) increase in global GDP attributable to generative AI, and the firm expects that two-thirds of U.S. occupations will be affected by AI-powered automation. The auto industry is one of the most important industries in the United States, historically contributing 3 to 3.5 percent of overall Gross Domestic Product (GDP).

Gartner forecasts that over 80% of enterprises will adopt some form of GenAI technology, such as APIs, applications, and models, by 2026 (up from less than 5% in 2023). For comparison, roughly 88% of Americans ages 15 or older are reported as drivers (Bureau of Transportation).

Even though companies are faced with many challenges and uncertainty, 68% of executives believe that generative AI's benefits outweigh its risks.


The Innovation

The evolution of generative AI mirrors the transformation of the automotive industry from early combustion engines to modern electric vehicles. Initially, both fields relied on rudimentary and inefficient models—basic neural networks in AI and steam or gas-powered engines in cars. Over time, just as cars evolved to feature sleek designs, high efficiency, and autonomous driving capabilities, generative AI has advanced to produce highly sophisticated outputs, from realistic art to complex text generation. Both transformations highlight a shift from mere functional existence to innovative and intelligent systems that enhance human capability and experience. 

Generative AI has fundamentally transformed human-computer interaction by enabling machines to understand, create, and respond in ways previously unimaginable, bridging the gap between technology and human creativity with unprecedented depth and agility and boosting human productivity.

As we look to the horizon, there is even more promise to come: from models that can take initiative, make decisions, and act independently within defined parameters, to models that write code and optimize other software. The future of generative AI promises groundbreaking advancements in creativity, problem-solving, and human-machine interaction, revolutionizing industries and shaping a new era of innovation and discovery.


Retrieval-Augmented Generation (RAG)

If using LLMs is like following directions using a paper map, RAG functions like a sophisticated GPS system that utilizes real-time traffic data to provide swift and precise guidance, much like enhancing an already capable vehicle without altering its core mechanics. This approach allows AI systems to seamlessly integrate new information and adjust to evolving conditions, ensuring optimal performance and relevance. As Oracle highlights, RAG is GenAI’s hottest topic, exemplifying its potential to revolutionize information retrieval and utilization.


MIT points out “a trove of unstructured and buried data is now legible, unlocking business value. Previous AI initiatives had to focus on use cases where structured data was ready and abundant; the complexity of collecting, annotating, and synthesizing heterogeneous datasets made wider AI initiatives unviable. By contrast, generative AI’s new ability to surface and utilize once-hidden data will power extraordinary new advances across the organization.”

To put this in perspective, 68% of data available to enterprises is left untapped, presenting a considerable opportunity for businesses to unlock valuable insights, drive innovation, and gain a competitive edge.

By integrating relevant knowledge from large datasets or knowledge bases into the generation process, RAG enhances the quality and relevance of generated text, making AI applications more effective in tasks requiring nuanced understanding and contextual accuracy.
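
To make that integration concrete, here is a minimal, hypothetical sketch of how retrieved knowledge changes the prompt the LLM actually sees; the template wording is an assumption for illustration, not a prescribed format.

```python
def build_grounded_prompt(question: str, retrieved_facts: list[str]) -> str:
    """Assemble a RAG prompt: ask the model to answer only from the retrieved
    facts and to cite them, rather than relying on what it memorized in training."""
    numbered = "\n".join(f"[{i + 1}] {fact}" for i, fact in enumerate(retrieved_facts))
    return (
        "Answer the question using ONLY the numbered facts below and cite them. "
        "If the facts are insufficient, say you don't know.\n\n"
        f"Facts:\n{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Without RAG the model sees only the question (the paper map); with RAG it also
# sees fresh, relevant context retrieved from your data (the live GPS).
prompt = build_grounded_prompt(
    "What is the warranty period?",
    ["The standard warranty is 36 months.", "Battery components are covered for 96 months."],
)
```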

Vectara

Like a car enthusiast who has a favorite brand and can share its intricate history and details, I am a GenAI enthusiast, and Vectara is my favorite brand for a few reasons.

Similar to how Henry Ford's end-to-end assembly line transformed automotive manufacturing, Vectara stands as an end-to-end platform for embedding powerful generative AI features into applications with extraordinary results. Everyone can now hit the road (get into production) without needing to build a car from scratch (directly using the LLM or with DIY RAG). All the parts (VectorDB, Embedding Model, Generative LLM, ReRanker, Guardrails, and MLOps Infrastructure) are assembled by Vectara for the best performance, assurance, and reliability.

As an end-to-end Retrieval Augmented Generation (RAG) service, Vectara delivers the shortest path to a correct answer/action through a safe, secure, and trusted entry point. Vectara allows businesses to embed generative AI capabilities without the risk of hallucinations or data privacy concerns.
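
For contrast with the DIY pipeline sketched earlier, consuming RAG as a service looks, from the application side, like a single authenticated request that carries the user's question and returns a grounded answer with citations. The endpoint, headers, and payload below are illustrative placeholders only, not Vectara's actual API; see Vectara's documentation for the real request format.

```python
import requests

# Illustrative placeholders only: not Vectara's real endpoint or schema.
RAGAAS_URL = "https://api.example-ragaas.com/v1/query"
API_KEY = "YOUR_API_KEY"

def ask(question: str) -> dict:
    """One request: retrieval, reranking, prompt assembly, generation, and
    hallucination checks all happen inside the managed service."""
    response = requests.post(
        RAGAAS_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "query": question,         # the end user's question
            "corpus": "product-docs",  # which indexed data to search (placeholder)
            "max_results": 5,          # how many supporting facts to retrieve
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"answer": "...", "citations": [...]}

# answer = ask("What is our refund policy?")
```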


Vectara Differentiators: Using Car Analogies

  1. Navigating Complex Terrain: Just as an autonomous vehicle can navigate complex cityscapes without human intervention, Vectara autonomously navigates the intricacies of generative AI, streamlining processes and decision-making without constant human oversight.
  2. Performance: Vectara's retrieval pipeline is like a hybrid engine, blending both semantic and keyword search to ensure you always find the best route to your destination (a minimal sketch of this blending appears after this list).
  3. Trustworthiness: Vectara's factual consistency score is your car's advanced safety system, quantifying the risk of hallucination in every answer, giving you transparency and showing you where to add more information to close knowledge gaps.
  4. User Experience Flexibility: With Vectara, switching between Q&A, summaries, and research responses is as easy as changing lanes with the push of a button.
  5. Access Control: Vectara’s access control is like setting different driver profiles in a car, ensuring that each user only sees the information they are allowed to access.
  6. Explainability: In-text citations from Vectara are your navigation system’s detailed route information, showing where the data comes from and allowing you to dig deeper to improve trust and user adoption.
  7. Time to Value: Vectara’s API-first approach is like having a car that’s easy to maintain and upgrade, getting you on the road quickly without long waits.
  8. Language Agnostic: Vectara’s multilingual capabilities are like a car’s ability to navigate any country, responding in the language of your choice.
  9. Easy-to-use API: Vectara’s API is like a customizable dashboard, allowing you to control the user interface for various applications like CSM, sales, marketing, and more.
  10. Optionality: Vectara offers the flexibility to switch between different LLMs and data inputs, akin to a car that can run on multiple types of fuel and has customizable features.
  11. Real-time Updatability: Adding new information in Vectara is like updating your car’s GPS in real-time, ensuring you always have the latest routes available.
  12. Mitigate Open Source Risk: Using Vectara is like having a dedicated mechanic, ensuring you’re not left stranded during a production outage with no one to call for help.
  13. Maintenance and Upgrades: Vectara manages the platform for you, like having a full-service maintenance package, so you don’t need to worry about ongoing technical support, bug fixes, and feature upgrades.
  14. Innovation and Future-Readiness: Autonomous vehicles represent the future of transportation with cutting-edge technology. Vectara embodies the future of generative AI, integrating the latest advancements to stay ahead in the rapidly evolving AI landscape.
  15. Lower TCO: Vectara offers an 8x lower total cost of ownership, like owning a fuel-efficient car that saves you money compared to more expensive, self-built options.
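
As referenced in the Performance item above, hybrid retrieval blends a semantic (vector) relevance score with a keyword (lexical) score. Below is a minimal sketch of that blending under simple assumptions (both scores already normalized to [0, 1], a single tunable weight alpha); it shows the general technique, not Vectara's exact scoring formula.

```python
def hybrid_score(semantic: float, keyword: float, alpha: float = 0.7) -> float:
    """Blend a semantic similarity score with a keyword (lexical) score.
    alpha = 1.0 ranks purely semantically; alpha = 0.0 purely by keywords.
    Both inputs are assumed to be normalized to [0, 1]."""
    return alpha * semantic + (1.0 - alpha) * keyword

def rank(docs: list[dict], alpha: float = 0.7) -> list[dict]:
    # Each doc carries precomputed 'semantic' and 'keyword' scores for the query.
    return sorted(
        docs,
        key=lambda d: hybrid_score(d["semantic"], d["keyword"], alpha),
        reverse=True,
    )

# Example: an exact keyword match can still outrank a merely related document.
docs = [
    {"id": "a", "semantic": 0.82, "keyword": 0.10},
    {"id": "b", "semantic": 0.55, "keyword": 0.95},
]
print([d["id"] for d in rank(docs, alpha=0.5)])  # ['b', 'a']
```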

In addition to GenAI applications, organizations can also use Vectara for semantic search, which can significantly impact a business by enhancing its search capabilities and improving user experience. Key business impacts include:

  1. Increased Efficiency: Vectara’s advanced search algorithms enable faster and more accurate retrieval of relevant information, saving time for employees and customers.
  2. Enhanced Customer Satisfaction: By providing more relevant and precise search results, Vectara improves the overall user experience, leading to higher customer satisfaction and loyalty.
  3. Better Decision-Making: With quicker access to pertinent data, businesses can make informed decisions more rapidly and effectively.
  4. Competitive Advantage: Implementing sophisticated search technologies like Vectara can differentiate a business from its competitors, positioning it as a leader in innovation.
  5. Scalability: Vectara’s platform is designed to handle large volumes of data and queries, supporting business growth and scalability without compromising performance.
  6. Cost Savings: Efficient search reduces the time and resources spent on data retrieval, leading to operational cost savings.

Vectara is solving the problems that matter most. How nice is it when you complete a trip without hitting any roadblocks, pit stops, or crashes? Vectara is the autonomous vehicle of the GenAI world, enabling people to get from one point to another (pilot to ROI) in the safest, most efficient, and most productive way possible while allowing them to enjoy the cutting-edge innovations that power the journey.

Conclusion

As for GenAI as a whole, just as early cars faced numerous challenges but eventually became reliable and indispensable, generative AI is navigating through its initial hurdles to achieve greater accuracy and utility. Whether choosing to fine-tune AI models, adopt DIY RAG approaches, or opt for RAGaaS, businesses are finding ways to customize and enhance their AI systems. Like selecting from the many routes on your trip, these pathways offer various benefits tailored to different requirements, paving the way for a future where generative AI is as essential and efficient as the modern automobile.

Take Vectara for a test drive: https://console.vectara.com/signup
