BARC RAG Research White Paper
We all know artificial intelligence (AI) has a trust problem. This is especially true for generative AI (GenAI), a class of models that interpret and create content such as text, imagery, or video. Probabilistic by nature, GenAI language models do not hesitate to speculate or fabricate answers to compensate for gaps in their training data. These frequent “hallucinations,” coupled with other governance risks such as the mishandling of private data and intellectual property, threaten to make GenAI more of a liability than an asset.
This research note explores an emerging solution to the problem. Retrieval-augmented generation, or RAG, selects domain-specific data and feeds it into language models to improve their accuracy. We define RAG and examine its challenges, benefits, must-have characteristics, and common use cases. We then evaluate three architectural approaches—vector RAG for semantics, graph RAG for ontologies, and relational RAG for database values—and provide an example of a hybrid solution that combines all three. Data and AI leaders who read this report will learn the value of RAG and the guiding principles to get started.
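To make the core RAG loop concrete, the sketch below shows its two steps: retrieve the most relevant domain documents for a query, then augment the model's prompt with that retrieved context. This is a minimal illustration, not the report's reference implementation; the toy bag-of-words similarity, the sample corpus, and the prompt template are all assumptions standing in for a real embedding model and vector store.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) loop.
# The bag-of-words "embedding", sample corpus, and prompt template are
# illustrative placeholders for a real embedding model and vector store.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a term-frequency vector over whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Step 1: rank domain documents by similarity to the query, keep top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Step 2: augment the user query with retrieved context before
    passing the prompt to a language model."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

# Hypothetical domain corpus the language model was never trained on.
corpus = [
    "Q3 revenue for the Northwest region was $4.2M.",
    "The employee handbook covers the remote-work policy.",
]
prompt = build_prompt("What was Q3 revenue in the Northwest region?", corpus)
```

Because the model answers from the retrieved context rather than from its parametric memory alone, gaps in training data no longer force it to speculate, which is the accuracy gain the report examines.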