A Survey on RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models (2024)

Yujuan Ding1, Wenqi Fan1∗, Liangbo Ning1, Shijie Wang1, Hengyun Li1,
Dawei Yin2, Tat-Seng Chua3, and Qing Li1
1The Hong Kong Polytechnic University, 2Baidu Inc, 3National University of Singapore.


Abstract.

As one of the most advanced techniques in AI, Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge, providing great convenience for numerous tasks. Particularly in the era of AI-Generated Content (AIGC), the powerful capacity of retrieval to provide additional knowledge enables RAG to assist existing generative AI in producing high-quality outputs. Recently, Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation, while still facing inherent limitations such as hallucinations and out-of-date internal knowledge. Given the powerful abilities of RAG in providing the latest and helpful auxiliary information, retrieval-augmented large language models have emerged to harness external and authoritative knowledge bases, rather than relying solely on the model's internal knowledge, to augment the generation quality of LLMs. In this survey, we comprehensively review existing research on retrieval-augmented large language models (RA-LLMs), covering three primary technical perspectives: architectures, training strategies, and applications. As preliminary knowledge, we briefly introduce the foundations and recent advances of LLMs. Then, to illustrate the practical significance of RAG for LLMs, we categorize mainstream relevant work by application area, detailing the challenges of each and the corresponding capabilities of RA-LLMs. Finally, to deliver deeper insights, we discuss current limitations and several promising directions for future research.

Retrieval-Augmented Generation (RAG), Large Language Models (LLMs), Pre-training, Fine-tuning, In-context Learning, Prompting.


1. Introduction

As one of the most fundamental data mining techniques, retrieval aims to understand the input query and extract relevant information from external data sources (Kobayashi and Takeda, 2000; Singhal et al., 2001). It has found extensive application in various fields (Buttcher et al., 2016; Yin et al., 2016; O'Hare et al., 2016), such as search, question answering, and recommender systems. For instance, search engines (e.g., Google, Bing, and Baidu) are the most successful applications of retrieval in industry; they can filter and retrieve the web pages or documents most relevant to a user's query (Croft et al., 2010; Yin et al., 2016), enabling users to find the desired information effectively. Meanwhile, retrieval models, through effective data maintenance in external databases, can provide faithful and timely external knowledge, thereby serving vital functions in various knowledge-intensive tasks. Due to their powerful capabilities, retrieval techniques have been successfully incorporated into advanced generative models in the era of AI-Generated Content (AIGC) (Li et al., 2023a; Wu et al., 2024; Sheynin et al., 2023; Zhang et al., 2023a). Notably, the integration of retrieval models with language models has given rise to Retrieval-Augmented Generation (RAG) (Lewis et al., 2020c), which has emerged as one of the most representative techniques in the field of generative AI, aiming to enhance the quality of generated text content (Li et al., 2023a; Lewis et al., 2020c; Borgeaud et al., 2022).

[Figure 1]

To advance generation models and enhance the generated results, RAG incorporates information or knowledge from external data sources, which serves as a supplement to the input query or the generated output (Min et al., 2020; Khandelwal et al., 2020). Specifically, RAG first invokes the retriever to search and extract relevant documents from external databases, which are then leveraged as context to enhance the generation process (Izacard and Grave, 2021b). In practice, RAG techniques are feasible and efficient to apply to various generation tasks with simple adaptation of the retrieval component, requiring minimal or even no additional training (Ram et al., 2023). Recent studies have demonstrated the great potential of RAG not only for knowledge-intensive tasks such as Open-domain Question Answering (OpenQA) (Borgeaud et al., 2022; Guu et al., 2020; Petroni et al., 2020), but also for general language tasks (Khandelwal et al., 2020; He et al., 2021a; Xu et al., 2020) and various downstream applications (Wu et al., 2024; Liu et al., 2023).

Recent years have witnessed the rapid development of pre-trained foundation models, particularly Large Language Models (LLMs), which have demonstrated impressive performance across various tasks (Chowdhery et al., 2023; Achiam et al., 2023), including recommender systems (Zhao et al., 2024a), molecule discovery (Li et al., 2023a), and report generation (Ding et al., 2024). The great success of LLMs can be technically attributed to advanced architectures with billions of parameters pre-trained on huge corpora from various sources. These technical improvements have given rise to the remarkable emergent capabilities of LLMs (Zhao et al., 2023b, 2024a), particularly in language understanding and generation, in-context learning, and others. For instance, GPT-FAR introduces detailed prompts to teach GPT-4 to perform image tagging, statistical analysis, and text analysis for multi-modal fashion report generation (Ding et al., 2024). LLMs also achieve promising performance in recommender systems by understanding users' preferences towards items (Wang et al., 2024a; Zhao et al., 2024a). Despite their success, LLMs still suffer from intrinsic limitations (Zhao et al., 2024a, 2023b), such as a lack of domain-specific knowledge, the problem of "hallucination", and the substantial computational resources required to update the models. These problems are particularly notable in domain-specific fields like medicine and law. For instance, a recent study has demonstrated that legal hallucinations are pervasive and disturbing, with hallucination rates ranging from 69% to 88% in response to specific legal queries for state-of-the-art LLMs (Dahl et al., 2024). Moreover, tackling the hallucination problem becomes even harder given the substantial computational resources required for fine-tuning LLMs with domain-specific or up-to-date data. This, in turn, significantly hinders the widespread adoption of LLMs in various real-world applications.

To address these limitations, recent efforts have been made to take advantage of RAG to enhance the capabilities of LLMs in various tasks (Shi et al., 2023; Khandelwal et al., 2020; Borgeaud et al., 2022; Izacard and Grave, 2021a), especially those demanding up-to-date and reliable knowledge, such as Question Answering (QA), AI4Science, and software engineering. For example, Lozano et al. (2023) introduce a scientific QA system based on dynamically retrieving scientific literature. MolReGPT leverages RAG to enhance the in-context learning ability of ChatGPT for molecular discovery (Li et al., 2023a). As illustrated in Figure 1, an LLM-based dialog system cannot answer out-of-scope queries well. In comparison, with the help of RAG to retrieve relevant knowledge from external data sources and integrate it into the generation process, the dialog system succeeds in giving correct answers to the user. Given the remarkable progress in advancing LLMs with RAG, there is an imperative need for a systematic review of recent advances in Retrieval-Augmented Large Language Models (RA-LLMs).

This survey aims to provide a comprehensive overview of retrieval-augmented large language models, i.e., RA-LLMs, by summarizing representative methods from the aspects of RA-LLMs' architecture, training, and applications. More specifically, following a brief introduction to the background knowledge of LLMs in Section 2, we review existing research from several primary perspectives of RA-LLMs in terms of retrieval, generation, and augmentation in Section 3, as well as the necessity and application frequency of retrieval in RAG. Then, we summarize the main training techniques of RA-LLMs in Section 4 and various RA-LLM applications in Section 5. Finally, in Section 6, we discuss key challenges and potential directions for future exploration.

Concurrent to our survey, several related surveys focus on different aspects of RAG and LLMs. For example, Zhao et al. (2023a) specifically review multi-modal information-based RAG techniques, and Zhao et al. (2024b) discuss RAG for AIGC. Gao et al. (2023b) conduct a relatively comprehensive overview of RAG for LLMs. Our survey differs from these surveys in concentrating on technical perspectives and systematically reviewing models according to the architecture and training paradigm of RA-LLMs, as well as application tasks.

2. Background

In this section, we briefly present the background of large language models and prompt learning.

2.1. Large Language Models (LLMs)

Recently, significant breakthroughs in LLMs have revolutionized the field of artificial intelligence (Zhao et al., 2023b; Brown et al., 2020; Fan et al., 2024). Advanced LLMs are typically pre-trained on extensive data with billions of parameters and have demonstrated the ability to understand and generate human-like text, leading to advancements in various natural language processing tasks such as text generation and information retrieval (Zhao et al., 2023b, 2024a). LLMs can be adapted to a variety of downstream tasks by fine-tuning them on specific datasets, allowing them to specialize in particular domains or applications. In general, most existing LLMs can be broadly divided into three main categories: Encoder-only, Decoder-only, and Encoder-Decoder models.

Encoder-only models, such as the BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) family of models, process input text by encoding it into a high-dimensional space. The key feature of Encoder-only models is their bi-directional nature, meaning that they can take into account both the left and right context of each token when encoding it. This bi-directionality allows Encoder-only models to better understand the meaning of words in context, which is crucial for tasks like sentiment analysis, review reading comprehension, and text classification (Xu et al., 2019; Devlin et al., 2019). In contrast, Decoder-only models generate text in a left-to-right fashion. As a representative Decoder-only model, GPT (Generative Pre-trained Transformer) (Radford et al., 2018) predicts the next token in a sequence based on the context provided by the previous tokens. This architecture makes such models particularly effective for tasks like language generation, code generation, and creative writing. Encoder-Decoder models, such as T5 (Text-To-Text Transfer Transformer) (Raffel et al., 2020), uniquely transform a variety of NLP tasks into text generation problems. To be more specific, the encoder in T5 processes the input sequence to capture its meaning, while the decoder generates the output sequence based on the encoded information. This architecture is well-suited for tasks that involve converting one sequence into another, such as machine translation, summarization, and conversational response generation.

2.2. Prompt Learning

2.2.1. Prompt Engineering

Due to the massive number of parameters in LLMs, prompt learning has emerged as a paradigm to leverage the power of LLMs for various tasks (Zhao et al., 2023b, 2024a) instead of fine-tuning the models extensively. Prompt learning carefully designs inputs that guide the model to perform downstream tasks. For example, early methods (Petroni et al., 2019; Brown et al., 2020) provide manually crafted templates to handle various NLP tasks. Specifically, Encoder-only models like BERT typically adopt cloze prompts because they closely match the form of their pre-training task (Petroni et al., 2019; Cui et al., 2021). For other models like GPT, prefix prompts tend to be more suitable as they mesh well with generation tasks (Brown et al., 2020). However, manually designed prompts rely on human experience without any guarantee of effectiveness. To address this limitation, soft prompt tuning was developed to learn trainable continuous prompt embeddings (Li and Liang, 2021; Vu et al., 2022; Tu et al., 2022). For instance, Prefix-Tuning (Li and Liang, 2021) prepends a sequence of prefix embeddings to the input, which can be trained and updated. This approach frees prompts from having to be real text, giving more flexibility in prompt design. However, due to the lack of domain-specific knowledge, the model might still not generate accurate responses when facing new tasks.

2.2.2. In-Context Learning (ICL)

To overcome the limitations of vanilla prompt learning, recent efforts (Liu et al., 2022a; Kim et al., 2022; Zhang et al., 2023d) have developed in-context learning (ICL). ICL is a specific method of prompt learning that gives the model a few task demonstrations within the prompt. This paradigm allows pre-trained LLMs to recognize the pattern provided by the demonstrations and solve novel tasks without fine-tuning. For example, by carefully selecting a few demonstrations, GPT-3 (Brown et al., 2020) has shown the capability to perform few-shot tasks (Liu et al., 2022a). This success indicates that LLMs have a remarkable ability to rapidly adapt to new tasks based on task-specific knowledge.
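To make the ICL paradigm concrete, the snippet below is a minimal sketch of how a few-shot prompt might be assembled; the sentiment-classification task, demonstrations, and template are illustrative assumptions rather than a setup from any surveyed paper.

```python
# Minimal sketch of assembling an in-context learning (ICL) prompt.
# The task, demonstrations, and template are illustrative assumptions.

demonstrations = [
    ("The movie was a delightful surprise.", "positive"),
    ("I regret buying this phone.", "negative"),
]

def build_icl_prompt(query: str) -> str:
    """Concatenate a few labeled demonstrations before the test query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

print(build_icl_prompt("The battery dies within an hour."))
```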

Despite its effectiveness, ICL usually relies heavily on the quality of the provided demonstrations, which may lead to sub-optimal outputs. Even worse, ICL may lack the necessary information or prior knowledge to guide the LLMs in generating accurate responses. To address the aforementioned limitations of ICL, more recent studies introduce Retrieval-Augmented Generation (RAG) technologies (Lewis et al., 2020c; Ram et al., 2023; Shi et al., 2023). By integrating retrieval with generation, RAG models provide a promising direction for enhancing the performance and adaptability of LLMs across various tasks.

3. Retrieval-Augmented Large Language Models (RA-LLMs)

The RAG framework in the era of LLMs generally consists of three major processes, retrieval, generation, and augmentation, as well as a mechanism to determine whether retrieval is needed. In this section, we will introduce the important techniques involved in each component.

[Figure 2]
[Figure 3]

3.1. Retrieval

Given the query from the input of LLMs, the retrieval process in RAG aims to provide relevant information from external knowledge sources, which can be either open-sourced or closed-sourced, as shown in Figure 2. The key component, the retriever, as further detailed in Figure 3, consists of several procedures that function as a whole to measure the relevance between the query and the documents in the database for effective information retrieval. The specific retrieval pipeline is further determined by whether pre- and post-retrieval processes are included. In this subsection, we introduce the major techniques involved in the retrieval of traditional and LLM-based RAG, including the retriever type, retrieval granularity, pre- and post-retrieval enhancement, and database construction.

3.1.1. Retriever Type

Retrieval methods can be generally categorized into two types, sparse and dense, based on how they encode information. Sparse retrieval is word-based and mostly applied to text retrieval, while dense retrieval embeds queries and external knowledge into vector spaces and can easily be applied to various data formats.

As a straightforward approach, sparse retrieval, e.g., TF-IDF and BM25 (Sparck Jones, 1972; Robertson et al., 2009), usually relies on inverted-index matching over the raw data input. For example, many studies directly apply BM25 for passage-level retrieval to facilitate their RAG (Chen et al., 2017; Ram et al., 2023; Zhong et al., 2022; Jiang et al., 2023; Zhou et al., 2022; Xu et al., 2023b), where passages are represented as bags of words and ranked based on term and inverse document frequencies (Izacard and Grave, 2021b). Beyond providing supplementary content to enhance the generator's input, sparse retrieval has also been used to find examples serving as ICL demonstrations for RA-LLMs (Ye et al., 2023b; Luo et al., 2023b; Rubin et al., 2022; Agrawal et al., 2023; Sia and Duh, 2023). The main limitation of applying sparse retrieval in RAG is its non-trainable nature, which makes retrieval performance rely heavily on the quality of database construction and query generation. Moreover, such fixed term-based methods only support similarity retrieval and cannot be adapted to other retrieval criteria demanded in LLM applications, such as diversity (Drozdov et al., 2022).
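To illustrate how such term-based scoring works, the following is a compact, self-contained sketch of BM25 passage scoring; the whitespace tokenization and parameter values (k1, b) are simplifying assumptions.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each passage in `corpus` (list of token lists) against `query` (token list)."""
    n_docs = len(corpus)
    avg_len = sum(len(doc) for doc in corpus) / n_docs
    # Document frequency for each query term.
    df = {t: sum(1 for doc in corpus if t in doc) for t in set(query)}
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        score = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avg_len))
            score += idf * norm
        scores.append(score)
    return scores

corpus = [p.lower().split() for p in [
    "Retrieval augmented generation combines a retriever with a generator",
    "BM25 ranks passages by term frequency and inverse document frequency",
]]
print(bm25_scores("what is bm25 ranking".lower().split(), corpus))
```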

Dense retrieval, on the contrary, embeds the query and documents into a continuous vector space with certain criteria, for example, semantic similarity (Karpukhin et al., 2020). Dense retrieval methods can often be trained and therefore offer more flexibility and potential for adaptation. As the key component of a dense retriever, the embedding models have subtly different designs in existing RAG models. A simple design (Yogatama et al., 2021; Khandelwal et al., 2020; Lewis et al., 2020a; Wu et al., 2022) is to directly use a part of the generation model as the embedding layer of the retriever, which may enhance the alignment between the retrieval and generation processes. BERT-based backbones (Devlin et al., 2019) are widely applied in retrieval models. One common retriever design is to construct two-stream encoders with the BERT structure (one encoder for the query and the other for the documents), also called a bi-encoder (Wu et al., 2020; Shi et al., 2023). Early-stage RAG methods tend to freeze (Borgeaud et al., 2022; Ram et al., 2023) or partially freeze (Lewis et al., 2020c) the parameters of the retriever to perform general-level relevant knowledge extraction and focus more on knowledge utilization and generator fine-tuning. Large-scale specialized pre-training further enables RAG models to excel in more knowledge-intensive tasks. One typical success is the Dense Passage Retriever (DPR) (Karpukhin et al., 2020), which uses a BERT-based backbone and is pre-trained specifically for the OpenQA task with question-answer pair data. DPR has shown strong capacity as a pre-trained retriever, facilitating many RAG models to succeed in various downstream tasks (Lewis et al., 2020c; Izacard and Grave, 2021b; Siriwardhana et al., 2023; Singh et al., 2021; Shi et al., 2023). It has also been regarded as the first step in the RAG paradigm for improving the performance of LLMs, which may further enhance the alignment between the embeddings of queries and relevant textual data through fine-tuning (Cheng et al., 2023). A recent study (Reichman and Heck, 2024) has also discovered that DPR training decentralizes how knowledge is stored in the network, creating multiple access pathways to the same information. With effective fine-tuning, bi-encoder retrievers are also widely applied in ICL-based RAG (Rubin et al., 2022; Poesia et al., 2022; Lu et al., 2023; Ye et al., 2023b; Milios et al., 2023; Li and Qiu, 2023). Specifically, they are most often used for sentence-embedding similarity-based retrieval, as well as for special requirements in ICL, such as diverse example retrieval (Ye et al., 2023b).

Another stream of dense retrievers widely applied in RA-LLMs has a one-encoder structure, which may be based on Transformer, BERT, or other off-the-shelf sequence modeling backbones. These one-encoder retrievers are generally pre-trained on large-scale unaligned documents by contrastive learning (Reichman and Heck, 2024) and may therefore excel in versatility, meaning that they can transfer and generalize better to new domains or tasks. Such general-purpose pre-trained retrievers, e.g., Contriever (Gautier et al., 2022) and Spider (Ram et al., 2022), are more flexible to use in LLMs targeting various tasks and have demonstrated their effectiveness in many RA-LLM methods, such as In-Context RALM (Ram et al., 2023), Atlas (Izacard et al., 2023), Self-RAG (Asai et al., 2023b), and others (Shi et al., 2023). According to experimental results in existing studies (Yu et al., 2023a), for open-domain QA tasks, when cooperating with InstructGPT (Ouyang et al., 2022), applying a general-purpose pre-trained retriever (Contriever) without fine-tuning achieves performance comparable to a sparse retriever (BM25). However, both are worse than a DPR model fine-tuned on target datasets, showing the effectiveness of fine-tuning on targeted tasks and data.
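The sketch below illustrates the basic dense-retrieval workflow: documents are embedded offline and queries are matched by inner-product similarity at inference time. The `encode` function here is a hypothetical hashing stand-in for a trained encoder such as DPR or Contriever.

```python
import numpy as np

def encode(texts, dim=64):
    """Placeholder hashing encoder; replace with a trained (bi-)encoder in practice.
    Note: Python's string hash is only stable within a single process."""
    vecs = []
    for text in texts:
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        vecs.append(v / (np.linalg.norm(v) + 1e-8))
    return np.stack(vecs)

doc_texts = [
    "DPR encodes questions and passages with two BERT encoders.",
    "Contriever is pre-trained with contrastive learning on unlabeled text.",
]
doc_embs = encode(doc_texts)           # pre-computed offline and indexed

def retrieve(query, k=1):
    q = encode([query])[0]
    scores = doc_embs @ q              # inner-product similarity
    top = np.argsort(-scores)[:k]
    return [(doc_texts[i], float(scores[i])) for i in top]

print(retrieve("how is DPR trained"))
```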

3.1.2. Retrieval Granularity

Retrieval granularity denotes the retrieval unit in which the corpus is indexed, e.g., document, passage, token, or another level such as entity. For RAG, the choice of retrieval granularity can significantly impact the overall effectiveness and efficiency of the model, as it determines the storage space of the database as well as the computational cost of searching (Asai et al., 2023a). Early-stage retrieval-augmented language models (Chen et al., 2017) propose to retrieve whole documents and then apply a machine comprehension model trained to detect answer spans in the returned documents, which focuses more on language reading and locating key information in the document. In generative language models, chunk retrieval (also called passage retrieval in some references (Karpukhin et al., 2020; Guu et al., 2020; Jiang et al., 2023)) is common and has been used in both traditional and LLM-based RAG models such as REALM (Guu et al., 2020), RAG (Lewis et al., 2020c), and Atlas (Izacard et al., 2023). A more fine-grained option, token retrieval, can be searched faster but places a heavier storage burden on the database. Token retrieval is more suitable in cases requiring rare patterns or out-of-domain data (Khandelwal et al., 2020), and it cooperates well with the every-token retrieval strategy as applied in kNN-LM and similar work (Yogatama et al., 2021; He et al., 2021b; Min et al., 2023). In comparison, a text chunk may contain compact and complete information with less redundancy and irrelevance, therefore becoming the mainstream retrieval granularity for text in RAG.

Another major retrieval granularity proposed in RAG is entity retrieval. Unlike the above types of granularity, entity retrieval is designed from the perspective of knowledge rather than language. Févry et al. (2020) introduce the Entities as Experts (EAE) model, which partitions the parameter space of language models according to entity identity. The EAE model aims to learn entity representations from text along with the other model parameters using the Wikipedia database and represents knowledge with an entity memory. At a more fine-grained level, de Jong et al. (2022) propose to build the knowledge base by learning and retrieving mentions rather than entities. Overall, applying entity- or mention-level retrieval in RAG is more effective for entity-centric tasks and more space-efficient compared to token-wise retrieval.

3.1.3. Pre-retrieval and Post-retrieval Enhancement

To ensure retrieval quality, i.e., to increase the accuracy and relevance of the retrieved results, various pre- and post-retrieval strategies have been proposed to further enhance the input and output of the retriever. Wang et al. (2023f) propose Query2doc, a query expansion approach that generates pseudo-documents by few-shot prompting LLMs and expands the query with the relevant information in these pseudo-documents, which can aid query disambiguation and guide the retrievers. They empirically demonstrate that such a method can boost the performance of both sparse and dense retrievers (Karpukhin et al., 2020) on ad-hoc information retrieval datasets. Similarly, Gao et al. (2023a) propose the Hypothetical Document Embeddings (HyDE) method, which instructs an LLM to generate hypothetical documents for the given query. The hypothetical documents are then used as new queries, embedded, and used to search for neighbors with the dense retriever.
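The following sketch outlines the HyDE-style flow under stated assumptions: `call_llm` and `dense_search` are hypothetical stand-ins for a real LLM API and a vector index, not actual library calls.

```python
# Sketch of HyDE-style pre-retrieval enhancement: prompt an LLM to write a
# hypothetical answer document, then use that document (rather than the raw
# query) to search the dense index.

def call_llm(prompt: str) -> str:
    # Stand-in for an actual LLM call; returns a pseudo-document.
    return "A pseudo-document that plausibly answers the question, possibly with errors."

def dense_search(text: str, top_k: int = 5) -> list[str]:
    # Stand-in for embedding `text` and querying a vector index.
    return [f"retrieved passage {i}" for i in range(top_k)]

def hyde_retrieve(query: str, top_k: int = 5) -> list[str]:
    prompt = f"Write a short passage that answers the question:\n{query}\nPassage:"
    hypothetical_doc = call_llm(prompt)      # may contain hallucinated specifics
    # The hypothetical document is used only as a retrieval query, never shown to users.
    return dense_search(hypothetical_doc, top_k=top_k)

print(hyde_retrieve("Which retriever does REALM use?"))
```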

Another pre-retrieval strategy, query rewriting (Ma et al., 2023a), aims to close the gap between the input text and the knowledge needed for retrieval by reformulating the original question into a version more conducive to retrieval. Specifically, Ma et al. (2023a) propose the Rewrite-Retrieve-Read framework, which prompts an LLM to generate the query for the retrieval function. The motivation of the rewriting step is to clarify the retrieval need in the new query, easing the burden on the retrieval function to comprehend the input and enhancing the output, i.e., the retrieved relevant information. They test both a frozen LLM and a trainable model as the rewriter; both outperform naive RAG or generation models, though performance varies across the tested QA datasets.

Yu et al. (2023c) propose query augmentation, which combines the original query and the preliminarily generated outputs into a new query, which is further used to retrieve relevant information from the external database. The retrieved results can inspire the language model to rethink the generated results and enhance them. Compared to using only the original query, such augmentation may contribute more relevant information retrieved from the corpus by directly clarifying the query-output relationship. Including the initial output in the new query further enhances the lexical and semantic overlap between the supporting documents to be retrieved and the given question. Query augmentation achieves overall better performance among these query enhancement strategies since it can process all retrieved knowledge collectively while generating answers (Wang et al., 2024c).

Post-retrieval enhancement denotes the procedures that process the top-k documents extracted by the retriever before feeding them to the generator, for the sake of better alignment between the retrieval and generation stages (Yang et al., 2023b), particularly for closed-source generators such as LLMs. For example, Yang et al. (2023b) propose the Pluggable Reward-driven Context Adapter (PRCA), which fine-tunes a lightweight adapter instead of the generator on specific datasets. It also distills the retrieved documents through reinforcement learning with rewards derived from the generator. Glass et al. (2022) propose the Retrieve-Rerank-Generate (Re2G) method, which assembles documents obtained by different retrieval approaches via a reranking operation to boost the robustness of the retrieval results. Another reason to apply post-retrieval enhancement is that the retrieved information may sometimes be irrelevant or noisy, which might not help the generation model with the task, or even worse, may harm the generation process (Wang et al., 2023a). Wang et al. (2023a), Asai et al. (2023b), and Yu et al. (2023c) propose different strategies to mitigate the noise in retrieved knowledge documents. However, Xiong et al. (2023) empirically find that these methods depend on the LLM's confidence levels, which might not be as precise as expected. To address this problem, Wang et al. (2024c) propose BlendFilter, which simultaneously considers pre-retrieval query generation blending and post-retrieval knowledge filtering. This method can tackle both complex questions and noisy retrieved knowledge, thereby comprehensively enhancing RA-LLM performance.
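As a generic illustration of post-retrieval enhancement, the sketch below reranks the retrieved documents with a relevance scorer and filters out low-scoring, likely-noisy ones before they reach the generator; the toy lexical-overlap scorer and threshold are illustrative assumptions, not the exact procedure of PRCA, Re2G, or BlendFilter.

```python
def rerank_and_filter(query: str, docs: list[str], scorer, keep: int = 3,
                      min_score: float = 0.2) -> list[str]:
    """Rerank retrieved docs by a relevance scorer and drop low-scoring ones."""
    scored = [(scorer(query, d), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for s, d in scored[:keep] if s >= min_score]

def overlap_scorer(query: str, doc: str) -> float:
    # Toy scorer: lexical overlap; a real system would use a cross-encoder
    # or a reward-driven adapter.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

docs = ["BM25 is a sparse retrieval method.", "Today is a sunny day."]
print(rerank_and_filter("what is sparse retrieval", docs, overlap_scorer, min_score=0.5))
```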

More recently, advanced RAG pipelines have been proposed that use LLMs to generate reasoning paths and plans with an Information Retrieval (IR) module to iteratively retrieve knowledge and enhance LLM-based generation (Yao et al., 2023; Xu et al., 2023a; Shao et al., 2023). However, Zhu et al. (2023) point out that if the outputs of the IR module and the LLM are of low quality, the retrieval and generation processes will hinder each other under such an iterative guidance pipeline. To overcome this barrier, they propose a new reasoning approach for query and retrieved-knowledge enhancement. Post-retrieval strategies may also serve to enhance the compatibility between the retrieved results and the generation models. For example, one of the main limitations of existing LLMs is the length of the input, which prevents long retrieved documents from being directly incorporated into existing RA-LLMs. To address this limitation, Xu et al. (2023b) propose Retrieve, Compress, Prepend (RECOMP), which adds an intermediate step that processes the retrieved documents into a textual summary before in-context augmentation in the generation process.

3.1.4. Database

Retrieval in RAG is conducted over an external knowledge source, which can be closed- or open-sourced (Ma et al., 2023a; Menick et al., 2022), as illustrated in Figure 2. A closed-sourced database generally stores knowledge as key-value pairs, which can be constructed in various ways. The keys are primarily used for similarity matching and are either sparse vectors, as in BM25, or dense embeddings from the retrieval encoder. The value depends on the specific retrieval target and is raw text in most cases (Guu et al., 2020; Lewis et al., 2020c; Izacard and Grave, 2021b; Borgeaud et al., 2022; Lewis et al., 2020a; Seo et al., 2019). For example, in early RAG (Lewis et al., 2020c), each Wikipedia article is split into disjoint 100-word chunks, making a total of 21M documents; each chunk is encoded into a dense embedding, which is saved in the database as the key, with the chunk itself as the value. The value can also store tokens, one per entry, as applied in kNN-LM and SPALM (Yogatama et al., 2021; Khandelwal et al., 2020). The source of the database depends on the specific application domains and tasks. Wikipedia is one of the most commonly applied general retrieval corpora in previous RAG work; it stores factual information and has several versions differing in scale, from the billion-token level (Khandelwal et al., 2020; Yogatama et al., 2021; Lewis et al., 2020c; Guu et al., 2020; Févry et al., 2020; de Jong et al., 2022; Xu et al., 2023b; Shi et al., 2023; Ram et al., 2023) to the trillion-token level (Borgeaud et al., 2022). Domain-specific databases are also used for downstream tasks. For example, for the code generation task, Zan et al. (2022) collect API information and code files of public libraries to build their APIRetriever database. In addition, Zhou et al. (2022) propose to use a documentation pool frequently updated with new content (newly released libraries) in their model.
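The sketch below shows, under simplifying assumptions, how such a closed-sourced key-value database might be constructed from raw articles: text is split into disjoint 100-word chunks, each chunk's dense embedding serves as the key, and the raw chunk is stored as the value. The hashing `embed` function is a hypothetical stand-in for a real encoder.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hypothetical placeholder encoder; a real system would use a trained model.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def build_database(articles: list[str], chunk_words: int = 100):
    keys, values = [], []
    for article in articles:
        words = article.split()
        for start in range(0, len(words), chunk_words):
            chunk = " ".join(words[start:start + chunk_words])
            keys.append(embed(chunk))   # dense key used for similarity matching
            values.append(chunk)        # raw text returned to the generator
    return np.stack(keys), values

keys, values = build_database(["some long article text " * 60])
print(len(values), keys.shape)
```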

Applying an Internet search engine (Luo et al., 2023a), such as Bing or Google, avoids maintaining a search index and gives access to up-to-date knowledge (Lazaridou et al., 2022). Meanwhile, it provides a broader knowledge base than a closed-sourced database (Asai et al., 2023b; Lazaridou et al., 2022). Internet search has been widely incorporated with black-box LLMs and shows effectiveness for different functions such as knowledge augmentation (Lazaridou et al., 2022), fact-checking (Menick et al., 2022), and LLM agent enhancement (Yao et al., 2023). Compared to traditional RAG, Internet search has been leveraged more often as the retriever in RA-LLMs, owing to the extraordinary capability of LLMs to act as the reader and comprehend the search results, i.e., the retrieved documents, as well as their ability to use tools to process and analyze them (Ma et al., 2023a). Existing studies (Yu et al., 2023a) have shown that leveraging search engines is particularly effective for LLMs such as InstructGPT on zero-shot knowledge-intensive tasks such as OpenQA and fact checking.

3.2. Generation

The design of the generator heavily depends on the downstream tasks. For most text generation tasks, Decoder-only and Encoder-Decoder are the two dominant structures (Zhao et al., 2023b). The recent development of commercial closed-sourced large foundation models has made black-box generation models mainstream in RA-LLMs. In this part, we briefly review studies with these two types of generators: parameter-accessible (white-box) and parameter-inaccessible (black-box).

3.2.1. Parameter-Accessible Generators (White-box)

The Encoder-Decoder structure processes the input and the target independently with different sets of parameters, with a cross-attention component connecting input tokens to target tokens. Representative Encoder-Decoder models include T5 (Raffel et al., 2020) and BART (Lewis et al., 2020b). In comparison, Decoder-only models process the concatenation of inputs and targets, so the representations of the two parts are built concurrently layer by layer as they propagate up the network. These two types of generators are widely applied in existing RAG work. For example, RAG (Lewis et al., 2020c) and Re2G (Glass et al., 2022) employ BART; FID (Izacard and Grave, 2021b) and EMDR2 utilize T5. Other models (Borgeaud et al., 2022; Li et al., 2022a) leverage Transformer-based Encoder-Decoder architectures with customized designs. Generators in RAG differ from general ones by incorporating retrieved data to enhance generation accuracy and relevance. Furthermore, white-box generators allow parameter optimization and can be trained to adapt to different retrieval and augmentation approaches for better generation performance.

3.2.2. Parameter-Inaccessible Generators (Black-box)

A certain proportion of LLMs are released without disclosing their internal structure or granting access to their parameters, especially the particularly large-scale ones such as the GPT series (Achiam et al., 2023), Codex (Chen et al., 2021), and Claude; these are called black-box generation models. Such generators only allow feeding queries (input) and receiving responses (output), while not allowing the internal structure to be altered or the parameters to be updated. From another perspective, even LLMs that are open for fine-tuning are large in scale and difficult to tune for downstream domain-specific tasks with only a limited amount of data. Black-box RA-LLMs therefore focus more on the retrieval and augmentation processes, trying to enhance the generator by augmenting the input (also called the prompt in the context of LLMs) with better knowledge, guidance, or examples for generation. For example, Rubin et al. (2022) propose to train a prompt retriever with data labeled by language models themselves, which can be used to provide better examples for in-context learning and thereby enhance the final generation performance. Xu et al. (2023b) propose to compress the retrieved documents before in-context integration, which can reduce computational costs and also relieve the burden on LMs to identify relevant information in long retrieved documents.

3.3. Retrieval Integration for Generation Augmentation

Augmentation describes the technical process that integrates the retrieval and generation parts, which is the essential part of RA-LLMs. In this subsection, we introduce three main designs of augmentation, conducted at the input, output, and intermediate layers of the generator, respectively, as illustrated in Figure 2.

3.3.1. Input-Layer Integration

A common way to integrate retrieved information/documents is to combine them with the original input/query and jointly pass them to the generator, which is called input-layer integration. For example, In-Context RALM (Ram et al., 2023) applies input-layer integration by concatenating the original input and all retrieved documents into a single sequence as the new input for the generation model. Despite its effectiveness, such integration is limited by the number of retrieved documents, since the concatenated new input may be too long to be processed by the generation model. In-Context RALM alleviates this limitation by removing tokens from the beginning of the new input. To avoid the information loss of such a token-removal strategy, FID (Izacard and Grave, 2021b) employs a different integration method that processes each retrieved document independently in the encoder. This strategy is scalable to a large number of contexts as it only performs self-attention over one context at a time in the follow-up processing. Atlas (Izacard et al., 2023) and REPLUG (Shi et al., 2023) apply a similar parallel integration by concatenating the query with one retrieved document at a time. In general, most black-box generation-based RAG methods apply input-layer integration, since neither the intermediate layers of the generation model nor the output distribution is accessible.
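A minimal sketch of input-layer integration in the style of In-Context RALM is given below: retrieved documents are prepended to the query, and tokens are dropped from the beginning if the prompt exceeds the generator's context budget. The word-level budget is a simplification of real tokenizer-based limits.

```python
def build_augmented_input(query: str, retrieved_docs: list[str],
                          max_words: int = 2048) -> str:
    """Prepend retrieved documents to the query and truncate from the beginning."""
    context = "\n\n".join(retrieved_docs)
    prompt = f"{context}\n\nQuestion: {query}\nAnswer:"
    words = prompt.split()
    if len(words) > max_words:
        # Truncation from the beginning keeps the question and the most recent context.
        words = words[-max_words:]
    return " ".join(words)

docs = ["Passage about topic A ...", "Passage about topic B ..."]
print(build_augmented_input("What is topic A?", docs, max_words=30))
```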

More specifically for LLMs, input-layer integration may use the retrieved content as (additional) prompts or demonstrations, in addition to using it as a supplement to the original input as in traditional RAG (Rubin et al., 2022). Prompt retrieval aims to automatically find suitable natural language prompts through retrieval to teach the LLM to learn in context (Brown et al., 2020) or to induce the LLM to reason (Wei et al., 2022). It may boost the zero-shot ability of LLMs without delicate prompt engineering. For example, Cheng et al. (2023) propose to learn a prompt retriever based on input-prompt pair data with score labels produced by a frozen LLM.

3.3.2. Output-Layer Integration

Another kind of augmentation is post-hoc, i.e., output-layer integration, which joins the retrieval and generation results. For example, kNN-LM (Khandelwal et al., 2020) interpolates two next-token distributions in prediction: one induced by the LM and the other induced by the nearest neighbors from the retrieval corpus. Output-layer linear integration (Grave et al., 2017; Zhong et al., 2022) is flexible to apply, since it can be plugged into most generation models without additional training. However, the simplicity of output-layer integration also limits the model's ability to reason about the retrieved text. To tackle this limitation, Yogatama et al. (2021) propose to add an extra gating network to post-process the retrieved data, achieving comparatively better performance. For LLMs, output-layer integration is as reasonable and adaptive as input-layer integration. REFEED (Yu et al., 2023c) proposes an answer-refining mechanism that applies an LLM to evaluate the retrieved information and adjust the initial answer accordingly to enhance the accuracy of the response. Similarly, Zhang et al. (2023b) propose the COMBO framework, which matches LLM-generated passages with retrieved counterparts into compatible pairs based on pre-trained discriminators. The passage pairs are then handled by a Fusion-in-Decoder-based model (Izacard and Grave, 2021b) to derive the final answer.
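The following sketch illustrates the kNN-LM style of output-layer integration: the LM's next-token distribution is interpolated with a distribution induced by retrieved nearest neighbors. The toy vocabulary, distances, and interpolation weight are assumptions for illustration.

```python
import numpy as np

def knn_lm_next_token(p_lm: np.ndarray, neighbor_tokens: list[int],
                      neighbor_dists: list[float], lam: float = 0.25) -> np.ndarray:
    """p_lm: LM distribution over the vocabulary; neighbors vote with softmax(-distance)."""
    p_knn = np.zeros_like(p_lm)
    weights = np.exp(-np.asarray(neighbor_dists))
    weights /= weights.sum()
    for tok, w in zip(neighbor_tokens, weights):
        p_knn[tok] += w
    # Linear interpolation of the two next-token distributions.
    return lam * p_knn + (1.0 - lam) * p_lm

p_lm = np.array([0.1, 0.6, 0.3])   # toy 3-token vocabulary
print(knn_lm_next_token(p_lm, neighbor_tokens=[2, 2, 0], neighbor_dists=[0.1, 0.4, 1.2]))
```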

3.3.3. Intermediate-Layer Integration

Compared to the above two non-parametric approaches, a more engaging augmentation is to design a semi-parametric module that integrates the retrieved results through the internal layers of the generation model, which is called intermediate-layer integration. Such integration may add complexity, but it promises to enhance the capability of the generation model through effective training. Typically, a Transformer module is introduced to inject the retrieved information (mostly encoded into dense representations) into the generation model, interacting with its intermediate representations during generation. For example, RETRO (Borgeaud et al., 2022) introduces a Chunked Cross-Attention (CCA) layer to process the retrieved chunks in the generator blocks, and Wu et al. (2022) introduce a kNN-Augmented Attention Layer. Similarly, EAE (Févry et al., 2020) and TOME (de Jong et al., 2022) use an Entity Memory and a Memory Attention layer to incorporate the retrieved entities and entity mentions, respectively. Such intermediate-layer integration can incorporate many retrieved blocks frequently and efficiently, enhancing the capability of the whole RAG model. It offers an efficient alternative for incorporating a large number of frequently retrieved text chunks, which are challenging to handle with input-layer integration due to the input length limit of LMs (Borgeaud et al., 2022). However, it should also be noted that intermediate-layer integration requires deep access to the generation model's internals, which is not feasible for most LLMs accessible only through inference APIs (Ma et al., 2023a).

3.4. Retrieval Augmentation Necessity and Frequency

The retrieval operation in LLM-based generation generally aims to supplement knowledge to enhance generation. Although retrieval-augmented models have shown promise, they have been criticized for not being a universal solution (Li et al., 2022b; Petroni et al., 2020), as indiscriminately augmenting LLMs with irrelevant passages can override potentially correct knowledge already possessed by LLMs and instead yield incorrect responses (Maekawa et al., 2024). Thakur et al. (2023) contribute a human-annotated dataset to help evaluate the robustness of LLMs against errors in external retrieved knowledge and observe that LLMs may double their hallucination rate on non-relevant retrieved passages compared to relevant ones. Therefore, it is critical for RA-LLMs to accurately recall prior knowledge while selectively incorporating retrieved information only when necessary, which is the path to robust RA-LLMs.

Most existing methods determine the necessity of retrieval based on the preliminary answers of LLMs or their internal reasoning results (Ram et al., 2023; Min et al., 2022). For example, Self-RAG (Asai et al., 2023b) introduces special tokens to assess the necessity of retrieval and control retrieval behavior. Other methods design iterative prompts to decide whether extra information is required during generation, and thereby whether retrieval or other actions need to be invoked for LLMs (Yao et al., 2023; Wei et al., 2022). In traditional RAG, retrieval necessity judgment has also been explored and addressed with intuitive approaches such as assessing the confidence of the logits produced by the generation models (Jiang et al., 2021; Kadavath et al., 2022; He et al., 2021b). Such solutions are also applicable to RA-LLMs; for example, FLARE (Jiang et al., 2023) dynamically triggers retrieval if the logits fall below a specific threshold. More flexibly, Tan et al. (2024) introduce SlimPLM, a collaborative approach that detects missing knowledge in LLMs with a slim proxy model, which generates a "heuristic answer". The "heuristic answer" is used to assess the necessity of retrieval and, if retrieval is necessary, to facilitate it through query rewriting.
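The sketch below captures the spirit of such confidence-based triggering (as in FLARE): retrieval is invoked only when some token in a tentatively generated sentence falls below a probability threshold. Here, `generate_with_probs` and `retrieve` are hypothetical stand-ins for the generator and retriever, and the threshold is an illustrative assumption.

```python
def needs_retrieval(token_probs: list[float], threshold: float = 0.4) -> bool:
    """Trigger retrieval if any generated token has low confidence."""
    return min(token_probs) < threshold

def generate_sentence(query: str, context: str, generate_with_probs, retrieve) -> str:
    sentence, token_probs = generate_with_probs(query, context)
    if needs_retrieval(token_probs):
        # Low confidence: fetch supporting passages and regenerate with them.
        new_context = context + "\n" + "\n".join(retrieve(sentence))
        sentence, _ = generate_with_probs(query, new_context)
    return sentence

# Toy usage with stub components.
stub_generate = lambda q, c: ("a tentative sentence",
                              [0.9, 0.3, 0.8] if not c else [0.9, 0.8, 0.85])
stub_retrieve = lambda s: ["retrieved supporting passage"]
print(generate_sentence("some question", "", stub_generate, stub_retrieve))
```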

In traditional RAG, which rarely considers retrieval necessity, retrieval frequency (also called retrieval stride) is an important design aspect that determines the degree to which retrieval is used in generation, thereby greatly affecting the overall performance of RAG models (Ram et al., 2023). Retrieval frequency controls how much the model relies on the retrieval results, affecting both its efficiency and effectiveness. When the necessity of retrieval is not considered, the retrieval frequency is often pre-defined and fixed, with three common settings: one-time, every-n-token, and every-token. One-time retrieval invokes the retrieval function only once and tries to find all desired information in that single operation. It is usually performed at the beginning of the generation process, after which all retrieved documents are provided to the generation model along with the original input, as applied in REALM (Guu et al., 2020). One-time retrieval is more suitable for cases in which the information needed from external databases is obvious to LLMs (Jiang et al., 2023). However, for language tasks requiring long-form output, such as open-domain summarization, the dependency among output tokens is more important to consider during generation. In these cases, documents pre-retrieved through one-time retrieval might not suffice to support the generation of the whole output sequence, which calls for in-generation retrieval operations. To this end, In-Context RALM (Ram et al., 2023) and RETRO (Borgeaud et al., 2022) apply every-n-token retrieval during generation for better augmentation. In comparison, kNN-LM (Khandelwal et al., 2020) adopts a more frequent retrieval strategy, retrieving information for the prediction of every token during generation. Overall, applying different retrieval frequencies can impact both the effectiveness and efficiency of the whole RAG method. For example, more frequent retrieval leads to better performance but also increases the computing cost (Ram et al., 2023). Choosing the retrieval frequency is essentially a trade-off between computing cost and performance.
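The following sketch contrasts one-time and every-n-token retrieval within a single generation loop, in the spirit of In-Context RALM and RETRO; `retrieve` and `generate_next_token` are hypothetical stand-ins, and the stride value is an illustrative assumption.

```python
def generate_with_stride(prompt_tokens, max_new_tokens, stride,
                         retrieve, generate_next_token):
    """Retrieve once at the start, then refresh the retrieved context every `stride` tokens."""
    tokens = list(prompt_tokens)
    context_docs = retrieve(tokens)          # one-time retrieval would stop here
    for step in range(max_new_tokens):
        if step > 0 and step % stride == 0:
            # Every-n-token retrieval: refresh the context using the latest tokens as the query.
            context_docs = retrieve(tokens[-stride:])
        tokens.append(generate_next_token(tokens, context_docs))
    return tokens

# Toy usage with stub components.
out = generate_with_stride(
    ["the", "capital", "of"], max_new_tokens=6, stride=2,
    retrieve=lambda q: ["doc about " + " ".join(q)],
    generate_next_token=lambda toks, docs: f"tok{len(toks)}",
)
print(out)
```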

4. RA-LLMs Training

Based on whether training is required, existing RAG methods can be categorized into two main classes: training-free approaches and training-based approaches. Training-free methods usually leverage the retrieved knowledge directly at inference time by inserting the retrieved text into the prompt without introducing extra training, which is computationally efficient. However, one potential challenge is that the retriever and generator components are not specifically optimized for downstream tasks, which could easily lead to sub-optimal utilization of the retrieved knowledge. To fully exploit external knowledge, extensive methods have been proposed to fine-tune the retriever and generator, thereby guiding large language models to effectively adapt to and integrate retrieved information (Sarto et al., 2022; Wang et al., 2023c; Schick et al., 2024; Zhu et al., 2024; Shao et al., 2023; Shi et al., 2023).

According to the training strategies, we categorize these training-based approaches into three classes: 1) Independent Training approaches train each component in the RAG procedure independently; 2) Sequential Training methods train one module first and freeze the well-trained component to guide the tuning process of the other part; and 3) Joint Training approaches train the retriever and generator simultaneously. In the following sections, we comprehensively review the training-free, independent training, sequential training, and joint training methods. The comparison of these different training methods is depicted in Figure 4.

[Figure 4]

4.1. Training-free

With their huge numbers of parameters, LLMs have exhibited human-level intelligence and achieved promising prediction performance on various downstream tasks. However, it is extremely challenging to frequently fine-tune them and update the knowledge stored in the model parameters (Lewis et al., 2020c) due to the considerable time and computational resources required. Recently, numerous studies have suggested enhancing LLMs with retrieval mechanisms, enabling them to dynamically acquire new knowledge from external sources without extra training (i.e., training-free) (Izacard and Grave, 2021b; Jiang et al., 2023; Khattab et al., 2022), instead of relying solely on the implicit knowledge encoded in the model's parameters. These approaches have shown significant performance improvements on various knowledge-intensive tasks, such as open-domain question answering (Lewis et al., 2020c) and document summarization (Song et al., 2023). According to the different ways in which LLMs utilize retrieved information, we categorize these training-free methods into two categories: 1) Prompt Engineering-based Methods, which integrate retrieved knowledge into the original prompt directly, and 2) Retrieval-Guided Token Generation Methods, which retrieve information to calibrate the token generation process.

4.1.1. Prompt Engineering-based Methods

As LLMs' generation performance highly depends on the input query, numerous training-free RAG approaches employ external knowledge by refining the original prompts (Jiang et al., 2023; Khattab et al., 2022; Li et al., 2023c). Specifically, the retrieved texts are usually used as contextual information and combined with the original prompt to guide the generation of LLMs (Izacard and Grave, 2021b; Jiang et al., 2023; Khattab et al., 2022; Purwar and Sundar, 2023; Li et al., 2023c; Wang et al., 2023g; Kim et al., 2023). For example, In-Context RALM (Ram et al., 2023) keeps the LLM parameters unchanged and directly prepends the retrieved document to the original prompt to augment the generation process. IRCoT (Trivedi et al., 2023) interleaves chain-of-thought (CoT) generation and knowledge retrieval steps, enabling the retrieval of more relevant information for subsequent reasoning steps compared to standard retrieval methods that rely solely on the question as the query. Instead of retrieving knowledge from a large corpus, GENREAD (Yu et al., 2023a) first prompts an LLM to generate contextual documents based on the query and then generates answers based on the given context and question. SKR (Wang et al., 2023a) proposes guiding LLMs to determine whether they can answer a given question based on their internal knowledge, enabling flexible utilization of both internal and external knowledge by selectively calling the retriever. TOC (Kim et al., 2023) first retrieves relevant knowledge for ambiguous questions and recursively constructs a tree structure by clarifying ambiguous questions into multiple disambiguated questions, which are further aggregated to generate long-form answers.

4.1.2. Retrieval-Guided Token Generation Methods

In addition to directly integrating external knowledge into the original prompt, the auxiliary information can be employed to adjust the token generation process. For example, kNN-LM (Khandelwal et al., 2020) first retrieves the k most relevant contexts from the datastore based on the given query and computes a neighbor distribution based on the distances. The output distribution is calibrated by interpolating the neighbor distribution with the original model's output distribution. REST (He et al., 2023) is proposed to replace the parametric draft model with a non-parametric retrieval datastore and retrieves relevant tokens based on the current context for speculative decoding (Chen et al., 2023a; Leviathan et al., 2023; Sun et al., 2024).

4.2. Independent Training

Independent training refers to training the retriever and the LLM as two entirely independent processes, with no interaction between them during training (Karpukhin et al., 2020; Zhou et al., 2022; Lan et al., 2022). Compared with training-free methods, the performance of RAG-empowered models can be effectively enhanced by training LLMs to leverage the retrieved knowledge, or by training retrievers to bridge the gap between information retrieval and language generation. For training LLMs, the negative log-likelihood loss is the most representative training objective (Radford et al., 2019; Touvron et al., 2023), which guides the LLM to generate the desired output given the input. Retrievers can be categorized into two types: 1) sparse retrievers (Ramos et al., 2003; Robertson et al., 2009) and 2) dense retrievers (Lan et al., 2022; Karpukhin et al., 2020; Zhou et al., 2022). Sparse retrievers usually exploit sparse features, e.g., word frequencies, to represent documents and calculate relevance scores based on task-specific metrics (Li et al., 2023a; Ramos et al., 2003; Robertson et al., 2009) such as TF-IDF and BM25. Dense retrievers employ deep neural networks to encode the query and documents into dense representations, and the inner product is usually used to calculate relevance scores and retrieve the relevant external knowledge. For example, DPR (Karpukhin et al., 2020) adopts two independent BERT (Devlin et al., 2019) networks to encode the query and the passages respectively and trains these models using contrastive learning. CoG (Lan et al., 2022) proposes to train a prefix encoder and a phrase encoder for retrieval and reformulates text generation as a series of copy-and-paste operations from an existing source text collection.
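As an illustration of how a dense retriever like DPR is trained independently, the sketch below computes a contrastive loss with in-batch negatives over toy embeddings; the random vectors stand in for the outputs of the two BERT encoders, and the setup is a simplified assumption rather than the exact DPR recipe.

```python
import numpy as np

def in_batch_contrastive_loss(q_embs: np.ndarray, p_embs: np.ndarray) -> float:
    """q_embs, p_embs: (batch, dim); row i of each is a matching (question, passage) pair.
    Every other passage in the batch serves as a negative for question i."""
    scores = q_embs @ p_embs.T                        # (batch, batch) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # The gold passage for question i is passage i (the diagonal).
    return float(-np.mean(np.diag(log_softmax)))

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 32))
p = q + 0.1 * rng.normal(size=(4, 32))                # positives close to their questions
print(in_batch_contrastive_loss(q, p))
```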

4.3. Sequential Training

Independent training is an efficient way to exploit external knowledge during generation, since the retriever and generator can be trained offline and any off-the-shelf models can be utilized, avoiding extra training costs. To better enhance the synergy between the retriever and the generator, several methods have been proposed to train them sequentially. In these sequential training methods, the process typically begins with the independent pre-training of either the retriever or the generator, after which the pre-trained module is fixed while the other module is trained. Note that various existing models (e.g., BERT (Devlin et al., 2019; Reimers and Gurevych, 2019; Khattab and Zaharia, 2020), CLIP (Radford et al., 2021), and T5 (Raffel et al., 2020)) can be directly employed as the fixed retriever or generator, thereby bypassing the first pre-training stage. Compared to independent training, sequential training involves coordinated training of the retriever and generator, where the trainable module benefits from the assistance of the fixed one. Based on the training order of the retriever and generator, sequential training can be categorized into two classes: 1) Retriever First (Sarto et al., 2022; Wang et al., 2023c; Schick et al., 2024; Zhu et al., 2024; Asai et al., 2023b) and 2) LLMs First (Shi et al., 2023; Wang et al., 2024b; Shao et al., 2023).

4.3.1. Retriever First

These methods first train the retrieval model and then fix it; the LLM is then trained to utilize the retrieved knowledge. For instance, RETRO (Borgeaud et al., 2022) adopts an independently pre-trained BERT model as the retriever, and an encoder-decoder architecture is trained to integrate retrieved chunks into the model's predictions. RALMs (Yoran et al., 2023) adopt Google Search and the open-source ColBERTv2 (Khattab and Zaharia, 2020) as the pre-trained retrievers and fine-tune the LLM to effectively leverage the retrieved passages. ITER-RTGEN (Ren et al., 2023) utilizes the pre-trained S-BERT (Reimers and Gurevych, 2019) as the retriever and introduces an adaptive hybrid retrieval strategy for retrieving demonstrations; it further leverages T5 (Raffel et al., 2020) as the generator, which undergoes further fine-tuning based on the target label and an input combining the original prompt with the retrieved demonstrations. SMALLCAP (Ramos et al., 2023) proposes using CLIP (Radford et al., 2021), a powerful pre-trained multi-modal network, to encode the input image and the textual data of the external datastore and retrieve the most relevant items based on cosine similarity; a cross-attention layer is trained, and GPT-2 (Radford et al., 2019) is used as the decoder to produce captions.

4.3.2. LLMs First

Alternatively, the LLM can be pre-trained first, and the retriever is then tuned under the supervision of the well-trained LLM. For example, DKRR (Izacard and Grave, 2021a) shows that the attention scores of a sequence-to-sequence model can indicate document relevance; the authors therefore propose to leverage the attention scores of a reader model to produce synthetic labels for training the retriever. AAR (Yu et al., 2023b) proposes using a small language model to generate supervision signals for training retrievers; the well-trained retriever can be further leveraged to enhance the performance of black-box LLMs. RA-DIT (Lin et al., 2023) first fine-tunes the LLM to enhance its ability to leverage retrieved knowledge and then trains the retriever to better align its output with the LLM. UPRISE (Cheng et al., 2023) proposes a lightweight method to enhance the zero-shot performance of LLMs on unseen tasks by introducing a prompt retriever; a frozen LLM is employed to guide the fine-tuning of the prompt retriever, which then retrieves prompts for different tasks with various LLMs during inference.

4.4. Joint Training

Joint training methods (Zhong et al., 2022; Kang et al., 2023; Li et al., 2023b; Xu et al., 2023c; Hu et al., 2023; Cheng et al., 2024) adopt an end-to-end paradigm to optimize the retriever and generator simultaneously. Instead of training each module in turn, joint training enhances both the retriever's ability to locate external knowledge for generation and the generator's capacity to effectively leverage the retrieved information. For instance, RAG (Lewis et al., 2020c) minimizes the negative log-likelihood to jointly train the retriever and generator, as sketched below. REALM (Guu et al., 2020) adopts a similar training paradigm and uses Maximum Inner Product Search (MIPS) (Ram and Gray, 2012; Chen et al., 2019; Shen et al., 2015; Ding et al., 2020) to locate the most relevant documents. To employ MIPS, all external documents are embedded first and a search index is built over the embeddings. An asynchronous index-updating strategy (Guu et al., 2020; Izacard et al., 2023; Siriwardhana et al., 2023; Huang et al., 2023) refreshes the index every several hundred training steps to avoid the cost of re-indexing all documents.
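
For illustration, the following sketch marginalizes the generation likelihood over the top-k retrieved documents and backpropagates the negative log-likelihood into both the retriever and the generator, loosely following the RAG-style objective (Lewis et al., 2020c); all modules here are toy placeholders, not the original architectures.

```python
# Minimal sketch of joint (end-to-end) training: the generation likelihood is
# marginalized over the top-k retrieved documents, and the gradient of the negative
# log-likelihood flows into BOTH the retriever and the generator.
import torch
from torch import nn

dim, vocab = 32, 500
retriever_q = nn.Linear(vocab, dim)        # toy query encoder (bag-of-words input)
retriever_d = nn.Linear(vocab, dim)        # toy document encoder
generator = nn.Linear(vocab + dim, vocab)  # toy generator: predicts one target token

params = (list(retriever_q.parameters()) + list(retriever_d.parameters())
          + list(generator.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.rand(1, vocab)                   # query as a bag-of-words vector
docs = torch.rand(8, vocab)                # 8 candidate documents
y = torch.tensor([7])                      # target token id

scores = retriever_q(x) @ retriever_d(docs).T          # (1, 8) relevance scores
topk = scores.topk(k=4, dim=-1)
p_doc = torch.softmax(topk.values, dim=-1)             # retriever distribution over top-k

# Generator likelihood p(y | x, z) for each retrieved document z.
doc_embs = retriever_d(docs)[topk.indices[0]]          # (4, dim)
logits = generator(torch.cat([x.expand(4, -1), doc_embs], dim=-1))  # (4, vocab)
p_y_given_doc = torch.softmax(logits, dim=-1)[:, y.item()]          # (4,)

# Marginal likelihood and its negative log; backprop updates retriever AND generator.
marginal = (p_doc[0] * p_y_given_doc).sum()
loss = -torch.log(marginal)
loss.backward()
optimizer.step()
```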

5. Applications

In this section, we introduce some representative applications of retrieval-augmented large language models (RA-LLMs). To provide a clear overview, we review them from three perspectives: NLP applications, downstream tasks, and domain-specific applications. The studies mentioned in this section are summarized and categorized in Figure 5.

5.1. NLP Applications

Owing to their intrinsic capability in text generation, RA-LLMs have various applications in the NLP field, such as question answering (QA) systems, chatbots, and fact verification.

5.1.1. QA Systems

QA systems aim to provide precise answers to users' queries. However, even when trained on extensive data, these systems may lack the latest information or specific domain knowledge that is not included in their training data (Izacard and Grave, 2021b; Liu et al., 2022b). To address this limitation, integrating RA-LLMs has played a crucial role in advancing QA systems by enhancing their ability to retrieve and synthesize relevant information (Borgeaud et al., 2022; Izacard and Grave, 2021b). Specifically, RA-LLMs can provide coherent and contextually relevant answers by leveraging their retrieval component to access a vast knowledge base. For example, REALM (Guu et al., 2020) integrates a knowledge retriever that can retrieve information from a large corpus during pre-training, fine-tuning, and inference, which allows it to draw effectively on a vast knowledge corpus and thereby improve the accuracy of its responses. Similarly, Fusion-in-Decoder (Izacard and Grave, 2021b) retrieves passages from support documents and then fuses them with the question to generate the answer, achieving higher accuracy. In addition, Borgeaud et al. (2022) indicate that the quality of the answers may rely more on the output of the retrieval encoder.
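
To make the retrieve-then-read workflow concrete, the sketch below ranks passages against a question, fuses the top-k passages with the question into a single prompt, and hands the prompt to a generator; the lexical scorer and the generate() stub are deliberately trivial stand-ins for a dense retriever and an LLM, so only the overall control flow is the point.

```python
# Illustrative retrieve-then-read QA pipeline: score passages against the question,
# keep the top-k, and fuse them with the question in a single prompt for the reader.
def score(question: str, passage: str) -> float:
    """Toy lexical relevance: fraction of question words appearing in the passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would invoke a generator here."""
    return f"[answer conditioned on {prompt.count('Passage')} retrieved passages]"

def answer(question: str, corpus: list[str], top_k: int = 2) -> str:
    ranked = sorted(corpus, key=lambda p: score(question, p), reverse=True)[:top_k]
    context = "\n".join(f"Passage {i + 1}: {p}" for i, p in enumerate(ranked))
    prompt = f"{context}\nQuestion: {question}\nAnswer:"
    return generate(prompt)

corpus = [
    "The Hong Kong Polytechnic University is located in Hung Hom, Kowloon.",
    "Retrieval-augmented generation combines a retriever with a generator.",
    "Baidu is a technology company headquartered in Beijing.",
]
print(answer("Where is The Hong Kong Polytechnic University located?", corpus))
```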

5.1.2. ChatBot

Chatbots are designed to interact with users in a natural and conversational manner (Liu et al., 2020). Different from QA systems, chatbots focus on maintaining a coherent and contextually rich conversation with the user. To enhance these capabilities, recent methods integrate RA-LLMs (Komeili et al., 2022; Zhang et al., 2020; Kang et al., 2023) for their ability to augment the chatbot with relevant external knowledge, facilitating more engaging and context-rich interactions with users. For example, some studies (Ghazvininejad et al., 2018; Chen et al., 2020) retrieve relevant knowledge from static databases (e.g., a Wikipedia dump) to augment the conversation, while Komeili et al. (2022) propose retrieving information from internet search to further improve conversation performance. Considering the dynamic nature of knowledge in the world, another model (Wang et al., 2023d) further accesses large amounts of dynamic information from search engines to generate responses.

5.1.3. Fact Verification

Fact verification is a critical task for verifying the accuracy and reliability of information. With the need for trusted evidence, RA-LLMs are being utilized to enhance fact verification capabilities (Lewis et al., 2020c; Izacard et al., 2023). Lewis et al. (2020c) first propose retrieving external knowledge to augment a range of knowledge-intensive tasks, including fact verification. Atlas (Izacard et al., 2023) examines the performance of RA-LLMs for fact verification under few-shot learning. Recently, Self-RAG (Asai et al., 2023b) has made a notable impression by incorporating a self-reflective mechanism: it reflects on whether retrieved information is helpful and judges its reliability, thereby greatly improving verification accuracy.

5.2. Downstream Tasks

In addition to NLP applications, RA-LLMs can also be applied to various downstream tasks, such as recommendations and software engineering.

5.2.1. Recommendations

Recommender systems play an important role in modeling users' preferences and providing personalized recommendations (Zhang et al., 2024; Wang et al., 2024a; Fan et al., 2019; Zhao et al., 2024a; Fan et al., 2020, 2022a). Recently, RA-LLMs have demonstrated great potential in providing personalized and contextually relevant recommendations by integrating retrieval and generation processes (Di Palma, 2023; Wu et al., 2024; Lu et al., 2021). For example, Di Palma (2023) proposes a simple retrieval-augmented recommendation model that leverages knowledge from movie or book datasets to enhance recommendations. Lu et al. (2021) further retrieve reviews to enrich item information in recommender systems. CoRAL (Wu et al., 2024) utilizes reinforcement learning to retrieve collaborative information from the dataset and align it with semantic information for more accurate recommendations.

5.2.2. Software Engineering

The rise of RA-LLMs has influenced many aspects of software engineering (Zhou et al., 2022; Nashid et al., 2023; Ye et al., 2023a). For example, some studies propose the retrieval-augmented generation paradigm for code generation (Zhou et al., 2022) and program repair (Nashid et al., 2023). Similarly, Parvez et al. (2021) retrieve top-ranked code snippets or summaries from the codebase and aggregate them with the input to enhance code generation and summarization. In addition, RA-LLMs show potential in tabular data processing (Ye et al., 2023a; Li et al., 2024b) and text-to-SQL semantic parsing (Shi et al., 2022; Poesia et al., 2022).
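
As a simple illustration of retrieval-augmented prompting for code tasks, the sketch below selects the most similar (buggy, fixed) demonstrations from a small store by token overlap and prepends them to the prompt; the demonstration store and the similarity measure are our own illustrative choices, not those of the cited works.

```python
# Sketch of retrieval-augmented prompting for code tasks: pick the most similar
# demonstrations by token overlap and build a few-shot prompt for a code generator.
def jaccard(a: str, b: str) -> float:
    """Toy similarity: Jaccard overlap between whitespace-separated tokens."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def build_prompt(buggy_code: str, demos: list[tuple[str, str]], top_k: int = 2) -> str:
    ranked = sorted(demos, key=lambda d: jaccard(buggy_code, d[0]), reverse=True)[:top_k]
    shots = "\n\n".join(f"# Buggy:\n{b}\n# Fixed:\n{f}" for b, f in ranked)
    return f"{shots}\n\n# Buggy:\n{buggy_code}\n# Fixed:\n"

demos = [
    ("for i in range(len(xs)) print(xs[i])", "for i in range(len(xs)): print(xs[i])"),
    ("if x = 1: y = 2", "if x == 1: y = 2"),
    ("def add(a, b) return a + b", "def add(a, b): return a + b"),
]
print(build_prompt("def sub(a, b) return a - b", demos))
```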

5.3. Domain-specific Applications

RA-LLMs have been widely adopted for various domain-specific tasks, such as AI for Science and Finance.

5.3.1. AI for Science

RA-LLMs have proven beneficial in scientific domains such as molecular and protein research. Molecular research includes identifying molecular properties and generating new molecules, thereby benefiting drug discovery. Currently, some RA-LLMs have been applied to molecules by retrieving molecular structures and biomedical entities such as proteins, molecules, and diseases (Wang et al., 2023b; Liu et al., 2023; Yang et al., 2023a; Wang et al., 2023e). Wang et al. (2023b) and Li et al. (2023a) propose retrieval-based frameworks that retrieve from a database to guide molecule generation. Liu et al. (2023) introduce a multi-modal molecule structure-text model that retrieves textual knowledge from a large-scale dataset for molecular property prediction. In addition, RA-LLMs also significantly influence protein representation and generation (Sun et al., 2023; Ma et al., 2023b). For instance, RSA (Ma et al., 2023b) queries protein sequences associated with a collection of structurally or functionally similar sequences in the database to enhance protein representations. Furthermore, Lozano et al. (2023) present a clinical QA system that retrieves published review articles.

5.3.2. Finance

In the highly data-driven and information-intensive field of finance, RA-LLMs have proved to be a significant technology for enhancing decision-making (Zhang et al., 2023c; Yepes et al., 2024; Li et al., 2024a). For example, Zhang et al. (2023c) retrieve financial information from external sources, such as news platforms (e.g., Bloomberg and Reuters) and social media platforms (e.g., Twitter, Reddit), and combine it with the original query to enhance the precision of financial sentiment analysis. In addition, financial QA is another primary task of financial analysis, which usually requires extracting relevant knowledge from financial documents. As professional documents are usually stored in PDF format, Lin (2024) introduces a PDF parser combined with RA-LLMs to retrieve knowledge from financial reports. On the other hand, Yepes et al. (2024) propose chunking documents based on structure rather than on paragraphs, further improving the quality of RA-LLMs' outputs.
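
As a rough illustration of structure-based chunking, the sketch below splits an already text-extracted report at section headings rather than into fixed-size paragraph windows, so that each retrievable chunk remains semantically self-contained; the heading pattern and the sample report are assumptions made for this toy example, not the method of any cited paper.

```python
# Illustrative structure-aware chunking: split a text-extracted financial report at
# section headings instead of using fixed-size paragraph windows.
import re

def chunk_by_structure(report_text: str) -> list[str]:
    # Treat lines that look like numbered headings (e.g. "3. Risk Factors") as boundaries.
    parts = re.split(r"(?m)^(?=\d+\.\s+[A-Z])", report_text)
    return [p.strip() for p in parts if p.strip()]

report = """1. Business Overview
Revenue grew 12% year over year.
2. Risk Factors
Currency fluctuations may affect results.
3. Outlook
We expect continued growth next quarter."""

for chunk in chunk_by_structure(report):
    print("---\n" + chunk)
```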

6. Future Challenges and Opportunities

Since research on RA-LLMs is still at an early stage, we present several potential research directions that can be explored in the future.

Trustworthy RA-LLMs. The essential objective of developing RAG-empowered LLMs is to enhance the capability of language models, thereby benefiting users and society by alleviating redundant and meaningless labor, increasing convenience, and spurring social progress. However, recent research indicates that RA-LLMs can be maliciously or unintentionally manipulated into making unreliable decisions that harm humans (Deng et al., 2024; Zou et al., 2024), which may have serious consequences in safety-critical scenarios (Liu et al., 2021; Fan et al., 2022b, 2021; Chen et al., 2023b, 2022). In addition, private retrieval databases carry a risk of leakage, raising privacy concerns for RA-LLMs (Zeng et al., 2024). Therefore, developing trustworthy RA-LLMs is of paramount importance, as it can significantly mitigate the potential negative impacts of LLM technology and provide people with powerful AI models that can be fully trusted. Specifically, an ideal trustworthy RA-LLM system should possess the following characteristics: 1) robustness, 2) fairness, 3) explainability, and 4) privacy. For example, robustness means a trustworthy RA-LLM system should withstand malicious or inadvertent perturbations introduced by attackers. Fairness indicates that a trustworthy RA-LLM system ought to avoid discrimination during the decision-making process. Explainability requires a complete understanding of the intrinsic workings of RA-LLM systems, i.e., their predictions should be explainable and transparent. Privacy entails safeguarding the private information housed within the datastore when establishing trustworthy RA-LLM systems.

Multi-Lingual RA-LLMs. The ability to leverage knowledge from multiple languages can greatly enhance the capabilities of retrieval-augmented language models. As the world becomes increasingly interconnected, there is a growing need for AI systems that can understand and communicate across different languages. By incorporating multilingual knowledge retrieval and generation, these models can access and synthesize information from diverse linguistic sources, enabling more comprehensive and nuanced understanding and generation capabilities. Additionally, multilingual models can facilitate cross-cultural communication and knowledge sharing, breaking down language barriers and thereby bringing convenience to people across different regions of the world, especially those in areas with minority languages (Kabra et al., 2023; Li et al., 2023c). For example, users from countries with less prevalent languages can draw on abundant English and Chinese corpora for knowledge retrieval, enhancing the performance of large language models in downstream tasks.

Multi-modal RA-LLMs. Multi-modal retrieval-augmented generation extends the knowledge sources beyond text to include various data modalities such as images, videos, and audio. By integrating various modalities, LLMs can leverage richer contextual information than single-modal RAG and develop a more comprehensive understanding of users' needs, bringing precise, fine-grained, and high-quality generation. For instance, an image or video can provide valuable visual cues that complement textual information, leading to more precise language generation (Zhu et al., 2024; Hu et al., 2023). By fusing multiple modalities, multi-modal RA-LLMs can develop a more comprehensive understanding of the world, leading to more accurate and insightful outputs, benefiting a wide range of domains, including healthcare (Zhu et al., 2024), drug discovery (Shtar, 2021), molecular analysis (Liu et al., 2023; Andrews et al., 2022), etc.

Quality of External Knowledge. As a commonly used datastore in current RAG systems, Wikipedia (Zhu et al., 2024; Karpukhin et al., 2020) serves as a vast repository of external textual knowledge used to augment the generation process, containing millions of articles covering various disciplines. However, the reliability and accuracy of individual articles within Wikipedia vary significantly, and the introduction of texts that deviate from facts might even mislead the model's generation process. Therefore, it is crucial to enhance the quality of the external knowledge corpus and mitigate the negative impact of low-quality knowledge on the performance of LLMs. By enhancing the quality of the external knowledge and tailoring robust mechanisms to filter out low-quality or unreliable information, RAG-empowered LLM systems can produce more accurate and reliable outputs, thereby improving their effectiveness in various real-world applications.
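
One simple way to operationalize such filtering is sketched below: each retrieved passage is scored by a quality heuristic and anything below a threshold is dropped before it reaches the generator's context; the heuristic here is a crude placeholder for what would realistically be a trained verifier or reranker.

```python
# Toy sketch of guarding against low-quality external knowledge: score each retrieved
# passage and drop anything below a threshold before augmentation.
def quality_score(passage: str) -> float:
    """Crude heuristic stand-in: penalize very short or citation-free passages."""
    length_ok = min(len(passage.split()) / 30.0, 1.0)
    has_source = 0.5 if "(" in passage and ")" in passage else 0.0
    return length_ok * 0.5 + has_source

def filter_passages(passages: list[str], threshold: float = 0.6) -> list[str]:
    return [p for p in passages if quality_score(p) >= threshold]

retrieved = [
    "RAG was introduced for knowledge-intensive NLP tasks (Lewis et al., 2020).",
    "buy cheap followers now!!!",
]
print(filter_passages(retrieved))  # keeps only the sourced, substantive passage
```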

7. Conclusion

Retrieval-augmented generation (RAG), a cutting-edge AI technique, has achieved remarkable success across various applications, including recommendations, molecule generation, protein representation, and software engineering, owing to the powerful capabilities of retrieval in providing supplementary information that enhances generation performance. Recently, increasing efforts have been made to alleviate the limitations of large language models (LLMs), such as hallucination and out-of-date internal knowledge, by leveraging retrieval to provide the latest auxiliary information and teaching LLMs to harness the retrieved external knowledge. With the rapid advancements in retrieval-augmented large language models (RA-LLMs), there is a pressing need for a comprehensive and systematic overview. To bridge this gap, we comprehensively review RA-LLMs in this paper from the perspectives of architectures, training strategies, and applications, providing researchers with an in-depth understanding. Moreover, since the study of RA-LLMs is still at an early stage, we also discuss current limitations and several promising directions for future research.

References

  • Achiam et al. (2023)Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023.Gpt-4 technical report.arXiv preprint arXiv:2303.08774 (2023).
  • Agrawal et al. (2023)Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. 2023.In-context Examples Selection for Machine Translation. In ACL (Findings). Association for Computational Linguistics, 8857–8873.
  • Andrews et al. (2022)Miles C Andrews, Junna Oba, Chang-Jiun Wu, Haifeng Zhu, Tatiana Karpinets, Caitlin A Creasy, Marie-Andrée Forget, Xiaoxing Yu, Xingzhi Song, Xizeng Mao, et al. 2022.Multi-modal molecular programs regulate melanoma cell state.Nature communications 13, 1 (2022), 4000.
  • Asai et al. (2023a)Akari Asai, Sewon Min, Zexuan Zhong, and Danqi Chen. 2023a.Retrieval-based language models and applications. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts). 41–46.
  • Asai et al. (2023b)Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023b.Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection. In The Twelfth International Conference on Learning Representations.
  • Borgeaud et al. (2022)Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022.Improving language models by retrieving from trillions of tokens. In International conference on machine learning. PMLR, 2206–2240.
  • Brown et al. (2020)Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020.Language models are few-shot learners.Advances in neural information processing systems 33 (2020), 1877–1901.
  • Buttcher et al. (2016)Stefan Buttcher, Charles LA Clarke, and Gordon V Cormack. 2016.Information retrieval: Implementing and evaluating search engines.Mit Press.
  • Chen et al. (2023a)Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023a.Accelerating large language model decoding with speculative sampling.arXiv preprint arXiv:2302.01318 (2023).
  • Chen et al. (2017)Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017.Reading Wikipedia to Answer Open-Domain Questions. In ACL (1). Association for Computational Linguistics, 1870–1879.
  • Chen et al. (2022)Jingfan Chen, Wenqi Fan, Guanghui Zhu, Xiangyu Zhao, Chunfeng Yuan, Qing Li, and Yihua Huang. 2022.Knowledge-enhanced Black-box Attacks for Recommendations. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 108–117.
  • Chen et al. (2021)Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021.Evaluating large language models trained on code.arXiv preprint arXiv:2107.03374 (2021).
  • Chen et al. (2023b)Xiao Chen, Wenqi Fan, Jingfan Chen, Haochen Liu, Zitao Liu, Zhaoxiang Zhang, and Qing Li. 2023b.Fairly adaptive negative sampling for recommendations. In Proceedings of the ACM Web Conference 2023. 3723–3733.
  • Chen et al. (2020)Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, and Jie Zhou. 2020.Bridging the gap between prior and posterior knowledge selection for knowledge-grounded dialogue generation. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP). 3426–3437.
  • Chen et al. (2019)Yudong Chen, Zhihui Lai, Yujuan Ding, Kaiyi Lin, and Wai Keung Wong. 2019.Deep supervised hashing with anchor graph. In Proceedings of the IEEE/CVF international conference on computer vision. 9796–9804.
  • Cheng et al. (2023)Daixuan Cheng, Shaohan Huang, Junyu Bi, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Furu Wei, Weiwei Deng, and Qi Zhang. 2023.UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 12318–12337.
  • Cheng et al. (2024)Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. 2024.Lift yourself up: Retrieval-augmented text generation with self-memory.Advances in Neural Information Processing Systems 36 (2024).
  • Chowdhery et al. (2023)Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023.Palm: Scaling language modeling with pathways.Journal of Machine Learning Research 24, 240 (2023), 1–113.
  • Croft et al. (2010)W Bruce Croft, Donald Metzler, and Trevor Strohman. 2010.Search engines: Information retrieval in practice. Vol. 520.Addison-Wesley Reading.
  • Cui et al. (2021)Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021.Template-Based Named Entity Recognition Using BART. In ACL/IJCNLP (Findings) (Findings of ACL, Vol. ACL/IJCNLP 2021). Association for Computational Linguistics, 1835–1845.
  • Dahl et al. (2024)Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel E Ho. 2024.Large legal fictions: Profiling legal hallucinations in large language models.arXiv preprint arXiv:2401.01301 (2024).
  • de Jong et al. (2022)Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, and William W. Cohen. 2022.Mention Memory: incorporating textual knowledge into Transformers through entity mention attention. In ICLR. OpenReview.net.
  • Deng et al. (2024)Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, and Yang Liu. 2024.Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning.arXiv preprint arXiv:2402.08416 (2024).
  • Devlin et al. (2019)Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT (1). Association for Computational Linguistics, 4171–4186.
  • Di Palma (2023)Dario Di Palma. 2023.Retrieval-augmented recommender system: Enhancing recommender systems with large language models. In Proceedings of the 17th ACM Conference on Recommender Systems. 1369–1373.
  • Ding et al. (2024)Yujuan Ding, Yunshan Ma, Wenqi Fan, Yige Yao, Tat-Seng Chua, and Qing Li. 2024.FashionReGen: LLM-Empowered Fashion Report Generation.arXiv preprint arXiv:2403.06660 (2024).
  • Ding et al. (2020)Yujuan Ding, Wai Keung Wong, Zhihui Lai, and Zheng Zhang. 2020.Discriminative dual-stream deep hashing for large-scale image retrieval.Information Processing & Management 57, 6 (2020), 102288.
  • Drozdov et al. (2022)Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. 2022.Compositional semantic parsing with large language models. In The Eleventh International Conference on Learning Representations.
  • Fan et al. (2021)Wenqi Fan, Tyler Derr, Xiangyu Zhao, Yao Ma, Hui Liu, Jianping Wang, Jiliang Tang, and Qing Li. 2021.Attacking black-box recommendations via copying cross-domain user profiles. In 2021 IEEE 37th International Conference on Data Engineering (ICDE). IEEE, 1583–1594.
  • Fan et al. (2022a)Wenqi Fan, Xiaorui Liu, Wei Jin, Xiangyu Zhao, Jiliang Tang, and Qing Li. 2022a.Graph Trend Filtering Networks for Recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 112–121.
  • Fan et al. (2019)Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. 2019.Graph neural networks for social recommendation. In The world wide web conference. 417–426.
  • Fan et al. (2020)Wenqi Fan, Yao Ma, Qing Li, Jianping Wang, Guoyong Cai, Jiliang Tang, and Dawei Yin. 2020.A graph neural network framework for social recommendations.IEEE Transactions on Knowledge and Data Engineering (2020).
  • Fan et al. (2024)Wenqi Fan, Shijie Wang, Jiani Huang, Zhikai Chen, Yu Song, Wenzhuo Tang, Haitao Mao, Hui Liu, Xiaorui Liu, Dawei Yin, et al. 2024.Graph Machine Learning in the Era of Large Language Models (LLMs).arXiv preprint arXiv:2404.14928 (2024).
  • Fan et al. (2022b)Wenqi Fan, Xiangyu Zhao, Xiao Chen, Jingran Su, Jingtong Gao, Lin Wang, Qidong Liu, Yiqi Wang, Han Xu, Lei Chen, et al. 2022b.A Comprehensive Survey on Trustworthy Recommender Systems.arXiv preprint arXiv:2209.10117 (2022).
  • Févry et al. (2020)Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020.Entities as Experts: Sparse Memory Access with Entity Supervision. In EMNLP (1). Association for Computational Linguistics, 4937–4951.
  • Gao et al. (2023a)Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2023a.Precise Zero-Shot Dense Retrieval without Relevance Labels. In ACL (1). Association for Computational Linguistics, 1762–1777.
  • Gao et al. (2023b)Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. 2023b.Retrieval-augmented generation for large language models: A survey.arXiv preprint arXiv:2312.10997 (2023).
  • Gautier et al. (2022)Izacard Gautier, Caron Mathilde, Hosseini Lucas, Riedel Sebastian, Bojanowski Piotr, Joulin Armand, and Grave Edouard. 2022.Unsupervised dense information retrieval with contrastive learning.Transactions on Machine Learning Research (2022).
  • Ghazvininejad et al. (2018)Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018.A knowledge-grounded neural conversation model. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
  • Glass et al. (2022)Michael R. Glass, Gaetano Rossiello, Md. Faisal Mahbub Chowdhury, Ankita Naik, Pengshan Cai, and Alfio Gliozzo. 2022.Re2G: Retrieve, Rerank, Generate. In NAACL-HLT. Association for Computational Linguistics, 2701–2715.
  • Grave et al. (2017)Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017.Improving Neural Language Models with a Continuous Cache. In ICLR (Poster). OpenReview.net.
  • Guu et al. (2020)Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020.Retrieval augmented language model pre-training. In International conference on machine learning. PMLR, 3929–3938.
  • He et al. (2021b)Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2021b.Efficient Nearest Neighbor Language Models. In EMNLP (1). Association for Computational Linguistics, 5703–5714.
  • He et al. (2021a)Qiuxiang He, Guoping Huang, Qu Cui, Li Li, and Lemao Liu. 2021a.Fast and accurate neural machine translation with translation memory. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 3170–3180.
  • He et al. (2023)Zhenyu He, Zexuan Zhong, Tianle Cai, Jason D Lee, and Di He. 2023.Rest: Retrieval-based speculative decoding.arXiv preprint arXiv:2311.08252 (2023).
  • Hu et al. (2023)Ziniu Hu, Ahmet Iscen, Chen Sun, Zirui Wang, Kai-Wei Chang, Yizhou Sun, Cordelia Schmid, David A Ross, and Alireza Fathi. 2023.Reveal: Retrieval-augmented visual-language pre-training with multi-source multimodal knowledge memory. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 23369–23379.
  • Huang et al. (2023)Jie Huang, Wei Ping, Peng Xu, Mohammad Shoeybi, Kevin Chen-Chuan Chang, and Bryan Catanzaro. 2023.Raven: In-context learning with retrieval augmented encoder-decoder language models.arXiv preprint arXiv:2308.07922 (2023).
  • Izacard and Grave (2021a)Gautier Izacard and Edouard Grave. 2021a.Distilling Knowledge from Reader to Retriever for Question Answering. In ICLR 2021-9th International Conference on Learning Representations.
  • Izacard and Grave (2021b)Gautier Izacard and Edouard Grave. 2021b.Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. In EACL 2021-16th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 874–880.
  • Izacard et al. (2023)Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023.Atlas: Few-shot Learning with Retrieval Augmented Language Models.Journal of Machine Learning Research 24, 251 (2023), 1–43.
  • Jiang et al. (2021)Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021.How can we know when language models know? on the calibration of language models for question answering.Transactions of the Association for Computational Linguistics 9 (2021), 962–977.
  • Jiang et al. (2023)Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023.Active Retrieval Augmented Generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 7969–7992.
  • Kabra et al. (2023)Anubha Kabra, Emmy Liu, Simran Khanuja, Alham Fikri Aji, Genta Winata, Samuel Cahyawijaya, Anuoluwapo Aremu, Perez Ogayo, and Graham Neubig. 2023.Multi-lingual and Multi-cultural Figurative Language Understanding. In The 61st Annual Meeting Of The Association For Computational Linguistics.
  • Kadavath et al. (2022)Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022.Language models (mostly) know what they know.arXiv preprint arXiv:2207.05221 (2022).
  • Kang et al. (2023)Minki Kang, Jin Myung Kwak, Jinheon Baek, and Sung Ju Hwang. 2023.Knowledge graph-augmented language models for knowledge-grounded dialogue generation.arXiv preprint arXiv:2305.18846 (2023).
  • Karpukhin et al. (2020)Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020.Dense Passage Retrieval for Open-Domain Question Answering. In EMNLP (1). Association for Computational Linguistics, 6769–6781.
  • Khandelwal et al. (2020)Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020.Generalization through Memorization: Nearest Neighbor Language Models. In International Conference on Learning Representations.
  • Khattab et al. (2022)Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2022.Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp.arXiv preprint arXiv:2212.14024 (2022).
  • Khattab and Zaharia (2020)Omar Khattab and Matei Zaharia. 2020.Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 39–48.
  • Kim et al. (2023)Gangwoo Kim, Sungdong Kim, Byeongguk Jeon, Joonsuk Park, and Jaewoo Kang. 2023.Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models. In The 2023 Conference on Empirical Methods in Natural Language Processing.
  • Kim et al. (2022)Hyuhng Joon Kim, Hyunsoo Cho, Junyeob Kim, Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2022.Self-generated in-context learning: Leveraging auto-regressive language models as a demonstration generator.arXiv preprint arXiv:2206.08082 (2022).
  • Kobayashi and Takeda (2000)Mei Kobayashi and Koichi Takeda. 2000.Information retrieval on the web.ACM computing surveys (CSUR) 32, 2 (2000), 144–173.
  • Komeili et al. (2022)Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022.Internet-Augmented Dialogue Generation. In ACL (1). Association for Computational Linguistics, 8460–8478.
  • Lan et al. (2022)Tian Lan, Deng Cai, Yan Wang, Heyan Huang, and Xian-Ling Mao. 2022.Copy is All You Need. In The Eleventh International Conference on Learning Representations.
  • Lazaridou et al. (2022)Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022.Internet-augmented language models through few-shot prompting for open-domain question answering.arXiv preprint arXiv:2203.05115 (2022).
  • Leviathan et al. (2023)Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023.Fast inference from transformers via speculative decoding. In International Conference on Machine Learning. PMLR, 19274–19286.
  • Lewis et al. (2020a)Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer. 2020a.Pre-training via paraphrasing.Advances in Neural Information Processing Systems 33 (2020), 18470–18481.
  • Lewis et al. (2020b)Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020b.BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In ACL. Association for Computational Linguistics, 7871–7880.
  • Lewis et al. (2020c)Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020c.Retrieval-augmented generation for knowledge-intensive nlp tasks.Advances in Neural Information Processing Systems 33 (2020), 9459–9474.
  • Li et al. (2022b)Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2022b.Large language models with controllable working memory.arXiv preprint arXiv:2211.05110 (2022).
  • Li et al. (2024b)Hongxin Li, Jingran Su, Yuntao Chen, Qing Li, and ZHAO-XIANG ZHANG. 2024b.SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models.Advances in Neural Information Processing Systems 36 (2024).
  • Li et al. (2023a)Jiatong Li, Yunqing Liu, Wenqi Fan, Xiao-Yong Wei, Hui Liu, Jiliang Tang, and Qing Li. 2023a.Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective.arXiv preprint arXiv:2306.06615 (2023).
  • Li et al. (2024a)Xiang Li, Zhenyu Li, Chen Shi, Yong Xu, Qing Du, Mingkui Tan, Jun Huang, and Wei Lin. 2024a.AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework.arXiv preprint arXiv:2403.12582 (2024).
  • Li et al. (2023b)Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yu Gu, Zhiyuan Liu, and Ge Yu. 2023b.Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data. In The 61st Annual Meeting Of The Association For Computational Linguistics.
  • Li et al. (2023c)Xiaoqian Li, Ercong Nie, and Sheng Liang. 2023c.From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL. In NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following.
  • Li and Qiu (2023)Xiaonan Li and Xipeng Qiu. 2023.Mot: Pre-thinking and recalling enable chatgpt to self-improve with memory-of-thoughts.arXiv preprint arXiv:2305.05181 (2023).
  • Li and Liang (2021)Xiang Lisa Li and Percy Liang. 2021.Prefix-Tuning: Optimizing Continuous Prompts for Generation. In ACL/IJCNLP (1). Association for Computational Linguistics, 4582–4597.
  • Li et al. (2022a)Zonglin Li, Ruiqi Guo, and Sanjiv Kumar. 2022a.Decoupled context processing for context augmented language modeling.Advances in Neural Information Processing Systems 35 (2022), 21698–21710.
  • Lin (2024)Demiao Lin. 2024.Revolutionizing Retrieval-Augmented Generation with Enhanced PDF Structure Recognition.arXiv preprint arXiv:2401.12599 (2024).
  • Lin et al. (2023)Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Richard James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, et al. 2023.RA-DIT: Retrieval-Augmented Dual Instruction Tuning. In The Twelfth International Conference on Learning Representations.
  • Liu et al. (2020)Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2020.Does Gender Matter? Towards Fairness in Dialogue Systems. In Proceedings of the 28th International Conference on Computational Linguistics. 4403–4416.
  • Liu et al. (2021)Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K Jain, and Jiliang Tang. 2021.Trustworthy ai: A computational perspective.arXiv preprint arXiv:2107.06641 (2021).
  • Liu et al. (2022a)Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022a.What Makes Good In-Context Examples for GPT-3?. In DeeLIO@ACL. Association for Computational Linguistics, 100–114.
  • Liu et al. (2023)Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao, Ling Liu, Jian Tang, Chaowei Xiao, and Animashree Anandkumar. 2023.Multi-modal molecule structure–text model for text-based retrieval and editing.Nature Machine Intelligence 5, 12 (2023), 1447–1457.
  • Liu et al. (2022b)Ye Liu, Semih Yavuz, Rui Meng, Dragomir Radev, Caiming Xiong, and Yingbo Zhou. 2022b.Uni-Parser: Unified Semantic Parser for Question Answering on Knowledge Base and Database. In EMNLP. Association for Computational Linguistics, 8858–8869.
  • Lozano et al. (2023)Alejandro Lozano, Scott L Fleming, Chia-Chun Chiang, and Nigam Shah. 2023.Clinfo. ai: An open-source retrieval-augmented large language model system for answering medical questions using scientific literature. In PACIFIC SYMPOSIUM ON BIOCOMPUTING 2024. World Scientific, 8–23.
  • Lu et al. (2023)Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. 2023.Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning. In ICLR. OpenReview.net.
  • Lu et al. (2021)Yu Lu, Junwei Bao, Yan Song, Zichen Ma, Shuguang Cui, Youzheng Wu, and Xiaodong He. 2021.RevCore: Review-Augmented Conversational Recommendation. In ACL/IJCNLP (Findings) (Findings of ACL, Vol. ACL/IJCNLP 2021). Association for Computational Linguistics, 1161–1173.
  • Luo et al. (2023a)Hongyin Luo, Yung-Sung Chuang, Yuan Gong, Tianhua Zhang, Yoon Kim, Xixin Wu, Danny Fox, Helen Meng, and James Glass. 2023a.Sail: Search-augmented instruction learning.arXiv preprint arXiv:2305.15225 (2023).
  • Luo et al. (2023b)Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbrasaite, and Vincent Y Zhao. 2023b.Dr. icl: Demonstration-retrieved in-context learning.arXiv preprint arXiv:2305.14128 (2023).
  • Ma et al. (2023b)Chang Ma, Haiteng Zhao, Lin Zheng, Jiayi Xin, Qintong Li, Lijun Wu, Zhihong Deng, Yang Lu, Qi Liu, and Lingpeng Kong. 2023b.Retrieved Sequence Augmentation for Protein Representation Learning.bioRxiv (2023), 2023–02.
  • Ma et al. (2023a)Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023a.Query rewriting for retrieval-augmented large language models.arXiv preprint arXiv:2305.14283 (2023).
  • Maekawa et al. (2024)Seiji Maekawa, Hayate Iso, Sairam Gurajada, and Nikita Bhutani. 2024.Retrieval Helps or Hurts? A Deeper Dive into the Efficacy of Retrieval Augmentation to Language Models.arXiv preprint arXiv:2402.13492 (2024).
  • Menick et al. (2022)Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, et al. 2022.Teaching language models to support answers with verified quotes.arXiv preprint arXiv:2203.11147 (2022).
  • Milios et al. (2023)Aristides Milios, Siva Reddy, and Dzmitry Bahdanau. 2023.In-context learning for text classification with many labels. In Proceedings of the 1st GenBench Workshop on (Benchmarking) Generalisation in NLP. 173–184.
  • Min et al. (2022)Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022.Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?. In EMNLP. Association for Computational Linguistics, 11048–11064.
  • Min et al. (2020)Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020.AmbigQA: Answering Ambiguous Open-domain Questions. In EMNLP (1). Association for Computational Linguistics, 5783–5797.
  • Min et al. (2023)Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2023.Nonparametric Masked Language Modeling. In ACL (Findings). Association for Computational Linguistics, 2097–2118.
  • Nashid et al. (2023)Noor Nashid, Mifta Sintaha, and Ali Mesbah. 2023.Retrieval-based prompt selection for code-related few-shot learning. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2450–2462.
  • O’Hare et al. (2016)Neil O’Hare, Paloma De Juan, Rossano Schifanella, Yunlong He, Dawei Yin, and Yi Chang. 2016.Leveraging user interaction signals for web image search. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. 559–568.
  • Ouyang et al. (2022)Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022.Training language models to follow instructions with human feedback.Advances in neural information processing systems 35 (2022), 27730–27744.
  • Parvez et al. (2021)Md. Rizwan Parvez, Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021.Retrieval Augmented Code Generation and Summarization. In EMNLP (Findings). Association for Computational Linguistics, 2719–2734.
  • Petroni et al. (2020)Fabio Petroni, Patrick S. H. Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020.How Context Affects Language Models’ Factual Predictions. In AKBC.
  • Petroni et al. (2019)Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019.Language models as knowledge bases?arXiv preprint arXiv:1909.01066 (2019).
  • Poesia et al. (2022)Gabriel Poesia, Alex Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, and Sumit Gulwani. 2022.Synchromesh: Reliable Code Generation from Pre-trained Language Models. In ICLR. OpenReview.net.
  • Purwar and Sundar (2023)Anupam Purwar and Rahul Sundar. 2023.Keyword Augmented Retrieval: Novel framework for Information Retrieval integrated with speech interface.arXiv preprint arXiv:2310.04205 (2023).
  • Radford et al. (2021)Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021.Learning transferable visual models from natural language supervision. In International conference on machine learning. PMLR, 8748–8763.
  • Radford et al. (2018)Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018.Improving language understanding by generative pre-training.(2018).
  • Radford et al. (2019)Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019.Language models are unsupervised multitask learners.OpenAI blog 1, 8 (2019), 9.
  • Raffel et al. (2020)Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020.Exploring the limits of transfer learning with a unified text-to-text transformer.Journal of machine learning research 21, 140 (2020), 1–67.
  • Ram et al. (2023)Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023.In-context retrieval-augmented language models.Transactions of the Association for Computational Linguistics 11 (2023), 1316–1331.
  • Ram et al. (2022)Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, and Amir Globerson. 2022.Learning to Retrieve Passages without Supervision. In NAACL-HLT. Association for Computational Linguistics, 2687–2700.
  • Ram and Gray (2012)Parikshit Ram and Alexander G Gray. 2012.Maximum inner-product search using cone trees. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. 931–939.
  • Ramos et al. (2003)Juan Ramos et al. 2003.Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning, Vol. 242. Citeseer, 29–48.
  • Ramos et al. (2023)Rita Ramos, Bruno Martins, Desmond Elliott, and Yova Kementchedjhieva. 2023.Smallcap: lightweight image captioning prompted with retrieval augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2840–2849.
  • Reichman and Heck (2024)Benjamin Z. Reichman and Larry Heck. 2024.Retrieval-Augmented Generation: Is Dense Passage Retrieval Retrieving?CoRR abs/2402.11035 (2024).
  • Reimers and Gurevych (2019)Nils Reimers and Iryna Gurevych. 2019.Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 3982–3992.
  • Ren et al. (2023)Yubing Ren, Yanan Cao, Ping Guo, Fang Fang, Wei Ma, and Zheng Lin. 2023.Retrieve-and-sample: Document-level event argument extraction via hybrid retrieval augmentation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 293–306.
  • Robertson et al. (2009)Stephen Robertson, Hugo Zaragoza, et al. 2009.The probabilistic relevance framework: BM25 and beyond.Foundations and Trends® in Information Retrieval 3, 4 (2009), 333–389.
  • Rubin et al. (2022)Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022.Learning To Retrieve Prompts for In-Context Learning. In NAACL-HLT. Association for Computational Linguistics, 2655–2671.
  • Sarto et al. (2022)Sara Sarto, Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. 2022.Retrieval-augmented transformer for image captioning. In Proceedings of the 19th international conference on content-based multimedia indexing. 1–7.
  • Schick et al. (2024)Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2024.Toolformer: Language models can teach themselves to use tools.Advances in Neural Information Processing Systems 36 (2024).
  • Seo et al. (2019)Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur P Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019.Real-time open-domain question answering with dense-sparse phrase index.arXiv preprint arXiv:1906.05807 (2019).
  • Shao et al. (2023)Zhihong Shao, Yeyun Gong, Minlie Huang, Nan Duan, Weizhu Chen, et al. 2023.Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy. In The 2023 Conference on Empirical Methods in Natural Language Processing.
  • Shen et al. (2015)Fumin Shen, Wei Liu, Shaoting Zhang, Yang Yang, and Heng Tao Shen. 2015.Learning binary codes for maximum inner product search. In Proceedings of the IEEE International Conference on Computer Vision. 4148–4156.
  • Sheynin et al. (2023)Shelly Sheynin, Oron Ashual, Adam Polyak, Uriel Singer, Oran Gafni, Eliya Nachmani, and Yaniv Taigman. 2023.kNN-Diffusion: Image Generation via Large-Scale Retrieval. In ICLR. OpenReview.net.
  • Shi et al. (2022)Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2022.XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for Cross-lingual Text-to-SQL Semantic Parsing. In EMNLP (Findings). Association for Computational Linguistics, 5248–5259.
  • Shi et al. (2023)Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023.Replug: Retrieval-augmented black-box language models.arXiv preprint arXiv:2301.12652 (2023).
  • Shtar (2021)Guy Shtar. 2021.Multimodal machine learning for drug knowledge discovery. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining. 1115–1116.
  • Sia and Duh (2023)Suzanna Sia and Kevin Duh. 2023.In-context learning as maintaining coherency: A study of on-the-fly machine translation using large language models.arXiv preprint arXiv:2305.03573 (2023).
  • Singh et al. (2021)Devendra Singh, Siva Reddy, Will Hamilton, Chris Dyer, and Dani Yogatama. 2021.End-to-end training of multi-document reader and retriever for open-domain question answering.Advances in Neural Information Processing Systems 34 (2021), 25968–25981.
  • Singhal et al. (2001)Amit Singhal et al. 2001.Modern information retrieval: A brief overview.IEEE Data Eng. Bull. 24, 4 (2001), 35–43.
  • Siriwardhana et al. (2023)Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Tharindu Kaluarachchi, Rajib Rana, and Suranga Nanayakkara. 2023.Improving the domain adaptation of retrieval augmented generation (RAG) models for open domain question answering.Transactions of the Association for Computational Linguistics 11 (2023), 1–17.
  • Song et al. (2023)Mingyang Song, Yi Feng, and Liping Jing. 2023.Hisum: Hyperbolic interaction model for extractive multi-document summarization. In Proceedings of the ACM Web Conference 2023. 1427–1436.
  • Sparck Jones (1972)Karen Sparck Jones. 1972.A statistical interpretation of term specificity and its application in retrieval.Journal of documentation 28, 1 (1972), 11–21.
  • Sun et al. (2023)Fang Sun, Zhihao Zhan, Hongyu Guo, Ming Zhang, and Jian Tang. 2023.Graphvf: Controllable protein-specific 3d molecule generation with variational flow.arXiv preprint arXiv:2304.12825 (2023).
  • Sun et al. (2024)Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. 2024.Spectr: Fast speculative decoding via optimal transport.Advances in Neural Information Processing Systems 36 (2024).
  • Tan et al. (2024)Jiejun Tan, Zhicheng Dou, Yutao Zhu, Peidong Guo, Kun Fang, and Ji-Rong Wen. 2024.Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs.arXiv preprint arXiv:2402.12052 (2024).
  • Thakur et al. (2023)Nandan Thakur, Luiz Bonifacio, Xinyu Zhang, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Boxing Chen, Mehdi Rezagholizadeh, et al. 2023.NoMIRACL: Knowing When You Don’t Know for Robust Multilingual Retrieval-Augmented Generation.arXiv preprint arXiv:2312.11361 (2023).
  • Touvron et al. (2023)Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023.Llama 2: Open foundation and fine-tuned chat models.arXiv preprint arXiv:2307.09288 (2023).
  • Trivedi et al. (2023)Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023.Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions. In The 61st Annual Meeting Of The Association For Computational Linguistics.
  • Tu et al. (2022)Lifu Tu, Caiming Xiong, and Yingbo Zhou. 2022.Prompt-Tuning Can Be Much Better Than Fine-Tuning on Cross-lingual Understanding With Multilingual Language Models. In EMNLP (Findings). Association for Computational Linguistics, 5478–5485.
  • Vu et al. (2022)Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou’, and Daniel Cer. 2022.SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. In ACL (1). Association for Computational Linguistics, 5039–5059.
  • Wang et al. (2023d)Ante Wang, Linfeng Song, Qi Liu, Haitao Mi, Longyue Wang, Zhaopeng Tu, Jinsong Su, and Dong Yu. 2023d.Search-engine-augmented dialogue response generation with cheaply supervised query production.Artificial Intelligence 319 (2023), 103874.
  • Wang et al. (2023c)Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, et al. 2023c.Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 7763–7786.
  • Wang et al. (2024a)Hanbing Wang, Xiaorui Liu, Wenqi Fan, Xiangyu Zhao, Venkataramana Kini, Devendra Yadav, Fei Wang, Zhen Wen, Jiliang Tang, and Hui Liu. 2024a.Rethinking Large Language Model Architectures for Sequential Recommendations.arXiv preprint arXiv:2402.09543 (2024).
  • Wang et al. (2024c)Haoyu Wang, Tuo Zhao, and Jing Gao. 2024c.BlendFilter: Advancing Retrieval-Augmented Large Language Models via Query Generation Blending and Knowledge Filtering.arXiv preprint arXiv:2402.11129 (2024).
  • Wang et al. (2023f)Liang Wang, Nan Yang, and Furu Wei. 2023f.Query2doc: Query Expansion with Large Language Models. In EMNLP. Association for Computational Linguistics, 9414–9423.
  • Wang et al. (2024b)Liang Wang, Nan Yang, and Furu Wei. 2024b.Learning to Retrieve In-Context Examples for Large Language Models. In EACL (1). Association for Computational Linguistics, 1752–1767.
  • Wang et al. (2023g)Xintao Wang, Qianwen Yang, Yongting Qiu, Jiaqing Liang, Qianyu He, Zhouhong Gu, Yanghua Xiao, and Wei Wang. 2023g.Knowledgpt: Enhancing large language models with retrieval and storage access on knowledge bases.arXiv preprint arXiv:2308.11761 (2023).
  • Wang et al. (2023a)Yile Wang, Peng Li, Maosong Sun, and Yang Liu. 2023a.Self-Knowledge Guided Retrieval Augmentation for Large Language Models. In The 2023 Conference on Empirical Methods in Natural Language Processing.
  • Wang et al. (2023b)Zichao Wang, Weili Nie, Zhuoran Qiao, Chaowei Xiao, Richard G. Baraniuk, and Anima Anandkumar. 2023b.Retrieval-based Controllable Molecule Generation. In ICLR. OpenReview.net.
  • Wang et al. (2023e)Zifeng Wang, Zichen Wang, Balasubramaniam Srinivasan, Vassilis N Ioannidis, Huzefa Rangwala, and Rishita Anubhai. 2023e.BioBridge: Bridging Biomedical Foundation Models via Knowledge Graph.arXiv preprint arXiv:2310.03320 (2023).
  • Wei et al. (2022)Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022.Chain-of-thought prompting elicits reasoning in large language models.Advances in neural information processing systems 35 (2022), 24824–24837.
  • Wu et al. (2024)Junda Wu, Cheng-Chun Chang, Tong Yu, Zhankui He, Jianing Wang, Yupeng Hou, and Julian McAuley. 2024.CoRAL: Collaborative Retrieval-Augmented Large Language Models Improve Long-tail Recommendation.arXiv preprint arXiv:2403.06447 (2024).
  • Wu et al. (2020) Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable Zero-shot Entity Linking with Dense Entity Retrieval. In EMNLP (1). Association for Computational Linguistics, 6397–6407.
  • Wu et al. (2022) Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing Transformers. In ICLR. OpenReview.net.
  • Xiong et al. (2023) Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2023. Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs. arXiv preprint arXiv:2306.13063 (2023).
  • Xu et al. (2023c) Benfeng Xu, Chunxu Zhao, Wenbin Jiang, PengFei Zhu, Songtai Dai, Chao Pang, Zhuo Sun, Shuohuan Wang, and Yu Sun. 2023c. Retrieval-augmented domain adaptation of language models. In Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023). 54–64.
  • Xu et al. (2023b) Fangyuan Xu, Weijia Shi, and Eunsol Choi. 2023b. RECOMP: Improving retrieval-augmented LMs with context compression and selective augmentation. In The Twelfth International Conference on Learning Representations.
  • Xu et al. (2019) Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis. In NAACL-HLT (1). Association for Computational Linguistics, 2324–2335.
  • Xu et al. (2020) Jitao Xu, Josep-Maria Crego, and Jean Senellart. 2020. Boosting neural machine translation with similar translations. In Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 1570–1579.
  • Xu et al. (2023a) Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng, and Tat-seng Chua. 2023a. Search-in-the-chain: Towards the accurate, credible and traceable content generation for complex knowledge-intensive tasks. arXiv preprint arXiv:2304.14732 (2023).
  • Yang et al. (2023b) Haoyan Yang, Zhitao Li, Yong Zhang, Jianzong Wang, Ning Cheng, Ming Li, and Jing Xiao. 2023b. PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering via Pluggable Reward-Driven Contextual Adapter. In EMNLP. Association for Computational Linguistics, 5364–5375.
  • Yang et al. (2023a) Ling Yang, Zhilin Huang, Xiangxin Zhou, Minkai Xu, Wentao Zhang, Yu Wang, Xiawu Zheng, Wenming Yang, Ron O Dror, Shenda Hong, et al. 2023a. Prompt-based 3d molecular diffusion models for structure-based drug design. (2023).
  • Yao et al. (2023) Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. In ICLR. OpenReview.net.
  • Ye et al. (2023b) Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, and Lingpeng Kong. 2023b. Compositional exemplars for in-context learning. In International Conference on Machine Learning. PMLR, 39818–39833.
  • Ye et al. (2023a) Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, and Yongbin Li. 2023a. Large Language Models are Versatile Decomposers: Decomposing Evidence and Questions for Table-based Reasoning. In SIGIR. ACM, 174–184.
  • Yepes et al. (2024) Antonio Jimeno Yepes, Yao You, Jan Milczek, Sebastian Laverde, and Leah Li. 2024. Financial Report Chunking for Effective Retrieval Augmented Generation. arXiv preprint arXiv:2402.05131 (2024).
  • Yin et al. (2016) Dawei Yin, Yuening Hu, Jiliang Tang, Tim Daly, Mianwei Zhou, Hua Ouyang, Jianhui Chen, Changsung Kang, Hongbo Deng, Chikashi Nobata, et al. 2016. Ranking relevance in Yahoo search. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 323–332.
  • Yogatama et al. (2021) Dani Yogatama, Cyprien de Masson d’Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics 9 (2021), 362–373.
  • Yoran et al. (2023) Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan Berant. 2023. Making Retrieval-Augmented Language Models Robust to Irrelevant Context. In The Twelfth International Conference on Learning Representations.
  • Yu et al. (2023a) Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023a. Generate rather than Retrieve: Large Language Models are Strong Context Generators. In ICLR. OpenReview.net.
  • Yu et al. (2023c) Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, and Ashish Sabharwal. 2023c. Improving language models via plug-and-play retrieval feedback. arXiv preprint arXiv:2305.14002 (2023).
  • Yu et al. (2023b) Zichun Yu, Chenyan Xiong, Shi Yu, and Zhiyuan Liu. 2023b. Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2421–2436.
  • Zan et al. (2022) Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, and Jian-Guang Lou. 2022. When Language Model Meets Private Library. In EMNLP (Findings). Association for Computational Linguistics, 277–288.
  • Zeng et al. (2024) Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, et al. 2024. The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG). arXiv preprint arXiv:2402.16893 (2024).
  • Zhang et al. (2023c) Boyu Zhang, Hongyang Yang, Tianyu Zhou, Muhammad Ali Babar, and Xiao-Yang Liu. 2023c. Enhancing financial sentiment analysis via retrieval augmented large language models. In Proceedings of the Fourth ACM International Conference on AI in Finance. 349–356.
  • Zhang et al. (2020) Houyu Zhang, Zhenghao Liu, Chenyan Xiong, and Zhiyuan Liu. 2020. Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs. In ACL. Association for Computational Linguistics, 2031–2043.
  • Zhang et al. (2024) Jiahao Zhang, Rui Xue, Wenqi Fan, Xin Xu, Qing Li, Jian Pei, and Xiaorui Liu. 2024. Linear-Time Graph Neural Networks for Scalable Recommendations. arXiv preprint arXiv:2402.13973 (2024).
  • Zhang et al. (2023a) Mingyuan Zhang, Xinying Guo, Liang Pan, Zhongang Cai, Fangzhou Hong, Huirong Li, Lei Yang, and Ziwei Liu. 2023a. ReMoDiffuse: Retrieval-augmented motion diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 364–373.
  • Zhang et al. (2023b) Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, and Lu Wang. 2023b. Merging generated and retrieved knowledge for open-domain QA. arXiv preprint arXiv:2310.14393 (2023).
  • Zhang et al. (2023d) Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023d. Automatic Chain of Thought Prompting in Large Language Models. In ICLR. OpenReview.net.
  • Zhao et al. (2024b) Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, and Bin Cui. 2024b. Retrieval-Augmented Generation for AI-Generated Content: A Survey. arXiv preprint arXiv:2402.19473 (2024).
  • Zhao et al. (2023a) Ruochen Zhao, Hailin Chen, Weishi Wang, Fangkai Jiao, Xuan Long Do, Chengwei Qin, Bosheng Ding, Xiaobao Guo, Minzhi Li, Xingxuan Li, et al. 2023a. Retrieving multimodal information for augmented generation: A survey. arXiv preprint arXiv:2303.10868 (2023).
  • Zhao et al. (2023b) Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023b. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023).
  • Zhao et al. (2024a) Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, et al. 2024a. Recommender systems in the era of large language models (LLMs). IEEE Transactions on Knowledge and Data Engineering (2024).
  • Zhong et al. (2022) Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training Language Models with Memory Augmentation. In 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022.
  • Zhou et al. (2022) Shuyan Zhou, Uri Alon, Frank F Xu, Zhengbao Jiang, and Graham Neubig. 2022. DocPrompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations.
  • Zhu et al. (2023) Yin Zhu, Zhiling Luo, and Gong Cheng. 2023. Furthest Reasoning with Plan Assessment: Stable Reasoning Path with Retrieval-Augmented Large Language Models. arXiv preprint arXiv:2309.12767 (2023).
  • Zhu et al. (2024) Yinghao Zhu, Changyu Ren, Shiyun Xie, Shukai Liu, Hangyuan Ji, Zixiang Wang, Tao Sun, Long He, Zhoujun Li, Xi Zhu, et al. 2024. REALM: RAG-Driven Enhancement of Multimodal Electronic Health Records Analysis via Large Language Models. arXiv preprint arXiv:2402.07016 (2024).
  • Zou et al. (2024) Wei Zou, Runpeng Geng, Binghui Wang, and Jinyuan Jia. 2024. PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models. arXiv preprint arXiv:2402.07867 (2024).