Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG

Authors: Bowen Jin, Jinsung Yoon, Jiawei Han, Sercan Arik

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | However, our empirical findings demonstrate that for many long-context LLMs, the quality of generated output initially improves but then declines as the number of retrieved passages increases. This paper presents comprehensive analyses of long-context LLMs in RAG systems. Contrary to the suggestions of previous work (Xu et al., 2023; Li et al., 2024), our research reveals that increasing the number of retrieved passages does not consistently improve performance with long-context LLMs (Section 3.1). Instead, we observe that generation performance initially increases and then declines: simply providing more retrieved passages does not guarantee better outcomes. To assess the effectiveness of retrieval reordering, we conduct experiments with two retrievers (e5 and BM25), two long-context LLMs (Gemma-2-9B-Chat and Mistral-Nemo-12B-Instruct), and two datasets (NQ and PopQA).
Researcher Affiliation | Collaboration | 1University of Illinois Urbana-Champaign, 2Google Cloud. EMAIL, EMAIL
Pseudocode | Yes | The pseudo-code and intuition for retrieval reordering can be found in Appendices E and Q. Appendix E: Retrieval Reordering (Algorithm 2: Retrieval Reordering Algorithm).
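The retrieval-reordering intuition referenced above (Appendix E, Algorithm 2) can be sketched as follows. This is a hedged reconstruction of a "lost in the middle"-style placement, not the paper's exact Algorithm 2; the function name and the alternating placement order are assumptions:

```python
def reorder_passages(passages):
    """Hypothetical sketch of retrieval reordering.

    `passages` is assumed to be sorted by retrieval relevance, most
    relevant first. The sketch pushes the least relevant passages toward
    the middle of the prompt, so the strongest evidence sits at the two
    ends of the context, where long-context LLMs tend to attend best.
    """
    front, back = [], []
    for i, passage in enumerate(passages):
        # Alternate placement: even ranks go to the front half,
        # odd ranks to the back half (which is later reversed).
        if i % 2 == 0:
            front.append(passage)
        else:
            back.append(passage)
    return front + back[::-1]


# With relevance ranks 1 (best) .. 5 (worst), the weakest passages
# end up in the middle of the context.
print(reorder_passages([1, 2, 3, 4, 5]))  # [1, 3, 5, 4, 2]
```

The returned order would then be concatenated into the prompt in place of the raw retrieval ranking.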
Open Source Code | No | The paper mentions using external tools such as the axolotl codebase, the HuggingFace inference pipeline, and the vLLM codebase for experiments, but it does not contain an explicit statement about releasing the source code for the methodology described in the paper, nor a link to a code repository.
Open Datasets | Yes | We evaluate the performance of RAG systems on the Natural Questions (NQ) (Kwiatkowski et al., 2019) dataset. We utilize the same training data mixture as in Section 5.1 and augment it with reasoning labels generated by Gemini-1.5-Pro for each question-passage pair. These labels provide explicit guidance on identifying relevant passages. We use e5 as the retriever and Wiki-18 as the corpus. F.1 TRAINING DATASETS: Natural Questions (short-form), Wizard of Wikipedia (long-form), FEVER (true/false), and MMLU (closed-set). F.2 TESTING DATASETS: (1) Question answering: TriviaQA, PopQA, WebQuestions; (2) Multi-hop tasks: HotpotQA, 2WikiMultiHopQA, Bamboogle; (3) Long-form tasks: ASQA; (4) Slot filling: T-REx, Zero-shot RE. Following Karpukhin et al. (2020), we use the text chunks from the 2018 Wikipedia dump as the retrieval corpus. PubMedQA and BioASQ (Xiong et al., 2024) utilize PubMed as the retrieval corpus.
Dataset Splits | No | The paper lists the number of instances for the datasets used for training (e.g., NQ 12,500, WoW 12,500) and testing (e.g., TriviaQA 11,313, PopQA 14,267). However, it neither explicitly provides train/validation/test splits for individual datasets nor refers to predefined standard splits that would allow the data partitioning within a single dataset to be reproduced; instead, it uses entirely different datasets for training and testing.
Hardware Specification | Yes | We fine-tune both Gemma-2-9B-Base and Mistral-Nemo-12B-Base using 8 H100 GPUs. For Gemini-1.0-Pro tuning, we use the Google Cloud Tuning API with the default settings.
Software Dependencies | No | The paper mentions using the axolotl codebase, the HuggingFace inference pipeline, and the vLLM codebase for tuning and inference, and the Google Cloud Tuning API for Gemini-1.0-Pro, but it does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | The hyperparameters can be found in Table 4 (Implicit RAG fine-tuning hyperparameters): Gemma-2-9B-Base: peak lr 1e-6, cosine lr scheduler, 5% warmup, 4 epochs, batch size 64, Flash Attention off; Mistral-Nemo-12B-Base: peak lr 1e-6, cosine lr scheduler, 5% warmup, 4 epochs, batch size 64, Flash Attention on; Gemini-1.0-Pro: default settings, 1 epoch. For all compared LLMs, we conduct top-p sampling (p = 1), and the maximum number of generated tokens is set to 32.
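The decoding settings reported in this row can be captured as a small configuration sketch. The keyword names follow the HuggingFace `generate` API, which is an assumption on my part: the paper specifies only the sampling scheme (top-p with p = 1) and the 32-token cap, not the exact inference interface:

```python
# Hedged sketch: decoding settings matching the reported setup.
# Keyword names assume the HuggingFace transformers `generate` API;
# the source states only top-p sampling (p = 1) and a 32-token limit.
generation_kwargs = {
    "do_sample": True,     # sampling rather than greedy decoding
    "top_p": 1.0,          # top-p with p = 1, i.e. the full distribution
    "max_new_tokens": 32,  # maximum number of generated tokens
}

print(generation_kwargs["max_new_tokens"])  # 32
```

With p = 1, top-p sampling degenerates to plain ancestral sampling over the whole vocabulary, so the setting mainly serves as an explicit record of the decoding configuration.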