Interactive Large Language Models for Reliable Answering under Incomplete Context

Authors: Jing-Cheng Pang, Heng-Bo Fan, Pengyuan Wang, Jia-Hao Xiao, Nan Tang, Si-Hang Yang, Chengxing Jia, Ming-Kun Xie, Xiang Chen, Sheng-Jun Huang, Yang Yu

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical studies, across various challenging question answering benchmarks where LLMs are posed queries with incomplete context, demonstrate the effectiveness of LaMSeI. The method improves answer accuracy from 31.9% to 50.9%, outperforming other leading question-answering frameworks. Moreover, in experiments involving human participants, LaMSeI consistently generates answers superior to or comparable to baselines in more than 82% of the cases.
Researcher Affiliation | Collaboration | 1 National Key Laboratory for Novel Software Technology, Nanjing University, China & School of Artificial Intelligence, Nanjing University, China; 2 College of Computer Science and Technology/Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, China; 3 Polixir.ai
Pseudocode | Yes | Algorithm 1: Practical Implementation of LaMSeI
    Require: user input X, active learning selection strategy S, active inquiry threshold δ, number of clarifying questions M
    Sample a set of answers to the user query: {A_i = M(X)}
    Calculate the variation of the answers, Var(A) (Eq. 1)
    if Var(A) < δ then  // low uncertainty
        Generate the answer directly: Y = M(X)
    else  // active inquiry
        Generate a set of clarifying questions Q
        Select questions from the set with the active learning strategy: Q' = S(Q)
        Inquire the user and collect the feedback U(Q')
        Generate the answer Y
    end if
    Return: answer Y to the user query
Open Source Code | No | The paper does not provide an explicit statement or a direct link to a code repository for the methodology described. It mentions using third-party tools such as the Bing search API and the sentence-transformers library, but not its own implementation code.
Open Datasets | Yes | We conduct experiments on five challenging Q&A datasets alongside a dataset dedicated to meeting summarization... (1) HotpotQA (Yang et al., 2018); (2) StrategyQA (Geva et al., 2021); (3) 2WikiMultihopQA (Ho et al., 2020); (4) MuSiQue (Trivedi et al., 2022); (5) IIRC (Ferguson et al., 2020); (6) QMSum (Zhong et al., 2021)... We further conduct experiments with the AmbigQA dataset (Min et al., 2020).
Dataset Splits | No | We evaluate various methods on the first 400 questions from the training set across five Q&A datasets... In the current research, we systematically conduct experiments using a range of well-established datasets to assess the performance of LaMSeI. These experiments involve creating scenarios where user queries are ambiguous by withholding supporting facts.
Hardware Specification | Yes | The experiments are conducted with 2 NVIDIA 3090 GPUs and an AMD EPYC 9654 96-core processor.
Software Dependencies | No | The paper mentions using the text-embedding-ada-002 model, ChatGPT (OpenAI, 2023), and the all-mpnet-base-v2 model from the sentence-transformers library. However, it does not provide specific version numbers for any programming languages, libraries (such as sentence-transformers), or frameworks used to implement the methodology.
Experiment Setup | Yes |
    Num. of clarifying questions M: 3
    δ: 0.005
    Temperature for uncertainty estimation: 0.5
    top_p: 1
    Presence penalty: 1
    Sample strategy: diversity
    Num. of demonstrations: 2