TY - GEN
T1 - Micro-Act
T2 - 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
AU - Huo, Nan
AU - Li, Jinyang
AU - Qin, Bowen
AU - Qu, Ge
AU - Li, Xiaolong
AU - Li, Xiaodong
AU - Ma, Chenhao
AU - Cheng, Reynold
N1 - Publisher Copyright:
© 2025 Association for Computational Linguistics.
PY - 2025
Y1 - 2025
N2 - Retrieval-Augmented Generation (RAG) systems commonly suffer from Knowledge Conflicts, where retrieved external knowledge contradicts the inherent, parametric knowledge of large language models (LLMs). This adversely affects performance on downstream tasks such as question answering (QA). Existing approaches often attempt to mitigate conflicts by directly comparing the two knowledge sources side by side, but this can overwhelm LLMs with extraneous or lengthy contexts, ultimately hindering their ability to identify and mitigate inconsistencies. To address this issue, we propose MICRO-ACT, a framework with a hierarchical action space that automatically perceives context complexity and adaptively decomposes each knowledge source into a sequence of fine-grained comparisons. These comparisons are represented as actionable steps, enabling reasoning beyond the superficial context. Through extensive experiments on five benchmark datasets, MICRO-ACT consistently achieves significant increases in QA accuracy over state-of-the-art baselines across all five datasets and three conflict types, especially the temporal and semantic types, where all baselines fail significantly. Moreover, MICRO-ACT simultaneously exhibits robust performance on non-conflict questions, highlighting its practical value in real-world RAG applications. Code can be found at https://github.com/Nan-Huo/Micro-Act.
AB - Retrieval-Augmented Generation (RAG) systems commonly suffer from Knowledge Conflicts, where retrieved external knowledge contradicts the inherent, parametric knowledge of large language models (LLMs). This adversely affects performance on downstream tasks such as question answering (QA). Existing approaches often attempt to mitigate conflicts by directly comparing the two knowledge sources side by side, but this can overwhelm LLMs with extraneous or lengthy contexts, ultimately hindering their ability to identify and mitigate inconsistencies. To address this issue, we propose MICRO-ACT, a framework with a hierarchical action space that automatically perceives context complexity and adaptively decomposes each knowledge source into a sequence of fine-grained comparisons. These comparisons are represented as actionable steps, enabling reasoning beyond the superficial context. Through extensive experiments on five benchmark datasets, MICRO-ACT consistently achieves significant increases in QA accuracy over state-of-the-art baselines across all five datasets and three conflict types, especially the temporal and semantic types, where all baselines fail significantly. Moreover, MICRO-ACT simultaneously exhibits robust performance on non-conflict questions, highlighting its practical value in real-world RAG applications. Code can be found at https://github.com/Nan-Huo/Micro-Act.
UR - https://www.scopus.com/pages/publications/105021059338
U2 - 10.18653/v1/2025.acl-long.909
DO - 10.18653/v1/2025.acl-long.909
M3 - Conference contribution
AN - SCOPUS:105021059338
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 18550
EP - 18574
BT - Long Papers
A2 - Che, Wanxiang
A2 - Nabende, Joyce
A2 - Shutova, Ekaterina
A2 - Pilehvar, Mohammad Taher
PB - Association for Computational Linguistics (ACL)
Y2 - 27 July 2025 through 1 August 2025
ER -