a.k.a. IterRAG
> Any approach that answers multi-hop questions through multiple rounds of retrieval counts as Iterative; it is an umbrella term for a family of methods:
> The Iterative RAG section of https://arxiv.org/abs/2409.14924 lists more papers
* Iterative RAG: Methods and Practical Considerations
* https://medium.com/@mehrdad-/iterative-rag-explained-methods-and-practical-considerations-fbf194fae991 (2024/8/7)
* paper: Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions (2023/6)
* https://arxiv.org/abs/2212.10509
* paper: Retrieve, Summarize, Plan: Advancing Multi-hop Question Answering with an Iterative Approach (2024/6)
* https://arxiv.org/abs/2407.13101
* paper: Inference Scaling for Long-Context Retrieval Augmented Generation (2024/10)
* https://arxiv.org/abs/2410.04343v1
* Found via https://github.com/langchain-ai/ollama-deep-researcher
* Describes both the [[DRAG]] and IterRAG approaches; the appendix gives the exact prompts
* paper: Auto-RAG: Autonomous Retrieval-Augmented Generation for Large Language Models (2024/11)
* https://arxiv.org/abs/2411.19443
* https://x.com/omarsar0/status/1863600141103501454
* [[ReAct Prompting]] also counts
* [[Self-RAG]]
* [[Forward-looking active retrieval augmented generation (FLARE)]]
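The papers above vary in the details, but they share the same interleaved retrieve-then-reason loop. A minimal sketch of that shared loop, where `retrieve` and `llm` are hypothetical callables standing in for a retriever and an LLM call (not any specific paper's API):

```python
def iterative_rag(question, retrieve, llm, max_iters=4):
    """Answer a multi-hop question by interleaving retrieval and generation.

    `retrieve(query)` -> list of evidence passages (assumed interface).
    `llm(question, context)` -> dict with either an "answer" key (done)
    or a "next_query" key (needs another retrieval hop) -- an assumed
    contract, typically enforced via the prompt in the papers above.
    """
    context = []          # evidence accumulated across hops
    query = question      # first hop retrieves for the question itself
    answer = None
    for _ in range(max_iters):
        context.extend(retrieve(query))   # fetch evidence for the current sub-query
        step = llm(question, context)     # reason over everything gathered so far
        answer = step.get("answer")
        if answer is not None:            # the model decided it can answer
            return answer
        query = step["next_query"]        # otherwise issue a follow-up query
    return answer  # None if the iteration budget ran out before an answer
```

The per-paper differences live inside `llm`: whether the follow-up query comes from chain-of-thought (IRCoT), a summarize-and-plan step (ReSP), or the model's own autonomous decision (Auto-RAG).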
--
* Notes from a Deep Research run (2025/2/16)
* https://chatgpt.com/c/67b1f634-1b88-8008-9c1d-908f5d259aa1
* https://chatgpt.com/c/67b1fa7a-f250-8008-b770-1d8718b7f1a4
* [ ] Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy https://arxiv.org/abs/2305.15294
* [ ] PlanRAG https://arxiv.org/abs/2406.12430
* [ ] Retrieve, Summarize, Plan (ReSP) https://arxiv.org/abs/2407.13101
* [ ] i-MedRAG https://arxiv.org/abs/2408.00727
* [ ] Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions https://arxiv.org/abs/2212.10509