EviMem: Evidence-Gap-Driven Iterative Retrieval for Long-Term Conversational Memory
Abstract
EviMem combines IRIS, which detects evidence gaps through sufficiency evaluation, with LaceMem, a layered memory hierarchy, to improve conversational question answering accuracy while reducing latency.
Long-term conversational memory requires retrieving evidence scattered across multiple sessions, yet single-pass retrieval fails on temporal and multi-hop questions. Existing iterative methods refine queries via generated content or document-level signals, but none explicitly diagnoses the evidence gap, namely what is missing from the accumulated retrieval set, leaving query refinement untargeted. We present EviMem, combining IRIS (Iterative Retrieval via Insufficiency Signals), a closed-loop framework that detects evidence gaps through sufficiency evaluation, diagnoses what is missing, and drives targeted query refinement, with LaceMem (Layered Architecture for Conversational Evidence Memory), a coarse-to-fine memory hierarchy supporting fine-grained gap diagnosis. On LoCoMo, EviMem improves Judge Accuracy over MIRIX on temporal (73.3% to 81.6%) and multi-hop (65.9% to 85.2%) questions at 4.5x lower latency. Code: https://github.com/AIGeeksGroup/EviMem.
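The closed-loop process the abstract describes (retrieve, evaluate sufficiency, diagnose the evidence gap, refine the query toward the gap) can be sketched as follows. This is a minimal illustrative sketch, not the authors' IRIS implementation: every function name (`iris_loop`, `retrieve`, `is_sufficient`, `diagnose_gap`, `refine_query`) and the toy corpus are hypothetical stand-ins.

```python
# Hypothetical sketch of an evidence-gap-driven retrieval loop.
# In the paper, sufficiency evaluation and gap diagnosis would be
# LLM-based; here they are toy stubs to show the control flow.

def iris_loop(question, retrieve, is_sufficient, diagnose_gap,
              refine_query, max_rounds=3):
    """Retrieve iteratively until the accumulated evidence is judged
    sufficient, refining the query toward the diagnosed gap."""
    evidence = []
    query = question
    for _ in range(max_rounds):
        evidence.extend(retrieve(query))
        if is_sufficient(question, evidence):
            break
        gap = diagnose_gap(question, evidence)  # what is still missing?
        query = refine_query(question, gap)     # target the gap directly
    return evidence

# Toy two-hop example: the answer needs facts from two "sessions".
question = "when did Alice adopt her dog and what breed is he"
corpus = {
    question: ["Alice adopted Rex in 2021."],
    "what breed is Rex": ["Rex is a beagle."],
}

retrieve = lambda q: corpus.get(q, [])
is_sufficient = lambda q, ev: len(ev) >= 2         # toy judge
diagnose_gap = lambda q, ev: "what breed is Rex"   # toy diagnosis
refine_query = lambda q, gap: gap

result = iris_loop(question, retrieve, is_sufficient,
                   diagnose_gap, refine_query)
# → ["Alice adopted Rex in 2021.", "Rex is a beagle."]
```

The key design point is that query refinement is conditioned on a diagnosis of what is missing from the accumulated evidence, rather than on generated answer content or document-level relevance alone.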
Community
The code has been open-sourced.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SmartSearch: How Ranking Beats Structure for Conversational Memory Retrieval (2026)
- MemORAI: Memory Organization and Retrieval via Adaptive Graph Intelligence for LLM Conversational Agents (2026)
- Chronos: Temporal-Aware Conversational Agents with Structured Event Retrieval for Long-Term Memory (2026)
- HiGMem: A Hierarchical and LLM-Guided Memory System for Long-Term Conversational Agents (2026)
- PAR$^2$-RAG: Planned Active Retrieval and Reasoning for Multi-Hop Question Answering (2026)
- MemFlow: Intent-Driven Memory Orchestration for Small Language Model Agents (2026)
- HyperMem: Hypergraph Memory for Long-Term Conversations (2026)