Abstract
Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar - a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar not only attains accuracy comparable to that of the most competitive baselines, even in the challenging sequential editing setup, but also excels in speed, yielding speed-ups of 4-10x depending on the base LLM, and in flexibility, since the proposed architecture is simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting and input context length generalization with Larimar and show their effectiveness.
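For readers wondering what a "one-shot update" looks like in practice, below is a minimal, self-contained sketch of the kind of episodic memory read/write the abstract refers to, assuming a linear (Kanerva-style) associative memory with closed-form pseudo-inverse updates. All names, dimensions, and the specific update rule are illustrative assumptions, not the paper's exact method; the point is only that a write is a single linear solve rather than re-training or fine-tuning.

```python
import torch

K, D = 512, 768                      # memory slots, latent dimension (illustrative)
M = torch.randn(K, D) / D ** 0.5     # memory matrix; updated in closed form, no gradients

def write(M: torch.Tensor, Z: torch.Tensor) -> torch.Tensor:
    """One-shot write: find addressing weights W with W @ M ~ Z,
    then solve for the updated memory in closed form."""
    W = Z @ torch.linalg.pinv(M)     # (n, K) least-squares addressing weights
    return torch.linalg.pinv(W) @ Z  # (K, D) updated memory

def read(M: torch.Tensor, z_query: torch.Tensor) -> torch.Tensor:
    """Read: address the memory with the query encoding, reconstruct the latent."""
    W = z_query @ torch.linalg.pinv(M)
    return W @ M                     # retrieved latent, to be decoded by the LLM side

# Usage: encode an edited fact into a latent z_fact (encoder not shown),
# write it once, and read it back at query time.
z_fact = torch.randn(1, D)
M = write(M, z_fact)
z_out = read(M, z_fact)
```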
Community
Will you be publishing the code for this paper?
Actually, I did not fully understand the methods, since I have no background in episodic memory and related theories, but I can see why this structure is designed the way it is, with hippocampus-neocortex interaction inspiring your model. I have some questions about the motivation. Is the concept of episodic memory necessary to build the hippocampus-neocortex interaction? If so, why is episodic memory important? Would a standard memory structure such as RAG not be appropriate for implementing it?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- CAMELoT: Towards Large Language Models with Training-Free Consolidated Associative Memory (2024)
- MEMORYLLM: Towards Self-Updatable Large Language Models (2024)
- MemoryPrompt: A Light Wrapper to Improve Context Tracking in Pre-trained Language Models (2024)
- Online Adaptation of Language Models with a Memory of Amortized Contexts (2024)
- Long-Context Language Modeling with Parallel Context Encoding (2024)
Has anyone found a PyTorch implementation out there?