text "1,In order to thrive in hostile and ever-changing natural environments, mammalian brains evolved to store large amounts of knowledge about the world and continually integrate new information while avoiding catastrophic forgetting." "2,Despite the impressive accomplishments, large language models (LLMs), even with retrieval-augmented generation (RAG), still struggle to efficiently and effectively integrate a large amount of new experiences after pre-training." "3,In this work, we introduce HippoRAG, a novel retrieval framework inspired by the hippocampal indexing theory of human long-term memory to enable deeper and more efficient knowledge integration over new experiences." "4,HippoRAG synergistically orchestrates LLMs, knowledge graphs, and the Personalized PageRank algorithm to mimic the different roles of neocortex and hippocampus in human memory." "5,We compare HippoRAG with existing RAG methods on multi-hop question answering and show that our method outperforms the state-of-the-art methods remarkably, by up to 20%." "6,Single-step retrieval with HippoRAG achieves comparable or better performance than iterative retrieval like IRCoT while being 10-30 times cheaper and 6-13 times faster, and integrating HippoRAG into IRCoT brings further substantial gains." "7,Finally, we show that our method can tackle new types of scenarios that are out of reach of existing methods." "8,Millions of years of evolution have led mammalian brains to develop the crucial ability to store large amounts of world knowledge and continuously integrate new experiences without losing previous ones." "9,This exceptional long-term memory system eventually allows us humans to keep vast stores of continuously updating knowledge that forms the basis of our reasoning and decision making [15]." "10,Despite the progress of large language models (LLMs) in recent years, such a continuously updating long-term memory is still conspicuously absent from current AI systems." "11,Due in part to its ease of use and the limitations of other techniques such as model editing [35], retrieval-augmented generation (RAG) has become the de facto solution for long-term memory in LLMs, allowing users to present new knowledge to a static model [28, 33, 50]." "12,However, current RAG methods are still unable to help LLMs perform tasks that require integrating new knowledge across passage boundaries since each new passage is encoded in isolation." "13,Many important real-world tasks, such as scientific literature review, legal case briefing, and medical diagnosis, require knowledge integration across passages or documents." "14,Although less complex, standard multi-hop question answering (QA) also requires integrating information between passages in a retrieval corpus." "15,In order to solve such tasks, current RAG systems resort to using multiple retrieval and LLM generation steps iteratively to join disparate passages [49, 61]." "16,Nevertheless, even perfectly executed multi-step RAG is still oftentimes insufficient to accomplish many scenarios of knowledge integration, as we illustrate in what we call path-finding multi-hop questions in Figure 1." "17,In contrast, our brains are capable of solving challenging knowledge integration tasks like these with relative ease." "18,The hippocampal memory indexing theory [58], a well-established theory of human long-term memory, offers one plausible explanation for this remarkable ability." 
"19,Teyler and Discenna [58] propose that our powerful context-based, continually updating memory relies on interactions between the neocortex, which processes and stores actual memory representations, and the C-shaped hippocampus, which holds the hippocampal index, a set of interconnected indices which point to memory units on the neocortex and stores associations between them [15, 59]." "20,In this work, we propose HippoRAG, a RAG framework that serves as a long-term memory for LLMs by mimicking this model of human memory." "21,Our novel design first models the neocortex’s ability to process perceptual input by using an LLM to transform a corpus into a schemaless knowledge graph (KG) as our artificial hippocampal index." "22,Given a new query, HippoRAG identifies the key concepts in the query and runs the Personalized PageRank (PPR) algorithm [23] on the KG, using the query concepts as the seeds, to integrate information across passages for retrieval." "23,PPR enables HippoRAG to explore KG paths and identify relevant subgraphs, essentially performing multi-hop reasoning in a single retrieval step." "24,This capacity for single-step multi-hop retrieval yields strong performance improvements of around 3 and 20 points over current RAG methods [8, 27, 41, 53, 54] on two popular multi-hop QA benchmarks, MuSiQue [60] and 2WikiMultiHopQA [25]." "25,Additionally, HippoRAG’s online retrieval process is 10 to 30 times cheaper and 6 to 13 times faster than current iterative retrieval methods like IRCoT [61], while still achieving comparable performance." "26,Furthermore, our approach can be combined with IRCoT to provide complementary gains of up to 4% and 20% on the same datasets and even obtain improvements on HotpotQA, a less challenging multi-hop QA dataset." "27,Finally, we provide a case study illustrating the limitations of current methods as well as our method’s potential on the previously discussed path-finding multi-hop QA setting." "28,In this section, we first give a brief overview of the hippocampal memory indexing theory, followed by how HippoRAG’s indexing and retrieval design was inspired by this theory, and finally offer a more detailed account of our methodology." "29,The hippocampal memory indexing theory [58] is a well-established theory that provides a functional description of the components and circuitry involved in human long-term memory." "30,In this theory, Teyler and Discenna [58] propose that human long-term memory is composed of three components that work together to accomplish two main objectives: pattern separation, which ensures that the representations of distinct perceptual experiences are unique, and pattern completion, which enables the retrieval of complete memories from partial stimuli [15, 59]." "31,The theory suggests that pattern separation is primarily accomplished in the memory encoding process, which starts with the neocortex receiving and processing perceptual stimuli into more easily manipulatable, likely higher-level, features, which are then routed through the parahippocampal regions (PHR) to be indexed by the hippocampus." "32,When they reach the hippocampus, salient signals are included in the hippocampal index and associated with each other." "33,After the memory encoding process is completed, pattern completion drives the memory retrieval process whenever the hippocampus receives partial perceptual signals from the PHR pipeline." 
"34,The hippocampus then leverages its context-dependent memory system, thought to be implemented through a densely connected network of neurons in the CA3 sub-region [59], to identify complete and relevant memories within the hippocampal index and route them back through the PHR for simulation in the neocortex." "35,Thus, this complex process allows for new information to be integrated by changing only the hippocampal index instead of updating neocortical representations." "36,Our proposed approach, HippoRAG, is closely inspired by the process described above." "37,As shown in Figure 2, each component of our method corresponds to one of the three components of human long-term memory." "38,A detailed example of the HippoRAG process can be found in Appendix A." "39,Our offline indexing phase, analogous to memory encoding, starts by leveraging a strong instruction-tuned LLM, our artificial neocortex, to extract knowledge graph (KG) triples." "40,The KG is schemaless and this process is known as open information extraction (OpenIE) [3, 4, 45, 79]." "41,This process extracts salient signals from passages in a retrieval corpus as discrete noun phrases rather than dense vector representations, allowing for more fine-grained pattern separation." "42,It is therefore natural to define our artificial hippocampal index as this open KG, which is built on the whole retrieval corpus passage-by-passage." "43,Finally, to connect both components as is done by the parahippocampal regions, we use off-the-shelf dense encoders fine-tuned for retrieval (retrieval encoders)." "44,These retrieval encoders provide additional edges between similar but not identical noun phrases within this KG to aid in downstream pattern completion." "45,These same three components are then leveraged to perform online retrieval by mirroring the human brain’s memory retrieval process." "46,Just as the hippocampus receives input processed through the neocortex and PHR, our LLM-based neocortex extracts a set of salient named entities from a query which we call query named entities." "47,These named entities are then linked to nodes in our KG based on the similarity determined by retrieval encoders; we refer to these selected nodes as query nodes." "48,Once the query nodes are chosen, they become the partial cues from which our synthetic hippocampus performs pattern completion." "49,In the hippocampus, neural pathways between elements of the hippocampal index enable relevant neighborhoods to become activated and recalled upstream." "50,To imitate this efficient graph search process, we leverage the Personalized PageRank (PPR) algorithm [23], a version of PageRank that distributes probability across a graph only through a set of user-defined source nodes." "51,This constraint allows us to bias the PPR output only towards the set of query nodes, just as the hippocampus extracts associated signals from specific partial cues." "52,Finally, as is done when the hippocampal signal is sent upstream, we aggregate the output PPR node probability over the previously indexed passages and use that to rank them for retrieval." "53,Our indexing process involves processing a set of passages P using an instruction-tuned LLM L and a retrieval encoder M." "54,As seen in Figure 2 we first use L to extract a set of noun phrase nodes N and relation edges E from each passage in P via OpenIE." "55,This process is done via 1-shot prompting of the LLM with the prompts shown in Appendix I." 
"56,Specifically, we first extract a set of named entities from each passage." "57,We then add the named entities to the OpenIE prompt to extract the final triples, which also contain concepts (noun phrases) beyond named entities." "58,We find that this two-step prompt configuration leads to an appropriate balance between generality and bias towards named entities." "59,Finally, we use M to add the extra set of synonymy relations E′ discussed above when the cosine similarity between two entity representations in N is above a threshold τ." "60,As stated above, this introduces more edges to our hippocampal index and allows for more effective pattern completion." "61,This indexing process defines a |N | × |P | matrix P, which contains the number of times each noun phrase in the KG appears in each original passage." "62,During the retrieval process, we prompt L using a 1-shot prompt to extract a set of named entities from a query q, our previously defined query named entities Cq = {c1, ..., cn} (Stanford and Alzheimer’s in our Figure 2 example)." "63,These named entities Cq from the query are then encoded by the same retrieval encoder M." "64,Then, the previously defined query nodes are chosen as the set of nodes in N with the highest cosine similarity to the query named entities Cq." "65,More formally, query nodes are defined as Rq = {r1, ..., rn} such that ri = ek where k = argmaxj cosine_similarity(M(ci),M(ej)), represented as the Stanford logo and the Alzheimer’s purple ribbon symbol in Figure 2" "66,After the query nodes Rq are found, we run the PPR algorithm over the hippocampal index, i.e., a KG with |N | nodes and |E|+ |E′| edges (triple-based and synonymy-based), using a personalized probability distribution #»n defined over N, in which each query node has equal probability and all other nodes have a probability of zero." "67,This allows probability mass to be distributed to nodes that are primarily in the (joint) neighborhood of the query nodes, such as Professor Thomas, and contribute to eventual retrieval." "68,After running the PPR algorithm, we obtain an updated probability distribution #»n′ over N." "69,Finally, in order to obtain passage scores, we multiply #»n′ with the previously defined P matrix to obtain #»p, a ranking score for each passage, which we use for retrieval." "70,We introduce node specificity as a neurobiologically plausible way to further improve retrieval." "71,It is well known that global signals for word importance, like inverse document frequency (IDF), can improve information retrieval." "72,However, in order for our brain to leverage IDF for retrieval, the number of total “passages” encoded would need to be aggregated with all node activations before memory retrieval is complete." "73,While simple for normal computers, this process would require activating connections between an aggregator neuron and all nodes in the hippocampal index every time retrieval occurs, likely introducing prohibitive computational overhead." "74,Given these constraints, we propose node specificity as an alternative IDF signal which requires only local signals and is thus more neurobiologically plausible." "75,We define the node specificity of node i as si = |Pi|−1, where Pi is the set of passages in P from which node i was extracted, information that is already available at each node." 
"76,Node specificity is used in retrieval by multiplying each query node probability #»n with si before PPR; this allows us to modulate each of their neighborhood’s probability as well as their own." "77,We illustrate node specificity in Figure 2 through relative symbol size: the Stanford logo grows larger than the Alzheimer’s symbol since it appears in fewer documents." "78,We evaluate our method’s retrieval capabilities primarily on two challenging multi-hop QA bench-marks, MuSiQue (answerable) [60] and 2WikiMultiHopQA [25]." "79,For completeness, we also include the HotpotQA [70] dataset even though it has been found to be a much weaker test for multi-hop reasoning due to many spurious signals [60], as we also show in Appendix B." "80,To limit the experimental cost, we extract 1,000 questions from each validation set as done in previous work [48, 61]." "81,In order to create a more realistic retrieval setting, we follow IRCoT [61] and collect all candidate passages (including supporting and distractor passages) from our selected questions and form a retrieval corpus for each dataset." "82,The details of these datasets are shown in Table 1." "83,We compare against several strong and widely used retrieval methods: BM25 [52], Contriever [27], GTR [41] and ColBERTv2 [53]." "84,Additionally, we compare against two recent LLM-augmented baselines: Propositionizer [8], which rewrites passages into propositions, and RAPTOR [54], which constructs summary nodes to ease retrieval from long documents." "85,In addition to the single-step retrieval methods above, we also include the multi-step retrieval method IRCoT [61] as a baseline." "86,We report retrieval and QA performance on the datasets above using recall@2 and recall@5 (R@2 and R@5 below) for retrieval and exact match (EM) and F1 scores for QA performance." "87,By default, we use GPT-3.5-turbo-1106 [42] with temperature of 0 as our LLM L and Contriever [27] or ColBERTv2 [53] as our retriever M." "88,We use 100 examples from MuSiQue’s training data to tune HippoRAG’s two hyperparameters: the synonymy threshold τ at 0.8 and the PPR damping factor at 0.5, which determines the probability that PPR will restart a random walk from the query nodes instead of continuing to explore the graph." "89,Generally, we find that HippoRAG’s performance is rather robust to its hyperparameters." "90,More implementation details can be found in Appendix H." "91,We present our retrieval and QA experimental results below." "92,Given that our method indirectly affects QA performance, we report QA results on our best-performing retrieval backbone ColBERTv2 [53]." "93,However, we report retrieval results for several strong single-step and multi-step retrieval techniques." "94,As seen in Table 2, HippoRAG outperforms all other methods, including recent LLM-augmented baselines such as Propositionizer and RAPTOR, on our main datasets, MuSiQue and 2WikiMultiHopQA, while achieving competitive performance on HotpotQA." "95,We notice an impressive improvement of 11 and 20% for R@2 and R@5 on 2WikiMultiHopQA and around 3% on MuSiQue." "96,This difference can be partially explained by 2WikiMultiHopQA’s entity-centric design, which is particularly well-suited for HippoRAG." "97,Our lower performance on HotpotQA is mainly due to its lower knowledge integration requirements, as explained in Appendix B, as well as a due to a concept-context tradeoff which we alleviate with an ensembling technique described in Appendix F.2." 
"98,For multi-step or iterative retrieval, our experiments in Table 3 demonstrate that IRCoT [61] and HippoRAG are complementary." "99,Using HippoRAG as the retriever for IRCoT continues to bring R@5 improvements of around 4% for MuSiQue, 18% for 2WikiMultiHopQA and an additional 1% on HotpotQA." "100,We report QA results for HippoRAG, the strongest retrieval baselines, ColBERTv2 and IRCoT, as well as IRCoT using HippoRAG as a retriever in Table 4." "101,As expected, improved retrieval performance in both single and multi-step settings leads to strong overall im-provements of up to 3%, 17% and 1% F1 scores on MuSiQue, 2WikiMultiHopQA and HotpotQA respectively using the same QA reader." "102,Notably, single-step HippoRAG is on par or outperforms IRCoT while being 10-30 times cheaper and 6-13 times faster during online retrieval (Appendix G)." "103,To determine if using GPT-3.5 is essential for building our KG, we replace it with an end-to-end OpenIE model REBEL [26] and an instruction-tuned LLM Llama-3 [1]." "104,As shown in Table 5 row 2, building our KG using REBEL results in large performance drops, underscoring the importance of LLM flexibility." "105,Specifically, GPT-3.5 produces twice as many triples as REBEL, indicating its bias against producing triples with general concepts and leaving many useful associations behind." "106,In terms of open-source LLMs, Table 5 (rows 3-4) shows that the performance of Llama-3 8B is comparable to GPT-3.5, although its 70B counterpart performs worse." "107,This surprising behavior is due to this model’s production of ill-formatted outputs that result in the loss of around 20% of the passages, compared to about 4% for the 8B model and less than 1% for GPT-3.5." "108,The strong performance of Llama-3 8B is encouraging because that offers a cheaper alternative for indexing over large corpora." "109,The statistics for these OpenIE alternatives can be found in Appendix C." "110,As shown in Table 5 (rows 5-6), to examine how much of our results are due to the strength of PPR, we replace the PPR output with the query node probability #»n multiplied by node specificity values (row 5) and a version of this that also distributes a small amount of probability to the direct neighbors of each query node (row 6)." "111,First, we find that PPR is a much more effective method for including associations for retrieval on all three datasets compared to both simple baselines." "112,It is interesting to note that adding the neighborhood of Rq nodes without PPR leads to worse performance than only using the query nodes themselves." "113,As seen in Table 5 (rows 7-8), node specificity obtains considerable improvements on MuSiQue and HotpotQA and yields almost no change in 2WikiMultiHopQA." "114,This is likely because 2WikiMultiHopQA relies on named entities with little differences in terms of term weighting." "115,In contrast, synonymy edges have the largest effect on 2WikiMultiHopQA, suggesting that noisy entity standardization is useful when most relevant concepts are named entities, and improvements to synonymy detection could lead to stronger performance in other datasets." "116,A major advantage of HippoRAG over conventional RAG methods in multi-hop QA is its ability to perform multi-hop retrieval in a single step." "117,We demonstrate this by measuring the percentage of queries where all the supporting passages are retrieved successfully, a feat that can only be accomplished through successful multi-hop reasoning." 
"118,Table 8 in Appendix D shows that the gap between our method and ColBERTv2, using the top-5 passages, increases even more from 3% to 6% on MuSiQue and from 20% to 38% on 2WikiMultiHopQA, suggesting that large improvements come from obtaining all supporting documents rather than achieving partially retrieval on more questions." "119,We further illustrate HippoRAG’s unique single-step multi-hop retrieval ability through the first example in Table 6." "120,In this example, even though Alhandra was not mentioned in Vila de Xira’s passage, HippoRAG can directly leverage Vila de Xira’s connection to Alhandra as his place of birth to determine its importance, something that standard RAG methods would be unable to do directly." "121,Additionally, even though IRCoT can also solve this multi-hop retrieval problem, as shown in Appendix G, it is 10-30 times more expensive and 6-13 times slower than ours in terms of online retrieval, arguably the most important factor when it comes to serving end users." "122,The second example in Table 6, also present in Figure 1, shows a type of questions that is trivial for informed humans but out of reach for current retrievers without further training." "123,This type of questions, which we call path-finding multi-hop questions, requires identifying one path between a set of entities when many paths exist to explore instead of following a specific path, as in standard multi-hop questions." "124,More specifically, a simple iterative process can retrieve the appropriate passages for the first question by following the one path set by Alhandra’s one place of birth, as seen by IRCoT’s perfect performance." "125,However, an iterative process would struggle to answer the second question given the many possible paths to explore—either through professors at Stanford University or professors working on the neuroscience of Alzheimer’s." "126,It is only by associating disparate information about Thomas Südhof that someone who knows about this professor would be able to answer this question easily." "127,As seen in Table 6, both ColBERTv2 and IRCoT fail to extract the necessary passages since they cannot access these associations." "128,On the other hand, HippoRAG leverages its web of associations in its hippocampal index and graph search algorithm to determine that Professor Thomas is relevant to this query and retrieves his passages appropriately." "129,More examples of these path-finding multi-hop questions can be found in our case study in Appendix E." "130,It is well-accepted, even among skeptical researchers, that the parameters of modern LLMs encode a remarkable amount of world knowledge [2, 10, 17, 21, 24, 31, 47, 62], which can be leveraged by an LLM in flexible and robust ways [64, 65, 74]." "131,Nevertheless, our ability to update this vast knowledge store, an essential part of any long-term memory system, is still surprisingly limited." "132,Although many techniques to update LLMs exist, such as standard fine-tuning, model unlearning and model editing [12, 37, 38, 39, 40, 76], it is clear that no methodology has emerged as a robust solution for continual learning in LLMs [20, 35, 78]." "133,On the other hand, using RAG methods as a long-term memory system offers a simple way to update knowledge over time [28, 33, 50, 56]." 
"134,More sophisticated RAG methods, which perform multiple steps of retrieval and generation from an LLM, are even able to integrate information across new or updated knowledge elements[30, 49, 55, 61, 69, 71, 73], another crucial aspect of long-term memory systems." "135,As discussed above, however, this type of online information integration is unable to solve the more complex knowledge integration tasks that we illustrate with our path-finding multi-hop QA examples." "136,Some other methods, such as RAPTOR [54], MemWalker [7] and GraphRAG [14], integrate infor-mation during the offline indexing phase similarly to HippoRAG and might be able to handle these more complex tasks." "137,However, these methods integrate information by summarizing knowledge elements, which means that the summarization process must be repeated any time new data is added." "138,In contrast, HippoRAG can continuously integrate new knowledge by simply adding edges to its KG." "139,Context lengths for both open and closed source LLMs have increased dramatically in the past year [9, 13, 16, 46, 51]." "140,This scaling trend seems to indicate that future LLMs could perform long-term memory storage within massive context windows." "141,However, the viability of this future remains largely uncertain given the many engineering hurdles involved and the apparent limitations of long-context LLMs, even within current context lengths [32, 34, 77]." "142,Combining the strengths of language models and knowledge graphs has been an active research direction for many years, both for augmenting LLMs with a KG in different ways [36, 63, 66] or augmenting KGs by either distilling knowledge from an LLM’s parametric knowledge [5, 67] or using them to parse text directly [6, 22, 75]." "143,In an exceptionally comprehensive survey, Pan et al. [43] present a roadmap for this research direction and highlight the importance of work which synergizes these two important technologies [29, 57, 72, 80]." "144,Like these works, HippoRAG is a strong and principled example of the synergy we must strike between these two technologies, combining the power of LLMs for knowledge graph construction with the strengths of structured knowledge and graph search for improved augmentation of an LLM’s capacities." "145,Our proposed neurobiologically principled methodology, although simple, already shows promise for overcoming the inherent limitations of standard RAG systems while retaining their advantages over parametric memory." "146,HippoRAG’s knowledge integration capabilities, demonstrated by its strong results on path-following multi-hop QA and promise on path-finding multi-hop QA, as well as its dramatic efficiency improvements and continuously updating nature, makes it a powerful middle-ground framework between standard RAG methods and parametric memory and offers a compelling solution for long-term memory in LLMs." "147,Nevertheless, several limitations can be addressed in future work to enable HippoRAG to achieve this goal better." "148,First, we note that all components of HippoRAG are currently used off-the-shelf without any extra training." "149,There is therefore much room to improve our method’s practical viability by performing specific component fine-tuning." "150,This is evident in the error analysis discussed in Appendix F that shows most errors made by our system are due to NER and OpenIE, which could benefit from direct fine-tuning." 
"151,Given that the rest of the errors are graph search errors, also in Appendix F, we note that several avenues for improvements over simple PPR exist, such as allowing relations to guide graph traversal directly." "152,Finally, and perhaps most importantly, HippoRAG’s scalability still calls for further validation." "153,Although we show that Llama-3 could obtain similar performance to closed-source models and thus reduce costs considerably, we are yet to empirically prove the efficiency and efficacy of our synthetic hippocampal index as its size grows way beyond current benchmarks." "154,The authors would like to thank colleagues from the OSU NLP group and Percy Liang for their thoughtful comments." "155,This research was supported in part by NSF OAC 2112606, NIH R01LM014199, ARL W911NF2220144, and Cisco." "156,The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. government." "157,The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein."