arXiv:2509.03405

LMEnt: A Suite for Analyzing Knowledge in Language Models from Pretraining Data to Representations

Published on Sep 3 · Submitted by dhgottesman on Sep 4

Abstract

Language models (LMs) increasingly drive real-world applications that require world knowledge. However, the internal processes through which models turn data into representations of knowledge and beliefs about the world are poorly understood. Insights into these processes could pave the way for developing LMs with knowledge representations that are more consistent, robust, and complete. To facilitate studying these questions, we present LMEnt, a suite for analyzing knowledge acquisition in LMs during pretraining. LMEnt introduces: (1) a knowledge-rich pretraining corpus, fully annotated with entity mentions, based on Wikipedia, (2) an entity-based retrieval method over pretraining data that outperforms previous approaches by as much as 80.4%, and (3) 12 pretrained models with up to 1B parameters and 4K intermediate checkpoints, with performance comparable to popular open-source models on knowledge benchmarks. Together, these resources provide a controlled environment for analyzing connections between entity mentions in pretraining and downstream performance, and the effects of causal interventions in pretraining data. We show the utility of LMEnt by studying knowledge acquisition across checkpoints, finding that fact frequency is key but does not fully explain learning trends. We release LMEnt to support studies of knowledge in LMs, including knowledge representations, plasticity, editing, attribution, and learning dynamics.

AI-generated summary

LMEnt is a suite for analyzing knowledge acquisition in language models during pretraining, providing annotated corpora, retrieval methods, and pretrained models to study knowledge representations and learning dynamics.

Community

Paper submitter • edited 1 day ago

LMEnt is an open-source suite for analyzing knowledge acquisition in language models during pretraining. It contains:

📄 A knowledge-rich pretraining corpus, fully annotated with entity mentions, based on Wikipedia.

🌟 An entity-based retrieval method over pretraining data that outperforms previous approaches by as much as 80.4%!

🤖 12 pretrained models with up to 1B parameters and 4K intermediate checkpoints, with performance comparable to popular open-source models on knowledge benchmarks (see the loading sketch below).

We release LMEnt to support studies of knowledge in LMs, including knowledge representations, plasticity, editing, attribution, and learning dynamics.
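
As a minimal sketch of how the released models and their intermediate checkpoints might be used for studying learning dynamics, the snippet below loads one model at a particular pretraining step with the Hugging Face transformers library. The repository id and revision tag are hypothetical placeholders, not names confirmed by the paper; consult the official release for the actual identifiers.

```python
# Minimal sketch: loading an LMEnt pretrained model at an intermediate
# checkpoint using the Hugging Face `transformers` library.
# NOTE: "lment/lment-1b" and "step-100000" are hypothetical placeholders;
# the real repository ids and checkpoint tags come from the official release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lment/lment-1b"   # hypothetical repository id
checkpoint = "step-100000"    # hypothetical intermediate-checkpoint revision

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=checkpoint)
model = AutoModelForCausalLM.from_pretrained(model_id, revision=checkpoint)

# Probe the checkpoint with a simple factual prompt to inspect what the
# model has learned at this point in pretraining.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Repeating this probe across checkpoints is one way to trace how a fact's accuracy evolves over pretraining, which is the kind of analysis the paper performs when relating fact frequency to learning trends.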

