Abstract
SR-Scientist, an autonomous AI framework, leverages LLMs to generate, implement, and optimize scientific equations, outperforming baselines across multiple disciplines.
Recently, Large Language Models (LLMs) have been applied to scientific equation discovery, leveraging their embedded scientific knowledge for hypothesis generation. However, current methods typically confine LLMs to the role of an equation proposer within search algorithms like genetic programming. In this paper, we present SR-Scientist, a framework that elevates the LLM from a simple equation proposer to an autonomous AI scientist that writes code to analyze data, implements the equation as code, submits it for evaluation, and optimizes the equation based on experimental feedback. Specifically, we wrap the code interpreter into a set of tools for data analysis and equation evaluation. The agent is instructed to optimize the equation by utilizing these tools over a long horizon with minimal human-defined pipelines. Empirical results show that SR-Scientist outperforms baseline methods by an absolute margin of 6% to 35% on datasets covering four science disciplines. Additionally, we demonstrate our method's robustness to noise, the generalization of the discovered equations to out-of-domain data, and their symbolic accuracy. Furthermore, we develop an end-to-end reinforcement learning framework to enhance the agent's capabilities.
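The abstract describes an agent that implements candidate equations as code, submits them to an evaluation tool, and refines based on the returned score. A minimal sketch of that evaluate-and-refine loop is below; the function names, the MSE-based scoring, and the fixed candidate list are all illustrative assumptions (in the actual framework, the LLM agent itself would generate and revise these candidates), not the paper's API:

```python
import numpy as np

def evaluate_equation(equation, X, y):
    """Equation-evaluation 'tool' (hypothetical): score a candidate by MSE."""
    pred = equation(X)
    return float(np.mean((pred - y) ** 2))

# Toy dataset whose ground-truth law is y = 3x^2 + 1
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=100)
y = 3 * X**2 + 1

# Stand-ins for equations an agent might implement as code
candidates = {
    "linear": lambda x: 2 * x,
    "quadratic": lambda x: 3 * x**2 + 1,
    "cubic": lambda x: x**3,
}

# Feedback loop: evaluate each hypothesis, keep the best-scoring one.
# The returned scores are the "experimental feedback" the agent would
# condition on when proposing its next refinement.
best_name, best_mse = None, float("inf")
for name, eq in candidates.items():
    mse = evaluate_equation(eq, X, y)
    if mse < best_mse:
        best_name, best_mse = name, mse

print(best_name, best_mse)  # the quadratic candidate matches exactly
```

The key design point mirrored here is that the evaluator is a tool the agent calls, so the search loop itself is driven by the model rather than by a hand-built genetic-programming pipeline.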
Community
SR-Scientist: Scientific Equation Discovery With Agentic AI
The following papers were recommended by the Semantic Scholar API
- THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning (2025)
- On LLM-Based Scientific Inductive Reasoning Beyond Equations (2025)
- Mimicking the Physicist's Eye: A VLM-centric Approach for Physics Formula Discovery (2025)
- Data-Efficient Symbolic Regression via Foundation Model Distillation (2025)
- Knowledge Integration for Physics-informed Symbolic Regression Using Pre-trained Large Language Models (2025)
- AgenTracer: Who Is Inducing Failure in the LLM Agentic Systems? (2025)
- The Need for Verification in AI-Driven Scientific Discovery (2025)
Models citing this paper: 1
Datasets citing this paper: 1