From Code Foundation Models to Agents and Applications: A Practical Guide to Code Intelligence
Abstract
A comprehensive guide to code LLMs, covering their lifecycle from data curation to deployment, including techniques, trade-offs, and research-practice gaps.
Large language models (LLMs) have fundamentally transformed automated software development by enabling direct translation of natural language descriptions into functional code, driving commercial adoption through tools like GitHub Copilot (Microsoft), Cursor (Anysphere), Trae (ByteDance), and Claude Code (Anthropic). The field has evolved dramatically from rule-based systems to Transformer-based architectures, with success rates on benchmarks like HumanEval rising from single digits to over 95%. In this work, we provide a comprehensive synthesis and practical guide to code LLMs, supported by a series of analytic and probing experiments, systematically examining the complete model life cycle from data curation to post-training through advanced prompting paradigms, code pre-training, supervised fine-tuning, reinforcement learning, and autonomous coding agents. We analyze the code capabilities of general-purpose LLMs (GPT-4, Claude, LLaMA) and code-specialized LLMs (StarCoder, Code LLaMA, DeepSeek-Coder, and QwenCoder), critically examining their techniques, design decisions, and trade-offs. Further, we articulate the research-practice gap between academic research (e.g., benchmarks and tasks) and real-world deployment (e.g., software-related code tasks), including code correctness, security, contextual awareness of large codebases, and integration with development workflows, and map promising research directions to practical needs. Finally, we conduct a series of experiments to provide a comprehensive analysis of code pre-training, supervised fine-tuning, and reinforcement learning, covering scaling laws, framework selection, hyperparameter sensitivity, model architectures, and dataset comparisons.
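The HumanEval success rates mentioned above are typically reported as pass@k. A minimal sketch of the standard unbiased pass@k estimator (from the original HumanEval/Codex evaluation methodology), where n samples are drawn per task and c of them pass the tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples (drawn without replacement from n generations, c correct)
    passes the unit tests."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-subset
        # must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 generations of which 1 is correct, pass@1 is 0.5; averaging this estimator over all benchmark tasks gives the reported score.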
Community
📜 Paper Overview
This is an authoritative survey on code large language models. Jointly authored by researchers from dozens of leading institutions worldwide, it systematically outlines the complete technical pathway for code intelligence, from foundational models to agent applications.
⏳ Evolutionary Context
The paper divides programming evolution into six stages: from manual coding and tool-assisted development to framework-driven and AI-assisted (current stage), with prospects for AI-driven and AI-autonomous programming in the future.
🧬 Core Framework
Foundation Models: Compares general-purpose LLMs with specialized code models, detailing the full lifecycle training from data preprocessing to reinforcement learning.
Evaluation System: Covers diverse task benchmarks ranging from code completion to repository-level development.
Capability Optimization: Focuses on reinforcement learning with verifiable rewards and multimodal code generation.
Engineering Agents: Explores autonomous agent systems operating across the software development lifecycle.
Safety & Governance: Proposes a full-chain risk framework covering data auditing to runtime monitoring.
Practical Guidelines: Provides specific configurations and optimization strategies for model training.
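The "reinforcement learning with verifiable rewards" idea above can be sketched concretely: instead of a learned reward model, the reward comes from executing the candidate code against unit tests. A minimal illustrative sketch (not the paper's implementation; real pipelines sandbox execution and add timeouts):

```python
def verifiable_reward(candidate_src: str, test_src: str) -> float:
    """Binary verifiable reward for code generation: 1.0 if the
    candidate program passes all assert-based tests, else 0.0."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # define candidate function(s)
        exec(test_src, namespace)       # run the tests (asserts raise on failure)
        return 1.0
    except Exception:
        return 0.0
```

This binary signal is then fed to a policy-gradient method (e.g., PPO or GRPO) over sampled generations; because the reward is grounded in execution rather than a model's judgment, it resists reward hacking.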
🔭 Key Insights
A gap exists between current research benchmarks and industrial practice.
Long-context understanding and agent collaboration are critical for advancement.
The demand for code safety and compliance is increasingly prominent.
💡 Value Proposition
This work serves as both a systematic summary of field development and a practical guide bridging academic research with industrial implementation, offering a clear technical roadmap for the further advancement of code intelligence.
What's interesting
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- EffiReasonTrans: RL-Optimized Reasoning for Code Translation (2025)
- KAT-Coder Technical Report (2025)
- Teaching Language Models to Reason with Tools (2025)
- Increasing LLM Coding Capabilities through Diverse Synthetic Coding Tasks (2025)
- EARL: Entropy-Aware RL Alignment of LLMs for Reliable RTL Code Generation (2025)
- CodeRL+: Improving Code Generation via Reinforcement with Execution Semantics Alignment (2025)
- Lifecycle-Aware code generation: Leveraging Software Engineering Phases in LLMs (2025)
It's truly remarkable that someone managed to organize and bring together so many authors from various institutions.