Changelog
Keep track of the latest changes on the Hugging Face Hub.

Hugging Face Docs now expose llms.txt and llms-full.txt files, making them easier for AI agents to read and process. Every page also includes buttons to view its Markdown source or to chat about its content with various chatbots, including HuggingChat. When AI agents such as Cursor or Claude Code fetch a Hugging Face Docs page, a Markdown version of the page is served automatically, saving tokens and improving efficiency.
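As a rough illustration of how an agent or script might consume these formats, here is a minimal Python sketch. The exact paths under the docs root and the Accept-header negotiation are assumptions made for illustration; the changelog only states that Markdown versions are served automatically to agents.

```python
# Minimal sketch of consuming the agent-friendly doc formats.
# Assumptions (not confirmed above): the index files follow the llms.txt
# convention under a docs root, and a page's Markdown variant can be
# requested via an Accept header.
import requests

DOCS_ROOT = "https://huggingface.co/docs/hub"  # hypothetical docs root

# Compact index of the documentation, meant for LLM/agent consumption.
index = requests.get(f"{DOCS_ROOT}/llms.txt", timeout=30)
print(index.text[:500])

# Full concatenated docs, useful for filling a large context window.
full = requests.get(f"{DOCS_ROOT}/llms-full.txt", timeout=30)
print(f"llms-full.txt: {len(full.text):,} characters")

# Ask for a single page as Markdown instead of HTML (content negotiation
# is an assumption; every page also exposes a "view Markdown" button).
page = requests.get(
    f"{DOCS_ROOT}/repositories",
    headers={"Accept": "text/markdown"},
    timeout=30,
)
print(page.headers.get("content-type"), len(page.text))
```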
Repository owners can now set the default sorting for their repository’s Discussions and Pull Requests (Community Tab) from the repository settings page. Choose from “Trending,” “Most Reactions,” or “Recently Created” to determine how discussions and pull requests are ordered by default when visiting the Community Tab.
Users and organizations can view their usage of Inference Providers from their settings. Go to your Inference Providers Settings to view your usage for the past month, broken down per model and per provider.
The same view is available for organizations subscribed to a paid plan, under the organization's settings.
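For context, the usage shown in that breakdown comes from calls routed through Inference Providers, for example via huggingface_hub's InferenceClient. A minimal sketch, assuming a recent huggingface_hub release with provider support; the provider and model names are placeholders, not recommendations:

```python
from huggingface_hub import InferenceClient

# Calls routed through an Inference Provider are what appear in the
# per-model / per-provider usage view described above.
# "together" and the model ID below are illustrative choices.
client = InferenceClient(provider="together", api_key="hf_xxx")

response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=32,
)
print(response.choices[0].message.content)
```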
Authors can now tag an Organization when submitting a paper. Each Organization has a dedicated Papers page that automatically lists its tagged publications. See examples: https://huggingface.co/nvidia/papers and https://huggingface.co/google/papers.
This makes it easier for teams to showcase their research and for readers to discover work by lab, company, or community.