Add links to paper, GitHub, and project page
Hi! I'm Niels from the Hugging Face community science team. I'm opening this PR to improve the dataset card for RPKB.
This PR:
- Adds links to the [associated paper](https://huggingface.co/papers/2603.04743), [official code](https://github.com/AMA-CMFAI/DARE), and [project page](https://ama-cmfai.github.io/DARE_webpage/).
- Updates the YAML metadata to include relevant task categories and tags.
- Refines the "How to Use" section with the official code snippet from the GitHub repository to ensure a smooth, plug-and-play experience for users.
These changes help make your artifact more discoverable and easier to use within the Hugging Face ecosystem.
README.md
CHANGED
---
language:
- en
license: apache-2.0
size_categories:
- n<10K
task_categories:
- text-retrieval
tags:
- r-language
- chromadb
- tool-retrieval
- data-science
- llm-agent
---

# R-Package Knowledge Base (RPKB)



[**Project Page**](https://ama-cmfai.github.io/DARE_webpage/) | [**Paper**](https://huggingface.co/papers/2603.04743) | [**GitHub**](https://github.com/AMA-CMFAI/DARE)

This database is the official pre-computed **ChromaDB vector database** for the paper: *[DARE: Aligning LLM Agents with the R Statistical Ecosystem via Distribution-Aware Retrieval](https://huggingface.co/papers/2603.04743)*.

It contains **8,191 high-quality R functions** meticulously curated from CRAN, complete with extracted statistical metadata (Data Profiles) and pre-computed embeddings generated by the **[DARE model](https://huggingface.co/Stephen-SMJ/DARE-R-Retriever)**.

- **Embedding Model:** `Stephen-SMJ/DARE-R-Retriever`
- **Primary Use Case:** Tool retrieval for LLM Agents executing data science and statistical workflows in R.

## 🚀 Quick Start (Zero-Configuration Inference)

You can easily download and load this database into your own agentic workflows using the `huggingface_hub` and `chromadb` libraries.

### 1. Installation
```bash
pip install huggingface_hub chromadb sentence-transformers torch
```

### 2. Run the DARE Retriever

The following script automatically downloads the DARE model and the RPKB database from Hugging Face and performs a distribution-aware search.

```python
from huggingface_hub import snapshot_download
from sentence_transformers import SentenceTransformer
import chromadb
import torch
import os

# 1. Load DARE Model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SentenceTransformer("Stephen-SMJ/DARE-R-Retriever", trust_remote_code=False)
model.to(device)

# 2. Download and Connect to RPKB Database
db_dir = "./rpkb_db"
if not os.path.exists(os.path.join(db_dir, "DARE_db")):
    print("Downloading RPKB Database from Hugging Face...")
    snapshot_download(repo_id="Stephen-SMJ/RPKB", repo_type="dataset", local_dir=db_dir, allow_patterns="DARE_db/*")

client = chromadb.PersistentClient(path=os.path.join(db_dir, "DARE_db"))
collection = client.get_collection(name="inference")

# 3. Perform Search
query = "I have a sparse matrix with high dimensionality. I need to perform PCA."
query_embedding = model.encode(query, convert_to_tensor=False).tolist()

results = collection.query(
    query_embeddings=[query_embedding],
    n_results=3,
    include=["documents", "metadatas"]
)

# Display Results
for rank, (doc_id, meta) in enumerate(zip(results['ids'][0], results['metadatas'][0])):
    print(f"[{rank + 1}] Package: {meta.get('package_name')} :: Function: {meta.get('function_name')}")
```

## 📖 Citation

If you find DARE, RPKB, or RCodingAgent useful in your research, please cite our work:

```bibtex
@article{sun2026dare,
  title={DARE: Aligning LLM Agents with the R Statistical Ecosystem via Distribution-Aware Retrieval},
  author={Maojun Sun and Yue Wu and Yifei Xie and Ruijian Han and Binyan Jiang and Defeng Sun and Yancheng Yuan and Jian Huang},
  year={2026},
  eprint={2603.04743},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2603.04743},
}
```