We explore **continued pre-training on domain-specific corpora** for large language models.
### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗

**************************** **Updates** ****************************

* 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets.
* 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024! 🎉
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**.

## Domain-Specific LLaMA-2-Chat

Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension data fits this format perfectly** once each reading-comprehension text is transformed into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
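For reference, here is a minimal sketch of querying one of these chat models with that format; the model id, system prompt, user input, and generation settings are illustrative placeholders, so check the model cards for the recommended usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model id; the biomedicine and law chat models are used the same way
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat")

user_input = "What does the ticker symbol MMM refer to on the New York Stock Exchange?"
system_prompt = "You are a helpful assistant."  # placeholder; substitute your own

# Single-turn LLaMA-2-Chat prompt template from the blog post linked above
prompt = f"<s>[INST] <<SYS>>{system_prompt}<</SYS>>\n\n{user_input} [/INST]"

# <s> is already written into the prompt string, so skip the tokenizer's special tokens
input_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids
output_ids = model.generate(input_ids, max_new_tokens=256)[0]

# Decode only the newly generated tokens, not the prompt
answer = tokenizer.decode(output_ids[input_ids.shape[-1]:], skip_special_tokens=True)
print(answer)
```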
## Domain-Specific Tasks

### Pre-templatized/Formatted Testing Splits

To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions for the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).

**Note:** these filled-in instructions are tailored for models before alignment and do NOT fit the data format required for chat models.
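If you only need a single task file from one of these repositories, one option is to download it directly from the hub; the `filename` below is a guess at the repo layout, so browse the dataset's file listing for the actual paths:

```python
from huggingface_hub import hf_hub_download

# Hypothetical file path inside the repo; check the dataset's "Files" tab for real names
path = hf_hub_download(
    repo_id="AdaptLLM/medicine-tasks",
    repo_type="dataset",
    filename="ChemProt/test.json",
)
print(path)  # local path to the downloaded file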
### Raw Datasets

We have also uploaded the raw training and testing splits to facilitate fine-tuning and other uses (a loading sketch follows the list):

- [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt)
- [RCT](https://huggingface.co/datasets/AdaptLLM/RCT)
- [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA)
- [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA)
- [Headline](https://huggingface.co/datasets/AdaptLLM/Headline)
- [NER](https://huggingface.co/datasets/AdaptLLM/NER)
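These repositories should be loadable with the `datasets` library; the snippet below is a minimal sketch assuming the usual train/test split names, so check each dataset card for the exact configuration:

```python
from datasets import load_dataset

# Load one of the raw-split repositories by its hub id (ChemProt as an example)
dataset = load_dataset("AdaptLLM/ChemProt")

# Inspect the available splits (e.g. train/test), then peek at one raw example
print(dataset)
print(dataset["train"][0])
```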
The other datasets used in our paper are already available on Hugging Face, so you can load them directly with the following code:

```python
from datasets import load_dataset

# MQP
dataset = load_dataset('medical_questions_pairs')

# PubMedQA
dataset = load_dataset('bigbio/pubmed_qa')

# SCOTUS
dataset = load_dataset('lex_glue', 'scotus')

# CaseHOLD
dataset = load_dataset('lex_glue', 'case_hold')

# UNFAIR-ToS
dataset = load_dataset('lex_glue', 'unfair_tos')
```
## Citation

If you find our work helpful, please cite us:

```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
and the original dataset:

```bibtex
@article{ChemProt,
author = {Jens Kringelum and