AdaptLLM committed
Commit 6b093e3 • 1 Parent(s): 0617d79

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -35,9 +35,10 @@ This repo contains the **evaluation datasets** for our **ICLR 2024** paper [Adap
 
 We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
 
-### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
+### 🤗 [2024/6/21] We release the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain), effective for both general pre-training from scratch and domain-adaptive continual pre-training!!! 🤗
 
 **************************** **Updates** ****************************
+* 2024/6/21: 👍🏻 Released the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain) 👍🏻
 * 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets
 * 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024! 🎉
 * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.