AdaptLLM committed
Commit 171178d
1 Parent(s): 5b29cee

Update README.md

Files changed (1): README.md (+13 -3)
@@ -187,7 +187,7 @@ The evaluation results are stored in `./eval_results`, and the model prediction
  ## Citation
  If you find our work helpful, please cite us.
 
- AdaMLLM
+ [AdaMLLM](https://huggingface.co/papers/2411.19930)
  ```bibtex
  @article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
@@ -197,10 +197,20 @@ AdaMLLM
  }
  ```
 
- [AdaptLLM](https://huggingface.co/papers/2309.09530) (ICLR 2024)
+ [Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
+ ```bibtex
+ @article{cheng2024instruction,
+ title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
+ author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
+ journal={arXiv preprint arXiv:2406.14491},
+ year={2024}
+ }
+ ```
+
+ [Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
  ```bibtex
  @inproceedings{
- adaptllm,
+ cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},