---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---
# Model Card for Llama-3.1_OpenScholar-8B
<!-- Provide a quick summary of what the model is/does. -->
Llama-3.1_OpenScholar-8B is an 8B-parameter language model fine-tuned for scientific literature synthesis.
Llama-3.1_OpenScholar-8B is trained on the [os-data](https://huggingface.co/datasets/OpenScholar/os-data) dataset.
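Below is a minimal generation sketch using the `transformers` library. The model id is assumed from this card's title and may need adjusting; generation settings are illustrative, not the evaluated configuration.

```python
# Minimal usage sketch; model id assumed from this card, adjust if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenScholar/Llama-3.1_OpenScholar-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 so the 8B model fits on a single modern GPU
    device_map="auto",
)

prompt = "Summarize recent work on retrieval-augmented language models for science."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```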
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** University of Washington, Allen Institute for AI (AI2)
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Date cutoff:** Training data is based on peS2o v2, which includes papers up to January 2023. We also mix training data from Tulu3 and [SciRIFF](https://huggingface.co/datasets/allenai/SciRIFF-train-mix).
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://open-scholar.allen.ai/
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/AkariAsai/OpenScholar
- Evaluation code: https://github.com/AkariAsai/ScholarQABench
- **Paper:** [Link](https://openscholar.allen.ai/paper)
- **Technical blog post:** https://allenai.org/blog/openscholar
<!-- - **Press release:** TODO -->
## License
Llama-3.1_OpenScholar-8B is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It is licensed under Apache 2.0.
## Citation
If you find this model useful in your work, please cite it with:
```
@misc{asai2024openscholarsynthesizingscientificliterature,
  title={OpenScholar: Synthesizing Scientific Literature with Retrieval-augmented LMs},
  author={Akari Asai and Jacqueline He and Rulin Shao and Weijia Shi and Amanpreet Singh and Joseph Chee Chang and Kyle Lo and Luca Soldaini and Sergey Feldman and Mike D'arcy and David Wadden and Matt Latzke and Minyang Tian and Pan Ji and Shengyan Liu and Hao Tong and Bohao Wu and Yanyu Xiong and Luke Zettlemoyer and Graham Neubig and Dan Weld and Doug Downey and Wen-tau Yih and Pang Wei Koh and Hannaneh Hajishirzi},
  year={2024},
  eprint={2411.14199},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.14199},
}
```