akariasai committed 5188d88 (verified) · Parent: 7b1d342

Update README.md

Files changed (1): README.md (+31 -1)
README.md (after this commit):

---

# Model Card for Llama-3.1_OpenScholar-8B

Llama-3.1_OpenScholar-8B is an 8B-parameter language model fine-tuned for scientific literature synthesis.
It is trained on the [os-data](https://huggingface.co/datasets/OpenScholar/os-data) dataset.

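As a quick-start sketch (not part of the original card), the model can be loaded with the Hugging Face `transformers` API. The repo id below is an assumption based on this card's model name; adjust it to the actual repository path.

```python
# Minimal usage sketch. Assumptions: the repo id matches this model card's
# name, and you have enough GPU memory for an 8B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenScholar/Llama-3.1_OpenScholar-8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # spread layers across available devices (needs `accelerate`)
)

prompt = "Summarize recent approaches to retrieval-augmented scientific QA."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
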
### Model Description

- **Developed by:** University of Washington, Allen Institute for AI (AI2)
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Date cutoff:** Training data is based on peS2o v2, which includes papers up to January 2023. We also mix in training data from Tulu3 and [SciRIFF](https://huggingface.co/datasets/allenai/SciRIFF-train-mix).

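As a small illustration (not part of the original card), the fine-tuning data linked above can be inspected with the Hugging Face `datasets` library; the `train` split name is an assumption, so check the dataset card if it differs.

```python
# Sketch: peek at the os-data fine-tuning dataset referenced above.
# The repo id comes from the dataset link; the "train" split name is assumed.
from datasets import load_dataset

ds = load_dataset("OpenScholar/os-data", split="train")
print(ds)      # row count and column names
print(ds[0])   # one raw training example
```
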
### Model Sources

- **Project Page:** https://open-scholar.allen.ai/
- **Repositories:**
  - Core repo (training, inference, fine-tuning, etc.): https://github.com/AkariAsai/OpenScholar
  - Evaluation code: https://github.com/AkariAsai/ScholarQABench
- **Paper:** [Link]()
- **Technical blog post:** https://allenai.org/blog/openscholar
<!-- - **Press release:** TODO -->

## License

Llama-3.1_OpenScholar-8B is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It is licensed under Apache 2.0.