pszemraj committed
Commit daeb05f
1 parent(s): adf1478

Update README.md

Files changed (1):
  1. README.md +5 -2
README.md CHANGED

@@ -7,6 +7,9 @@ metrics:
 model-index:
 - name: griffin-1024-llama3t-8layer-simple_wikipedia_LM-vN
   results: []
+license: apache-2.0
+datasets:
+- pszemraj/simple_wikipedia_LM
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -14,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # griffin-1024-llama3t-8layer-simple_wikipedia_LM-vN
 
-This model is a fine-tuned version of [griffin-1024-llama3t-8layer](https://huggingface.co/griffin-1024-llama3t-8layer) on the pszemraj/simple_wikipedia_LM dataset.
+pretraining experiment on the pszemraj/simple_wikipedia_LM dataset.
 It achieves the following results on the evaluation set:
 - Loss: 4.3584
 - Accuracy: 0.3789
@@ -66,4 +69,4 @@ The following hyperparameters were used during training:
 - Transformers 4.40.1
 - Pytorch 2.3.0+cu121
 - Datasets 2.19.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
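The commit above edits the YAML front matter between the `---` fences at the top of README.md, adding `license` and `datasets` keys. A minimal sketch of what the merged front matter looks like and how it could be extracted (naive string splitting for illustration only, not a YAML parser; the `readme` string below is reconstructed from the diff):

```python
# Post-commit README head, reconstructed from the diff above.
readme = """---
model-index:
- name: griffin-1024-llama3t-8layer-simple_wikipedia_LM-vN
  results: []
license: apache-2.0
datasets:
- pszemraj/simple_wikipedia_LM
---

# griffin-1024-llama3t-8layer-simple_wikipedia_LM-vN
"""

def front_matter(text: str) -> str:
    """Return the text between the first pair of '---' fence lines.

    Naive illustration: splits on the fence delimiter rather than
    parsing YAML, which is enough to locate the metadata block.
    """
    parts = text.split("---\n")
    return parts[1] if len(parts) >= 3 else ""

fm = front_matter(readme)
assert "license: apache-2.0" in fm          # key added by this commit
assert "- pszemraj/simple_wikipedia_LM" in fm  # dataset added by this commit
```

In practice the Hub reads these keys to display the license badge and link the dataset on the model page, which is why they live in the front matter rather than the card body.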