sequelbox committed
Commit ad2ef76
Parent: b1f285c


Files changed (4):
  1. README.md +18 -5
  2. config.json +1 -1
  3. generation_config.json +1 -1
  4. tokenizer.json +1 -6
README.md CHANGED
@@ -15,10 +15,21 @@ tags:
 - llama-3-instruct
 - llama-3-instruct-8b
 - 8b
+- science
+- physics
+- biology
+- chemistry
+- compsci
+- computer-science
+- engineering
+- technical
 - conversational
 - chat
 - instruct
 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
+datasets:
+- sequelbox/Celestia
+- sequelbox/Supernova
 model_type: llama
 license: llama3.1
 ---
@@ -29,14 +40,16 @@ license: llama3.1
 
 Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge and enthusiasm.
 - Finetuned on [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) for best available general performance
-- Trained on our data, focused on science, engineering, technical knowledge, and structured reasoning
+- Trained on a variety of high quality data; focused on science, engineering, technical knowledge, and structured reasoning
 
 
 ## Version
 
-This is the **2024-08-06** release of Shining Valiant 2 for Llama 3.1 8b.
+This is the **2024-09-16** release of Shining Valiant 2 for Llama 3.1 8b.
 
-Our newest dataset improves specialist knowledge and response consistency.
+We've improved and open-sourced our new baseline [science-instruct dataset](https://huggingface.co/datasets/sequelbox/Celestia). This release features improvements in physics, chemistry, biology, and computer science.
+
+Future upgrades will continue to expand Shining Valiant's technical knowledge base.
 
 Help us and recommend Shining Valiant 2 to your friends!
 
@@ -73,9 +86,9 @@ print(outputs[0]["generated_text"][-1])
 ## The Model
 Shining Valiant 2 is built on top of Llama 3.1 8b Instruct.
 
-The current version of Shining Valiant 2 is trained mostly on our private Shining Valiant data, supplemented by [LDJnr/Pure-Dove](https://huggingface.co/datasets/LDJnr/Pure-Dove) for response flexibility.
+The current version of Shining Valiant 2 is trained on technical knowledge using [sequelbox/Celestia](https://huggingface.co/datasets/sequelbox/Celestia) and general chat capability using [sequelbox/Supernova](https://huggingface.co/datasets/sequelbox/Supernova).
 
-Our private data adds specialist knowledge and Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.
+Our private data adds specialist knowledge and Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical. (As a general note: we're hoping to replace and open-source this part of Shining Valiant's dataset with synthetic data soon!)
 
 
 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)
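For reference, the updated "The Model" section sits next to the card's usage snippet (the last hunk header above ends with `print(outputs[0]["generated_text"][-1])`). Below is a minimal sketch of that style of usage with the transformers pipeline; the repo ID and the system prompt are placeholder assumptions, not taken from this commit, and the sampling values mirror generation_config.json.

```python
# A sketch of chatting with this checkpoint via the transformers pipeline, in the
# style of the card's usage snippet referenced by the hunk header above.
# ASSUMPTIONS: the repo ID and system prompt are placeholders; substitute the
# actual Hugging Face model ID for this release.
import torch
from transformers import pipeline

model_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"  # placeholder repo ID

pipe = pipeline(
    "text-generation",
    model=model_id,
    # config.json stores the weights as float32; bfloat16 here only reduces load-time memory
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a friendly, knowledgeable science assistant."},
    {"role": "user", "content": "Explain the difference between covalent and ionic bonds."},
]

# temperature / top_p mirror the defaults in generation_config.json (0.6 / 0.9)
outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9)
print(outputs[0]["generated_text"][-1])
```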
config.json CHANGED
@@ -33,7 +33,7 @@
   "rope_theta": 500000.0,
   "tie_word_embeddings": false,
   "torch_dtype": "float32",
-  "transformers_version": "4.43.4",
+  "transformers_version": "4.44.2",
   "use_cache": true,
   "vocab_size": 128256
 }
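The only change here is the transformers_version metadata field, which records the library version used to save the checkpoint (4.43.4 to 4.44.2). A minimal sketch of reading the touched and untouched fields straight from the hub copy of config.json, assuming a placeholder repo ID:

```python
# Read config.json directly from the hub and inspect the fields shown in the diff.
# The repo ID is a placeholder assumption, not taken from this commit.
import json
from huggingface_hub import hf_hub_download

repo_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"  # placeholder repo ID

config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
with open(config_path) as f:
    config = json.load(f)

print(config["transformers_version"])                # "4.44.2" after this commit
print(config["torch_dtype"], config["vocab_size"])   # "float32", 128256 (unchanged context lines)
```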
generation_config.json CHANGED
@@ -8,5 +8,5 @@
   ],
   "temperature": 0.6,
   "top_p": 0.9,
-  "transformers_version": "4.43.4"
+  "transformers_version": "4.44.2"
 }
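Again only transformers_version moves; the sampling defaults (temperature 0.6, top_p 0.9) that model.generate() applies when no overrides are passed are untouched. A minimal sketch of loading them with GenerationConfig, assuming the same placeholder repo ID:

```python
# Load the generation defaults shown in the context lines above.
# The repo ID is a placeholder assumption, not taken from this commit.
from transformers import GenerationConfig

repo_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"  # placeholder repo ID

gen_cfg = GenerationConfig.from_pretrained(repo_id)
print(gen_cfg.temperature, gen_cfg.top_p)  # 0.6 0.9, matching the unchanged context lines
```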
tokenizer.json CHANGED
@@ -1,11 +1,6 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 6900,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
+  "truncation": null,
   "padding": null,
   "added_tokens": [
     {
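This change drops the fixed truncation rule (Right, max_length 6900, LongestFirst) so the fast tokenizer no longer truncates inputs by default. A minimal sketch of the equivalent operation with the tokenizers library, assuming a local copy of the pre-commit tokenizer.json:

```python
# Clear the fixed truncation rule from a saved fast tokenizer, matching the
# "truncation": null value in the new file. Assumes tokenizer.json is the
# pre-commit file in the current directory.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")
tok.no_truncation()          # same effect as the diff: remove the truncation block
print(tok.truncation)        # None once truncation has been cleared
tok.save("tokenizer.json")   # rewrites the file without a truncation section
```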