Inkbot version 0.2 upload.
- README.md +206 -1
- added_tokens.json +3 -0
- config.json +27 -0
- generation_config.json +6 -0
- pytorch_model-00001-of-00003.bin +3 -0
- pytorch_model-00002-of-00003.bin +3 -0
- pytorch_model-00003-of-00003.bin +3 -0
- pytorch_model.bin.index.json +370 -0
- special_tokens_map.json +6 -0
- tokenizer.model +3 -0
- tokenizer_config.json +54 -0
README.md
CHANGED
@@ -1,3 +1,208 @@
# Model Card for Inkbot

## Model Details
Inkbot is a conversational AI model designed to interpret and respond to structured prompts, with or without contextual information. Its distinguishing feature is the structured prompt system, which lets users engage in dynamic dialogues that evolve based on context, metadata, and user input.

### Performance
- The model excels at RAG-type queries, answering from context and overriding its memory when necessary.
- It can handle very large contexts, but may sometimes enter a repeating-text loop, especially during complex tasks.
- The model is intended to be functional rather than chatty, avoiding wasted tokens on superfluous language.

## How to Use
Inkbot uses a structured prompt template system.

### Prompt Template Structure

#### With Context:
```
<#meta#>
- Date: {current_date}
- Task: {task_name}
<#system#>
{system_prompt}
<#chat#>
<#user#>
{user}
<#user_context#>
{user_context}
<#bot#>
{bot}
```

#### Without Context:
```
<#meta#>
- Date: {current_date}
- Task: {task_name}
<#system#>
{system_prompt}
<#chat#>
<#user#>
{user}
<#bot#>
{bot}
```
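
Since the templates are plain string scaffolds, assembling a prompt is ordinary string formatting. A minimal sketch (the `build_prompt` helper and its argument names are illustrative, not part of the release):

```python
from datetime import date

def build_prompt(task_name, system_prompt, user, user_context=None):
    """Assemble an Inkbot prompt following the template structure above."""
    lines = [
        "<#meta#>",
        f"- Date: {date.today().isoformat()}",
        f"- Task: {task_name}",
        "<#system#>",
        system_prompt,
        "<#chat#>",
        "<#user#>",
        user,
    ]
    if user_context is not None:
        # The optional context block goes after the user's instruction.
        lines += ["<#user_context#>", user_context]
    lines.append("<#bot#>")  # generation continues from here
    return "\n".join(lines)

print(build_prompt("general", "A simple dialogue about daily queries.",
                   "Will it rain today?"))
```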

### Examples

#### With Context:
```
<#meta#>
- Date: 2023-07-20
- Task: general
<#system#>
A dialogue focused on determining dates based on context clues.
<#chat#>
<#user#>
What is today's date?
<#bot#>
The date is: 2023-07-20
<#user#>
When did the event in the provided context take place?
<#user_context#>
It's the day after new years eve, and I'm starting 2018 with a headache.
<#bot#>
The event in the context took place on: 2018-01-01
```

#### Without Context:
```
<#meta#>
- Date: 2023-07-20
- Task: general
<#system#>
A simple dialogue about daily queries.
<#chat#>
<#user#>
Will it rain today?
<#bot#>
I am not sure, without knowing your location I am unable to determine the weather.
<#user#>
Can you compose a haiku for me?
<#bot#>
Whispers of the breeze,
Nature's melodies unfold,
Dawn dispels the night.
```
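
To run one of these prompts, feed the formatted string to the model and cut the completion at the next turn tag. A hedged sketch using Hugging Face transformers (the model path is a placeholder, and truncating on `<#user#>` is one simple way to stop at the end of the bot turn):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/inkbot"  # placeholder, not an official repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<#meta#>\n- Date: 2023-07-20\n- Task: general\n"
    "<#system#>\nA simple dialogue about daily queries.\n"
    "<#chat#>\n<#user#>\nWill it rain today?\n<#bot#>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

completion = tokenizer.decode(output[0][inputs.input_ids.shape[1]:],
                              skip_special_tokens=True)
# Keep only the bot turn; discard anything after a new user tag.
print(completion.split("<#user#>")[0].strip())
```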

## Task Options

The model has been trained on a variety of tasks. Below is a breakdown of these tasks, along with example prompts to help guide your usage.

---

### 1. Text Refinement

Tasks in this category aim to improve the quality and clarity of the text.

#### clarity (100 examples)
*Example Prompts:*
- "Please read the content below and revise it to ensure clear communication and understandability."
- "Ensure the following text is easy for any reader to understand."

#### coherence (750 examples)
*Example Prompts:*
- "Review the content and adjust it to ensure logical consistency and flow."
- "Make sure the following paragraphs connect seamlessly."

#### formalize (800 examples)
*Example Prompts:*
- "Please convert the following informal text into a more formal tone."
- "Make the given content sound more professional."

#### gec (1,500 examples)
*Example Prompts:*
- "Refine the following content, focusing on fixing grammatical errors."
- "Correct any grammar mistakes in the text below."

#### neutralize (800 examples)
*Example Prompts:*
- "Rewrite the following content in a neutral tone, removing any biases or strong emotions."
- "Ensure the text below is objective and does not show any personal opinions."

#### text_correction (1,400 examples)
*Example Prompts:*
- "Please read the provided document to understand the author's intention. Focus on the fixes required in the document, such as mistranscriptions, punctuation, spelling mistakes, and consistency. Provide a fixed version of the document."

#### simplification (900 examples)
*Example Prompts:*
- "Use simpler wording to convey the message in the content below."
- "Make the following text easier for a child to understand."

---

### 2. Content Generation

Tasks in this category involve creating or expanding content.

#### kg_writer (800 examples)
*Example Prompts:*
- "Using the provided knowledge graph, write an article about the topics and entities in the graph, incorporating the linked ideas. Use idea tags while writing to help focus."
- "Construct a story based on the information in the knowledge graph."

#### summary (1,600 examples)
*Example Prompts:*
- "Generate an extensive summary of the given document."
- "Please read the provided document to understand the context and content. Use this understanding to generate a summary. Separate the article into chunks, and sequentially create a summary for each chunk. Give me a final summary in the end."

#### paraphrase (1,100 examples)
*Example Prompts:*
- "Rephrase the following sentence while retaining its original meaning."
- "Can you provide an alternative wording for the paragraph below?"

---

### 3. Content Analysis

Tasks in this category evaluate, grade, or filter content.

#### grading (400 examples)
*Example Prompts:*
- "Based on the provided document, please rate the usefulness as training data on a scale from 0-5."

#### sponsorblock (5,200 examples)
*Example Prompts:*
- "Read the document and extract any sentences or phrases that contain explicit mentions of sponsorship, promotional partnerships, or any form of paid content."

---

### 4. Information Structuring

Tasks in this category involve the structured representation or extraction of information.

#### kg (3,600 examples)
*Example Prompts:*
- "Create a Knowledge Graph from the document provided."
- "Extract key concepts and relationships from the conversation to form a knowledge graph."

---

### 5. General Interaction

Tasks in this category are designed for general questions and interactions.

#### general (1,600 examples)
*Example Prompts:*
- "What is the capital of France?"
- "Explain particle physics to a 5-year-old."

## Limitations
- Adhere to the prompt structure for best results.
- When providing contextual details, clarity is essential for Inkbot to derive accurate and meaningful responses.
- Memory overriding via the user_context property generally only holds for the next prompt or two, after which the model reverts to its original behavior.
- On complex tasks, such as creating a coherent story from a set of facts in context, there is a potential for a repeating-text loop as the context fills.
- Sometimes the model does not know when to end a knowledge graph, and may keep adding nodes and edges until it runs out of context.

## Additional Notes
- Use rope-freq-scale=0.5 or compress_pos_emb=2 for 8k context (a loading sketch follows below).
- The 'date', 'task', and 'system' fields are crucial metadata components that need to be provided outside the core dialogue.
- Use 'user_context' when you want to offer supplementary context that guides Inkbot's response. You can interleave it in the chat log as necessary; it comes after the user's instruction.
- The specific tag format, `<#word#>`, is used because many APIs filter `<|word|>`, and this makes interactions easier.
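
With Hugging Face transformers, the equivalent of rope-freq-scale=0.5 is linear RoPE scaling with factor 2 over the base 4096-token max_position_embeddings. A hedged sketch (the model path is a placeholder, and the rope_scaling override is one way to express this, not the author's documented recipe):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/inkbot"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    # Linear position interpolation: 4096 * 2 = 8192 usable context.
    rope_scaling={"type": "linear", "factor": 2.0},
)
```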

---
license: llama2
---

added_tokens.json
ADDED
@@ -0,0 +1,3 @@
{
  "[PAD]": 32000
}

config.json
ADDED
@@ -0,0 +1,27 @@
{
  "_name_or_path": "./models/Llama-2-13b-longlora-8k-ft",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 40,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.34.0.dev0",
  "use_cache": false,
  "vocab_size": 32001
}

generation_config.json
ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.34.0.dev0"
}

pytorch_model-00001-of-00003.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6a37676adb32bdac218430883c663b722eaba2f3c4ee08faf928c69dcfd4c0b
size 9948729673

pytorch_model-00002-of-00003.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a9bc893f1f8a63feb1939fd57c285f41c2e79393e8288e8cefdcf121767d339
size 9904155408

pytorch_model-00003-of-00003.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:377c7f6bcd0fe2bbc84734c44e392ff8bbb8bcf28a3bb2131c3a9ee0f86d0bab
size 6178988447
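
These .bin entries are git-lfs pointer files, so the sha256 oids above can double as download checksums. A small sketch verifying local shards against the oids listed in this commit (filenames assumed to be in the current directory):

```python
import hashlib

EXPECTED = {
    "pytorch_model-00001-of-00003.bin": "c6a37676adb32bdac218430883c663b722eaba2f3c4ee08faf928c69dcfd4c0b",
    "pytorch_model-00002-of-00003.bin": "2a9bc893f1f8a63feb1939fd57c285f41c2e79393e8288e8cefdcf121767d339",
    "pytorch_model-00003-of-00003.bin": "377c7f6bcd0fe2bbc84734c44e392ff8bbb8bcf28a3bb2131c3a9ee0f86d0bab",
}

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file so multi-GB shards don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for name, expected in EXPECTED.items():
    ok = sha256_of(name) == expected
    print(f"{name}: {'OK' if ok else 'MISMATCH'}")
```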
pytorch_model.bin.index.json
ADDED
@@ -0,0 +1,370 @@
{
  "metadata": {
    "total_size": 26031749120
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00003-of-00003.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.30.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.30.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.31.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.norm.weight": "pytorch_model-00003-of-00003.bin"
  }
}

special_tokens_map.json
ADDED
@@ -0,0 +1,6 @@
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "</s>",
  "unk_token": "<unk>"
}

tokenizer.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723

tokenizer_config.json
ADDED
@@ -0,0 +1,54 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": true,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": true,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": true,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32000": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": false,
  "model_max_length": 8192,
  "pad_token": "[PAD]",
  "padding_side": "right",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "tokenizer_file": null,
  "trust_remote_code": false,
  "unk_token": "<unk>",
  "use_default_system_prompt": true,
  "use_fast": false
}
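
Since this upload adds [PAD] at id 32000 (matching the config's vocab_size of 32001), a quick sanity check after loading can catch tokenizer/weight mismatches. A hedged sketch (the model path is a placeholder):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/inkbot")  # placeholder

# The pad token should resolve to the added id 32000, and the total
# vocab should match config.json's vocab_size of 32001.
assert tokenizer.pad_token == "[PAD]"
assert tokenizer.pad_token_id == 32000
assert len(tokenizer) == 32001
```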