prithivMLmods committed 6924da6 (parent: 7b3fe2e): Update README.md

README.md CHANGED
@@ -16,4 +16,26 @@ tags:
 - text-generation-inference
 - QwQ
 - Math
----
+---
+
+### QwQ-LCoT-7B-Instruct Model File
+
+| **File Name**                          | **Size**  | **Description**                                          | **Upload Status**  |
+|----------------------------------------|-----------|----------------------------------------------------------|--------------------|
+| `.gitattributes`                       | 1.57 kB   | Tracks large files with Git LFS.                         | Uploaded           |
+| `README.md`                            | 273 Bytes | Initial, minimal documentation.                          | Updated            |
+| `added_tokens.json`                    | 657 Bytes | Maps additional tokens for the tokenizer.                | Uploaded           |
+| `config.json`                          | 848 Bytes | Model architecture configuration.                        | Uploaded           |
+| `generation_config.json`               | 281 Bytes | Default settings for text generation.                    | Uploaded           |
+| `merges.txt`                           | 1.82 MB   | Byte-pair encoding (BPE) merge rules for the tokenizer.  | Uploaded           |
+| `model-00001-of-00004.safetensors`     | 4.88 GB   | First shard of the model weights (split for LFS).        | Uploaded (LFS)     |
+| `model-00002-of-00004.safetensors`     | 4.93 GB   | Second shard of the model weights.                       | Uploaded (LFS)     |
+| `model-00003-of-00004.safetensors`     | 4.33 GB   | Third shard of the model weights.                        | Uploaded (LFS)     |
+| `model-00004-of-00004.safetensors`     | 1.09 GB   | Fourth shard of the model weights.                       | Uploaded (LFS)     |
+| `model.safetensors.index.json`         | 29.5 kB   | Index mapping tensors to the weight shards.              | Uploaded           |
+| `special_tokens_map.json`              | 644 Bytes | Maps special tokens such as `<pad>` and `<eos>`.         | Uploaded           |
+| `tokenizer.json`                       | 11.4 MB   | Pre-trained tokenizer in JSON format.                    | Uploaded (LFS)     |
+| `tokenizer_config.json`                | 7.73 kB   | Tokenizer configuration details.                         | Uploaded           |
+| `vocab.json`                           | 2.78 MB   | Tokenizer vocabulary.                                    | Uploaded           |
+
+---
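The four `model-0000X-of-00004.safetensors` shards above are tied together by `model.safetensors.index.json`: its `weight_map` assigns each tensor name to the shard file that stores it, and loaders use it to open only the shards they need. A minimal sketch of reading such an index, using a hypothetical, heavily abbreviated example (the real file lists every tensor in the model; the tensor names and `total_size` here are illustrative):

```python
import json

# Hypothetical, abbreviated index in the standard safetensors-index layout:
# "metadata.total_size" is the combined byte size of all shards, and
# "weight_map" maps each tensor name to the shard file holding it.
index_text = """
{
  "metadata": {"total_size": 15231233024},
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "lm_head.weight": "model-00004-of-00004.safetensors"
  }
}
"""

index = json.loads(index_text)

def shard_for(tensor_name: str) -> str:
    """Return the shard file that stores the given tensor."""
    return index["weight_map"][tensor_name]

# Invert the map to see which tensors live in each shard file.
shards: dict[str, list[str]] = {}
for name, shard in index["weight_map"].items():
    shards.setdefault(shard, []).append(name)

print(shard_for("lm_head.weight"))  # model-00004-of-00004.safetensors
```

Splitting checkpoints this way keeps each file under Git LFS size limits and lets partial loaders fetch one shard at a time instead of the full ~15 GB.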