---
license: creativeml-openrail-m
datasets:
- amphora/QwQ-LongCoT-130K
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- Long-CoT
- Qwen2.5
- 7B
- safetensors
- text-generation-inference
- QwQ
- FP16 precision
- SFT
- Math
---
# QwQ-LCoT-7B-Instruct Model File
QwQ-LCoT-7B-Instruct is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on the Qwen2.5-7B-Instruct base model and was fine-tuned on the amphora/QwQ-LongCoT-130K dataset, with a focus on chain-of-thought (CoT) reasoning.
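The snippet below is a minimal loading sketch with `transformers`. The repo ID `prithivMLmods/QwQ-LCoT-7B-Instruct` is an assumption based on the model name; substitute the actual Hub path if it differs.

```python
# Minimal loading sketch. The repo ID below is assumed from the model name;
# replace it with the actual Hugging Face Hub path if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/QwQ-LCoT-7B-Instruct"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the FP16 weights shipped in the repo
    device_map="auto",          # places the shards across available GPUs/CPU
)
```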
| File Name | Size | Description | Upload Status |
|---|---|---|---|
| `.gitattributes` | 1.57 kB | Tracks large files with Git LFS. | Uploaded |
| `README.md` | 273 Bytes | Contains initial documentation, likely minimal. | Updated |
| `added_tokens.json` | 657 Bytes | Maps additional tokens for the tokenizer. | Uploaded |
| `config.json` | 848 Bytes | Model configuration (basic setup). | Uploaded |
| `generation_config.json` | 281 Bytes | Settings for text generation tasks. | Uploaded |
| `merges.txt` | 1.82 MB | Tokenizer merges for byte-pair encoding (BPE). | Uploaded |
| `model-00001-of-00004.safetensors` | 4.88 GB | First part of model weights (split for LFS). | Uploaded (LFS) |
| `model-00002-of-00004.safetensors` | 4.93 GB | Second part of model weights. | Uploaded (LFS) |
| `model-00003-of-00004.safetensors` | 4.33 GB | Third part of model weights. | Uploaded (LFS) |
| `model-00004-of-00004.safetensors` | 1.09 GB | Fourth part of model weights. | Uploaded (LFS) |
| `model.safetensors.index.json` | 29.5 kB | Index file for managing model shards. | Uploaded |
| `special_tokens_map.json` | 644 Bytes | Maps special tokens like `<pad>` or `<eos>`. | Uploaded |
| `tokenizer.json` | 11.4 MB | Pre-trained tokenizer file in JSON format. | Uploaded (LFS) |
| `tokenizer_config.json` | 7.73 kB | Configuration details for the tokenizer. | Uploaded |
| `vocab.json` | 2.78 MB | Tokenizer vocabulary. | Uploaded |
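The complete file set above can be fetched in one call with `huggingface_hub`; the sketch below uses the same assumed repo ID as before.

```python
# Sketch: download the complete file set listed above (repo ID assumed).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="prithivMLmods/QwQ-LCoT-7B-Instruct")
print(local_dir)  # local folder containing the config, tokenizer files, and shards
```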
## Sample Long CoT
## Key Features

**Model Size:**
- 7.62B parameters (FP16 precision).
**Model Sharding:**
- The model weights are split into 4 `safetensors` shards for efficient storage and download:
  - `model-00001-of-00004.safetensors` (4.88 GB)
  - `model-00002-of-00004.safetensors` (4.93 GB)
  - `model-00003-of-00004.safetensors` (4.33 GB)
  - `model-00004-of-00004.safetensors` (1.09 GB)
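The parameter count checks out against the shard sizes: at FP16, 7.62B parameters × 2 bytes ≈ 15.24 GB, and 4.88 + 4.93 + 4.33 + 1.09 ≈ 15.23 GB. `transformers` stitches the shards back together via `model.safetensors.index.json`; as an illustration (assuming the repo files are already downloaded locally), the sketch below inspects how tensors are distributed across the four shards.

```python
# Illustrative sketch: inspect model.safetensors.index.json to see which shard
# stores each tensor. Assumes the repo files are present in the working directory.
import json
from collections import Counter

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# "weight_map" maps every tensor name to the shard file that holds it.
shard_counts = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(shard_counts.items()):
    print(f"{shard}: {n_tensors} tensors")
print("total size (bytes):", index["metadata"]["total_size"])
```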
**Tokenizer:**
- Byte-pair encoding (BPE) based.
- Files included:
  - `vocab.json` (2.78 MB)
  - `merges.txt` (1.82 MB)
  - `tokenizer.json` (11.4 MB)
- Special tokens mapped in `special_tokens_map.json` (e.g., `<pad>`, `<eos>`).
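Since this is a Qwen2.5-derived checkpoint, the tokenizer is expected to ship a chat template; the sketch below (same assumed repo ID) templates a message and shows the resulting BPE token IDs. Verify the template against the actual `tokenizer_config.json`.

```python
# Sketch: template a chat message and tokenize it with the shipped BPE tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/QwQ-LCoT-7B-Instruct")  # assumed

messages = [{"role": "user", "content": "What is 17 * 24?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)                               # the fully templated chat string
print(tokenizer(prompt)["input_ids"][:10])  # first few BPE token IDs
```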
**Configuration Files:**
- `config.json`: Defines model architecture and hyperparameters.
- `generation_config.json`: Settings for inference and text generation tasks.
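To see which decoding defaults ship with the model, `generation_config.json` can be loaded on its own; a small sketch (repo ID assumed):

```python
# Sketch: inspect the decoding defaults shipped in generation_config.json.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("prithivMLmods/QwQ-LCoT-7B-Instruct")
print(gen_config)  # e.g. eos/pad token IDs and any sampling defaults set by the authors
```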
**Training Dataset:**
- Dataset Name: amphora/QwQ-LongCoT-130K
- Size: 133k examples.
- Focus: Chain-of-Thought reasoning for complex tasks.
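The dataset is public on the Hugging Face Hub and can be inspected directly; since its column schema is not documented here, the sketch below prints it rather than assuming field names (the `"train"` split name is also an assumption).

```python
# Sketch: load the training dataset and inspect its schema.
from datasets import load_dataset

ds = load_dataset("amphora/QwQ-LongCoT-130K", split="train")  # split name assumed
print(len(ds))          # ~133k examples
print(ds.column_names)  # inspect the actual fields rather than assuming them
```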
## Use Cases

- **Instruction Following:** Handles user instructions effectively, even for multi-step tasks.
- **Reasoning Tasks:** Performs logical reasoning and generates detailed step-by-step solutions.
- **Text Generation:** Generates coherent, context-aware responses.
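The end-to-end sketch below ties these use cases together: a multi-step word problem is templated, generated, and decoded. The repo ID and decoding settings are assumptions, not the authors' published recipe.

```python
# End-to-end sketch: multi-step reasoning prompt -> long CoT answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/QwQ-LCoT-7B-Instruct"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": "A train covers 120 km in 1.5 hours and then 80 km in 1 hour. "
               "What is its average speed for the whole trip? Think step by step.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```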