---
license: creativeml-openrail-m
datasets:
- prithivMLmods/Math-IIO-68K-Mini
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-1.7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- safetensors
- pytorch
- llama
- trl
- ollama
- llama-cpp
- math
- instruct
---
### SmolLM2-Math-IIO-1.7B-Instruct
The **SmolLM2-Math-IIO-1.7B-Instruct** model is a fine-tuned variant of the **SmolLM2-1.7B** architecture, optimized for mathematical instruction and reasoning tasks. It is particularly suited for applications that require mathematical problem-solving, logical inference, and detailed step-by-step explanations.
| File Name | Size | Description | Upload Status |
|----------------------------------------|------------|------------------------------------------------|----------------|
| `.gitattributes` | 1.52 kB | Git attributes configuration file | Uploaded |
| `README.md` | 287 Bytes | Updated README file | Updated |
| `config.json` | 940 Bytes | Model configuration settings | Uploaded |
| `generation_config.json` | 162 Bytes | Generation-specific configurations | Uploaded |
| `merges.txt` | 515 kB | Merging information for tokenization | Uploaded |
| `pytorch_model.bin` | 3.42 GB | Full model weights (PyTorch format) | Uploaded (LFS) |
| `special_tokens_map.json` | 572 Bytes | Mapping for special tokens used by the model | Uploaded |
| `tokenizer.json` | 3.77 MB | Tokenizer configuration and vocabulary | Uploaded |
| `tokenizer_config.json` | 3.95 kB | Tokenizer configuration for loading and usage | Uploaded |
| `vocab.json` | 801 kB | Vocabulary for the tokenizer | Uploaded |
### **Key Features:**
1. **Math-Focused Capabilities:**
This model is fine-tuned to handle a wide range of mathematical queries, from simple arithmetic to complex equations and mathematical proofs.
2. **Instruction-Tuned:**
   Trained to follow structured instructions and deliver clear, coherent, step-by-step responses to prompts.
3. **Tokenizer & Custom Tokens:**
   Includes a robust tokenizer configuration with support for mathematical notation, custom tokens, and an extended vocabulary for accurate understanding and output generation (see the sketch after this list).
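
As a minimal sketch of the tokenizer in action (the repository id `prithivMLmods/SmolLM2-Math-IIO-1.7B-Instruct` is an assumption; substitute the actual model path), the snippet below wraps a math question in the model's chat template so the instruction-tuning special tokens are inserted correctly:

```python
from transformers import AutoTokenizer

# Assumed repository id; replace with the actual model path if it differs.
model_id = "prithivMLmods/SmolLM2-Math-IIO-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Format a math question with the model's chat template.
messages = [{"role": "user", "content": "Solve 3x + 5 = 20 step by step."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # shows the special tokens surrounding the user turn
```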
---
### **Training Details:**
- **Base Model:** [SmolLM2-1.7B](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
- **Dataset:** Trained on **Math-IIO-68K-Mini**, a dataset focused on mathematical instructions and logic-based queries, with a total of 68.8k examples.
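
For a quick look at the training data, the dataset named in the metadata above can be loaded with the `datasets` library; this is an inspection sketch, not part of the training pipeline:

```python
from datasets import load_dataset

# Pull the fine-tuning dataset listed in the model card metadata.
ds = load_dataset("prithivMLmods/Math-IIO-68K-Mini", split="train")
print(ds)     # dataset info, ~68.8k examples
print(ds[0])  # one instruction/response record
```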
### **Capabilities:**
- **Mathematical Problem-Solving:** Solves and explains complex mathematical problems, including algebra, calculus, and more advanced topics.
- **Instruction-Following:** Adheres to structured inputs and outputs, making it effective for generating step-by-step solutions.
- **Text Generation:** Capable of generating mathematical proofs, explanations, and educational content tailored to various user queries (see the example below).
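
As a hedged end-to-end illustration of these capabilities using the high-level `pipeline` API (the repository id is again an assumption, and chat-style message input requires a recent `transformers` release):

```python
from transformers import pipeline

# Assumed repository id; device_map="auto" needs `accelerate`,
# drop it for CPU-only setups.
generator = pipeline(
    "text-generation",
    model="prithivMLmods/SmolLM2-Math-IIO-1.7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # the assistant's answer
```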
---
### **Usage Instructions:**
1. **Model Setup:** Download all model files and ensure the PyTorch model weights and tokenizer configurations are included.
2. **Inference:** Load the model in a Python environment with Hugging Face Transformers (PyTorch backend).
3. **Customization:** Use the `config.json` and `generation_config.json` files to tune inference behavior (a complete loading sketch follows below).
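
Putting these steps together, here is a minimal inference sketch, assuming the same repository id as above; `from_pretrained` picks up `config.json` and `generation_config.json` automatically, and individual generation fields can be overridden:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "prithivMLmods/SmolLM2-Math-IIO-1.7B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Start from the shipped generation_config.json and override fields as needed.
gen_config = GenerationConfig.from_pretrained(model_id)
gen_config.max_new_tokens = 512
gen_config.do_sample = False  # greedy decoding for reproducible math answers

messages = [{"role": "user", "content": "Differentiate f(x) = x^3 * ln(x)."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, generation_config=gen_config)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```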
---