
QuantFactory/Math-IIO-7B-Instruct-GGUF

This is a quantized version of prithivMLmods/Math-IIO-7B-Instruct, created using llama.cpp.

Original Model Card


Math IIO 7B Instruct

The Math IIO 7B Instruct is a fine-tuned language model based on the robust Qwen2.5-7B-Instruct architecture. This model has been specifically trained to excel in single-shot mathematical reasoning and instruction-based tasks, making it a reliable choice for educational, analytical, and problem-solving applications.

Key Features:

  1. Math-Optimized Capabilities:
    The model is designed to handle complex mathematical problems, step-by-step calculations, and reasoning tasks.

  2. Instruction-Tuned:
    Fine-tuned for better adherence to structured queries and task-oriented prompts, enabling clear and concise outputs.

  3. Large Vocabulary:
    Equipped with an extensive tokenizer configuration and custom tokens to ensure precise mathematical notation support.
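
As a quick illustration of the tokenizer point above, the base vocabulary and custom added tokens (shipped in added_tokens.json and special_tokens_map.json) can be inspected with Hugging Face Transformers. A minimal sketch, assuming the original prithivMLmods/Math-IIO-7B-Instruct repository is accessible:

```python
# Minimal sketch: inspect the base vocabulary and the custom added tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Math-IIO-7B-Instruct")

print(tokenizer.vocab_size)         # size of the base BPE vocabulary
print(tokenizer.get_added_vocab())  # custom tokens added on top of the base vocab
```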

| File Name | Size | Description | Upload Status |
|---|---|---|---|
| .gitattributes | 1.57 kB | Git attributes configuration file | Uploaded |
| README.md | 263 Bytes | README file with minimal details | Updated |
| added_tokens.json | 657 Bytes | Custom added tokens for tokenizer | Uploaded |
| config.json | 861 Bytes | Model configuration file | Uploaded |
| generation_config.json | 281 Bytes | Configuration for text generation settings | Uploaded |
| merges.txt | 1.82 MB | Merge rules for byte pair encoding tokenizer | Uploaded |
| pytorch_model-00001-of-00004.bin | 4.88 GB | First part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model-00002-of-00004.bin | 4.93 GB | Second part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model-00003-of-00004.bin | 4.33 GB | Third part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model-00004-of-00004.bin | 1.09 GB | Fourth part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model.bin.index.json | 28.1 kB | Index JSON file for model weights | Uploaded |
| special_tokens_map.json | 644 Bytes | Map of special tokens used by the tokenizer | Uploaded |
| tokenizer.json | 11.4 MB | Tokenizer settings and vocab | Uploaded (LFS) |
| tokenizer_config.json | 7.73 kB | Configuration for tokenizer | Uploaded |
| vocab.json | 2.78 MB | Vocabulary for tokenizer | Uploaded |

Training Details:

  • Base Model: Qwen/Qwen2.5-7B-Instruct
  • Dataset: Trained on Math-IIO-68K-Mini, a curated dataset with 68.8k high-quality examples focusing on mathematical instructions, equations, and logic-based queries.
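
For reference, the training data can be pulled with the datasets library. A minimal sketch, assuming the dataset is published under prithivMLmods/Math-IIO-68K-Mini on the Hugging Face Hub with a train split:

```python
# Minimal sketch: load the Math-IIO-68K-Mini dataset and look at one example.
# The repository ID below is an assumption based on the author's namespace.
from datasets import load_dataset

ds = load_dataset("prithivMLmods/Math-IIO-68K-Mini", split="train")
print(len(ds))  # expected to be roughly 68.8k examples
print(ds[0])    # a single instruction/answer pair
```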

Capabilities:

  • Problem-Solving: Solves mathematical problems ranging from basic arithmetic to advanced calculus and linear algebra.
  • Educational Use: Explains solutions step-by-step, making it a valuable teaching assistant.
  • Analysis & Reasoning: Handles logical reasoning tasks and computational queries effectively.

How to Use:

  1. Download all model files, ensuring the PyTorch weights and tokenizer configurations are included.
  2. Load the model in your Python environment using frameworks like PyTorch or Hugging Face Transformers.
  3. Use the provided configurations (config.json and generation_config.json) for optimal inference.
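
A minimal sketch of steps 2–3, assuming the transformers library and the original (non-quantized) repository; the prompt is illustrative:

```python
# Minimal sketch: load the model and tokenizer with Transformers and run a
# single math query. generation_config.json is picked up automatically by
# from_pretrained / generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Math-IIO-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful math assistant."},
    {"role": "user", "content": "Solve for x: 3x + 7 = 22"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```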

Model Details:

  • Format: GGUF
  • Model size: 7.62B params
  • Architecture: qwen2
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
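
To run one of these GGUF quantizations locally, llama-cpp-python is one option. A minimal sketch, with a hypothetical quantization filename:

```python
# Minimal sketch: chat with a downloaded GGUF quantization via llama-cpp-python.
# The .gguf filename below is hypothetical; use whichever quantization you pull
# from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Math-IIO-7B-Instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful math assistant."},
        {"role": "user", "content": "Differentiate f(x) = x^3 + 2x."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```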
