--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
library_name: transformers |
|
tags: |
|
- SLM |
|
- Conversational |
|
base_model: HuggingFaceTB/SmolLM-1.7B-Instruct |
|
--- |
|
# SandLogic Technology - Quantized SmolLM-1.7B-Instruct Models |
|
|
|
## Model Description |
|
|
|
We have quantized the SmolLM-1.7B-Instruct model into three variants: |
|
|
|
1. Q5_K_M

2. Q4_K_M
|
3. IQ4_XS |
|
|
|
These quantized variants reduce memory and compute requirements while preserving most of the original model's output quality.
|
|
|
Discover our full range of quantized language models by visiting our [SandLogic Lexicon](https://github.com/sandlogic/SandLogic-Lexicon) GitHub repository. To learn more about our company and services, visit [SandLogic](https://www.sandlogic.com).
|
|
|
## Original Model Information |
|
|
|
- **Name**: SmolLM-1.7B-Instruct |
|
- **Model Type**: Small language model |
|
- **Parameters**: 1.7 billion |
|
- **Training Data**: SmolLM-Corpus (curated high-quality educational and synthetic data) |
|
|
|
## Model Capabilities |
|
|
|
SmolLM-1.7B-Instruct is designed for various natural language processing tasks, with capabilities including: |
|
|
|
- General knowledge question answering |
|
- Creative writing |
|
- Basic Python programming |
|
|
|
## Finetuning Details |
|
|
|
The model was finetuned on a mixture of datasets, including: |
|
|
|
- 2k simple everyday conversations generated by Llama-3.1-70B
|
- Magpie-Pro-300K-Filtered |
|
- StarCoder2-Self-OSS-Instruct |
|
- A small subset of OpenHermes-2.5 |
|
|
|
## Limitations |
|
|
|
- English language only |
|
- May struggle with arithmetic, editing tasks, and complex reasoning |
|
- Generated content may not always be factually accurate or logically consistent |
|
- Potential biases from training data |
|
|
|
## Intended Use |
|
|
|
1. **Educational Assistance**: Helping students with general knowledge questions and basic programming concepts. |
|
2. **Creative Writing Aid**: Assisting in generating ideas or outlines for creative writing projects. |
|
3. **Conversational AI**: Powering chatbots for simple, everyday conversations. |
|
4. **Code Completion**: Providing suggestions for basic Python programming tasks. |
|
5. **General Knowledge Queries**: Answering straightforward questions on various topics. |
|
|
|
## Model Variants |
|
|
|
We offer three quantized versions of the SmolLM-1.7B-Instruct model: |
|
|
|
1. **Q5_K_M**: 5-bit quantization using the K_M (medium) k-quant method

2. **Q4_K_M**: 4-bit quantization using the K_M (medium) k-quant method

3. **IQ4_XS**: 4-bit quantization using llama.cpp's IQ4_XS method (extra-small i-quant)
|
|
|
These quantized models aim to reduce model size and improve inference speed while keeping output quality as close to the original model as possible.
|
|
|
## Usage |
|
|
|
```bash
pip install llama-cpp-python
```
|
Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support. |
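Once installed, a quick sanity check from Python confirms the package imports cleanly. This is a minimal sketch; the `llama_supports_gpu_offload` call assumes a recent llama-cpp-python build that exposes this low-level helper:

```python
import llama_cpp

# Report the installed binding version.
print(llama_cpp.__version__)

# True if this build was compiled with a GPU backend and can offload layers.
print(llama_cpp.llama_supports_gpu_offload())
```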
|
|
|
### Basic Chat Completion
|
Here's an example demonstrating how to use the high-level API for basic text completion: |
|
|
|
```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/SmolLM-1.7B-Instruct.Q5_K_M.gguf",
    verbose=False,
    # n_gpu_layers=-1,  # Uncomment to use GPU acceleration
    # n_ctx=2048,       # Uncomment to increase the context window
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant who answers the user's questions."},
        {"role": "user", "content": "What is the capital of France?"},
    ]
)

print(output["choices"][0]["message"]["content"])
```
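The chat API can also stream tokens as they are generated, which is useful for interactive applications. Below is a minimal sketch reusing the `llm` instance from above; passing `stream=True` makes `create_chat_completion` return an iterator of chunks:

```python
# Request a streamed response instead of waiting for the full completion.
stream = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Write a haiku about the sea."},
    ],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; not every delta has content.
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
print()
```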
|
|
|
## Download |
|
You can download the GGUF model files directly from Hugging Face using the `Llama.from_pretrained` method. This feature requires the `huggingface-hub` package.
|
|
|
To install it, run: `pip install huggingface-hub` |
|
|
|
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SandLogicTechnologies/SmolLM-1.7B-Instruct-GGUF",
    filename="*SmolLM-1.7B-Instruct.Q5_K_M.gguf",
    verbose=False
)
```
|
By default, `from_pretrained` downloads the model to the Hugging Face cache directory. You can manage downloaded model files with the `huggingface-cli` tool.
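If you prefer to inspect the cache programmatically, the `huggingface_hub` package can list what has been downloaded. A minimal sketch using its `scan_cache_dir` helper:

```python
from huggingface_hub import scan_cache_dir

# Enumerate cached repositories and their on-disk sizes.
cache_info = scan_cache_dir()
for repo in cache_info.repos:
    print(repo.repo_id, repo.size_on_disk_str)
```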
|
|
|
|
|
|
|
## Acknowledgements |
|
|
|
We thank the original developers of SmolLM for their contributions to the field of small language models. |
|
Special thanks to Georgi Gerganov and the entire llama.cpp development team for their outstanding contributions. |
|
|
|
|
|
## Contact |
|
|
|
For any inquiries or support, please contact us at support@sandlogic.com or visit our [support page](https://www.sandlogic.com). |