|
--- |
|
license: creativeml-openrail-m |
|
base_model: |
|
- prithivMLmods/Llama-Deepsync-3B |
|
pipeline_tag: text-generation |
|
tags: |
|
- Llama |
|
- Code |
|
- CoT |
|
- Math |
|
- Deepsync |
|
- 3b |
|
- Ollama |
|
- 240_Maxed_Out |
|
language: |
|
- en |
|
- de |
|
- fr |
|
- it |
|
- pt |
|
- hi |
|
- es |
|
- th |
|
library_name: transformers |
|
--- |
|
<pre align="center">
    .___                                                   ________   _____  _______
  __| _/____   ____ ______  _________.__. ____   ____      \_____  \ /  |  |\   _  \
 / __ |/ __ \_/ __ \\____ \/  ___<   |  |/    \_/ ___\      /  ____//   |  |_/  /_\  \
/ /_/ \  ___/\  ___/|  |_> >___ \ \___  |   |  \  \___     /       \/    ^   /\  \_/   \
\____ |\___  >\___  >   __/____  >/ ____|___|  /\___  >    \_______ \____   |  \_____  /
     \/     \/     \/|__|      \/ \/         \/     \/             \/    |__|        \/
</pre>
|
|
|
**Deepsync-240-GGUF** is a fine-tuned version of the **Llama-3.2-3B-Instruct** base model, designed for text-generation tasks that require deep reasoning, logical structuring, and problem-solving. It produces accurate, contextually relevant outputs for complex queries, making it well suited to applications in education, programming, and creative writing.
|
|
|
With strong natural language processing capabilities, **Deepsync-240-GGUF** excels at generating step-by-step solutions, creative content, and logical analyses. It handles both structured and unstructured inputs and keeps its generations closely aligned with the user's instructions.
|
|
|
- It has significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to specialized expert models in these domains.
|
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON (see the sketch after this list). It is also **more resilient to diverse system prompts**, which improves role-play and condition-setting for chatbots.
|
- **Long-context support:** handles contexts of up to 128K tokens and can generate up to 8K tokens.
|
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. |
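As a quick illustration of the structured-output behavior, a prompt along the following lines asks the model to answer in JSON. This is a minimal sketch with a purely illustrative schema; the full inference setup is shown in the transformers section below.

```python
# Hypothetical example of prompting for structured JSON output.
# The invoice text and field names are illustrative, not part of the model card.
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant that always answers with valid JSON.",
    },
    {
        "role": "user",
        "content": (
            "Extract the customer, date, and total from this sentence and return "
            "a JSON object with the keys 'customer', 'date', and 'total': "
            "'Invoice for Jane Doe, issued 2024-03-01, total $412.50.'"
        ),
    },
]
```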
|
|
|
# **Model Architecture** |
|
|
|
Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. |
|
|
|
# **Use with transformers** |
|
|
|
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
|
|
|
Make sure to update your transformers installation via `pip install --upgrade transformers`. |
|
|
|
```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Llama-Deepsync-3B"

# Load the model in bfloat16 and place it automatically on the available device(s).
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# The last message in the returned conversation is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
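The same conversation can also be run with the Auto classes and `generate()` mentioned above; a minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Llama-Deepsync-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template and move the inputs to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```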
|
|
|
Note: You can also find detailed recipes on how to use the model locally with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
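For example, quantized loading follows the usual transformers pattern. The following is a sketch only, assuming `bitsandbytes` is installed and a CUDA GPU is available; see the recipes repository for tested variants.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "prithivMLmods/Llama-Deepsync-3B"

# Load the weights in 4-bit NF4 precision to cut memory use,
# computing in bfloat16 for better numerical behavior.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
```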
|
|
|
# **Run with Ollama 🦙**
|
|
|
Ollama makes running machine learning models simple and efficient. Follow these steps to set up and run your GGUF models quickly. |
|
|
|
## Quick Start: Step-by-Step Guide |
|
|
|
1. **Install Ollama 🦙**

   Download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your system.

2. **Create your model file**

   Create a file named after your model, e.g., `metallama`, and add the following line to specify the base model:

   ```bash
   FROM Llama-3.2-1B.F16.gguf
   ```

   Ensure the base model file is in the same directory.

3. **Create and verify the model**

   Run the following commands to create the model and confirm it appears in your local list:

   ```bash
   ollama create metallama -f ./metallama
   ollama list
   ```

4. **Run the model**

   Use the following command to start your model:

   ```bash
   ollama run metallama
   ```

5. **Interact with the model**

   Once the model is running, interact with it directly:

   ```plaintext
   >>> Tell me about Space X.
   Space X, the private aerospace company founded by Elon Musk, is revolutionizing space exploration...
   ```
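The running model can also be queried programmatically. Below is a minimal sketch using the optional `ollama` Python client, which is not covered in the guide above; install it with `pip install ollama`.

```python
# Hypothetical programmatic access via the `ollama` Python client.
# Assumes the Ollama server is running and the `metallama` model was created above.
import ollama

response = ollama.chat(
    model="metallama",
    messages=[{"role": "user", "content": "Tell me about Space X."}],
)

# Print the assistant's reply text.
print(response["message"]["content"])
```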
|
|
|
## Conclusion |
|
With Ollama, running and interacting with models is seamless. Start experimenting today! |