prithivMLmods committed on
Commit
b3f31c7
1 Parent(s): 855a211

Update README.md

 \____ |\___ >___ > __/____ >/ ____|___| /\___ >_______ \____ | \_____ /
 \/ \/ \/|__| \/ \/ \/ \/ \/ |__| \/
 </pre>

The **Deepsync-240-GGUF** model is a fine-tuned version of the **Llama-3.2-3B-Instruct** base model, designed for text-generation tasks that require deep reasoning, logical structuring, and problem-solving. Its optimized architecture provides accurate, contextually relevant outputs for complex queries, making it well suited to applications in education, programming, and creative writing.

With robust natural language processing capabilities, **Deepsync-240-GGUF** excels at generating step-by-step solutions, creative content, and logical analyses. Its architecture handles both structured and unstructured data, ensuring precise text generation aligned with user inputs.

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

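Since the model is tuned to emit structured JSON, a caller typically still has to pull the JSON out of a reply that may wrap it in prose. A minimal sketch of one way to do that (the `extract_json` helper and the sample reply are illustrative, not part of the model's API):

```python
import json

def extract_json(text: str):
    """Pull the first top-level JSON object out of a model response.

    Model replies often wrap JSON in prose or code fences, so we scan for
    the first balanced {...} span and parse it. (Braces inside JSON string
    values are not handled by this simple sketch.)
    """
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found")
    depth = 0
    for i, ch in enumerate(text[start:], start=start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start:i + 1])
    raise ValueError("unbalanced JSON object")

# Illustrative model reply: prose wrapping a JSON payload.
reply = 'Here is the result:\n{"answer": 42, "steps": ["parse", "solve"]}'
data = extract_json(reply)
print(data["answer"])  # → 42
```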
# **Model Architecture**

Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

# **Use with transformers**

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Llama-Deepsync-3B"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
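When given a list of chat messages, the pipeline returns one dict per input whose `generated_text` field holds the conversation with the model's reply appended, which is why the snippet prints `outputs[0]["generated_text"][-1]`. A mocked response (the assistant content here is illustrative, not real model output) shows the shape:

```python
# Mocked return value shaped like the chat pipeline's output:
# a list (one entry per input) of dicts whose "generated_text" holds
# the full conversation with the assistant's reply appended.
outputs = [
    {
        "generated_text": [
            {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
            {"role": "user", "content": "Who are you?"},
            {"role": "assistant", "content": "Arrr, I be a pirate chatbot, matey!"},
        ]
    }
]

# The last message is the model's new turn.
reply = outputs[0]["generated_text"][-1]
print(reply["role"])  # → assistant
print(reply["content"])
```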

Note: You can also find detailed recipes for using the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).

# **Run with Ollama**

Ollama makes running machine learning models simple and efficient. Follow these steps to set up and run your GGUF models quickly.

## Quick Start: Step-by-Step Guide

1. **Install Ollama 🦙** — Download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your system.

2. **Create Your Model File** — Create a file named after your model, e.g., `metallama`, and add the following line to specify the base model (ensure the base model file is in the same directory):

   ```bash
   FROM Llama-3.2-1B.F16.gguf
   ```

3. **Create and Verify the Model** — Run the following commands to create your model and confirm it is listed:

   ```bash
   ollama create metallama -f ./metallama
   ollama list
   ```

4. **Run the Model** — Use the following command to start your model:

   ```bash
   ollama run metallama
   ```

5. **Interact with the Model** — Once the model is running, interact with it:

   ```plaintext
   >>> Tell me about Space X.
   Space X, the private aerospace company founded by Elon Musk, is revolutionizing space exploration...
   ```

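The one-line `metallama` file above is the minimal case. Ollama's Modelfile format also accepts `PARAMETER` and `SYSTEM` directives; a slightly fuller sketch (the parameter values and system prompt here are illustrative, not tuned for this model) might look like:

```bash
# Base weights (must be in the same directory, as in step 2)
FROM Llama-3.2-1B.F16.gguf

# Sampling and context settings — illustrative values
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Optional fixed system prompt
SYSTEM "You are a helpful assistant for reasoning and coding tasks."
```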
## Conclusion

With Ollama, running and interacting with models is seamless. Start experimenting today!