---
license: apache-2.0
language:
- ar
- en
tags:
- llama3.1
- arabic
- unsloth
pipeline_tag: text-generation
---

# 🚀 Arabic LLaMA 3.1 16-bit Model (Version 1)

This fine-tuned model is based on the newly released LLaMA 3.1 and has been trained specifically on the Arabic split of the BigScience xP3 dataset. It is designed to answer various types of questions in Arabic, leveraging the rich linguistic data provided by [Arabic BigScience xP3](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo/viewer/Ara--bigscience--xP3).

## Model Summary

- **Model Type:** LLaMA 3.1 LoRA model
- **Language(s):** Arabic
- **Base Model:** [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B)

## Model Details

- The model was fine-tuned in 4-bit precision with [unsloth](https://github.com/unslothai/unsloth) for 16k steps on a single GPU.
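
For a quick test without the Gradio app below, here is a minimal inference sketch. It uses the same unsloth calls as the app; the Arabic example prompt is just an illustration.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit (same settings as the app below)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Omartificial-Intelligence-Space/Arabic-llama3.1-16bit-FT",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

# The model was trained with a simple "Input:/Response:" prompt format
prompt = "Input: ما هي عاصمة المملكة العربية السعودية؟\nResponse: "
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```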

## Gradio App for Inference

I prepared a Gradio app so you can run inference with the model and compare its outputs with those of the base LLaMA 3.1 model.

## <span style="color:Red">Note</span>
Just run the following code in Colab:

### Gradio App (a Colab T4 GPU is enough to run it)
```python
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes
!pip install gradio

import gradio as gr
from unsloth import FastLanguageModel
import torch

# Load the base model and tokenizer in 4-bit
base_model, base_tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(base_model)  # Enable native 2x faster inference

# Load the LoRA model and tokenizer in 4-bit
lora_model, lora_tokenizer = FastLanguageModel.from_pretrained(
    model_name="Omartificial-Intelligence-Space/Arabic-llama3.1-16bit-FT",  # Replace with your LoRA model path/name
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(lora_model)  # Enable native 2x faster inference

# Prompt format used during fine-tuning
simplified_prompt = """Input: {}
Response: {}"""

def extract_response(text):
    """Extract the Response part from the generated text."""
    response_marker = "Response:"
    if response_marker in text:
        return text.split(response_marker, 1)[1].strip()
    return text.strip()

def generate_responses(input_text):
    prompt = simplified_prompt.format(input_text, "")

    # Generate with the base model
    base_inputs = base_tokenizer([prompt], return_tensors="pt").to("cuda")
    base_outputs = base_model.generate(**base_inputs, max_new_tokens=128, use_cache=True)
    base_decoded_outputs = base_tokenizer.batch_decode(base_outputs, skip_special_tokens=True)[0]
    base_response = extract_response(base_decoded_outputs)

    # Generate with the LoRA model
    lora_inputs = lora_tokenizer([prompt], return_tensors="pt").to("cuda")
    lora_outputs = lora_model.generate(**lora_inputs, max_new_tokens=128, use_cache=True)
    lora_decoded_outputs = lora_tokenizer.batch_decode(lora_outputs, skip_special_tokens=True)[0]
    lora_response = extract_response(lora_decoded_outputs)

    return base_response, lora_response

# Custom CSS for the interface
css = """
h1 {
    color: #1E90FF;
    font-family: 'Arial', sans-serif;
    text-align: center;
    margin-bottom: 20px;
}

.description {
    color: #4682B4;
    font-family: 'Arial', sans-serif;
    text-align: center;
    font-size: 18px;
    margin-bottom: 20px;
}

.gradio-container {
    background-color: #F0F0F0;
    border-radius: 10px;
    padding: 20px;
}

.gr-button {
    background-color: #FFA500;
    color: white;
    border: none;
    padding: 10px 20px;
    text-align: center;
    display: inline-block;
    font-size: 16px;
    margin: 4px 2px;
    cursor: pointer;
}

.gr-button:hover {
    background-color: #FF8C00;
}

.gr-textbox {
    border: 2px solid #1E90FF;
    border-radius: 5px;
    padding: 10px;
}
"""

# JavaScript that animates the space name letter by letter at the top of the page
js = """
function createGradioAnimation() {
    var container = document.createElement('div');
    container.id = 'gradio-animation';
    container.style.fontSize = '2em';
    container.style.fontWeight = 'bold';
    container.style.textAlign = 'center';
    container.style.marginBottom = '20px';

    var text = 'Omartificial Intelligence Space';
    for (var i = 0; i < text.length; i++) {
        (function(i){
            setTimeout(function(){
                var letter = document.createElement('span');
                letter.style.opacity = '0';
                letter.style.transition = 'opacity 0.5s';
                letter.innerText = text[i];

                container.appendChild(letter);

                setTimeout(function() {
                    letter.style.opacity = '1';
                }, 50);
            }, i * 250);
        })(i);
    }

    var gradioContainer = document.querySelector('.gradio-container');
    gradioContainer.insertBefore(container, gradioContainer.firstChild);

    return 'Animation created';
}
"""

with gr.Blocks(css=css, js=js) as demo:
    gr.Markdown("<h1>Arabic LLaMA 3.1 LoRA Model (Version 1)</h1>")
    gr.Markdown("<p class='description'>This model is the Arabic version of LLaMA 3.1, used to answer different types of prompts in Arabic.</p>")

    with gr.Row():
        input_text = gr.Textbox(lines=5, placeholder="Enter input text here...", elem_classes="gr-textbox")
        base_output = gr.Textbox(label="Base Model Output", elem_classes="gr-textbox")
        lora_output = gr.Textbox(label="LoRA Model Output", elem_classes="gr-textbox")

    generate_button = gr.Button("Generate Responses", elem_classes="gr-button")

    generate_button.click(generate_responses, inputs=input_text, outputs=[base_output, lora_output])

demo.launch(debug=True)
```
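
Since both checkpoints are loaded in 4-bit, the two 8B models together should fit comfortably in the ~16 GB of a Colab T4; with `debug=True`, the cell keeps running and shows the app's logs inline.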

### Recommendations

- Use [unsloth](https://github.com/unslothai/unsloth) for fine-tuning models. It gives roughly 2x faster fine-tuning, and the resulting model can be exported to any format or uploaded to Hugging Face, as sketched below.
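
As a rough illustration of that workflow, here is a minimal unsloth LoRA fine-tuning sketch. The hyperparameters, the dataset config name (inferred from the dataset viewer URL above), and the `inputs`/`targets` column names are assumptions for illustration, not the exact recipe used for this model.

```python
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit, as described in Model Details
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)

# Arabic xP3 split; config and column names are assumptions, check the dataset card
dataset = load_dataset("M-A-D/Mixed-Arabic-Datasets-Repo", "Ara--bigscience--xP3", split="train")
dataset = dataset.map(
    lambda ex: {"text": f"Input: {ex['inputs']}\nResponse: {ex['targets']}"}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=16000,  # the "16k steps" from Model Details
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=50,
        output_dir="outputs",
    ),
)
trainer.train()
```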

## <span style="color:blue">Acknowledgments</span>

The author would like to thank Prince Sultan University for their invaluable support of this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.

## Citation

If you use the Arabic LLaMA 3.1 LoRA Model, please cite it as follows:

```bibtex
@misc{nacar2024,
  author  = {Omer Nacar},
  title   = {Arabic LLaMA 3.1 LoRA Model},
  year    = {2024},
  url     = {https://huggingface.co/Omartificial-Intelligence-Space/Arabic-llama3.1-Chat-lora},
  version = {1.0.0},
}
```