---
license: llama3.1
language:
- uz
- en
base_model: models/Meta-Llama-3.1-8B-Instruct
library_name: transformers
tags:
- llama
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- yahma/alpaca-cleaned
- behbudiy/alpaca-cleaned-uz
- behbudiy/translation-instruction
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama-3.1-8B-Instuct-Uz-GGUF
This is a quantized version of [behbudiy/Llama-3.1-8B-Instuct-Uz](https://huggingface.co/behbudiy/Llama-3.1-8B-Instuct-Uz), created with llama.cpp.

# Original Model Card

### Model Description

The LLaMA-3.1-8B-Instruct-Uz model has been instruction-tuned on a mix of publicly available and synthetically constructed Uzbek and English data to preserve its original knowledge while enhancing its capabilities. The model is designed to support various natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, ensuring robust performance across these applications.
For performance metrics compared to the base model, see [this post](https://www.linkedin.com/feed/update/urn:li:activity:7241389815559008256/).

- **Developed by:**
  - [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
  - [Azimjon Urinov](https://azimjonn.github.io/)
  - [Khurshid Juraev](https://kjuraev.com/)

## How to use

The Llama-3.1-8B-Instruct-Uz model can be used with Transformers or with the original `llama` codebase.

### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your Transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "behbudiy/Llama-3.1-8B-Instruct-Uz"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# System prompt: "Perform sentiment analysis on the given sentence."
# User message: "I love this movie!"
messages = [
    {"role": "system", "content": "Berilgan gap bo'yicha hissiyot tahlilini bajaring."},
    {"role": "user", "content": "Men bu filmni yaxshi ko'raman!"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Note: You can also find detailed recipes for using the model locally, with `torch.compile()`, with assisted generation, in quantized form, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).

### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

## Information on Evaluation Method

To evaluate on the translation task, we used the FLORES+ Uz-En / En-Uz datasets, merging the dev and test splits to create a larger evaluation set for each of the Uz-En and En-Uz directions.
We used the following prompt for one-shot Uz-En evaluation of both the base model and the Uzbek-optimized model (for En-Uz evaluation, we swapped the positions of the words "English" and "Uzbek").

```python
prompt = f'''You are a professional Uzbek-English translator. Your task is to accurately translate the given Uzbek text into English.

Instructions:
1. Translate the text from Uzbek to English.
2. Maintain the original meaning and tone.
3. Use appropriate English grammar and vocabulary.
4. If you encounter an ambiguous or unfamiliar word, provide the most likely translation based on context.
5. Output only the English translation, without any additional comments.

Example:
Uzbek: "Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."
English: "The weather is very nice today, the sun is shining brightly."

Now, please translate the following Uzbek text into English:
"{sentence}"
'''
```
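
The direction swap described above can be sketched as a small helper. Note that `build_translation_prompt` is a hypothetical name, not part of the original evaluation code, and the prompt body is abbreviated to its key lines:

```python
def build_translation_prompt(sentence, src="Uzbek", tgt="English"):
    """Build an abbreviated translation prompt; swapping src/tgt yields the En-Uz variant."""
    return (
        f"You are a professional {src}-{tgt} translator. "
        f"Your task is to accurately translate the given {src} text into {tgt}.\n\n"
        f"Now, please translate the following {src} text into {tgt}:\n"
        f'"{sentence}"\n'
    )

# The Uz-En and En-Uz prompts differ only in the language positions:
uz_en = build_translation_prompt("Bugun ob-havo juda yaxshi.")
en_uz = build_translation_prompt("The weather is very nice today.", src="English", tgt="Uzbek")
```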

To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset, for which we created binary labels (0: Negative, 1: Positive) using the GPT-4o API (refer to the **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:

```python
prompt = f'''Given the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.

Text: {text}
'''
```
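
Since the model is asked to answer with a single word, its responses can be scored against the binary labels with plain string matching. The helpers below are a hypothetical sketch, not the authors' evaluation code:

```python
def response_to_label(response):
    """Map a one-word model response to the dataset's binary labels (0: Negative, 1: Positive)."""
    first_word = response.strip().split()[0].strip(".,'\"").lower()
    return 1 if first_word == "positive" else 0

def accuracy(predictions, references):
    """Fraction of predictions that match the reference labels."""
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

# Responses with stray whitespace or punctuation still map to the right label:
preds = [response_to_label(r) for r in ["Positive", "negative.", " Positive\n"]]
print(accuracy(preds, [1, 0, 1]))  # → 1.0
```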
For Uzbek news classification, we used the **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of each news article using the following prompt:

```python
prompt = f'''Classify the given Uzbek news article into one of the following categories. Provide only the category number as the answer.

Categories:
0 - Politics (Siyosat)
1 - Economy (Iqtisodiyot)
2 - Technology (Texnologiya)
3 - Sports (Sport)
4 - Culture (Madaniyat)
5 - Health (Salomatlik)
6 - Family and Society (Oila va Jamiyat)
7 - Education (Ta'lim)
8 - Ecology (Ekologiya)
9 - Foreign News (Xorijiy Yangiliklar)

Now classify this article:
"{text}"

Answer (number only):
'''
```
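
Because the model is asked for only the category number, a prediction can be recovered from the generation by taking the first digit it contains. `parse_category` is a hypothetical helper for illustration, not the authors' code:

```python
import re

def parse_category(response):
    """Return the first digit (0-9) found in a model response, or None if there is none."""
    match = re.search(r"\d", response)
    return int(match.group()) if match else None

print(parse_category("Answer: 3"))      # → 3
print(parse_category("7 - Education"))  # → 7
print(parse_category("no digit here"))  # → None
```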

On MMLU, we performed 5-shot evaluation using the following **template** and extracted the first token generated by the model to measure accuracy:
```python
template = """The following are multiple choice questions (with answers) about [subject area].

[Example question 1]
A. text
B. text
C. text
D. text
Answer: [Correct answer letter]

.
.
.

[Example question 5]
A. text
B. text
C. text
D. text
Answer: [Correct answer letter]

Now, let's think step by step and then provide only the letter corresponding to the correct answer for the below question, without any additional explanation or comments.

[Actual MMLU test question]
A. text
B. text
C. text
D. text
Answer:"""
```
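
First-token accuracy on this template can be computed by comparing the leading answer letter of each generation against the gold letter. The helpers below are a hypothetical sketch, not the authors' evaluation harness:

```python
def first_token_letter(generation):
    """Return the leading A-D letter of a generation, or None if it starts with anything else."""
    stripped = generation.strip()
    if stripped and stripped[0].upper() in "ABCD":
        return stripped[0].upper()
    return None

def mmlu_accuracy(generations, answers):
    """Fraction of generations whose first answer letter matches the gold letter."""
    correct = sum(first_token_letter(g) == a for g, a in zip(generations, answers))
    return correct / len(answers)

# Only the first letter is scored, so trailing text is ignored:
print(mmlu_accuracy(["B", " b. text", "C is correct", "unsure"], ["B", "B", "C", "D"]))  # → 0.75
```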

## More
For more details and examples, refer to the base model below:
https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct