---
language:
- ar
tags:
- llama
- text-generation
- fine-tuned
datasets:
- Jr23xd23/Arabic_LLaMA_Math_Dataset
license: apache-2.0
base_model: meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
inference: true
---

## Model Details

- **Model Type**: Transformer-based language model fine-tuned for text generation
- **Languages**: Arabic
- **Base Model**: [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- **Dataset**: [Arabic LLaMA Math Dataset](https://github.com/jaberjaber23/Arabic-LLaMA-Math-Dataset)

## Training Data

The model was fine-tuned on the **Arabic LLaMA Math Dataset**, which consists of 12,496 examples covering various mathematical topics, such as:

- Basic Arithmetic
- Algebra
- Geometry
- Combinatorics
43 |
|
44 |
Each example in the dataset includes:
|
45 |
+
- **Instruction**: The problem statement in Arabic
|
46 |
+
- **Solution**: The solution to the problem in Arabic
|
47 |
|
48 |
## Intended Use
|
49 |
|
50 |
### Primary Use Cases:
|
51 |
+
|
52 |
- Solving mathematical problems in Arabic
|
53 |
- Educational applications
|
54 |
- Tutoring systems for Arabic-speaking students
|

### How to Use

You can use the model in Python with the Hugging Face transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Jr23xd23/Math_Arabic_Llama-3.2-3B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Jr23xd23/Math_Arabic_Llama-3.2-3B-Instruct")

# Example: Solving a math problem in Arabic
input_text = "ما هو مجموع الزوايا في مثلث؟"  # What is the sum of angles in a triangle?
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_length=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
73 |
+
|
74 |
+
## Limitations
|
75 |
+
|
76 |
+
- The model is not designed for non-mathematical language tasks.
|
77 |
+
- Performance may degrade when applied to highly complex mathematical problems beyond the scope of the training dataset.
|
78 |
+
- The model's outputs should be verified for critical applications.
|
79 |
+
|
80 |
+
## License
|
81 |
+
|
82 |
+
This model is licensed under the **Apache 2.0 License**.
|
83 |
+
|
84 |
+
## Citation
|
85 |
+
|
86 |
+
If you use this model in your research or projects, please cite it as follows:
|
87 |
+
|
88 |
+
```bibtex
|
89 |
+
@model{Math_Arabic_Llama_3.2_3B_Instruct,
|
90 |
+
title={Math_Arabic_Llama-3.2-3B-Instruct},
|
91 |
+
author={Jr23xd23},
|
92 |
+
year={2024},
|
93 |
+
publisher={Hugging Face},
|
94 |
+
url={https://huggingface.co/Jr23xd23/Math_Arabic_Llama-3.2-3B-Instruct},
|
95 |
+
}
|
96 |
+
```
|
97 |
+
|
98 |
+
## Acknowledgements
|
99 |
+
|
100 |
+
Special thanks to the creators of the **Arabic LLaMA Math Dataset** for providing a rich resource for fine-tuning the model.
|