# Model Card for AskMe

## Model Details
**Model name:** AskMe

**Model type:** GPT-based, fine-tuned on an Arabic instruction-based dataset

**Base model:** aubmindlab/aragpt2-large

**Languages:** Arabic

**Author:** Research team at Naseej

## Introduction
AskMe is a GPT-based model fine-tuned on an Arabic instruction-based dataset generated with ChatGPT. The research team at Naseej fine-tuned it from the aubmindlab/aragpt2-large base model. The goal is to provide a high-quality, context-aware language model that assists users by generating human-like responses in Arabic, particularly when given instructions or prompts.

## Dataset
The dataset used for fine-tuning AskMe consists of Arabic instruction-based conversations generated with ChatGPT. The research team at Naseej curated and cleaned the dataset to improve model performance and to reduce biases that might be present in the data.

## Fine-tuning
AskMe is fine-tuned from the aubmindlab/aragpt2-large model, which is designed specifically for Arabic language understanding and generation tasks. The research team at Naseej carefully fine-tuned the model to improve its performance on instruction-based tasks, ensuring that it generates accurate and contextually relevant responses.

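For readers who want a concrete picture of what such instruction fine-tuning can look like, the sketch below performs standard causal-LM training with the Hugging Face `Trainer`. It is an illustration only, not the team's actual training setup: it uses the smaller `aubmindlab/aragpt2-base` checkpoint so the stock `AutoModelForCausalLM` class suffices (the large checkpoint requires the grover-based class shown under Usage below), and the data file, field names, and hyperparameters are all placeholders.

```python
# Illustrative instruction fine-tuning sketch -- NOT the authors' actual script.
# Assumes a hypothetical JSON-lines file with "instruction" and "response" fields.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "aubmindlab/aragpt2-base"  # base-size checkpoint, loadable with stock classes
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 style tokenizers define no pad token
model = AutoModelForCausalLM.from_pretrained(base)

train_data = load_dataset("json", data_files="instructions.jsonl")["train"]

def tokenize(example):
    # Concatenate instruction and response into one causal-LM training string.
    text = example["instruction"] + "\n" + example["response"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

train_data = train_data.map(tokenize, remove_columns=train_data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="askme-finetuned",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=5e-5,
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```
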
## Usage
AskMe can be used for a variety of tasks that involve understanding and responding to instructions or prompts in Arabic. This includes tasks such as:

- Question answering
- Conversation modeling
- Summarization
- Translation
- Generating instructions
- Text completion

You can use the model with the Hugging Face Transformers library by loading it with the `from_pretrained` method:

```python
from transformers import AutoTokenizer
# The grover-based GPT2LMHeadModel class is provided by the arabert package.
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("naseej/askme")
model = GPT2LMHeadModel.from_pretrained("naseej/askme")
```

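Once loaded, the model can be used for generation in the usual Transformers way. The following is a small illustrative sketch; the prompt and decoding parameters are examples, not values recommended by the model authors:

```python
# Illustrative generation example; prompt and decoding settings are placeholders.
from transformers import AutoTokenizer
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("naseej/askme")
model = GPT2LMHeadModel.from_pretrained("naseej/askme")

prompt = "اشرح مفهوم الذكاء الاصطناعي بإيجاز."  # "Briefly explain the concept of artificial intelligence."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; adjust max_new_tokens / temperature / top_p as needed.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
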
## Limitations and Bias
Although AskMe has been fine-tuned on a curated dataset, it is still susceptible to biases present in the training data. This can result in the generation of biased or politically incorrect responses. Users should be cautious and critically evaluate the generated outputs.

Additionally, as a language model, AskMe may produce incorrect or nonsensical answers, especially when handling complex or ambiguous prompts. It is recommended to use the model as a tool to assist in decision-making and content generation rather than as a standalone solution.

## Feedback and Contributions
We welcome feedback and contributions to improve the AskMe model. If you have any issues, suggestions, or questions, please feel free to open an issue on our GitHub repository, or reach out to the research team at Naseej.

## License
AskMe is released under the [MIT License](https://opensource.org/licenses/MIT).