aidal committed
Commit a5d14ed
Parent: fe742cd

Update README.md

Files changed (1):
  README.md +1 -1
README.md CHANGED
@@ -65,7 +65,7 @@ print(tokenizer.decode(outputs[0]))
  - **Pre-training:** In the following step, we expanded the embedding layer of the base model to match the size of the Persian tokenizer. We then employed the LoRA method to train the model on three distinct datasets: Wikipedia-Farsi, an Islamic book collection, and content from Khamenei.ir.
  - <p align="center">
  <picture>
- <img alt="Hugging Face Transformers Library" src="https://i.postimg.cc/LXSD4HnZ/Stakehozlder-Map-1-page-0001-modified.png" width="200" height="200" style="max-width: 100%;">
+ <img alt="Hugging Face Transformers Library" src="https://i.postimg.cc/LXSD4HnZ/Stakehozlder-Map-1-page-0001-modified.png" width="270" height="270" style="max-width: 100%;">
  </picture>
  </p>
  - **Instruction Fine-tuning:** For the final step, we fine-tuned the model using the LoRA method on a translated version of the Stanford-alpaca to enhance the model's question-answering capabilities.
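
The pre-training bullet in the diff above describes two concrete steps: resizing the base model's embedding layer to the extended Persian tokenizer's vocabulary, then training LoRA adapters on the three corpora. A minimal sketch of that setup with `transformers` and `peft` is shown below; the model and tokenizer identifiers, LoRA rank/alpha, and target modules are illustrative assumptions, not the authors' released recipe.

```python
# Hypothetical sketch of the pre-training setup: all names and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_id = "mistralai/Mistral-7B-v0.1"          # assumed base model
persian_tokenizer_id = "path/to/persian-tokenizer"   # assumed extended Persian tokenizer

tokenizer = AutoTokenizer.from_pretrained(persian_tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Expand the embedding layer (and the tied LM head) to the Persian tokenizer's vocabulary size.
model.resize_token_embeddings(len(tokenizer))

# Attach LoRA adapters; only the low-rank matrices are updated during continued
# pre-training on Wikipedia-Farsi, the Islamic book collection, and Khamenei.ir content.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```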
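For the instruction fine-tuning bullet, a comparable hedged sketch is below: LoRA adapters are trained on translated Stanford Alpaca records (instruction/input/output) formatted into a prompt template. The dataset file name, prompt template, and hyperparameters are assumptions for illustration, and `model`/`tokenizer` carry over from the previous sketch.

```python
# Hypothetical sketch of the instruction fine-tuning step; `model` and `tokenizer`
# come from the pre-training sketch above, and all other values are assumptions.
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

PROMPT = "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"

def tokenize_example(example):
    # Render one Alpaca-style record into a single prompt string and tokenize it.
    return tokenizer(PROMPT.format(**example) + tokenizer.eos_token,
                     truncation=True, max_length=1024)

# Assumed local JSON file holding the translated Stanford Alpaca records.
dataset = load_dataset("json", data_files="alpaca_farsi.json")["train"]
dataset = dataset.map(tokenize_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,  # the LoRA-wrapped model from the pre-training sketch
    args=TrainingArguments(
        output_dir="persian-lora-sft",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    # For causal LM fine-tuning, labels mirror the input ids (mlm=False).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```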