It achieves the following results on the evaluation set:
- Loss: 2.3998

This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5), trained with Parameter-Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) on an Intel(R) Data Center GPU Max 1100 and an Intel(R) Xeon(R) Platinum 8480+ CPU.
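As a rough illustration of the PEFT + LoRA setup described above (the actual LoRA rank, scaling factor, and target modules were not published with this card, so every value below is an assumption), the adapter could be attached to the base model like this:

```python
# Hypothetical sketch: attach LoRA adapters to microsoft/phi-1_5 with PEFT.
# All LoRA hyperparameters below are assumptions, not the settings used here.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")

lora_config = LoraConfig(
    r=16,               # assumed LoRA rank
    lora_alpha=32,      # assumed scaling factor
    lora_dropout=0.05,  # assumed adapter dropout
    target_modules=["q_proj", "k_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```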

This model can be used for various text generation tasks, including chatbots, content creation, and other NLP applications.
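
As a minimal usage sketch (not an official snippet from this card; the repository ID below is a hypothetical placeholder), text generation with the adapter could look like:

```python
# Minimal inference sketch. "your-username/phi-1_5-lora" is a hypothetical
# placeholder for the actual adapter repository ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "your-username/phi-1_5-lora")  # hypothetical ID
model.eval()

inputs = tokenizer("Write a short greeting for a support chatbot:", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```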

## Training Hardware

This model was trained using:

- GPU: Intel(R) Data Center GPU Max 1100
- CPU: Intel(R) Xeon(R) Platinum 8480+
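
For readers who want to run the model on the same class of hardware, the sketch below shows one common way to target an Intel GPU (the "xpu" device) from PyTorch. It assumes Intel Extension for PyTorch (intel_extension_for_pytorch) is installed, and it is not the exact script used to train this model:

```python
# Hypothetical sketch: move the model to an Intel Data Center GPU Max ("xpu").
# Assumes intel_extension_for_pytorch is installed; not the original training script.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the "xpu" device)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
device = "xpu" if torch.xpu.is_available() else "cpu"
model = model.to(device)
```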

## Training procedure

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4248        | 4.0323 | 500  | 2.3998          |
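
For context, assuming the reported loss is the mean token-level cross-entropy in nats (the Hugging Face Trainer convention), the final validation loss of 2.3998 corresponds to a perplexity of roughly exp(2.3998) ≈ 11.0.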

## Framework versions

- PEFT 0.11.1
- Transformers 4.41.2