Locutusque committed c7adacc (parent: 61b0766): Update README.md

README.md (changed):
The model is evaluated based on several metrics, including loss, reward, and penalty:
- BLEU Score: 9
- Average perplexity: 49
- Loss: 1.7

---
Although these metrics may seem mediocre, they are acceptable for this use case: they allow the model to produce open-ended responses while remaining coherent with the user's input.
---

## Limitations and Bias
This model is not suitable for all use cases due to its limited training time on weak hardware. As a result, it may produce irrelevant or nonsensical responses. Additionally, it has not been fine-tuned to remember chat history, cannot provide follow-up responses, and does not know the answers to many questions (it was fine-tuned only to respond in a conversational way). For optimal performance, we recommend using a GPU with at least 4 GB of VRAM and downloading the model manually instead of using the Transformers library. Here's how you should deploy the model:
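The deployment snippet itself is not included in this diff. As a rough, hypothetical sketch of the manual-download approach described above (the repo id and filename below are placeholders, not taken from this README):

```python
# Hypothetical sketch of a manual deployment: fetch the checkpoint file
# directly from the Hugging Face CDN and load the weights with torch,
# rather than going through the transformers auto-loading machinery.
# The repo id and filename are placeholders, not confirmed by this README.
from urllib.request import urlretrieve

REPO_ID = "Locutusque/gpt2-model"  # placeholder repo id
FILENAME = "pytorch_model.bin"     # typical PyTorch checkpoint name

def checkpoint_url(repo_id: str, filename: str) -> str:
    """Direct-download URL for a file stored in a Hugging Face model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

def download_checkpoint(repo_id: str = REPO_ID, filename: str = FILENAME) -> str:
    """Download the checkpoint into the working directory and return its path."""
    path, _ = urlretrieve(checkpoint_url(repo_id, filename), filename)
    return path

# Once downloaded, the weights can be loaded manually, e.g.:
#   import torch
#   state_dict = torch.load(FILENAME, map_location="cpu")
```

The `.../resolve/main/...` URL pattern is the standard direct-download endpoint for files in a Hugging Face repository; adjust the branch name if the weights live on a different revision.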