Update README.md
README.md CHANGED

@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+base_model: Intel/neural-chat-7b-v3-1
 tags:
 - LLMs
 - mistral
@@ -67,6 +68,9 @@ This model is a fine-tuned 7B parameter LLM on the Intel Gaudi 2 processor from
 | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
 
 ## How To Use
+
+Context length for this model: 8192 tokens (same as https://huggingface.co/mistralai/Mistral-7B-v0.1)
+
 ### Reproduce the model
 Here is the sample code to reproduce the model: [GitHub sample code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3). Here is the documentation to reproduce building the model:
 
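The 8192-token context length added in the second hunk is the practical limit on prompt plus generated tokens. A minimal sketch of how a caller might enforce it before inference — the helper name and `keep` parameter are illustrative, not part of the model card or any library API:

```python
# Context window noted in the README diff (same as mistralai/Mistral-7B-v0.1).
MAX_CONTEXT = 8192

def fit_to_context(input_ids, max_len=MAX_CONTEXT, keep="end"):
    """Truncate a token-id sequence so it fits the model's context window.

    keep="end" drops the oldest tokens (typical for chat history);
    keep="start" drops the newest instead.
    """
    if len(input_ids) <= max_len:
        return input_ids
    return input_ids[-max_len:] if keep == "end" else input_ids[:max_len]
```

In practice the token ids would come from the model's tokenizer, and `max_len` would be reduced further by however many new tokens you intend to generate.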