#### This Model

This is the chat model finetuned on [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).

**Update from V0.1: 1. Different dataset. 2. Different chat format (now [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)-formatted conversations).**

#### How to use

You will need `transformers>=4.31`.

Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
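Since this model expects chatml-formatted conversations, a minimal sketch of building such a prompt may help. The `format_chatml` helper below is illustrative, not part of this repository; it only constructs the prompt string that would then be passed to the tokenizer and `model.generate`.

```python
# Sketch of the chatml conversation format (per the update note above).
# format_chatml is a hypothetical helper, not part of this model's code;
# it builds the prompt string without downloading or running the model.

def format_chatml(messages):
    """Render a list of {'role', 'content'} dicts as a chatml prompt."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model completes it.
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

prompt = format_chatml([{"role": "user", "content": "What is TinyLlama?"}])
print(prompt)
```

The resulting string would typically be tokenized with `AutoTokenizer.from_pretrained(...)` and fed to the model's `generate` method.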