viethoangtranduong committed
Commit e0e5ae5 · 1 Parent(s): 7b46b80
Update README.md
README.md CHANGED
@@ -5,9 +5,10 @@ datasets:
 pipeline_tag: text-generation
 ---
 
-Note:
-
-
+Note: Our temporary HF inference endpoint is available for community testing.
+It may initially take a few minutes to activate, but will eventually operate at the standard speed of HF's 7B model text inference endpoint.
+The speed of inference depends on HF endpoint performance and is unrelated to Snorkel offerings.
+This endpoint is designed for initial trials, not for ongoing production use. Have fun!
 
 ```
 import requests
@@ -44,7 +45,9 @@ We utilize ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets
 This overview provides a high-level summary of our approach.
 We plan to release more detailed results and findings in the coming weeks on the [Snorkel blog.](https://snorkel.ai/blog/)
 
-The prompt format follows Mistral model:
+The prompt format follows the Mistral model:
+
+```[INST] {prompt} [/INST]```
 
 ### Training recipe:
 - The provided data is formatted to be compatible with the Hugging Face's [Zephyr recipe](https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta).
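For context, the README example touched by the first hunk begins with `import requests`, and the second hunk documents the Mistral-style `[INST] {prompt} [/INST]` prompt format. Below is a minimal sketch of how the temporary endpoint might be called; the endpoint URL, token, payload shape, and response shape are assumptions based on standard HF text-generation Inference Endpoints and are not shown in this commit.

```python
import requests

# Hypothetical values: the commit does not include the actual endpoint URL or headers.
API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HEADERS = {"Authorization": "Bearer <HF_TOKEN>", "Content-Type": "application/json"}

def query(prompt: str) -> str:
    # Wrap the raw prompt in the Mistral instruction format shown in the diff.
    payload = {"inputs": f"[INST] {prompt} [/INST]"}
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    # HF text-generation endpoints typically return a list of {"generated_text": ...} objects.
    return response.json()[0]["generated_text"]

if __name__ == "__main__":
    print(query("Summarize the Zephyr alignment recipe in one sentence."))
```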