Update README.md
README.md (changed)
```diff
@@ -48,7 +48,7 @@ RWKV-4-Pile-7B-20221115-8047.pth : Trained on the Pile for 332B tokens.
 * SC2016 acc 73.44%
 * Hellaswag acc_norm 65.51%
 
-### Instruct-test models: only useful if you construct your prompt following dataset templates
+### Instruct-test models (OLD): only useful if you construct your prompt following dataset templates
 
 Note I am using "Q: instruct\n\nA: result" prompt for all instructs.
 
@@ -61,7 +61,3 @@ instruct-tuned on https://huggingface.co/datasets/Muennighoff/flan & NIv2
 ### Chinese models
 
 RWKV-4-Pile-7B-EngChn-testNovel-xxx for writing Chinese novels (trained on 200G Chinese novels.)
-
-RWKV-4-Pile-7B-EngChn-testxxx for Chinese Q&A (trained on 10G Chinese text. only for testing purposes.)
-
-RWKV-4-Pile-7B-EngChn-test5 is tuned on more ChatGPT-like data and it's pretty decent. Try "+i 开题报告" "+i 世界各国美食" in latest ChatRWKV v2.
```
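The `"Q: instruct\n\nA: result"` template mentioned in the diff is a plain string format, so it can be sketched in a few lines of Python. The helper name below is hypothetical, not part of ChatRWKV's actual API:

```python
# Hypothetical helper (not from ChatRWKV) illustrating the
# "Q: instruct\n\nA: result" template the instruct-test models expect.
def build_instruct_prompt(instruction: str) -> str:
    # End the prompt right after "A:" so the model's continuation
    # becomes the result.
    return f"Q: {instruction}\n\nA:"

print(build_instruct_prompt("Translate 'hello' to French."))
```

Leaving the prompt open after `A:` matters: the model was tuned to see the question/answer frame and fill in only the answer, so appending anything after `A:` would deviate from the training template.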