Update README.md
README.md CHANGED
@@ -13,8 +13,8 @@ Training Details:
 
 Dataset Modifications:
 <br>\- Further Cleaned up Roleplaying Samples -> Quality Check
-<br>\- Removed Low Quality Samples from Manual Check
-<br>\- More Creative Writing Samples -> 2x
+<br>\- Removed Low Quality Samples from Manual Check -> Increased Baseline Quality Floor
+<br>\- More Creative Writing Samples -> 2x Samples
 <br>\- Remade and Refined Detailed Instruct Data
 
 Coherent at 32K Context. Obviously not as good as a native 32K Context Model, but good enough. Has some of the usual memory issues in the middle of context and some problems with long-context understanding and reasoning, but it does not break down into incoherency the way regular rope scaling does.
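For reference, "regular rope scaling" in the context note above usually means stretching a base model's RoPE positions at load time rather than training on longer sequences. The sketch below is a minimal illustration of that common pattern, not this model's recipe; the model id, base context length, and scaling factor are placeholder assumptions, and depending on your transformers version the key may be "rope_type" instead of "type".

```python
# Minimal sketch of load-time ("regular") RoPE scaling with transformers.
# Assumptions: "your-org/your-base-model" is a hypothetical Llama-family
# checkpoint with an 8K native context, stretched ~4x to reach ~32K positions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-base-model"  # placeholder, substitute a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    # Linear RoPE scaling: positions are divided by the factor at inference,
    # extending the usable window without retraining. This is the approach the
    # card contrasts against, which tends to degrade coherence at long range.
    rope_scaling={"type": "linear", "factor": 4.0},
)
```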