AuriAetherwiing committed on
Commit 68a5f7d
1 Parent(s): 752f89c

Update README.md

Files changed (1)
  1. README.md +59 -2
README.md CHANGED
@@ -4,13 +4,70 @@ base_model:
 - EVA-UNIT-01/LLaMA-EVA-3.33-70B-v0.0
 - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
 library_name: transformers
+license: other
+license_name: eva-llama3.3
 tags:
 - mergekit
 - merge
-
+datasets:
+- anthracite-org/kalo-opus-instruct-22k-no-refusal
+- Nopm/Opus_WritingStruct
+- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
+- Gryphe/Sonnet3.5-Charcard-Roleplay
+- Gryphe/ChatGPT-4o-Writing-Prompts
+- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
+- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
+- nothingiisreal/Reddit-Dirty-And-WritingPrompts
+- allura-org/Celeste-1.x-data-mixture
+- cognitivecomputations/dolphin-2.9.3
 ---
-# EVA
 
+# EVA LLaMA 3.33 70B v0.1
+An RP/storywriting specialist model: a full-parameter finetune of Llama-3.3-70B-Instruct on a mixture of synthetic and natural data.<br>
+It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity, and "flavor" of the resulting model.<br>
+This model was built with Llama by Meta.
+
+## Version notes for v0.1
+
+A DELLA linear merge of v0.0 with an unreleased checkpoint from a different run: reduced overfitting, better long-context comprehension and recall, less repetition, and more stability.
+
+<p>Prompt format is Llama3.</p><br>
+<h3>Recommended sampler values:</h3>
+<ul>
+<li>Temperature: 1</li>
+<li>Min-P: 0.05</li>
+<li>Repetition Penalty: 1.03</li>
+</ul>
+
+<h3>Recommended SillyTavern preset (via Virt-io):</h3>
+<ul><li><a href="https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0/blob/main/EV01-llama.json">Master import</a></li></ul>
+
+<h3>Training data:</h3>
+<ul>
+<li>The Celeste 70B 0.1 data mixture minus the Opus Instruct subset; see that model's <a href="https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16">card</a> for details.</li>
+<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
+<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.</li>
+<li>A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe.</li>
+<li>The Synthstruct and SynthRP datasets by Epiculous.</li>
+<li>A subset of Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.</li>
+</ul>
+
+<p>The model was created by Kearm, Auri and Cahvay.</p>
+<h4>Special thanks:</h4>
+<ul>
+<li>to Cahvay for his work on dataset filtering,</li>
+<li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data,</li>
+<li>and to Allura-org for support, feedback, beta-testing and quality control of the EVA models.</li>
+</ul>
+
+<h3>Licensing</h3>
+<p>Llama-3.3-70B-Instruct by Meta is licensed under the <a href="https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE">Llama 3.3 Community License Agreement</a> (referred to below as the L3.3 license) and is subject to the <a href="https://www.llama.com/llama3_3/use-policy">Acceptable Use Policy for Llama Materials</a>.<br>
+This derivative is free for personal, research and commercial use under the terms of the L3.3 license, with one extra clause:<br>
+- Infermatic Inc and any of its employees or paid associates may not utilize, distribute, download, or otherwise make use of EVA models for any purpose.</p>
+
+
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
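The Llama3 prompt format and the recommended sampler values from the updated card can be sketched as follows. This is a minimal illustration, not from the card itself: the system/user messages are placeholders, and the sampler keys are chosen to match the kwargs accepted by `transformers`' `model.generate()` / `GenerationConfig`.

```python
# Build a single-turn prompt in the standard Llama3 chat format and collect
# the sampler values the card recommends.  Messages are illustrative only.

def llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt using Llama3 special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Recommended sampler values from the card, as generate() keyword arguments.
SAMPLERS = {
    "do_sample": True,
    "temperature": 1.0,
    "min_p": 0.05,
    "repetition_penalty": 1.03,
}

prompt = llama3_prompt(
    "You are a creative writing assistant.",
    "Write the opening line of a space-opera story.",
)
```

With a loaded model and tokenizer, this would be used roughly as `model.generate(**tokenizer(prompt, return_tensors="pt"), **SAMPLERS)`.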
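The version notes describe v0.1 as a DELLA linear merge of v0.0 with an unreleased checkpoint. A mergekit recipe for such a merge could look roughly like the sketch below; the second model path, weights, and densities are assumptions for illustration only, since the actual v0.1 recipe is not published.

```python
# Hypothetical mergekit recipe for a DELLA linear merge, in the spirit of the
# card's version notes.  The unreleased-checkpoint path and all numeric
# parameters are placeholders, not the real v0.1 recipe.

della_config = {
    "merge_method": "della_linear",
    "base_model": "meta-llama/Llama-3.3-70B-Instruct",
    "models": [
        {
            "model": "EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0",
            "parameters": {"weight": 0.5, "density": 0.5},
        },
        {
            # Stand-in for the unreleased checkpoint from the other run.
            "model": "path/to/unreleased-checkpoint",
            "parameters": {"weight": 0.5, "density": 0.5},
        },
    ],
    "dtype": "bfloat16",
}
```

Serialized to YAML (e.g. with `yaml.safe_dump(della_config)`), this becomes a config file runnable with mergekit's `mergekit-yaml` entry point.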