AzureBlack committed on
Commit c704c7a · 1 Parent(s): 5dd7694

Upload 2 files

Files changed (2)
  1. README.md +71 -0
  2. huggingface-metadata.txt +20 -0
README.md ADDED
@@ -0,0 +1,71 @@
---
language:
- en
pipeline_tag: text-generation
---

# DreamGen Opus V0.5 70B

**DreamGen Opus** is a family of **uncensored** models fine-tuned for **(steerable) story writing** that also work great for **chat / RP**.
The DreamGen Opus V0.5 70B model is derived from [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf).

You can **try the Opus V0 70B** (AWQ) model for free on [dreamgen.com](https://dreamgen.com).

Other sizes:

- 7B: [dreamgen/opus-v0-7b](https://huggingface.co/dreamgen/opus-v0-7b)

## Difference from [dreamgen/opus-v0-70b](https://huggingface.co/dreamgen/opus-v0-70b)

The model should be even better at role-play and chat, and be slightly more "open-minded" in NSFW contexts.

## Prompting

Please see the [official documentation](https://dreamgen.com/docs/stories) for a more detailed guide, including how to prompt the model for chat / RP.

The (collaborative / steerable) story writing task teaches the model to respect `<setting>` and `<instruction>` tags inserted into the prompt.

Example prompt:

```
<setting>
(The setting provides a general overview of the story and characters)
This story is a twist on the traditional Little Red Riding Hood story.
In this variation, Little Red Riding Hood and her grandma are secretly werewolves.
</setting>

(Previous part of the story, potentially empty)

<instruction>
(The instruction tells the model what should happen in the next few sentences / paragraphs)
Little Red Riding Hood confronts The Big Bad Wolf, transforming into her wolf form.
</instruction>
```
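
To make the prompt format concrete, here is a minimal sketch of feeding such a prompt to the model with the Hugging Face `transformers` generation API. The model id, sampling settings, and hardware assumptions (enough GPU memory for 70B weights, or a quantized variant) are illustrative, not an official recommendation.

```python
# Minimal sketch (assumptions noted above): generate a continuation for a story
# prompt using the <setting> / <instruction> format described in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dreamgen/opus-v0.5-70b"  # assumption: original weights or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = """<setting>
This story is a twist on the traditional Little Red Riding Hood story.
In this variation, Little Red Riding Hood and her grandma are secretly werewolves.
</setting>

<instruction>
Little Red Riding Hood confronts The Big Bad Wolf, transforming into her wolf form.
</instruction>
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```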

## Dataset

The fine-tuning dataset consisted of >1M tokens of collaborative writing task examples, each example being up to 4096 tokens. On top of that, >20M tokens of more general, but less instructed examples were included to help preserve generalization.

All prose in the dataset is from actual humans, not AI generated.

## Community

Join the DreamGen community on [**Discord**](https://dreamgen.com/discord), or follow our [**X/Twitter account**](https://dreamgen.com/twitter) for new model releases and other news.
We will soon be releasing models with a longer context window, as well as models specifically fine-tuned for character chat & roleplay.

Help us shape the future of DreamGen.

## Running the model

The model should be compatible with any software that supports [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf).
Note that because this is a 70B model, the resource requirements are large. You can try the quantized versions linked at the top, but expect a quality drop.
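
One generic way to reduce the memory footprint (separate from the AWQ quant mentioned below) is 4-bit loading via bitsandbytes. The snippet below is only a sketch under assumptions: the model id points at the original dreamgen/opus-v0.5-70b weights (or a local copy), and transformers, accelerate, and bitsandbytes are installed.

```python
# Sketch, not an official recipe: load the 70B weights in 4-bit with
# bitsandbytes to cut memory use; expect some quality drop versus full precision.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "dreamgen/opus-v0.5-70b"  # assumption: original weights or a local path

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs / CPU
)
```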

### Running on DreamGen.com (free)

You can try the 70B (AWQ) model for free at [dreamgen.com](https://dreamgen.com); note that an account is required.
The version used for the website is the official AWQ 4-bit quant [dreamgen/opus-v0-70b-awq](https://huggingface.co/dreamgen/opus-v0-70b-awq).

## License

- For personal and academic use: Same license as the base model, in this case https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
- For commercial use: Please reach out to hello@dreamgen.com.
huggingface-metadata.txt ADDED
@@ -0,0 +1,20 @@
url: https://huggingface.co/dreamgen/opus-v0.5-70b
branch: main
download date: 2023-11-18 07:48:35
sha256sum:
    d486e3e93034ccbdb88ac1ee5bf235fa2e82ff561df2ceba73560bfab53e1704 pytorch_model-00001-of-00015.bin
    3bcbe4416005dd41e4c877e0b271a9cf364e5adca8accf6200300351c9dda93d pytorch_model-00002-of-00015.bin
    a541cc0314176f12922cc3daba2cdc215e9e61a18b3a6b4f02c7f007758be648 pytorch_model-00003-of-00015.bin
    bab03692d875e916edefaecf356f0626624ae1de7e10fd42a4cd7e5a4d9de087 pytorch_model-00004-of-00015.bin
    6b1ba5de0e69bc24d2c65da16ce80d1ed1e2c2224ca1ce16cea5170a0f9cb199 pytorch_model-00005-of-00015.bin
    533c71cb21186baf0cfce4f2ebcf46ae02084b7f6880385e01cd4f49331fba41 pytorch_model-00006-of-00015.bin
    e46bcb1067dfec22bdab1952d6eb995eea7f64f8c5c5f5416af6a33a82d520c8 pytorch_model-00007-of-00015.bin
    c7f1bde98d09e7d5e37fd568272732777c4a9c8f1ea3e672e38b386826365cbf pytorch_model-00008-of-00015.bin
    2a69ce500e53d37e65c014682e8d5b0be9b859a188bee919b0e1c253ab1c49cf pytorch_model-00009-of-00015.bin
    bf13930f0f3107a96425d80ae2a83fba0931ed38988d0fbeaeacf05a9f2dc925 pytorch_model-00010-of-00015.bin
    d3f37f002b1b9bb070e98d648ec04e11d9e1ca7da34102c84c38ed38973bd5f7 pytorch_model-00011-of-00015.bin
    a498db61424a013bb4c1a716aca856082aef24ee1f875a69ca692cfbac85b7f6 pytorch_model-00012-of-00015.bin
    bfa3ad3082a72f4b25c725e05ba883085671f8a21fd81863928c62a672f4895a pytorch_model-00013-of-00015.bin
    bf08a98dbdff77ee7f1642f107dba836a0eff85509c62d2058f95147bb0a3f24 pytorch_model-00014-of-00015.bin
    389b333f28df8197e857b33d0d3415177b5ca2c0977a654b29b90b21d043fc18 pytorch_model-00015-of-00015.bin
    9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347 tokenizer.model