11b committed
Commit • 68bd605
Parent(s): none (initial commit)
feat: initial commit
Files changed:
- .gitattributes +34 -0
- README.md +59 -0
- config.json +47 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +5 -0
- tensorboard_runs/2022-12-26T_19-55-05/events.out.tfevents.1672095305.lavidP6000.20829.0 +3 -0
- tensorboard_runs/2022-12-26T_20-36-16/events.out.tfevents.1672097776.lavidP6000.1415.0 +3 -0
- tensorboard_runs/2022-12-26T_21-04-54/events.out.tfevents.1672099494.lavidP6000.9498.0 +3 -0
- tensorboard_runs/2022-12-26T_21-17-43/events.out.tfevents.1672100263.lavidP6000.13066.0 +3 -0
- tensorboard_runs/2022-12-26T_22-05-19/events.out.tfevents.1672103119.lavidP6000.27877.0 +3 -0
- tensorboard_runs/2022-12-27T_12-19-34/events.out.tfevents.1672154374.lavidP6000.9711.0 +3 -0
- tensorboard_runs/2022-12-27T_14-17-28/events.out.tfevents.1672161448.lavidP6000.18901.0 +3 -0
- tokenizer.json +0 -0
- tokenizer_config.json +9 -0
.gitattributes
ADDED
@@ -0,0 +1,34 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
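These rules route the repository's large binary artifacts (model weights, archives, TensorBoard event files) through Git LFS, so those files are committed as small pointer stubs rather than raw blobs. As a rough illustration, here is a hypothetical Python helper (not part of this repo) that approximates the matching; note `fnmatch` only approximates gitattributes glob semantics (e.g. for `saved_model/**/*`):

```python
# Hypothetical helper: approximate which files the LFS rules above would catch.
# Only a subset of the 34 patterns is listed, and fnmatch is an approximation
# of gitattributes glob matching, not an exact reimplementation.
from fnmatch import fnmatch

LFS_PATTERNS = ["*.bin", "*.pt", "*.pth", "*.safetensors", "*.zip", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    """True if the file's basename matches any of the listed LFS patterns."""
    basename = path.rsplit("/", 1)[-1]
    return any(fnmatch(basename, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("pytorch_model.bin"))  # True  -> stored as an LFS pointer
print(is_lfs_tracked("tokenizer.json"))     # False -> stored as a regular file
```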
README.md
ADDED
@@ -0,0 +1,59 @@
+---
+license: agpl-3.0
+language:
+- en
+thumbnail:
+tags:
+- text generation
+- conversational
+inference: false
+
+---
+
+# Pygmalion 1.3B
+
+## Model description
+
+Pygmalion 1.3B is a proof-of-concept dialogue model based on EleutherAI's [pythia-1.3b-deduped](https://huggingface.co/EleutherAI/pythia-1.3b-deduped).
+
+**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
+
+## Training data
+
+The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
+
+## Training procedure
+
+Fine-tuning was done using [ColossalAI](https://github.com/hpcaitech/ColossalAI) (specifically, with a slightly modified version of their [OPT fine-tune example](https://github.com/hpcaitech/ColossalAI/blob/78509124d32b63b7fc36f6508e0576a326d51422/examples/language/opt/run_clm.py)) for around 11.4 million tokens over 5440 steps on a single 24GB GPU. The run took just under 21 hours.
+
+## Intended use
+
+### The easy way
+
+We plan to provide a notebook with a Gradio UI for playing around with the model shortly. Until then, please refer to the section below for manual usage.
+
+### The manual way
+
+The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
+
+```
+[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
+
+[DIALOGUE HISTORY]
+You: [Your input message here]
+[CHARACTER]:
+```
+
+Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
+
+```
+[CHARACTER]: [some dialogue here]
+You: [your response to the dialogue above]
+```
+
+Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
+
+## Known issues
+
+- The model can get stuck repeating certain phrases, or sometimes even entire sentences.
+  - We believe this is due to that behavior being present in the training data itself, and plan to investigate and adjust accordingly for future versions.
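For reference, here is a minimal usage sketch of the prompt format the README describes, using the `transformers` API that the config files below target. The repo id, persona text, and sampling settings are illustrative assumptions, not taken from this commit:

```python
# Minimal sketch of the README's prompt format. The repo id below is an
# assumed placeholder, and the generation settings are arbitrary examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PygmalionAI/pygmalion-1.3b"  # assumption: adjust to the actual repo
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = (
    "Assistant's Persona: A friendly, knowledgeable librarian who loves to help.\n"
    "\n"
    "Assistant: Hello! What are you looking for today?\n"
    "You: Can you recommend a good science-fiction novel?\n"
    "Assistant:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated completion, not the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```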
config.json
ADDED
@@ -0,0 +1,47 @@
+{
+  "_name_or_path": "EleutherAI/pythia-1.3b-deduped",
+  "architectures": [
+    "GPTNeoXForCausalLM"
+  ],
+  "bad_words_ids": [
+    [
+      434,
+      15694,
+      66,
+      27,
+      209
+    ],
+    [
+      15362
+    ],
+    [
+      1713
+    ],
+    [
+      1713,
+      64
+    ],
+    [
+      2391
+    ]
+  ],
+  "bos_token_id": 0,
+  "eos_token_id": 0,
+  "hidden_act": "gelu",
+  "hidden_size": 2048,
+  "initializer_range": 0.02,
+  "intermediate_size": 8192,
+  "layer_norm_eps": 1e-05,
+  "max_position_embeddings": 2048,
+  "model_type": "gpt_neox",
+  "num_attention_heads": 16,
+  "num_hidden_layers": 24,
+  "rotary_emb_base": 10000,
+  "rotary_pct": 0.25,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.26.0.dev0",
+  "use_cache": true,
+  "use_parallel_residual": true,
+  "vocab_size": 50304
+}
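One non-obvious field above is `bad_words_ids`: at generation time, `transformers` picks these up from the model config and suppresses the listed token-id sequences (its `NoBadWordsLogitsProcessor`). A small sketch to see what those sequences decode to, assuming the GPT-NeoX-20B tokenizer named in `tokenizer_config.json` below is the right vocabulary:

```python
# Sketch: decode the banned token-id sequences from config.json above.
# Assumes the GPT-NeoX-20B tokenizer (named in this commit's
# tokenizer_config.json) matches this model's vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

bad_words_ids = [[434, 15694, 66, 27, 209], [15362], [1713], [1713, 64], [2391]]
for ids in bad_words_ids:
    print(ids, "->", repr(tokenizer.decode(ids)))
```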
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8bfdcd3c115400fedbc36bfb2561edd4e7b424b0f8ac81c4b02f970fbf72380a
+size 2930076797
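The three lines above are a Git LFS pointer: `oid` is the SHA-256 of the real file content and `size` its byte length (about 2.9 GB, consistent with a 1.3B-parameter float16 checkpoint). A self-contained sketch for verifying a downloaded checkpoint against the pointer:

```python
# Sketch: verify a downloaded pytorch_model.bin against the LFS pointer above.
import hashlib
import os

EXPECTED_OID = "8bfdcd3c115400fedbc36bfb2561edd4e7b424b0f8ac81c4b02f970fbf72380a"
EXPECTED_SIZE = 2930076797

def verify(path: str) -> bool:
    """Check the file's byte length and SHA-256 against the LFS pointer."""
    if os.path.getsize(path) != EXPECTED_SIZE:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_OID

print(verify("pytorch_model.bin"))
```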
special_tokens_map.json
ADDED
@@ -0,0 +1,5 @@
+{
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "unk_token": "<|endoftext|>"
+}
tensorboard_runs/2022-12-26T_19-55-05/events.out.tfevents.1672095305.lavidP6000.20829.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75a0d7b08dd9db7fe5cfa86f2160b5907ed98f27187a68e80982c648075bbc5d
+size 14890
tensorboard_runs/2022-12-26T_20-36-16/events.out.tfevents.1672097776.lavidP6000.1415.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d7b5cc42203fb0878cb3ef4c5b10ee26a4c2bf1ebd520f8d1003c221c268784
+size 23464
tensorboard_runs/2022-12-26T_21-04-54/events.out.tfevents.1672099494.lavidP6000.9498.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce0315c65928850fb05ff521feacd66e4548e1805e72b4a7b3c2ba93bad9e34c
+size 11752
tensorboard_runs/2022-12-26T_21-17-43/events.out.tfevents.1672100263.lavidP6000.13066.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:839e9d6814b341244d9559c8d8da790924cddfde52ec7553839e0680cdb24775
+size 35176
tensorboard_runs/2022-12-26T_22-05-19/events.out.tfevents.1672103119.lavidP6000.27877.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a86a31c7adaf2d7e19fb08bbf934a9a325becc02fbe28bf01f1972b04275277
+size 597718
tensorboard_runs/2022-12-27T_12-19-34/events.out.tfevents.1672154374.lavidP6000.9711.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e98e45100f5830efc847fe0900b5eba223eacd4b1f26990245f0621252faedcb
+size 82024
tensorboard_runs/2022-12-27T_14-17-28/events.out.tfevents.1672161448.lavidP6000.18901.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49a383522d051485967feefab91e1e6499f2a25a099c611fdd4bfe400a20aa87
+size 234280
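All seven runs above are LFS pointers to TensorBoard event files from the fine-tuning described in the README. After fetching the actual files (e.g. `git lfs pull`), a sketch like the following reads the logged scalars; the exact tag names depend on what the training script recorded:

```python
# Sketch: read training scalars from one of the run directories above using
# TensorBoard's event-processing API (pip install tensorboard). The run path
# comes from this commit; the available tags are whatever was logged.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("tensorboard_runs/2022-12-27T_14-17-28")
ea.Reload()  # parse the events file from disk

for tag in ea.Tags()["scalars"]:
    events = ea.Scalars(tag)  # list of (wall_time, step, value) records
    print(tag, "first:", events[0].value, "last:", events[-1].value)
```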
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,9 @@
+{
+  "add_prefix_space": false,
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "name_or_path": "EleutherAI/gpt-neox-20b",
+  "special_tokens_map_file": "/fsx/home-hailey/.cache/huggingface/hub/models--EleutherAI--gpt-neox-20b/snapshots/3523781c8df75f7741687a4284f6f70e1afa12f4/special_tokens_map.json",
+  "tokenizer_class": "GPTNeoXTokenizer",
+  "unk_token": "<|endoftext|>"
+}
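Taken together with `special_tokens_map.json`, this configuration reuses the GPT-NeoX-20B tokenizer and maps `bos`/`eos`/`unk` all to `<|endoftext|>` (token id 0, matching `bos_token_id`/`eos_token_id` in `config.json`). A quick sanity-check sketch:

```python
# Sketch: load the tokenizer named above and confirm the special-token setup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # all <|endoftext|>
print(tokenizer.eos_token_id)  # 0, matching eos_token_id in config.json
```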