shi-zheng-qxhs committed on
Commit 90246b2
1 Parent(s): b0ad3b9

Update README.md

Files changed (1): README.md (+70, -0)
README.md CHANGED
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2_oasst2_curated
  results: []
datasets:
- sablo/oasst2_curated
language:
- en
pipeline_tag: text-generation
widget:
- text: Who is the president of the United States?
- text: Hi, my name is Superman. How are you!
- text: Do you know the history of Chelsea Football Club?
inference:
  parameters:
    max_length: 128
    penalty_alpha: 0.6
    top_k: 6
    pad_token_id: 50256
    eos_token_id: 50256
library_name: transformers.js
---

# gpt2_oasst2_curated

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [sablo/oasst2_curated](https://huggingface.co/datasets/sablo/oasst2_curated) dataset.
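
For a quick test, here is a minimal sketch using the Python `transformers` text-generation pipeline (the card also declares `library_name: transformers.js` for in-browser use). The generation settings mirror the contrastive-search parameters in the card metadata; note that the hub id `shi-zheng-qxhs/gpt2_oasst2_curated` is an assumption pieced together from the committer and model names, not stated in the card.

```python
from transformers import pipeline

# Assumed hub id (committer name + model name); adjust if the repo lives elsewhere.
generator = pipeline("text-generation", model="shi-zheng-qxhs/gpt2_oasst2_curated")

# Contrastive search, mirroring the card's inference parameters.
result = generator(
    "Who is the president of the United States?",
    max_length=128,
    penalty_alpha=0.6,
    top_k=6,
    pad_token_id=50256,  # GPT-2 has no pad token; reuse <|endoftext|>
    eos_token_id=50256,
)
print(result[0]["generated_text"])
```

With `penalty_alpha > 0` and a small `top_k`, `generate` runs contrastive search rather than greedy decoding, which tends to reduce repetition in GPT-2-sized models.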

## Model description

An experimental chatbot: GPT-2 fine-tuned on curated OpenAssistant conversations.

## Intended uses & limitations

Intended for experimentation with chat-style text generation only. The model inherits the limitations and biases of the GPT-2 base model and has not been evaluated for production use.

## Training and evaluation data

The model was fine-tuned on [sablo/oasst2_curated](https://huggingface.co/datasets/sablo/oasst2_curated), a curated English subset of the OpenAssistant (oasst2) conversations.
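
A minimal sketch for inspecting the training data with the `datasets` library (version pinned under Framework versions below); the split and field names are whatever the dataset repo defines, so the inspection here is deliberately generic.

```python
from datasets import load_dataset

# Download the curated OASST2 data referenced in the card metadata.
ds = load_dataset("sablo/oasst2_curated")

# Show the available splits, their sizes, and column names.
print(ds)
```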

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256 (= 32 × 8)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
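
As a rough guide to reproducing the run, here is a sketch of how the list above maps onto `transformers.TrainingArguments`; `output_dir` is a placeholder, and nothing beyond the listed values is taken from the original setup.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2_oasst2_curated",  # placeholder, not from the original run
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,  # effective batch size: 32 * 8 = 256
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=20,
    fp16=True,  # mixed precision via native AMP
)
```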

### Training results

No training or evaluation metrics were reported for this run.

### Framework versions

- Transformers 4.36.2
- PyTorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0