Upload README.md #2
by leelearn · opened

README.md CHANGED
@@ -4,9 +4,6 @@ language:
 - zh
 tags:
 - GENIUS
-- conditional text generation
-- sketch-based text generation
-- data augmentation
 
 license: apache-2.0
 datasets:
@@ -24,7 +21,7 @@ widget:
 
 inference:
   parameters:
-    max_length:
+    max_length: 1000
     num_beams: 3
     do_sample: True
 ---
@@ -47,7 +44,7 @@ inference:
 ```python
 # genius-chinese
 from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
-checkpoint = '
+checkpoint = 'leelearn/genius-yiyan-prompt-generator'
 tokenizer = BertTokenizer.from_pretrained(checkpoint)
 genius_model = BartForConditionalGeneration.from_pretrained(checkpoint)
 genius_generator = Text2TextGenerationPipeline(genius_model, tokenizer, device=0)
@@ -120,7 +117,6 @@ GENIUS-chinese output:
 
 As can be seen, BART can only fill in a few simple words and cannot join these fragments coherently, whereas GENIUS can expand them into fluent sentences or even whole paragraphs.
 
-
 ---
 
 If you find our paper/code/demo useful, please cite our paper:
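
As a usage note, below is a minimal sketch of how the updated snippet might be exercised end to end, combining the new checkpoint name with the inference parameters this PR sets in the metadata (max_length: 1000, num_beams: 3, do_sample: True). The [MASK]-delimited sketch string is an illustrative assumption based on GENIUS's sketch-based input format, and device=-1 (CPU) is used here instead of the README's device=0 so the example does not require a GPU.

```python
# Minimal usage sketch for the updated checkpoint (assumes `transformers` is installed).
from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline

checkpoint = 'leelearn/genius-yiyan-prompt-generator'
tokenizer = BertTokenizer.from_pretrained(checkpoint)
genius_model = BartForConditionalGeneration.from_pretrained(checkpoint)
# device=-1 runs on CPU; use device=0 (as in the README) if a GPU is available.
genius_generator = Text2TextGenerationPipeline(genius_model, tokenizer, device=-1)

# Hypothetical sketch input: fragments separated by [MASK] spans, following
# GENIUS's sketch-based text generation format.
sketch = "文心一言[MASK]提示词[MASK]生成[MASK]"

# Generation kwargs mirror the inference parameters added in this PR.
outputs = genius_generator(sketch, max_length=1000, num_beams=3, do_sample=True)
print(outputs[0]['generated_text'])
```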