from transformers import BertTokenizer, GPT2LMHeadModel

device = "cpu"

model_path = "svjack/prompt-extend-chinese-gpt"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = GPT2LMHeadModel.from_pretrained(model_path).to(device)

prompt = "一只凶猛的老虎,咬死了一只豺狼。"  # "A fierce tiger bit a jackal to death."

encode = tokenizer(prompt, return_tensors="pt").to(device)
answer = model.generate(
    encode.input_ids,
    max_length=128,
    num_beams=2,
    top_p=0.95,
    top_k=50,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True,
)[0]
decoded = tokenizer.decode(answer, skip_special_tokens=True)
decoded

'一 只 凶 猛 的 老 虎 , 咬 死 了 一 只 豺 狼 。 高 度 详 细 , 数 字 绘 画 , 艺 术 站 , 概 念 艺 术 , 锐 利 的 焦 点 , 插 图 , 电 影 照 明 , 艺 术 由 artgerm 和 greg rutkowski 和 alphonse mucha 8 k 彩 色 辛 烷 渲 染 。'
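Because the checkpoint pairs a GPT-2 LM head with `BertTokenizer`, `tokenizer.decode` leaves a space between every Chinese character, as seen in the output above. A minimal post-processing sketch (the helper name `strip_cjk_spaces` is an assumption, not part of the model card) that collapses those gaps while keeping Latin tokens such as artist names intact:

```python
import re

# Hypothetical helper (not part of the model card): remove the single
# spaces that BertTokenizer.decode inserts between CJK characters,
# while leaving spacing around Latin words ("greg rutkowski", "8 k")
# untouched.
_CJK = r"[\u3000-\u303f\u4e00-\u9fff\uff00-\uffef]"
_CJK_GAP = re.compile(rf"({_CJK}) (?={_CJK})")

def strip_cjk_spaces(text: str) -> str:
    """Drop a space only when both its neighbors are CJK characters."""
    return _CJK_GAP.sub(r"\1", text)

print(strip_cjk_spaces("锐 利 的 焦 点"))  # -> 锐利的焦点
```

Applied to the sample output above, this yields ordinary Chinese text with the English artist names and their word spacing preserved.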
