---
language:
- ko
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-korean-news-title-klue-ynat
  results: []
---

# t5-base-korean-news-title-klue-ynat

This model was created by fine-tuning lcw99/t5-base-korean-text-summary on klue-ynat.

Input: one of the topic labels `['IT과학','경제','사회','생활문화','세계','스포츠','정치']` (IT/science, economy, society, culture/lifestyle, world, sports, politics).

Output: a generated Korean news headline matching the given label.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_dir = "kfkas/t5-base-korean-news-title-klue-ynat"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
model.to(device)

label_list = ['IT과학','경제','사회','생활문화','세계','스포츠','정치']
text = "IT과학"

input_ids = tokenizer.encode(text, max_length=256, truncation=True, return_tensors="pt").to(device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        do_sample=True,  # use sampling instead of greedy decoding
        max_length=128,  # maximum decoding length of 128 tokens
        top_k=50,        # exclude tokens outside the top 50 by probability
        top_p=0.95,      # sample only from the nucleus covering 95% cumulative probability
    )

decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float16

### Training results

### Framework versions

- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
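
## Generating a title for every label

A minimal sketch that loops over all seven topic labels and prints one sampled headline per label, assuming the same checkpoint and sampling settings as in the Usage section above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_dir = "kfkas/t5-base-korean-news-title-klue-ynat"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir).to(device)

label_list = ['IT과학','경제','사회','생활문화','세계','스포츠','정치']

# Generate one sampled headline for each topic label.
for label in label_list:
    input_ids = tokenizer.encode(
        label, max_length=256, truncation=True, return_tensors="pt"
    ).to(device)
    with torch.no_grad():
        output = model.generate(
            input_ids, do_sample=True, max_length=128, top_k=50, top_p=0.95
        )
    print(label, "->", tokenizer.decode(output[0], skip_special_tokens=True))
```

Because `do_sample=True`, each run produces different headlines; rerun the loop (or generate with `num_return_sequences`) to get multiple candidates per label.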