---
language:
- ko
tags:
- generated_from_keras_callback
model-index:
- name: t5-large-korean-P2G
  results: []
---

# t5-large-korean-P2G

This model fine-tunes lcw99/t5-large-korean-text-summary on 500,000 sentences from the 2021 National Institute of Korean Language newspaper corpus. Each sentence was converted to its pronounced (G2P) form with g2pK, and the model is trained to restore the G2P-converted text back to the original spelling (P2G).
GitHub: https://github.com/taemin6697
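
The training pairs can be reproduced with g2pK: each original sentence serves as the target, and its g2pK pronunciation serves as the model input. Below is a minimal sketch of that pairing; the actual preprocessing script is not published, so the details here are illustrative only.

```python
# Minimal sketch of building (input, target) pairs with g2pK.
# The real preprocessing pipeline is not published; this only illustrates the idea.
from g2pk import G2p

g2p = G2p()

original = "석유왕국 사우디 태양광·풍력 발전 중심지 될 것"
pronounced = g2p(original)  # e.g. "서규왕국 싸우디 태양광·풍녁 빨쩐 중심지 될 껃"

# For P2G training, the pronounced form is the input
# and the original spelling is the target.
example = {"input": pronounced, "target": original}
print(example)
```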
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_dir = "kfkas/t5-large-korean-P2G"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

# Input is Korean text in its pronounced (G2P) form.
text = "서규왕국 싸우디 태양광·풍녁 빨쩐 중심지 될 껃"
inputs = tokenizer.encode(text, return_tensors="pt")
output = model.generate(inputs)
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)  # 석유왕국 사우디 태양광·풍력 발전 중심지 될 것
```

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float16

### Training results

### Framework versions

- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1