
KorFinASC-XLM-RoBERTa

Pretrained XLM-RoBERTa-Large transferred to the finance domain for the Korean language.
See the paper for more details.

Data

KorFinASC-XLM-RoBERTa is trained on multiple datasets, including KorFin-ASC, Ko-FinSA, Ko-ABSA, and ModuABSA.

How to use

>>> import torch
>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("amphora/KorFinASC-XLM-RoBERTa")
>>> model = AutoModelForSequenceClassification.from_pretrained("amphora/KorFinASC-XLM-RoBERTa")

>>> # The input pairs the sentence with the target entity, separated by </s>
>>> input_str = "장 전체가 폭락한 가운데 삼성전자만 상승세를 이어갔다. </s> 삼성전자"
>>> inputs = tokenizer(input_str, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax(dim=-1).item()
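To read the prediction as a sentiment label, the class index can be looked up in the checkpoint's id2label mapping. The exact label names depend on how this checkpoint was configured, so treat this as a sketch rather than a guaranteed mapping.

>>> # Assumes the checkpoint ships an id2label mapping in its config
>>> print(model.config.id2label[predicted_class_id])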