
Model Info

This model was fine-tuned for sentiment classification of Turkish product reviews, using a hepsiburada.com product review dataset. It predicts two labels (a usage sketch follows the list below):

  • LABEL_0: negative review
  • LABEL_1: positive review
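For illustration, here is a minimal inference sketch using the Hugging Face transformers text-classification pipeline and the repository id shown on this page; the example sentence and the label-name mapping are added for clarity and are not taken from the author's code.

    # Minimal inference sketch (assumed setup, not the author's original script).
    from transformers import pipeline

    classifier = pipeline(
        "text-classification",
        model="anilguven/bert_tr_turkish_product_reviews",
    )

    # Map the raw output labels to the meanings documented above.
    label_names = {"LABEL_0": "negative", "LABEL_1": "positive"}

    # "The product is very good, fast shipping, thank you." (Turkish)
    result = classifier("Ürün çok iyi, hızlı kargo, teşekkürler.")[0]
    print(label_names[result["label"]], round(result["score"], 4))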

Preprocessing

For Turkish text, apply preprocessing such as stopword removal and stemming or lemmatization before classification.
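A rough sketch of such preprocessing is shown below. It assumes NLTK's Turkish stopword list and the snowballstemmer package's Turkish stemmer; these are illustrative tool choices, not necessarily the pipeline used to build the training data.

    # Illustrative Turkish preprocessing: stopword removal + stemming.
    # Assumes `pip install nltk snowballstemmer`; not the author's exact pipeline.
    import nltk
    import snowballstemmer
    from nltk.corpus import stopwords

    nltk.download("stopwords", quiet=True)

    turkish_stopwords = set(stopwords.words("turkish"))
    stemmer = snowballstemmer.stemmer("turkish")

    def preprocess(text: str) -> str:
        # Note: str.lower() does not handle the Turkish dotted/dotless I distinction.
        tokens = [t for t in text.lower().split() if t not in turkish_stopwords]
        return " ".join(stemmer.stemWords(tokens))

    print(preprocess("Bu ürün gerçekten çok kaliteli ve kargo hızlıydı"))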

Results

  • auprc = 0.9703364794020499 (area under the precision-recall curve)
  • auroc = 0.9740012964967856 (area under the ROC curve)
  • eval_loss = 0.358846469963511
  • fn = 193 (false negatives)
  • fp = 207 (false positives)
  • mcc = 0.8537512867685785 (Matthews correlation coefficient)
  • tn = 2493 (true negatives)
  • tp = 2578 (true positives)
  • accuracy = 92.68% (see the sanity check below)
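As a sanity check, the headline numbers follow directly from the confusion-matrix counts reported above; the short computation below uses only those counts and the standard accuracy and MCC formulas.

    # Recompute accuracy and MCC from the reported confusion-matrix counts.
    from math import sqrt

    tn, fp, fn, tp = 2493, 207, 193, 2578

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

    print(f"accuracy = {accuracy:.4f}")  # ≈ 0.9269
    print(f"mcc      = {mcc:.4f}")       # ≈ 0.8538, matching the reported value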

Citation

BibTeX:

@INPROCEEDINGS{9559007,
  author={Guven, Zekeriya Anil},
  booktitle={2021 6th International Conference on Computer Science and Engineering (UBMK)},
  title={The Effect of BERT, ELECTRA and ALBERT Language Models on Sentiment Analysis for Turkish Product Reviews},
  year={2021},
  pages={629-632},
  keywords={Computer science;Sentiment analysis;Analytical models;Computational modeling;Bit error rate;Time factors;Random forests;Sentiment Analysis;Language Model;Product Review;Machine Learning;E-commerce},
  doi={10.1109/UBMK52708.2021.9559007}
}

APA:

Guven, Z. A. (2021, September). The effect of BERT, ELECTRA and ALBERT language models on sentiment analysis for Turkish product reviews. In 2021 6th International Conference on Computer Science and Engineering (UBMK) (pp. 629-632). IEEE.
