|
--- |
|
language: en |
|
tags: |
|
- exbert |
|
|
|
license: apache-2.0 |
|
datasets: |
|
- drug-review |
|
--- |
|
|
|
# DistilRoBERTa-base fine-tuned condition classifier
|
|
|
# Table of Contents |
|
|
|
1. [Model Details](#model-details) |
|
2. [Training Details](#training-details) |
|
3. [Evaluation](#evaluation) |
|
4. [How To Get Started With the Model](#how-to-get-started-with-the-model) |
|
|
|
# Model Details |
|
|
|
## Model Description |
|
|
|
This model is a version of the [DistilRoBERTa-base model](https://huggingface.co/distilroberta-base) fine-tuned for condition classification.
|
This model is case-sensitive: it makes a distinction between english and English.
|
|
|
|
|
- **Fine-tuned by:** Ban Ursus |
|
- **Model type:** Transformer-based language model |
|
- **Language(s) (NLP):** English |
|
- **License:** Apache 2.0 |
|
- **Related Models:** [DistilRoBERTa-base model](https://huggingface.co/distilroberta-base) |
|
- **Resources for more information:** |
|
- [GitHub Repository](https://github.com/BanSangSu/Hugging_Face_NLP_Course/tree/main/Chapter5) |
|
|
|
|
|
|
|
# Training Details |
|
|
|
This model was fine-tuned for 5 epochs on the [Drug Review Dataset](https://archive.ics.uci.edu/dataset/462/drug+review+dataset+drugs+com). Training for additional epochs may further improve accuracy; a continued-training sketch is shown below.
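
A minimal sketch of how fine-tuning could be continued with the `transformers` `Trainer`. The CSV file paths, the column names (`review`, `label`), the `num_labels` value, and the batch size are placeholders for illustration, not the exact setup used to produce this checkpoint.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

checkpoint = "distilroberta-base"   # or this fine-tuned checkpoint itself
num_labels = 10                     # placeholder: number of conditions

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=num_labels)

# Hypothetical CSV splits of the Drug Review Dataset with a "review" text
# column and an integer "label" column for the condition id.
raw = load_dataset("csv", data_files={"train": "drug_review_train.csv",
                                      "validation": "drug_review_val.csv"})

def tokenize(batch):
    return tokenizer(batch["review"], truncation=True)

tokenized = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="condition-classifier",
    num_train_epochs=5,             # the card reports 5 epochs
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,            # enables dynamic padding via the default collator
)
trainer.train()
```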
|
|
|
|
|
# Evaluation |
|
|
|
Validation results: |
|
|
|
| Accuracy | F1 score | |
|
|:----:|:----:| |
|
| 0.63 | 0.58 | |
|
|
|
Note: values are rounded to 2 decimal places.
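
A short sketch of how such metrics can be computed with scikit-learn. The label values below are placeholders, and the `weighted` F1 averaging is an assumption; the card does not state how F1 was aggregated across conditions.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder predictions; in practice these come from running the model
# over the validation split of the Drug Review Dataset.
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 1, 1, 0]

print("Accuracy:", round(accuracy_score(y_true, y_pred), 2))
print("F1 score:", round(f1_score(y_true, y_pred, average="weighted"), 2))
```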
|
|
|
|
|
# How to Get Started With the Model |
|
|
|
Follow Section 2, **Try it out!**, of the [GitHub Repository](https://github.com/BanSangSu/Hugging_Face_NLP_Course/tree/main/Chapter5).
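
As a quick alternative, a minimal inference sketch with the `transformers` pipeline is shown below. The repository ID is a placeholder; substitute the Hub ID of this checkpoint, and note that the predicted labels depend on the condition set the model was trained on.

```python
from transformers import pipeline

# Placeholder model ID; replace with the actual Hub repository of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/distilroberta-base-finetuned-condition-classifier",
)

review = "This medication cleared up my skin within two weeks."
print(classifier(review))
# e.g. [{'label': '<condition name>', 'score': 0.97}]
```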