---
language:
- en
license: mit
base_model: prajjwal1/bert-tiny
tags:
- pytorch
- BertForTokenClassification
- named-entity-recognition
- generated_from_trainer
metrics:
- recall
- precision
- f1
- accuracy
model-index:
- name: bert-tiny-ontonotes
  results:
  - task:
      type: token-classification
    dataset:
      name: tner/ontonotes5
      type: tner/ontonotes5
    metrics:
    - type: accuracy
      value: 0.9476
      name: accuracy
    - type: precision
      value: 0.6817
      name: precision
    - type: recall
      value: 0.7193
      name: recall
    - type: f1
      value: 0.7
      name: F1
datasets:
- tner/ontonotes5
library_name: transformers
pipeline_tag: token-classification
---
# bert-tiny-ontonotes
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [tner/ontonotes5](https://huggingface.co/datasets/tner/ontonotes5) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1917
- Recall: 0.7193
- Precision: 0.6817
- F1: 0.7000
- Accuracy: 0.9476
## How to use the model
### Using a pipeline
```python
from transformers import pipeline
import torch

# Instantiate the pipeline on GPU if available, otherwise on CPU
device = 0 if torch.cuda.is_available() else -1
ner = pipeline("token-classification", "arnabdhar/bert-tiny-ontonotes", device=device)

# Run the pipeline on a sample sentence
input_text = "My name is Clara and I live in Berkeley, California."
results = ner(input_text)
print(results)
```
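### Loading the model directly
If you prefer not to use the pipeline API, the model and tokenizer can also be loaded directly; a minimal sketch using the standard `transformers` auto classes:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

# Load the fine-tuned model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("arnabdhar/bert-tiny-ontonotes")
model = AutoModelForTokenClassification.from_pretrained("arnabdhar/bert-tiny-ontonotes")

# Tokenize a sentence and run a forward pass
inputs = tokenizer("My name is Clara and I live in Berkeley, California.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each token's highest-scoring class id to its label
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[p]) for tok, p in zip(tokens, predictions)])
```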
## Intended uses & limitations
This model is fine-tuned for the **Named Entity Recognition** task. You can use it as-is or as a base model for further fine-tuning on a custom dataset.
It was fine-tuned on the following entity types:
CARDINAL, DATE, PERSON, NORP, GPE, LAW, PERCENT, ORDINAL, MONEY, WORK_OF_ART, FAC, TIME, QUANTITY, PRODUCT, LANGUAGE, ORG, LOC, EVENT
## Training and evaluation data
The dataset has three partitions, `train`, `validation` and `test`; all three were combined and an 80:20 train-test split was made for the fine-tuning process (see the sketch after the label mapping below). The following `ID2LABEL` mapping was used:
```json
{
"0": "O",
"1": "B-CARDINAL",
"2": "B-DATE",
"3": "I-DATE",
"4": "B-PERSON",
"5": "I-PERSON",
"6": "B-NORP",
"7": "B-GPE",
"8": "I-GPE",
"9": "B-LAW",
"10": "I-LAW",
"11": "B-ORG",
"12": "I-ORG",
"13": "B-PERCENT",
"14": "I-PERCENT",
"15": "B-ORDINAL",
"16": "B-MONEY",
"17": "I-MONEY",
"18": "B-WORK_OF_ART",
"19": "I-WORK_OF_ART",
"20": "B-FAC",
"21": "B-TIME",
"22": "I-CARDINAL",
"23": "B-LOC",
"24": "B-QUANTITY",
"25": "I-QUANTITY",
"26": "I-NORP",
"27": "I-LOC",
"28": "B-PRODUCT",
"29": "I-TIME",
"30": "B-EVENT",
"31": "I-EVENT",
"32": "I-FAC",
"33": "B-LANGUAGE",
"34": "I-PRODUCT",
"35": "I-ORDINAL",
"36": "I-LANGUAGE"
}
```
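A sketch of how the three partitions might be combined and re-split with the `datasets` library; the split seed is illustrative (the training seed from the hyperparameters below is reused here) and the exact call used during training is not published:
```python
from datasets import load_dataset, concatenate_datasets

# Load all three partitions of tner/ontonotes5
raw = load_dataset("tner/ontonotes5")
combined = concatenate_datasets([raw["train"], raw["validation"], raw["test"]])

# 80:20 train-test split for fine-tuning (seed is an assumption)
dataset = combined.train_test_split(test_size=0.2, seed=75241309)
```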
## Training procedure
The model was fine-tuned on Google Colab with an __NVIDIA T4__ GPU (15 GB of VRAM). Fine-tuning and evaluating the model took around 5 minutes over a total of 6,000 training steps. For more details, see the [Weights & Biases](https://wandb.ai/2wb2ndur/NER-ontonotes/runs/d93imv8j/overview?workspace=user-2wb2ndur) log history.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 32
- eval_batch_size: 160
- seed: 75241309
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 6000
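These settings map onto `transformers.TrainingArguments` roughly as follows; `output_dir` and the evaluation cadence (every 600 steps, matching the results table below) are assumptions:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-tiny-ontonotes",  # assumed output directory
    learning_rate=8e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=160,
    seed=75241309,
    lr_scheduler_type="cosine",
    max_steps=6000,
    evaluation_strategy="steps",       # evaluate every 600 steps, as in the results table
    eval_steps=600,
)
```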
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Precision | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 0.4283 | 0.31 | 600 | 0.3864 | 0.4561 | 0.4260 | 0.4405 | 0.9058 |
| 0.3214 | 0.63 | 1200 | 0.2865 | 0.5865 | 0.5485 | 0.5669 | 0.9265 |
| 0.2886 | 0.94 | 1800 | 0.2439 | 0.6432 | 0.6165 | 0.6295 | 0.9354 |
| 0.2511 | 1.25 | 2400 | 0.2233 | 0.6765 | 0.6250 | 0.6497 | 0.9389 |
| 0.2224 | 1.56 | 3000 | 0.2088 | 0.6878 | 0.6642 | 0.6758 | 0.9433 |
| 0.2181 | 1.88 | 3600 | 0.2001 | 0.7105 | 0.6684 | 0.6888 | 0.9451 |
| 0.215 | 2.19 | 4200 | 0.1954 | 0.7140 | 0.6795 | 0.6963 | 0.9469 |
| 0.1907 | 2.5 | 4800 | 0.1934 | 0.7169 | 0.6776 | 0.6967 | 0.9470 |
| 0.209 | 2.82 | 5400 | 0.1918 | 0.7185 | 0.6812 | 0.6994 | 0.9475 |
| 0.2073 | 3.13 | 6000 | 0.1917 | 0.7193 | 0.6817 | 0.7000 | 0.9476 |
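The metrics above follow the usual entity-level `seqeval` convention for NER. The exact evaluation function used during training is not published, but a sketch of a `compute_metrics` function that would produce these four values with the `evaluate` library looks like this (`ID2LABEL` is the mapping shown above):
```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
# Build an index -> tag list from the ID2LABEL mapping shown above
label_list = [ID2LABEL[str(i)] for i in range(len(ID2LABEL))]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Ignore special tokens and padding positions (label id -100)
    true_labels = [
        [label_list[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "recall": results["overall_recall"],
        "precision": results["overall_precision"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```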
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0