---
library_name: peft
language: tr
widget:
- text: "Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı."
---
## Training procedure

This model is a fine-tuned version of the base model "dbmdz/bert-base-turkish-cased", trained with Parameter-Efficient Fine-Tuning (PEFT) using the Low-Rank Adaptation (LoRA) technique on a reviewed version of a well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).

# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/bert-base-turkish-cased"
batch_size = 16 
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512 
learning_rate = 1e-3 
num_train_epochs = 10 
weight_decay = 0.01
```
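As a rough sketch, these hyperparameters plug into a standard Trainer setup along the following lines (dataset loading, tokenization, and metric computation are omitted; "tokenized_datasets" is a placeholder name, not part of this repository):
```
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

model_checkpoint = "dbmdz/bert-base-turkish-cased"
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=len(label_list))

args = TrainingArguments(
    output_dir="bert-base-turkish-cased-ner",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,  # wrap with the LoRA config from the next section before training
    args=args,
    train_dataset=tokenized_datasets["train"],      # placeholder: your tokenized NER splits
    eval_dataset=tokenized_datasets["validation"],  # placeholder
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```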
# PEFT Parameters
```
inference_mode=False
r=16
lora_alpha=16
lora_dropout=0.1
bias="all"
```
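These values map directly onto a peft LoraConfig; a minimal sketch, assuming the base token-classification model from the previous section is already loaded as "model":
```
from peft import LoraConfig, TaskType, get_peft_model

peft_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,  # token classification (NER)
    inference_mode=False,
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="all",
)
model = get_peft_model(model, peft_config)  # wraps the base model with LoRA adapters
model.print_trainable_parameters()          # only the adapter (and bias) weights are trained
```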
# How to use: 
```
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("your text here")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the aggregation_strategy parameter.
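With aggregation_strategy="first", subword tokens are merged back into whole-word entity spans, so the pipeline returns one dict per entity rather than one per token. For the widget sentence above, the output has roughly this shape (the scores shown are placeholders, not actual model output):
```
[{'entity_group': 'PER', 'score': 0.99, 'word': 'Mustafa Kemal Atatürk', 'start': 0, 'end': 21},
 {'entity_group': 'LOC', 'score': 0.99, 'word': 'Samsun', 'start': 40, 'end': 46}]
```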

# Reference test results:
* accuracy: 0.993104
* f1: 0.948413
* precision: 0.939789
* recall: 0.957196