Update README.md

---
library_name: peft
language: tr
---

## Training procedure

This model is a fine-tuned version of the base model "dbmdz/bert-base-turkish-cased", trained with Parameter-Efficient Fine-Tuning (PEFT) using the Low-Rank Adaptation (LoRA) technique on a reviewed version of the well-known Turkish NER dataset (https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).

# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/bert-base-turkish-cased"
batch_size = 16
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 1e-3
num_train_epochs = 7
weight_decay = 0.01
```
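
These are the hyperparameters of the training run; the training script itself is not part of this card. A minimal sketch of how they might be wired into a Hugging Face `Trainer`, assuming `peft_model` (the LoRA-wrapped model, see the sketch after the PEFT parameters below), a dataset already tokenized with `max_length = 512`, and a data collator are prepared:
```
from transformers import TrainingArguments, Trainer

# Hypothetical wiring of the listed hyperparameters; `peft_model`,
# `tokenized_dataset`, and `data_collator` are assumed to exist.
args = TrainingArguments(
    output_dir="bert-base-turkish-cased-ner-lora",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    num_train_epochs=7,
    weight_decay=0.01,
)
trainer = Trainer(
    model=peft_model,
    args=args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=data_collator,
)
trainer.train()
```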

# PEFT Parameters
```
inference_mode=False
r=16
lora_alpha=16
lora_dropout=0.1
bias="all"
```
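
These values match the arguments of `peft.LoraConfig`. A minimal sketch of how they would plausibly be applied to the base model; the task type `TOKEN_CLS` is an assumption inferred from `task = "ner"`, since the actual training script is not included here:
```
from transformers import AutoModelForTokenClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForTokenClassification.from_pretrained(
    "dbmdz/bert-base-turkish-cased", num_labels=7
)
peft_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,  # assumed from task = "ner"
    inference_mode=False,
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="all",
)
peft_model = get_peft_model(base_model, peft_config)
peft_model.print_trainable_parameters()  # LoRA trains only a small fraction of the weights
```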

# How to use:
```
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Label mapping (assumed to follow the order of label_list above)
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
id2label = {i: label for i, label in enumerate(label_list)}
label2id = {label: i for i, label in enumerate(label_list)}

# Load the base model, then attach the LoRA adapter weights on top of it
peft_model_id = "akdeniz27/bert-base-turkish-cased-ner-lora"
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForTokenClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=7, id2label=id2label, label2id=label2id
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(inference_model, peft_model_id)

# Run inference and print the predicted label for each sub-token
text = "Mustafa Kemal Atatürk 1919 yılında Samsun'a çıktı."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
tokens = inputs.tokens()
predictions = torch.argmax(logits, dim=2)
for token, prediction in zip(tokens, predictions[0].numpy()):
    print((token, model.config.id2label[prediction]))
```
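
The loop prints one prediction per WordPiece sub-token, including the special `[CLS]` and `[SEP]` tokens; to report entities at word level, the sub-token labels still need to be aggregated, for example by keeping only the label of each word's first sub-token.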

# Reference test results:
* accuracy: 0.993297
* f1: 0.949696
* precision: 0.942554
* recall: 0.956946
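
The card does not state the evaluation script, but NER results like these are typically computed with the `seqeval` library (entity-level precision/recall/F1, token-level accuracy). A minimal sketch, assuming gold and predicted tag sequences for the test split:
```
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy sequences for illustration; real evaluation would use the full test split
y_true = [['B-PER', 'I-PER', 'O', 'B-LOC'], ['O', 'B-ORG', 'I-ORG']]
y_pred = [['B-PER', 'I-PER', 'O', 'B-LOC'], ['O', 'B-ORG', 'O']]

print("accuracy:", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
print("f1:", f1_score(y_true, y_pred))
```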