---

library_name: transformers
license: mit
language:
- fa
tags:
- named-entity-recognition
- ner
- nlp
- transformers
- persian
- farsi
- persian_ner
- bert
metrics:
- accuracy
pipeline_tag: token-classification
---


# Named-Entity-Recognition for Persian using Transformers

## Model Details

**Model Description:**
This Named-Entity-Recognition (NER) model identifies and classifies named entities in Persian (Farsi) text into predefined categories: persons, locations, groups, corporations, products, and creative works (the full tag set is listed under "Model Prediction Tags" below). The model is built with the Hugging Face Transformers library and fine-tuned from [PartAI/TookaBERT-Base](https://huggingface.co/PartAI/TookaBERT-Base).

**Intended Use:**
The model is intended for use in applications where identifying and classifying entities in Persian text is required. It can be used for information retrieval, content analysis, customer support automation, and more.

**Model Architecture:**
- **Model Type:** Transformers-based NER
- **Language:** Persian (fa)
- **Base Model:** [PartAI/TookaBERT-Base](https://huggingface.co/PartAI/TookaBERT-Base) (a loading sketch follows below)
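
The following is a minimal sketch of how a token-classification head is typically attached to the base checkpoint for fine-tuning. It is illustrative, not the authors' training script, and the label index order shown is an assumption; the tag set itself matches the "Model Prediction Tags" list below:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Tag set from the "Model Prediction Tags" section; the index order here
# is illustrative, not necessarily the checkpoint's actual mapping.
labels = [
    "O",
    "B-person", "I-person", "B-location", "I-location",
    "B-group", "I-group", "B-corporation", "I-corporation",
    "B-product", "I-product", "B-creative-work", "I-creative-work",
]
id2label = dict(enumerate(labels))
label2id = {label: i for i, label in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("PartAI/TookaBERT-Base")
model = AutoModelForTokenClassification.from_pretrained(
    "PartAI/TookaBERT-Base",
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
)
```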

## Training Data

**Dataset:**
The model was trained on a diverse corpus of Persian text, split into 15,000 training sentences and 2,000 test sentences.

**Data Preprocessing:**
- Text normalization and cleaning were performed to ensure consistency.
- Tokenization was done using the BERT tokenizer (a subword/label alignment sketch follows below).
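
Below is a minimal sketch of the standard subword/label alignment step that BERT-style tokenization requires for NER. The function and variable names are illustrative, not the authors' actual preprocessing code, and it assumes a fast tokenizer (which provides `word_ids()`):

```python
def align_labels(tokenizer, words, word_labels, label2id):
    """Tokenize pre-split words and align per-word NER tags to subwords."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True)
    aligned, previous = [], None
    for word_id in enc.word_ids():
        if word_id is None:             # special tokens ([CLS], [SEP])
            aligned.append(-100)        # -100 is ignored by the loss
        elif word_id != previous:       # first subword keeps the word's tag
            aligned.append(label2id[word_labels[word_id]])
        else:                           # later subwords are masked out
            aligned.append(-100)
        previous = word_id
    enc["labels"] = aligned
    return enc
```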

## Training Procedure

**Training Configuration:**
- **Number of Epochs:** 4
- **Batch Size:** 8
- **Learning Rate:** 1e-5
- **Optimizer:** AdamW (a configuration sketch follows below)
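
As a sketch, these hyperparameters map onto Hugging Face `TrainingArguments` as shown below, assuming the standard `Trainer` API was used; `model`, `tokenizer`, `train_ds`, and `eval_ds` are placeholders, not the authors' actual objects:

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="tookabert-persian-ner",  # illustrative output path
    num_train_epochs=4,
    per_device_train_batch_size=8,
    learning_rate=1e-5,                  # Trainer optimizes with AdamW by default
    eval_strategy="epoch",               # named evaluation_strategy in transformers < 4.41
)

trainer = Trainer(
    model=model,            # token-classification model, as sketched above
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()
```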

**Training and Validation Losses:**

| Epoch | Training Loss | Validation Loss |
|------:|--------------:|----------------:|
| 1 | 0.0610 | 0.0347 |
| 2 | 0.1363 | 0.0167 |
| 3 | 0.0327 | 0.0125 |
| 4 | 0.0016 | 0.0062 |

**Hardware:**
- **Training Environment:** NVIDIA P100 GPU
- **Training Time:** Approximately 1 hour

## Model Prediction Tags
The model predicts the following tags, using the standard B-/I-/O scheme (`B-` marks the first token of an entity, `I-` a continuation token, and `O` a token outside any entity):
- "O"
- "I-product"
- "I-person"
- "I-location"
- "I-group"
- "I-creative-work"
- "I-corporation"
- "B-product"
- "B-person"
- "B-location"
- "B-group"
- "B-creative-work"
- "B-corporation"

## How To Use

```python
from transformers import pipeline

# Load the NER pipeline ("ner" is an alias for the token-classification task)
ner_pipeline = pipeline("ner", model="NLPclass/Named-entity-recognition")

# Example text in Persian: "Barack Obama was born in Hawaii."
text = "باراک اوباما در هاوایی متولد شد."

# Perform NER; returns one dict per tagged sub-token
entities = ner_pipeline(text)

# Output the entities
print(entities)
```
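
The raw output contains one entry per tagged sub-token. To merge `B-`/`I-` pieces into whole entities, the pipeline's standard `aggregation_strategy` option (a general `transformers` feature, not specific to this model) can be used:

```python
from transformers import pipeline

ner_grouped = pipeline(
    "ner",
    model="NLPclass/Named-entity-recognition",
    aggregation_strategy="simple",  # merge B-/I- sub-token tags into spans
)

# "Barack Obama was born in Hawaii."
print(ner_grouped("باراک اوباما در هاوایی متولد شد."))
# Expected shape (scores/offsets omitted), e.g.:
# [{'entity_group': 'person', 'word': 'باراک اوباما', ...},
#  {'entity_group': 'location', 'word': 'هاوایی', ...}]
```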

## Citation

```bibtex
@misc{NLPclass,
  author = {NLPclass},
  title = {Named-Entity-Recognition for Persian using Transformers},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/NLPclass/Named-entity-recognition}},
}
```