eddiegulay committed on
Commit d074ac6
1 Parent(s): 9e83f05

Update README.md

Files changed (1)
  1. README.md +67 -27
README.md CHANGED
@@ -38,55 +38,95 @@ model-index:
  value: 0.9782491655001615
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # base-NER

- This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the conll2003 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1129
- - Precision: 0.8845
- - Recall: 0.9017
- - F1: 0.8930
- - Accuracy: 0.9782

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 2

- ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
  | 0.0595 | 1.0 | 878 | 0.1046 | 0.8676 | 0.8909 | 0.8791 | 0.9762 |
  | 0.0319 | 2.0 | 1756 | 0.1129 | 0.8845 | 0.9017 | 0.8930 | 0.9782 |

- ### Framework versions

  - Transformers 4.44.2
  - Pytorch 2.4.0+cu121
  - Datasets 2.21.0
  - Tokenizers 0.19.1
+ # base-NER: A Named Entity Recognition (NER) Model

+ `base-NER` is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the CoNLL2003 dataset, designed for **Named Entity Recognition (NER)**. The model identifies entities such as people, organizations, locations, and miscellaneous names in text.
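
+ Quickstart with the `transformers` pipeline: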
+ ```python
+ from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
+
+ model = AutoModelForTokenClassification.from_pretrained("eddiegulay/base-NER")
+ tokenizer = AutoTokenizer.from_pretrained("eddiegulay/base-NER")
+
+ classifier = pipeline("ner", model=model, tokenizer=tokenizer)
+ result = classifier("My name is Edgar and I stay in Dar es Salaam")
+ print(result)
+ ```
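+
+ Each item in `result` is a token-level prediction with fields such as `entity`, `score`, `word`, and the `start`/`end` character offsets.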
+ ## Model Performance

+ The model achieves the following results on the CoNLL2003 evaluation set:
+ - **Precision**: 0.8845
+ - **Recall**: 0.9017
+ - **F1-Score**: 0.8930
+ - **Accuracy**: 0.9782
+
+ The final validation loss was 0.1129.
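+
+ These are entity-level (span) scores. As a minimal sketch, assuming the `seqeval` package (the standard choice for CoNLL-style evaluation, though not confirmed by this card), they can be computed from BIO tag sequences:
+
+ ```python
+ # Sketch: entity-level precision/recall/F1 with seqeval (assumed dependency).
+ from seqeval.metrics import f1_score, precision_score, recall_score
+
+ # Hypothetical gold and predicted BIO tag sequences, one list per sentence.
+ y_true = [["B-PER", "O", "O", "B-LOC", "I-LOC", "O"]]
+ y_pred = [["B-PER", "O", "O", "B-LOC", "O", "O"]]
+
+ print(precision_score(y_true, y_pred))  # matched entities / predicted entities
+ print(recall_score(y_true, y_pred))     # matched entities / gold entities
+ print(f1_score(y_true, y_pred))         # harmonic mean of the two
+ ```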
+
+ ## Model Description
+
+ This model uses the DistilBERT architecture, a smaller and faster distillation of BERT that retains most of its accuracy. It is fine-tuned specifically for NER, making it a good fit for general-purpose entity extraction where lightweight models are preferred (see the limitations below for domain caveats).
+
+ ## Intended Uses & Limitations
+
+ **Intended Uses**:
+ - Recognizing names of people, organizations, locations, and miscellaneous named entities in text.
+ - Production applications where lightweight models are preferred due to memory or speed constraints.
+
+ **Limitations**:
+ - The model is limited to English text, as it was trained on the CoNLL2003 dataset.
+ - Performance may degrade on domain-specific entities not represented in CoNLL2003 (e.g., technical or biomedical terms).
+ - It may struggle with ambiguous or strongly context-dependent entity mentions.
+
+ ## Training and Evaluation Data
+
+ The model was trained on the **CoNLL2003** dataset, a widely used English NER benchmark annotated with four entity types: **person**, **organization**, **location**, and **miscellaneous**.
+
+ ### Dataset Configuration
+ - **Dataset**: CoNLL2003
+ - **Split**: Validation split used for evaluation
+ - **Entity Types**: Person, Organization, Location, Miscellaneous
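+
+ For reference, a minimal sketch of loading the dataset with the `datasets` library (the hub id `conll2003` is the conventional one, assumed rather than confirmed by this card):
+
+ ```python
+ # Sketch: inspecting CoNLL2003 with the datasets library.
+ from datasets import load_dataset
+
+ # Recent datasets versions may additionally require trust_remote_code=True.
+ dataset = load_dataset("conll2003")
+ example = dataset["train"][0]
+ print(example["tokens"])    # whitespace-tokenized words
+ print(example["ner_tags"])  # integer-encoded BIO labels over PER/ORG/LOC/MISC
+ ```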
+
+ ## Training Procedure
+
+ The model was fine-tuned for 2 epochs with the Adam optimizer and a linear learning-rate schedule; a sketch of the corresponding `TrainingArguments` follows the hyperparameter list below.
+
+ ### Training Hyperparameters

  The following hyperparameters were used during training:
+ - **Learning Rate**: 2e-5
+ - **Batch Size**: 16 (train and eval)
+ - **Seed**: 42
+ - **Optimizer**: Adam (betas=(0.9,0.999), epsilon=1e-8)
+ - **Scheduler**: Linear
+ - **Epochs**: 2
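+
+ A minimal sketch of how these map onto `TrainingArguments` in `transformers` (the `output_dir` value is a placeholder, not from the original run):
+
+ ```python
+ # Sketch: the reported hyperparameters expressed as TrainingArguments.
+ from transformers import TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="base-NER",  # placeholder output path
+     learning_rate=2e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     num_train_epochs=2,
+     lr_scheduler_type="linear",
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+ )
+ ```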
 
+ ### Training Results

  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
  | 0.0595 | 1.0 | 878 | 0.1046 | 0.8676 | 0.8909 | 0.8791 | 0.9762 |
  | 0.0319 | 2.0 | 1756 | 0.1129 | 0.8845 | 0.9017 | 0.8930 | 0.9782 |
 
+ ## Usage Example
+
+ You can use this model with Hugging Face's `transformers` library for token classification, as in the quickstart above. The sketch below additionally merges word pieces into whole entity spans using the pipeline's standard `aggregation_strategy` option:
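+
+ ```python
+ # Sketch: grouped (entity-level) output via aggregation_strategy.
+ from transformers import pipeline
+
+ classifier = pipeline(
+     "ner",
+     model="eddiegulay/base-NER",
+     aggregation_strategy="simple",  # merge B-/I- word pieces into single spans
+ )
+ print(classifier("My name is Edgar and I stay in Dar es Salaam"))
+ ```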
+ ## Framework Versions

  - Transformers 4.44.2
  - Pytorch 2.4.0+cu121
  - Datasets 2.21.0
  - Tokenizers 0.19.1
+
+ ## Future Improvements
+
+ - Fine-tuning on more domain-specific datasets for improved generalization.
+ - Extending recognition to additional entity types, such as products, dates, and technical terms.