tomaarsen (HF staff) committed
Commit c9235e0 (1 parent: c4db5ae)

Upload model

README.md ADDED
@@ -0,0 +1,241 @@
---
language:
- en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- tomaarsen/ner-orgs
metrics:
- precision
- recall
- f1
widget:
- text: 'The entourage was the largest ever to accompany an ROC president abroad,
    and included: Chuang Ming - yao -LRB- secretary - general, National Security Council
    -RRB-, Tien Hung - mao -LRB- minister of foreign affairs -RRB-, Lin Hsin - yi
    -LRB- minister of economic affairs -RRB-, Chen Po - chih -LRB- chairman, Council
    of Economic Planning and Development -RRB-, Chen Hsi - huang -LRB- chairman, Council
    of Agriculture -RRB-, Chung Chin -LRB- head of the Government Information Office
    -RRB-, Jeffrey Koo -LRB- chairman of the National Association of Industry and
    Commerce -RRB-, Wang Yu - tseng -LRB- chairman of the General Chamber of Commerce
    of the ROC -RRB-, and Lin Kun - chung -LRB- chairman of the Chinese National Federation
    of Industries -RRB-.'
- text: During the period, IPC monopolized oil exploration inside the Red Line; excluding
    Saudi Arabia and Bahrain, where ARAMCO (formed in 1944 by renaming of the Saudi
    subsidiary of Standard Oil of California (Socal)) and Bahrain Petroleum Company
    (BAPCO) respectively held controlling position.
- text: In the early decades of the 20th century, Benoytosh Bhattacharya – an expert
    on Tantra and the then director of the Oriental Institute of Baroda – studied
    various texts such as the Buddhist "Sadhanamala "(1156CE), the Hindu "Chhinnamastakalpa
    "(uncertain date), and the "Tantrasara "by Krishnananda Agamavagisha (late 16th
    century).
- text: A united opposition of fourteen political parties organized into the National
    Opposition Union (Unión Nacional Oppositora, UNO) with the support of the United
    States National Endowment for Democracy.
- text: Lockheed said the U.S. Navy may also buy an additional 340 trainer aircraft
    to replace its T34C trainers made by the Beech Aircraft Corp. unit of Raytheon
    Corp.
pipeline_tag: token-classification
co2_eq_emissions:
  emissions: 67.50149039261815
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.629
  hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: prajjwal1/bert-small
model-index:
- name: SpanMarker with prajjwal1/bert-small on FewNERD, CoNLL2003, and OntoNotes
    v5
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: FewNERD, CoNLL2003, and OntoNotes v5
      type: tomaarsen/ner-orgs
      split: test
    metrics:
    - type: f1
      value: 0.7438057260629957
      name: F1
    - type: precision
      value: 0.7474561008554705
      name: Precision
    - type: recall
      value: 0.7401908328874621
      name: Recall
---

# SpanMarker with prajjwal1/bert-small on FewNERD, CoNLL2003, and OntoNotes v5

This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD, CoNLL2003, and OntoNotes v5](https://huggingface.co/datasets/tomaarsen/ner-orgs) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) as the underlying encoder.

## Model Details

### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [FewNERD, CoNLL2003, and OntoNotes v5](https://huggingface.co/datasets/tomaarsen/ner-orgs)
- **Language:** en
- **License:** cc-by-sa-4.0

### Model Sources

- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)

### Model Labels
| Label | Examples                                     |
|:------|:---------------------------------------------|
| ORG   | "Texas Chicken", "Church 's Chicken", "IAEA" |

## Evaluation

### Metrics
| Label   | Precision | Recall | F1     |
|:--------|:----------|:-------|:-------|
| **all** | 0.7475    | 0.7402 | 0.7438 |
| ORG     | 0.7475    | 0.7402 | 0.7438 |
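
Both rows are identical because ORG is the only entity type predicted by this model. The test-split scores can, in principle, be recomputed with the `span_marker` `Trainer`; the sketch below is illustrative only (default evaluation settings, not necessarily the exact script behind the reported numbers).

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Illustrative re-evaluation sketch, not the exact evaluation script used for this card.
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-small-orgs")
test_dataset = load_dataset("tomaarsen/ner-orgs", split="test")

# The Trainer handles preprocessing and span-level metric computation internally.
trainer = Trainer(model=model, eval_dataset=test_dataset)
print(trainer.evaluate())  # precision, recall, F1, accuracy
```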

## Uses

### Direct Use for Inference

```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-small-orgs")
# Run inference
entities = model.predict("Lockheed said the U.S. Navy may also buy an additional 340 trainer aircraft to replace its T34C trainers made by the Beech Aircraft Corp. unit of Raytheon Corp.")
```
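
`model.predict` returns the predicted entities for the given sentence as a list of dictionaries, each containing the matched text span, its label, and a confidence score. A minimal sketch for inspecting the output, assuming the usual `span_marker` field names (`span`, `label`, `score`):

```python
# Assumed output fields; exact keys may differ slightly across span_marker versions.
for entity in entities:
    print(f'{entity["span"]!r} -> {entity["label"]} (score={entity["score"]:.3f})')
```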

### Downstream Use
You can fine-tune this model on your own dataset.

<details><summary>Click to expand</summary>

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-small-orgs")

# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("tomaarsen/span-marker-bert-small-orgs-finetuned")
```
</details>

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set          | Min | Median  | Max |
|:----------------------|:----|:--------|:----|
| Sentence length       | 1   | 23.5706 | 263 |
| Entities per sentence | 0   | 0.7865  | 39  |

### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
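
These values map onto standard `transformers.TrainingArguments` fields, which the `span_marker` `Trainer` accepts via its `args` parameter. A sketch of an equivalent configuration (the output directory is a placeholder; the original run's exact arguments are not stored in this card):

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above.
# "models/span-marker-bert-small-orgs" is a hypothetical output directory.
args = TrainingArguments(
    output_dir="models/span-marker-bert-small-orgs",
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=3,
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    seed=42,
)
# Pass to the span_marker Trainer via Trainer(model=model, args=args, ...).
```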

### Training Results
| Epoch  | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.5720 | 600  | 0.0085          | 0.7230               | 0.6552            | 0.6874        | 0.9641              |
| 1.1439 | 1200 | 0.0078          | 0.7324               | 0.7021            | 0.7169        | 0.9663              |
| 1.7159 | 1800 | 0.0074          | 0.7499               | 0.7213            | 0.7353        | 0.9679              |
| 2.2879 | 2400 | 0.0074          | 0.7611               | 0.7318            | 0.7462        | 0.9701              |
| 2.8599 | 3000 | 0.0072          | 0.7720               | 0.7268            | 0.7487        | 0.9700              |

### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.068 kg of CO2
- **Hours Used**: 0.629 hours
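
For reference, a CodeCarbon measurement is typically taken by wrapping the training run in an `EmissionsTracker`, as sketched below; this is illustrative only, assumes a `trainer` like the one in the fine-tuning example above, and is not necessarily how tracking was wired for this particular run.

```python
from codecarbon import EmissionsTracker

# Illustrative sketch; the exact tracking setup for this run is not documented here.
tracker = EmissionsTracker()
tracker.start()
try:
    trainer.train()  # `trainer` as in the fine-tuning example above
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.3f} kg CO2eq")
```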

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.9.16
- SpanMarker: 1.5.1.dev
- Transformers: 4.30.0
- PyTorch: 2.0.1+cu118
- Datasets: 2.14.0
- Tokenizers: 0.13.3

## Citation

### BibTeX
```
@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
added_tokens.json ADDED
@@ -0,0 +1,4 @@
{
  "<end>": 30523,
  "<start>": 30522
}
config.json ADDED
@@ -0,0 +1,111 @@
{
  "architectures": [
    "SpanMarkerModel"
  ],
  "encoder": {
    "_name_or_path": "prajjwal1/bert-small",
    "add_cross_attention": false,
    "architectures": null,
    "attention_probs_dropout_prob": 0.1,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "classifier_dropout": null,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "hidden_size": 512,
    "id2label": {
      "0": "O",
      "1": "B-ORG",
      "2": "I-ORG"
    },
    "initializer_range": 0.02,
    "intermediate_size": 2048,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "B-ORG": 1,
      "I-ORG": 2,
      "O": 0
    },
    "layer_norm_eps": 1e-12,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 512,
    "min_length": 0,
    "model_type": "bert",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 8,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 4,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 0,
    "position_embedding_type": "absolute",
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.30.0",
    "type_vocab_size": 2,
    "typical_p": 1.0,
    "use_bfloat16": false,
    "use_cache": true,
    "vocab_size": 30524
  },
  "entity_max_length": 8,
  "id2label": {
    "0": "O",
    "1": "ORG"
  },
  "id2reduced_id": {
    "0": 0,
    "1": 1,
    "2": 1
  },
  "label2id": {
    "O": 0,
    "ORG": 1
  },
  "marker_max_length": 128,
  "max_next_context": null,
  "max_prev_context": null,
  "model_max_length": 256,
  "model_max_length_default": 512,
  "model_type": "span-marker",
  "span_marker_version": "1.5.1.dev",
  "torch_dtype": "float32",
  "trained_with_document_context": false,
  "transformers_version": "4.30.0",
  "vocab_size": 30524
}
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b2746c1382c0ffeeeefb0de829f38c75ba3ddf9e5de8ad14277e0c443470c2e
size 115096015
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,18 @@
{
  "add_prefix_space": true,
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "entity_max_length": 8,
  "marker_max_length": 128,
  "mask_token": "[MASK]",
  "model_max_length": 256,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff