BenjiELCA committed
Commit: e1d1b29 (parent: 35a4b37)

Update README.md

Files changed (1):
1. README.md (+85, -7)
README.md CHANGED
@@ -1,13 +1,12 @@
  ---
- license: mpl-2.0
- language:
- - en
  metrics:
  - f1
  - accuracy
- - recall
- - precision
- pipeline_tag: image-to-text
  widget:
  - text: The process starts when the customer enters the shop. The customer then takes
    the product from the shelf. The customer then pays for the product and leaves
@@ -29,4 +28,83 @@ widget:
  order is packed, the shipping department delivers the order to the customer. Finally,
  the process ends with an 'End' event, when the customer receives their order.
  example_title: Example 3
- ---

The updated README.md after this commit:

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: The process starts when the customer enters the shop. The customer then takes
    the product from the shelf. The customer then pays for the product and leaves

[... lines 13-27: remaining widget examples, unchanged and collapsed in the diff view ...]

    order is packed, the shipping department delivers the order to the customer. Finally,
    the process ends with an 'End' event, when the customer receives their order.
  example_title: Example 3
base_model: bert-base-cased
model-index:
- name: bert-finetuned-v4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bpmn-information-extraction

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on a dataset containing 90 textual process descriptions.

The dataset contains 5 target labels:

* `AGENT`
* `TASK`
* `TASK_INFO`
* `PROCESS_INFO`
* `CONDITION`

It achieves the following results on the evaluation set:
- Loss: 0.2909
- Precision: 0.8557
- Recall: 0.9247
- F1: 0.8889
- Accuracy: 0.9285

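As an illustrative sketch (not part of the committed card), the checkpoint can be queried with the standard `transformers` token-classification pipeline; the repository id below is a placeholder, and the label names are assumed to match the five listed above.

```python
# Hedged sketch: MODEL_ID is a placeholder for the actual Hub repository id of
# this checkpoint; label names are assumed to follow the list in the card.
from transformers import pipeline

MODEL_ID = "<namespace>/bpmn-information-extraction"  # placeholder repo id

ner = pipeline(
    "token-classification",
    model=MODEL_ID,
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)

text = (
    "The process starts when the customer enters the shop. "
    "The customer then takes the product from the shelf."
)

for entity in ner(text):
    # Each span carries one of AGENT, TASK, TASK_INFO, PROCESS_INFO or
    # CONDITION (assumed), plus a confidence score and the matched text.
    print(entity["entity_group"], round(entity["score"], 3), entity["word"])
```

With `aggregation_strategy="simple"`, word pieces are merged back into whole spans, so each printed row corresponds to one predicted agent, task, or condition.
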
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

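For reference, a minimal sketch of how these values map onto `transformers.TrainingArguments` (illustrative only; the `output_dir` and the per-epoch evaluation strategy are assumptions inferred from the results table, not values reported in the card):

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# output_dir and evaluation_strategy are assumptions, not reported values.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-v4",   # assumed; matches the model-index name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",      # assumed from the per-epoch results below
)
```

The 10 optimization steps per epoch in the table below, at a batch size of 8, suggest that roughly 80 of the 90 descriptions were used for training, with the remainder held out for evaluation.
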
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.0586        | 1.0   | 10   | 1.5601          | 0.1278    | 0.1559 | 0.1404 | 0.4750   |
| 1.3702        | 2.0   | 20   | 1.0113          | 0.3947    | 0.5645 | 0.4646 | 0.7150   |
| 0.8872        | 3.0   | 30   | 0.6645          | 0.5224    | 0.6882 | 0.5940 | 0.8051   |
| 0.5341        | 4.0   | 40   | 0.4741          | 0.6754    | 0.8280 | 0.7440 | 0.8541   |
| 0.3221        | 5.0   | 50   | 0.3831          | 0.7523    | 0.8817 | 0.8119 | 0.8883   |
| 0.2168        | 6.0   | 60   | 0.3297          | 0.7731    | 0.8978 | 0.8308 | 0.9079   |
| 0.1565        | 7.0   | 70   | 0.2998          | 0.8195    | 0.9032 | 0.8593 | 0.9128   |
| 0.1227        | 8.0   | 80   | 0.3227          | 0.8038    | 0.9032 | 0.8506 | 0.9099   |
| 0.0957        | 9.0   | 90   | 0.2840          | 0.8431    | 0.9247 | 0.8821 | 0.9216   |
| 0.077         | 10.0  | 100  | 0.2914          | 0.8252    | 0.9140 | 0.8673 | 0.9216   |
| 0.0691        | 11.0  | 110  | 0.2850          | 0.8431    | 0.9247 | 0.8821 | 0.9285   |
| 0.059         | 12.0  | 120  | 0.2886          | 0.8564    | 0.9301 | 0.8918 | 0.9285   |
| 0.0528        | 13.0  | 130  | 0.2838          | 0.8564    | 0.9301 | 0.8918 | 0.9305   |
| 0.0488        | 14.0  | 140  | 0.2881          | 0.8515    | 0.9247 | 0.8866 | 0.9305   |
| 0.049         | 15.0  | 150  | 0.2909          | 0.8557    | 0.9247 | 0.8889 | 0.9285   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
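
A small runtime check (illustrative; the attribute names are the packages' standard `__version__` fields, and the expected values come from the list above) can confirm that a local environment matches these pins before loading the model:

```python
# Quick runtime check that installed packages match the versions listed
# under "Framework versions" above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.25.1",
    "torch": "1.13.0+cu116",
    "datasets": "2.8.0",
    "tokenizers": "0.13.2",
}

installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}

for name, want in expected.items():
    have = installed[name]
    status = "OK" if have == want else f"differs (expected {want})"
    print(f"{name} {have}: {status}")
```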