LauritsT committed
Commit f6073dc
1 Parent(s): 0f38de1

Update README.md

Files changed (1): README.md (+144 −3)

README.md CHANGED
---
license: mit
language:
- en
tags:
- tau
- hep
- fcc
- clic
- ee
- reconstruction
- identification
- decay_mode
- foundation_model
- omnijet_alpha
---
# Model Card for TauRecoID

Fine-tuned OmniJet-\\(\alpha\\) models for identifying hadronically decaying tau leptons, reconstructing their kinematics, and classifying their decay modes in CLIC detector simulations.

## Model Details

### Model Description

- **Developed by:** Joschka Birk, Anna Hallin, Gregor Kasieczka (authors of the OmniJet-\\(\alpha\\) base model)
- **Model type:** Transformer
- **Library:** PyTorch
- **Finetuned from model:** [OmniJet-\\(\alpha\\)](https://doi.org/10.1088/2632-2153/ad66ad)

The OmniJet-\\(\alpha\\) model, published [here](https://doi.org/10.1088/2632-2153/ad66ad), was used as the base model for identifying hadronically decaying tau leptons, reconstructing their kinematics, and predicting their decay modes.
The base model, initially trained on the [JetClass dataset](https://doi.org/10.5281/zenodo.6619768), was fine-tuned on the [Fu\\(\tau\\)ure](https://doi.org/10.5281/zenodo.13881061) dataset.
The models included here cover 3 separate tasks:

- Tau tagging (binary classification)
- Tau kinematic reconstruction (regression)
- Tau decay mode classification (multiclass classification)

and 3 different ways of training:

- From scratch
- Fixed backbone (fine-tune only the head)
- Fine-tuning (fine-tune both head and backbone)

This adds up to 9 different models.
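
The three training strategies differ only in which parameters receive gradients. Below is a minimal PyTorch sketch with hypothetical stand-in modules (the real backbone and heads are defined in the ml-tau-en-reg repository):

```python
import torch.nn as nn

# Hypothetical stand-ins for the OmniJet-alpha backbone and a task head;
# the real modules are defined in the ml-tau-en-reg repository.
backbone = nn.Linear(4, 8)
head = nn.Linear(8, 2)

def n_trainable(*modules):
    """Count parameters that would receive gradients."""
    return sum(p.numel() for m in modules for p in m.parameters() if p.requires_grad)

# "From scratch" and "fine-tuning": all parameters are trainable; the two
# differ only in whether the backbone starts from random or pre-trained weights.
full = n_trainable(backbone, head)       # 4*8+8 + 8*2+2 = 58

# "Fixed backbone": freeze the backbone, train only the head.
for p in backbone.parameters():
    p.requires_grad = False
head_only = n_trainable(backbone, head)  # 8*2+2 = 18
```

The same freezing pattern scales to the full transformer backbone: only the `requires_grad` flags change between the three strategies.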

### Model Sources

- **Repository (base model):** https://github.com/uhh-pd-ml/omnijet_alpha
- **Repository (fine-tuned model):** https://github.com/HEP-KBFI/ml-tau-en-reg
- **Paper:** https://doi.org/10.1088/2632-2153/ad66ad

## Uses

### Direct Use

The intended use of these models is to study the feasibility of foundation models for reconstructing and identifying hadronically decaying tau leptons.

### Out-of-Scope Use

These models are not intended for physics measurements on real data: all trainings were done on CLIC detector simulations.

## Bias, Risks, and Limitations

The models have only been trained on simulated data and have not been validated against real data. Although the base model has been published in a peer-reviewed journal, the fine-tuned models have not been.

## How to Get Started with the Model

Use the code below to get started with the model:

```bash
# Clone the training/analysis repository (with submodules)
git clone git@github.com:HEP-KBFI/ml-tau-en-reg.git --recursive
cd ml-tau-en-reg

# Download the models from Hugging Face into models/
git clone https://huggingface.co/LauritsT/TauRecoID models
```
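
Once downloaded, a checkpoint can be read like any regular PyTorch file. A minimal sketch, assuming the checkpoints are standard `torch.save` artifacts (the actual file names under `models/` and the loading code are defined in the ml-tau-en-reg repository); a dummy checkpoint is written first so the example is self-contained:

```python
import torch

# Write a dummy state dict so the example runs without the real download;
# a real checkpoint under models/ would be loaded the same way.
dummy_state = {"head.weight": torch.zeros(2, 8)}
torch.save(dummy_state, "checkpoint.pt")

# Load onto CPU; move tensors to a GPU afterwards if one is available.
state = torch.load("checkpoint.pt", map_location="cpu")
```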

## Training Details

### Training Data

The data used to fine-tune the base model can be found in the [Fu\\(\tau\\)ure dataset](https://doi.org/10.5281/zenodo.13881061).


#### Training Hyperparameters

- No hyperparameter tuning has been done.

#### Speeds, Sizes, Times

Training on 1M jets for 100 epochs takes ~8 h on an AMD MI250x GPU.
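
For a rough sense of scale, the quoted numbers imply an effective throughput of about 3,500 jets per second:

```python
# Effective throughput implied by the quoted training time:
# 1M jets x 100 epochs in ~8 hours on one AMD MI250x.
jets = 1_000_000
epochs = 100
hours = 8
jets_per_second = jets * epochs / (hours * 3600)  # ~3472 jets/s
```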

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Testing data can be found in the same [Zenodo entry](https://doi.org/10.5281/zenodo.13881061) as the rest of the data.


#### Software

The [software](https://github.com/HEP-KBFI/ml-tau-en-reg/) used to train and analyze the models is available on GitHub.

## Citation

If you use these models, please cite the [OmniJet-\\(\alpha\\)](https://doi.org/10.1088/2632-2153/ad66ad) paper.

## Model Card Authors

Laurits Tani (laurits.tani@cern.ch)

## Model Card Contact

Laurits Tani (laurits.tani@cern.ch)