heruberuto committed on
Commit
7c3c5a9
1 Parent(s): 4f5bc8c
Files changed (2)
  1. README.md +45 -54
  2. tf_model.h5 +1 -1
README.md CHANGED
@@ -1,56 +1,47 @@
- # 🦾 xlm-roberta-large-squad2-ctkfacts
-
- ## 🧰 Usage
-
- ### 🤗 Using Huggingface `transformers`
- ```python
- from transformers import AutoModelForSequenceClassification, AutoTokenizer
- model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")
- tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")
- ```
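
The removed card only shows how to load the checkpoint. For context, a minimal inference sketch (not part of the original card) is given below; it assumes PyTorch weights are available in the repository and reads the label names from `model.config.id2label` rather than hard-coding them:

```python
# Minimal inference sketch (assumption: PyTorch weights exist in the repo;
# otherwise pass from_tf=True to the from_pretrained calls).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")

# Encode the (context, hypothesis) pair as a single sequence-pair input.
inputs = tokenizer("My first context.", "My first hypothesis.",
                   truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

probs = torch.softmax(logits, dim=-1)[0]
predicted = model.config.id2label[int(probs.argmax())]  # label names come from the model config
print(predicted, probs.tolist())
```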
-
- ### 👾 Using UKPLab `sentence_transformers` `CrossEncoder`
- The model was trained using the `CrossEncoder` API, which we also recommend for inference.
- ```python
- from sentence_transformers.cross_encoder import CrossEncoder
- model = CrossEncoder('ctu-aic/xlm-roberta-large-squad2-ctkfacts')
- scores = model.predict([["My first context.", "My first hypothesis."],
-                         ["Second context.", "Hypothesis."]])
- ```
-
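`predict` returns one row of raw scores per input pair. The sketch below (an editorial addition, not part of the original card) shows one way to map those rows to label names, assuming the label order follows the `id2label` mapping stored in the checkpoint's config:

```python
# Sketch: turning CrossEncoder scores into label names.
# Assumption: the label order matches model.config.id2label of the underlying
# transformers checkpoint; verify against the actual config before relying on it.
import numpy as np
from sentence_transformers.cross_encoder import CrossEncoder

model = CrossEncoder('ctu-aic/xlm-roberta-large-squad2-ctkfacts')
scores = model.predict([["My first context.", "My first hypothesis."],
                        ["Second context.", "Hypothesis."]])

labels = [model.config.id2label[i] for i in np.argmax(scores, axis=1)]
print(labels)
```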
- ## 🌳 Contributing
- Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
-
- ## 👬 Authors
- The model was trained and uploaded by **[ullriher](https://udb.fel.cvut.cz/?uid=ullriher&sn=&givenname=&_cmd=Hledat&_reqn=1&_type=user&setlang=en)** (e-mail: [ullriher@fel.cvut.cz](mailto:ullriher@fel.cvut.cz)).
-
- The code was co-developed by the NLP team at the Artificial Intelligence Center of CTU in Prague ([AIC](https://www.aic.fel.cvut.cz/)).
-
- ## 🔐 License
- [cc-by-sa-4.0](https://choosealicense.com/licenses/cc-by-sa-4.0)
-
- ## 💬 Citation
- If you find this model helpful, feel free to cite our publication:
- ```
-
- @article{DBLP:journals/corr/abs-2201-11115,
-   author     = {Jan Drchal and
-                 Herbert Ullrich and
-                 Martin R{\'{y}}par and
-                 Hana Vincourov{\'{a}} and
-                 V{\'{a}}clav Moravec},
-   title      = {CsFEVER and CTKFacts: Czech Datasets for Fact Verification},
-   journal    = {CoRR},
-   volume     = {abs/2201.11115},
-   year       = {2022},
-   url        = {https://arxiv.org/abs/2201.11115},
-   eprinttype = {arXiv},
-   eprint     = {2201.11115},
-   timestamp  = {Tue, 01 Feb 2022 14:59:01 +0100},
-   biburl     = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
-   bibsource  = {dblp computer science bibliography, https://dblp.org}
- }
-
- ```
+ ---
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: xlm-roberta-large-squad2-ctkfacts
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # xlm-roberta-large-squad2-ctkfacts
+
+ This model was trained from scratch on an unknown dataset.
+ It achieves the following results on the evaluation set:
+
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: None
+ - training_precision: float32
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.21.0
+ - TensorFlow 2.7.1
+ - Datasets 2.4.0
+ - Tokenizers 0.12.1
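
Since this commit also replaces `tf_model.h5` (diff below), a brief sketch of loading the TensorFlow weights directly may be useful; this is an editorial addition, not part of the card, and assumes an environment matching the framework versions listed above (Transformers 4.21.0, TensorFlow 2.7.1):

```python
# Sketch: loading the TensorFlow checkpoint (tf_model.h5) shipped with this commit.
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer

tf_model = TFAutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")

inputs = tokenizer("My first context.", "My first hypothesis.", return_tensors="tf")
logits = tf_model(inputs).logits  # shape: (1, num_labels)
print(logits)
```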
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:913d9dfb23d54f748b5f53b8c3f0af2efc8f1ebdff0776fdc9413b66f6e2f60b
+ oid sha256:5116456cff5ab256af2e94b2530634c7f4ff6231e9b9a2c09aed59556f6ad652
  size 2240127640