Fill-Mask · Transformers · PyTorch · Spanish · roberta · legal · spanish · Inference Endpoints
mmarimon committed on
Commit 6714a6c
1 Parent(s): 6321571

Update README.md

Files changed (1):
  1. README.md +53 -19
README.md CHANGED
@@ -18,9 +18,60 @@ widget:
  ---
  # Spanish Legal-domain RoBERTa

- There are few models trained for the Spanish language, and some of them have been trained on low-resource, unclean corpora. The models derived from the Spanish National Plan for Language Technologies are proficient at solving several tasks and were trained on large-scale, clean corpora. However, the Spanish legal domain can be thought of as an independent language in its own right. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora.

- ## Citing
  ```
  @misc{gutierrezfandino2021legal,
  title={Spanish Legalese Language Model and Corpora},
@@ -32,20 +83,6 @@ There are few models trained for the Spanish language. Some of the models have b
  }
  ```

- For more information visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-legal-es)
-
- ## Copyright
-
- Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
-
- ## Licensing information
-
- [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
-
- ## Funding
-
- This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
-
  ## Disclaimer

  The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
@@ -54,9 +91,6 @@ When third parties, deploy or provide systems and/or services to other parties

  In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

-
-
-

  Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
 
  ---
  # Spanish Legal-domain RoBERTa

+ ## Table of contents
+ <details>
+ <summary>Click to expand</summary>
+
+ - [Model description](#model-description)
+ - [Intended uses and limitations](#intended-uses-and-limitations)
+ - [How to use](#how-to-use)
+ - [Limitations and bias](#limitations-and-bias)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [Additional information](#additional-information)
+   - [Author](#author)
+   - [Contact information](#contact-information)
+   - [Copyright](#copyright)
+   - [Licensing information](#licensing-information)
+   - [Funding](#funding)
+ - [Citing information](#citing-information)
+ - [Disclaimer](#disclaimer)
+
+ </details>
+
+ ## Model description
+ There are few models trained for the Spanish language, and some of them have been trained on low-resource, unclean corpora. The models derived from the Spanish National Plan for Language Technologies are proficient at solving several tasks and were trained on large-scale, clean corpora. However, the Spanish legal domain can be thought of as an independent language in its own right. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora.
+
+ ## Intended uses and limitations
+
+ ## How to use
+
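The "How to use" section above is still empty in this revision. A minimal fill-mask sketch with the Transformers `pipeline` API could look like the following; the hub identifier `PlanTL-GOB-ES/RoBERTalex` is an assumption inferred from the repository name, so substitute the actual model id from the hub page.

```python
# Minimal sketch of masked-word prediction with this legal-domain RoBERTa.
# MODEL_ID is an assumption -- replace it with the actual hub identifier.
from transformers import pipeline

MODEL_ID = "PlanTL-GOB-ES/RoBERTalex"  # assumed model id


def mask_word(sentence: str, word: str, mask_token: str = "<mask>") -> str:
    """Replace the first occurrence of `word` with RoBERTa's mask token."""
    return sentence.replace(word, mask_token, 1)


if __name__ == "__main__":
    # The model download and inference only run when executed directly.
    fill_mask = pipeline("fill-mask", model=MODEL_ID)
    text = mask_word("La sentencia fue dictada por el tribunal supremo.", "sentencia")
    for pred in fill_mask(text):
        # Each prediction carries the filled token and its probability score.
        print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```

The pipeline returns the top candidates for the masked position, which is the standard way to probe a fill-mask model interactively.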
+ ## Limitations and bias
+ At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
+
+ ## Training
+
+ ## Evaluation
+
+ ## Additional information
+
+ ### Author
+ Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
+
+ ### Contact information
+ For further information, send an email to <plantl-gob-es@bsc.es>
+
+ ### Copyright
+ Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
+
+ ### Licensing information
+ [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
+ ### Funding
+ This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
+
+ ## Citing information

  ```
  @misc{gutierrezfandino2021legal,
  title={Spanish Legalese Language Model and Corpora},

  }
  ```
  ## Disclaimer

  The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

  In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

  Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.