Commit ff85c3e by hooking-dev (parent: 783b67c): Update README.md

README.md
---
library_name: transformers
tags: [gpt, hebrew, language-model, pretraining]
license: apache-2.0
datasets:
- oscar-corpus/OSCAR-2301
metrics:
- perplexity
model-index:
- name: HebrewGPT_Base_v1.0
  results:
  - task:
      name: Language Modeling
      type: language-modeling
    dataset:
      name: "OSCAR Hebrew"
      type: oscar-corpus/OSCAR-2301
    metrics:
    - name: Perplexity
      type: perplexity
      value: More Information Needed
---

# HebrewGPT_Base_v1.0

This is the HebrewGPT_Base_v1.0 model, a foundational GPT model for the Hebrew language, pretrained from scratch on the OSCAR Hebrew dataset.

## Model Details

### Model Description

Developed by Hooking AI, this model is the base version of a Hebrew GPT series intended for further fine-tuning and downstream NLP tasks in Hebrew. It serves as a generic foundation for Hebrew language understanding and generation.

- **Developed by:** Hooking AI
- **Model type:** GPT (Generative Pre-trained Transformer)
- **Language(s) (NLP):** Hebrew
- **License:** Apache-2.0
- **Repository:** [hooking-dev/Hebrew_v1.0](https://huggingface.co/hooking-dev/Hebrew_v1.0)

## Uses

### Direct Use

This model can be used directly for tasks that involve understanding or generating Hebrew text, such as conversation modeling, text summarization, and more. It has not been fine-tuned on any downstream tasks and is best suited as a starting point for further NLP applications.
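
For a quick illustration of direct use, the sketch below runs the checkpoint through the Transformers `text-generation` pipeline. The prompt and generation length are arbitrary examples rather than recommended settings.

```python
from transformers import pipeline

# Build a text-generation pipeline around the published checkpoint
generator = pipeline("text-generation", model="hooking-dev/Hebrew_v1.0")

# Generate a short continuation for a Hebrew prompt ("Hello, how are you?")
result = generator("שלום, מה שלומך?", max_new_tokens=40)
print(result[0]["generated_text"])
```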

### Out-of-Scope Use

The model is not recommended for high-stakes scenarios such as medical diagnosis or legal decision-making, because it has no domain-specific fine-tuning and carries the potential biases inherent in language models.

## Bias, Risks, and Limitations

Like many language models, this model likely reflects biases present in its training data. Users should keep these potential biases in mind, especially in sensitive applications, and further research and auditing for bias are recommended before deploying the model in production.

## How to Get Started with the Model

To get started with HebrewGPT_Base_v1.0, load the model and tokenizer with the Transformers library:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("hooking-dev/Hebrew_v1.0")
tokenizer = GPT2Tokenizer.from_pretrained("hooking-dev/Hebrew_v1.0")

# Encode an example Hebrew prompt ("Hello, how are you?")
input_ids = tokenizer.encode("שלום, מה שלומך?", return_tensors="pt")

# Generate a continuation and decode it back to text
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
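
By default `generate` uses greedy decoding with a short maximum length. For longer or more varied Hebrew output you can pass sampling parameters; the values below are illustrative only and reuse the `model`, `tokenizer`, and `input_ids` from the snippet above.

```python
# Illustrative sampling settings, not values recommended by the model authors
outputs = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 style tokenizers define no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```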

## Training Details

### Training Data

The model was trained on the OSCAR Hebrew dataset, a large-scale, open corpus consisting of diverse text collected from the web, reflecting common usage of Hebrew in various contexts.
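
For reference, the snippet below is a hedged sketch of how the Hebrew portion of OSCAR-2301 can be streamed from the Hub with the `datasets` library. It assumes the `language="he"` argument used on the OSCAR-2301 dataset card and that you have accepted the corpus's access terms (the dataset is gated, so authentication may be required); it is not the exact preprocessing pipeline used for this model.

```python
from datasets import load_dataset

# Stream the Hebrew subset so the full corpus is not downloaded locally
hebrew_corpus = load_dataset(
    "oscar-corpus/OSCAR-2301",
    language="he",
    split="train",
    streaming=True,
)

# Peek at the first few documents
for example in hebrew_corpus.take(3):
    print(example["text"][:100])
```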

### Training Procedure

#### Training Hyperparameters

- **Optimizer:** AdamW
- **Learning Rate:** 0.0002
- **Training Epochs:** 2
- **Batch Size:** 16
- **Sequence Length:** 512
- **Warmup Steps:** 500
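
As a hedged sketch, the configuration below maps these hyperparameters onto the Transformers `Trainer` API. The `train_dataset` name is a placeholder for a tokenized dataset of 512-token Hebrew sequences; this illustrates the listed values and is not the authors' actual training script.

```python
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2Tokenizer.from_pretrained("hooking-dev/Hebrew_v1.0")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers define no pad token
model = GPT2LMHeadModel.from_pretrained("hooking-dev/Hebrew_v1.0")

training_args = TrainingArguments(
    output_dir="hebrew-gpt-base",
    num_train_epochs=2,              # Training Epochs: 2
    per_device_train_batch_size=16,  # Batch Size: 16
    learning_rate=2e-4,              # Learning Rate: 0.0002
    warmup_steps=500,                # Warmup Steps: 500
    optim="adamw_torch",             # Optimizer: AdamW
)

# `train_dataset` is a placeholder: examples tokenized to the 512-token sequence length
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)
trainer.train()
```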

## Evaluation

### Testing Data, Factors & Metrics

Because this is a base model that has not been fine-tuned on specific downstream tasks, standard language-modeling metrics such as perplexity were the primary consideration during development. Detailed evaluation results will be added as further testing is conducted.
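
As a concrete illustration, perplexity on held-out Hebrew text can be computed along the lines of the sketch below; the evaluation text here is a placeholder, not the held-out set used during development.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("hooking-dev/Hebrew_v1.0")
model = GPT2LMHeadModel.from_pretrained("hooking-dev/Hebrew_v1.0")
model.eval()

text = "..."  # held-out Hebrew text goes here
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    # With labels supplied, the model returns the mean token-level cross-entropy loss
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"Perplexity: {math.exp(loss.item()):.2f}")
```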

## Technical Specifications

### Model Architecture and Objective

The model uses a standard GPT architecture with 16 transformer layers, 16 attention heads, and a hidden size of 1024, trained with a causal language-modeling objective.
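
For readers who want to reproduce the shape of the network, the sketch below builds a GPT-2 style configuration with these dimensions. The vocabulary size is a placeholder assumption; the actual value is determined by the tokenizer shipped with the repository.

```python
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    n_layer=16,        # transformer layers
    n_head=16,         # attention heads
    n_embd=1024,       # hidden size
    n_positions=512,   # maximum context length (see Training Hyperparameters)
    vocab_size=50257,  # placeholder; use the repository tokenizer's actual vocabulary size
)
model = GPT2LMHeadModel(config)
print(f"Approximate parameter count: {model.num_parameters() / 1e6:.1f}M")
```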

### Compute Infrastructure

Training was conducted on GPU-accelerated hardware, specifically using NVIDIA Tesla GPUs.

## Citation

If you use this model in your research, please cite it as follows:

**BibTeX:**

```bibtex
@misc{hebrewgpt_base_v1_0,
  title={HebrewGPT Base Model},
  author={Hooking AI},
  howpublished={Hugging Face Model Hub},
  year={2024},
  url={https://huggingface.co/hooking-dev/Hebrew_v1.0}
}
```