---
library_name: transformers
tags:
- nepali
- roberta
- nlp
- language-model
datasets:
- IRIISNEPAL/Nepali-Text-Corpus
language:
- ne
metrics:
- f1
- accuracy
model-index:
- name: IRIISNEPAL/RoBERTa_Nepali_110M
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition (NER)
    dataset:
      type: nep_glue
      name: Nep-gLUE
    metrics:
      - type: f1
        name: Macro F1 Score
        value: 93.74

  - task:
      type: token-classification
      name: Part-of-Speech (POS) Tagging
    dataset:
      type: nep_glue
      name: Nep-gLUE
    metrics:
      - type: f1
        name: Macro F1 Score
        value: 97.52

  - task:
      type: text-classification
      name: Categorical Classification (CC)
    dataset:
      type: nep_glue
      name: Nep-gLUE
    metrics:
      - type: f1
        name: Macro F1 Score
        value: 94.68

  - task:
      type: similarity
      name: Categorical Pair Similarity (CPS)
    dataset:
      type: nep_glue
      name: Nep-gLUE
    metrics:
      - type: f1
        name: Macro F1 Score
        value: 96.49

  - task:
      type: overall-benchmark
      name: Nep-gLUE
    metrics:
      - type: score
        name: Overall Score
        value: 95.60
---

# Model Card for IRIISNEPAL/RoBERTa_Nepali_110M

<!-- Provide a quick summary of what the model is/does. -->
IRIISNEPAL/RoBERTa_Nepali_110M is a RoBERTa-based transformer model developed specifically for the Nepali language. This 110-million-parameter model is intended for tasks in natural language understanding (NLU), such as sentiment analysis, text classification, and named entity recognition in Nepali.


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Institute of Research and Innovation in Intelligent Systems (IRIIS) 
- **Model type:** RoBERTa-based transformer model specifically trained on Nepali language data
- **Model Size:** 110 million parameters
- **Language (NLP):** Nepali
- **Training Objective:** Masked Language Modeling (MLM) and Next Sentence Prediction (NSP)

The IRIISNEPAL/RoBERTa_Nepali_110M model aims to provide a robust tool for NLP tasks specific to the Nepali language, supporting NLP research and applications within low-resource languages.

---

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model provides contextual embeddings for each token in an input sequence (`last_hidden_state`) and a pooled representation of the entire input (`pooler_output`). These outputs can be used for:

- **Text Classification**: Using `pooler_output` to classify the overall sentiment, intent, or category of a sentence.
- **Token-Level Tasks**: Leveraging `last_hidden_state` to perform tasks like named entity recognition (NER) or part-of-speech tagging by predicting labels for individual tokens.
- **Sentence Embeddings**: Using `pooler_output` as an embedding for the entire input text for similarity search or clustering tasks. 
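
A minimal sketch of pulling these two outputs out of the model with the `transformers` and `torch` packages; the second example sentence and the similarity computation are illustrative additions, not part of the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("IRIISNEPAL/RoBERTa_Nepali_110M")
model = AutoModel.from_pretrained("IRIISNEPAL/RoBERTa_Nepali_110M")

sentences = [
    "नेपालमा पर्यटनको विकास गर्नुपर्ने आवश्यकता छ।",
    "काठमाडौं नेपालको राजधानी हो।",
]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state  # (batch, seq_len, 768): per-token features for NER/POS heads
sentence_embeddings = outputs.pooler_output   # (batch, 768): pooled representation per input

# Illustration: cosine similarity between the two pooled sentence embeddings
similarity = torch.nn.functional.cosine_similarity(
    sentence_embeddings[0], sentence_embeddings[1], dim=0
)
print(float(similarity))
```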

### Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
The model was evaluated on the [Nepali Language Understanding Evaluation (Nep-gLUE)](https://nepberta.github.io/nepglue/) benchmark, demonstrating strong performance across various natural language understanding (NLU) tasks:

- **Named Entity Recognition (NER)**: 93.74
- **Part-of-Speech (POS) Tagging**: 97.52
- **Categorical Classification (CC)**: 94.68
- **Categorical Pair Similarity (CPS)**: 96.49

These results indicate the model’s effectiveness in capturing language nuances for multiple NLU tasks in Nepali.
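
As a hedged sketch of fine-tuning the checkpoint for one of these tasks with the Hugging Face `Trainer`: the label count, learning rate, and dataset placeholders below are assumptions for illustration, not the settings behind the reported scores. For the token-level tasks (NER, POS), `AutoModelForTokenClassification` is used the same way.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("IRIISNEPAL/RoBERTa_Nepali_110M")
model = AutoModelForSequenceClassification.from_pretrained(
    "IRIISNEPAL/RoBERTa_Nepali_110M",
    num_labels=4,  # placeholder: set to the number of classes in your task
)

def tokenize(batch):
    # Assumes a dataset with a "text" column; adjust to your data.
    return tokenizer(batch["text"], truncation=True, max_length=512)

training_args = TrainingArguments(
    output_dir="roberta-nepali-finetuned",
    learning_rate=2e-5,               # placeholder fine-tuning settings
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

# `train_ds` and `eval_ds` stand in for a labeled Nepali dataset mapped through `tokenize`:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```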

---

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model may exhibit biases present in its training data, especially regarding social, cultural, and regional aspects of the Nepali language. Users should exercise caution when deploying it in applications that might perpetuate stereotypes or cultural biases.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
It’s advisable for users to monitor model outputs for fairness and avoid high-stakes applications without thorough testing. Fine-tuning or retraining may be necessary for sensitive applications.

---

## How to Get Started with the Model

Use the code below to get started with the model.

```python
# Load the tokenizer and base model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("IRIISNEPAL/RoBERTa_Nepali_110M")
model = AutoModel.from_pretrained("IRIISNEPAL/RoBERTa_Nepali_110M")

# Encode a Nepali sentence and run a forward pass
text = "नेपालमा पर्यटनको विकास गर्नुपर्ने आवश्यकता छ।"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # per-token embeddings: (1, seq_len, 768)
print(outputs.pooler_output.shape)      # pooled sentence embedding: (1, 768)
```

---

## Training Details

### Training Data

The model was trained on a 27.5 GB Nepali language corpus compiled from 99 Nepali news websites. This dataset represents the largest Nepali language corpus to date, providing a significant expansion in training resources for the language. The preprocessing involved deduplication, translation/removal of non-Nepali content, and noise reduction.

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
You can find detailed information about the dataset in the [dataset card on Hugging Face](https://huggingface.co/datasets/IRIISNEPAL/Nepali-Text-Corpus).
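
A short sketch of pulling that corpus from the Hub with the `datasets` library; the split name and column layout should be checked against the dataset card:

```python
from datasets import load_dataset

# Loads the pre-training corpus referenced above; inspect the features before use.
corpus = load_dataset("IRIISNEPAL/Nepali-Text-Corpus", split="train")
print(corpus)
```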

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- **Training Regime:** Mixed precision (fp16) on TPU v4-8 hardware
- **Batch Size:** 256
- **Learning Rate:** 1e-4 with a warmup over the first 10,000 steps followed by linear decay
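
A minimal sketch of the masked-language-modeling input pipeline such a pre-training run relies on; the 15% masking probability is the standard RoBERTa default and an assumption here, not a value reported in this card:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("IRIISNEPAL/RoBERTa_Nepali_110M")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,  # assumed; standard masking rate, not stated in this card
)

# One toy example; in pre-training, batches of 256 sequences (see above) are collated this way.
encoding = tokenizer("नेपालमा पर्यटनको विकास गर्नुपर्ने आवश्यकता छ।", truncation=True, max_length=512)
batch = collator([encoding])
print(batch["input_ids"].shape, batch["labels"].shape)  # masked inputs and MLM labels
```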

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Max Sequence Length:** 512 tokens
- **Learning Rate Scheduler:** Linear with warmup
- **Optimizer:** AdamW with β1 = 0.9, β2 = 0.999, and L2 weight decay of 0.01
- **Dropout Probability:** 0.1 across all layers
- **Activation Function:** GELU
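
Taken together, the hyperparameters above imply an optimizer and schedule along these lines (a sketch using standard PyTorch and `transformers` utilities; the total step count is a placeholder, as it is not reported here):

```python
import torch
from transformers import AutoModelForMaskedLM, get_linear_schedule_with_warmup

model = AutoModelForMaskedLM.from_pretrained("IRIISNEPAL/RoBERTa_Nepali_110M")

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,              # peak learning rate
    betas=(0.9, 0.999),   # β1 and β2 as listed above
    weight_decay=0.01,    # L2 weight decay
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # warmup over the first 10,000 steps
    num_training_steps=1_000_000,  # placeholder total; not stated in this card
)
```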

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

---

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on the [Nepali Language Understanding Evaluation (Nep-gLUE)](https://nepberta.github.io/nepglue/) benchmark, which includes Named Entity Recognition (NER), Part-of-Speech (POS) tagging, categorical (text) classification, and categorical pair similarity.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

- **Macro-F1** for the [Nepali Language Understanding Evaluation (Nep-gLUE)](https://nepberta.github.io/nepglue/) benchmark.
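
As a toy illustration of how a macro-averaged F1 score is computed (an unweighted mean of per-class F1 scores), using `scikit-learn` rather than the benchmark's official evaluation script:

```python
from sklearn.metrics import f1_score

# Toy gold and predicted tags for a token-level task such as NER
y_true = ["PER", "LOC", "O", "ORG", "O"]
y_pred = ["PER", "LOC", "ORG", "ORG", "O"]

print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1
```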

### Results

On Nep-gLUE, the model outperformed existing state-of-the-art models with an overall score of 95.60, reflecting its strong language understanding capabilities.

---

## Model Examination

<!-- Relevant interpretability work for the model goes here -->

Performance analysis indicates robustness in capturing grammatical and syntactical features of Nepali. However, the model may have limited effectiveness in handling dialect-specific content or informal language.

---

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

---

## Technical Specifications

### Model Architecture and Objective

RoBERTa architecture with 12 transformer layers, hidden size of 768, 12 attention heads, and 110 million parameters. This architecture facilitates strong bidirectional attention for accurate language understanding.
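
A hedged reconstruction of that configuration as a `RobertaConfig`; the vocabulary size and feed-forward width below are assumptions chosen to land near 110M parameters, not values published in this card:

```python
from transformers import RobertaConfig, RobertaModel

config = RobertaConfig(
    num_hidden_layers=12,             # 12 transformer layers
    hidden_size=768,                  # hidden size of 768
    num_attention_heads=12,           # 12 attention heads
    intermediate_size=3072,           # assumed 4x hidden size
    max_position_embeddings=514,      # standard RoBERTa setting for 512-token inputs
    hidden_act="gelu",                # GELU activation (see Training Hyperparameters)
    hidden_dropout_prob=0.1,          # dropout 0.1 across all layers
    attention_probs_dropout_prob=0.1,
    vocab_size=30522,                 # assumed; the released tokenizer's vocabulary may differ
)
model = RobertaModel(config)
# Close to 110M; the exact count depends on the vocabulary size
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```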

### Compute Infrastructure

- **Hardware:** TPU v4-8 and Nvidia GeForce RTX 3090 GPUs
- **Software:** Python, PyTorch, Hugging Face Transformers

---

## Citation 

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@misc{IRIISNEPAL_RoBERTa_Nepali_110M,
  title = {Development of Pre-trained Transformer-based Models for the Nepali Language},
  author = {Thapa, Prajwal and Nyachhyon, Jinu and Sharma, Mridul and Bal, Bal Krishna},
  year = {2024},
  note = {Submitted to COLING 2025}
}
```

**APA:**

```
Thapa, P., Nyachhyon, J., Sharma, M., & Bal, B. K. (2024). Development of pre-trained transformer-based models for the Nepali language. Manuscript submitted for publication to COLING 2025.
```

---

## Model Card Contact

For questions and support, contact IRIIS Nepal.