---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: HausaSentiLex
results: []
---
# HausaSentiLex
This model is a version of [bert-base-cased](https://huggingface.co/bert-base-cased) fine-tuned for sentiment analysis in the Hausa language (see [Training and evaluation data](#training-and-evaluation-data) for the dataset).
It achieves the following results at the end of training:
- Train Loss: 0.0557
- Train Accuracy: 0.9799
- Epoch: 4

Model repository: https://github.com/idimohammed/HausaSentiLex
## Model description
HausaSentiLex is a pretrained, lexicon-based language model for a low-resource language. It was trained on Hausa, a Chadic language spoken by the Hausa people, chiefly in northern Nigeria and southern Niger, with speakers in Ghana, Cameroon, Benin, Togo, Chad, and Sudan, and significant minorities in Ivory Coast. Hausa is the most widely spoken language in West Africa and one of the most widely spoken languages in Africa as a whole.

The model has been shown to obtain competitive downstream performance on text classification in the language it was trained on.
## Intended uses & limitations
You can use this model with the Transformers library for sentiment analysis tasks in the Hausa language, for example through the `pipeline` API as sketched below.
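
A minimal usage sketch: the Hub model ID (`idimohammed/HausaSentiLex`, taken from the GitHub repository name) and the example sentence are assumptions, and the label names depend on the exported classification head (they may appear as generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

# Hub ID assumed from the GitHub repository name; adjust if the model
# lives under a different namespace on the Hugging Face Hub.
classifier = pipeline("text-classification", model="idimohammed/HausaSentiLex")

# Hausa example, roughly: "This movie is very good."
print(classifier("Wannan fim din yana da kyau sosai."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- labels depend on the exported head
```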
## Training and evaluation data
The training and evaluation dataset can be accessed via https://data.mendeley.com/datasets/9cbw2b7h57/1
## Training procedure
The article describing the training procedure is currently under review.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam
  - learning_rate: 5e-06
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-07
  - amsgrad: False
  - weight_decay: None
  - clipnorm / global_clipnorm / clipvalue: None
  - use_ema: False (ema_momentum: 0.99, ema_overwrite_frequency: None)
  - jit_compile: True
  - is_legacy_optimizer: False
- training_precision: float32
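
For reference, a minimal sketch of how the configuration above maps onto a Keras fine-tuning setup under TensorFlow 2.13. The use of `TFAutoModelForSequenceClassification` and `num_labels=2` are assumptions, since the card does not describe the classification head:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Non-legacy Keras Adam with the exact values listed above; the remaining
# keys (weight_decay=None, clipping off, EMA off) are the Keras defaults.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-6,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
    jit_compile=True,
)

# Assumed binary sentiment head on top of bert-base-cased.
model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)

# With no explicit loss, Transformers TF models fall back to their
# built-in task loss for sequence classification.
model.compile(optimizer=optimizer, metrics=["accuracy"])
```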
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.1586 | 0.9363 | 0 |
| 0.1130 | 0.9577 | 1 |
| 0.0969 | 0.9642 | 2 |
| 0.0740 | 0.9731 | 3 |
| 0.0557 | 0.9799 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.14.1