# BERT-AJGT

An Arabic BERT model fine-tuned on the AJGT dataset.
## Data

The model was fine-tuned on ~1,800 Jordanian-dialect sentences collected from Twitter.
## Results

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.9462 | 0.9778 | 0.9617 | 90 |
| 1 | 0.9399 | 0.9689 | 0.9542 | 90 |
| accuracy | | | 0.9611 | 180 |
## How to use

Install `torch` or `tensorflow` together with the Hugging Face `transformers` library, then load the model directly:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/bert-ajgt"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
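Once loaded, the model can be used for inference. A minimal sketch, assuming the model is downloaded from the Hub as above; the example sentence and the interpretation of the two class indices are illustrative, not taken from the model card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/bert-ajgt"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example Arabic sentence (illustrative placeholder text).
text = "هذا المنتج رائع"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Run a forward pass without tracking gradients.
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2)

# The predicted class is the index of the highest logit (0 or 1).
predicted_class = logits.argmax(dim=-1).item()
```

`AutoModelForSequenceClassification` attaches a classification head on top of the BERT encoder, so the output logits have one entry per class.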