# HingMaskedLM
Masked language modeling (MLM) is a pre-training technique used in Natural Language Processing (NLP) for deep-learning models such as Transformers. A portion of the input text is masked, and the model is trained to predict the masked tokens from the context provided by the unmasked tokens. HingMaskedLM is trained with this masked language modeling objective on Hinglish (code-switched Hindi-English) data.
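The card does not include the masking code itself, but the standard masking step can be sketched with Hugging Face's `DataCollatorForLanguageModeling`. The 15% masking probability below is the BERT convention and an assumption, not this model's documented setting.

```python
# Minimal sketch of standard MLM input masking; mlm_probability=0.15 is
# the BERT convention and an assumption, not this model's documented value.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("SRDdev/HingMaskedLM")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

# Tokenize a Hinglish sentence and let the collator mask ~15% of tokens.
encoding = tokenizer("please flight ko cancel kardo")
batch = collator([{"input_ids": encoding["input_ids"]}])

# Masked positions show [MASK]; the labels tensor keeps the original token
# ids there and -100 everywhere else (ignored by the loss).
print(tokenizer.decode(batch["input_ids"][0]))
print(batch["labels"][0])
```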
## Dataset

The model is trained on the Hinglish-TOP dataset, which contains the following columns (see the loading sketch after the list):
- en_query
- cs_query
- en_parse
- cs_parse
- domain
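A hedged sketch of inspecting these columns with pandas, assuming a local TSV copy of the dataset; the file path below is hypothetical and depends on where you downloaded it.

```python
# Hedged sketch: inspect the Hinglish-TOP columns from a local TSV copy.
# The path is hypothetical.
import pandas as pd

df = pd.read_csv("hinglish_top/train.tsv", sep="\t")
print(df[["en_query", "cs_query", "en_parse", "cs_parse", "domain"]].head())

# For masked-language-model pre-training only the raw code-switched text
# is needed, so cs_query is the relevant column.
texts = df["cs_query"].tolist()
```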
## Training

The model was trained for 10 epochs; the loss per epoch is shown below.

| Epoch | Loss |
|---|---|
| 1 | 0.0465 |
| 2 | 0.0262 |
| 3 | 0.0116 |
| 4 | 0.00385 |
| 5 | 0.0103 |
| 6 | 0.00738 |
| 7 | 0.00892 |
| 8 | 0.00379 |
| 9 | 0.00126 |
| 10 | 0.000684 |
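The training script is not included in the card; a minimal sketch of MLM fine-tuning with the `Trainer` API follows. The base checkpoint, batch size, and learning rate are assumptions; only the 10-epoch count matches the table above.

```python
# Minimal MLM fine-tuning sketch with the Trainer API. The base checkpoint
# and hyperparameters are assumptions, not the card's reported setup.
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed base checkpoint; the actual starting model is not documented here.
base = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# `texts` would be the cs_query strings from the Dataset section; a tiny
# placeholder list keeps this sketch self-contained.
texts = ["please flight ko cancel kardo", "mujhe kal ka alarm set kardo"]
encodings = tokenizer(texts, truncation=True)
train_dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

args = TrainingArguments(
    output_dir="HingMaskedLM",
    num_train_epochs=10,             # matches the loss table above
    per_device_train_batch_size=16,  # assumption
    learning_rate=5e-5,              # assumption
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=True),
)
trainer.train()
```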
## Inference

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("SRDdev/HingMaskedLM")
model = AutoModelForMaskedLM.from_pretrained("SRDdev/HingMaskedLM")

# Build a fill-mask pipeline and predict the masked token
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
fill(f"please {fill.tokenizer.mask_token} ko cancel kardo")
```
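The pipeline returns a ranked list of candidate fills; each entry is a dict containing the predicted token (`token_str`), its probability (`score`), and the completed sentence (`sequence`).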
## Citation

- Author: @SRDdev
- Name: Shreyas Dixit
- Framework: PyTorch
- Date: January 2023
- Pipeline: fill-mask
- GitHub: https://github.com/SRDdev
- LinkedIn: https://www.linkedin.com/in/srddev/