This model determines which definition of the homonym "ბარი" is intended at the position marked by the [MASK] token.
It is a simple Transformer model fine-tuned on a dataset comprising 4800 hand-classified sentences.
It shows 95% accuracy on a test set comprising 1200 hand-classified sentences.
The original 6000 sentences were split 80/20 into training and test data (link to dataset).
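As a rough sketch of that split (variable names and contents below are placeholders, not the author's actual code or data):

from sklearn.model_selection import train_test_split

# Hypothetical sketch of the 80/20 split; 'sentences' and 'labels' stand in for
# the 6000 hand-classified examples and their sense labels.
sentences = ["..."] * 6000          # labelled sentences would be loaded here
labels = ["shovel"] * 6000          # one of: shovel / lowland / cafe
train_x, test_x, train_y, test_y = train_test_split(
    sentences, labels, test_size=0.2, random_state=42)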
Methodology:
I masked the homonyms in the sentences and replaced them with synonyms according to the definition in use. For example, I replaced "ბარი" with "დაბლობი" (lowland) where the homonym referred to the lowland. The model predicts "თო" when it interprets the homonym as "shovel" (თოხი), "დაბ" when it interprets it as "lowland" (დაბლობი), and "კაფე" when it interprets it as "cafe" (კაფე).
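The sketch below illustrates the substitution step; the helper function, constant, and sense labels are hypothetical and only show the idea, not the author's preprocessing code:

# Sense-specific synonyms used in place of the homonym "ბარი" (hypothetical helper).
SYNONYMS = {"shovel": "თოხი", "lowland": "დაბლობი", "cafe": "კაფე"}

def substitute_homonym(sentence: str, sense: str) -> str:
    # Replace the homonym with the synonym matching the hand-assigned sense label.
    return sentence.replace("ბარი", SYNONYMS[sense])

# "Take the [shovel] in your hand, work the soil" -> homonym replaced with "თოხი"
print(substitute_homonym("აიღეთ ხელში ბარი, იმუშავეთ მიწაზე", "shovel"))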
My fine-tuned model is based on the pre-trained Georgian DistilBERT masked-language model available at: https://huggingface.co/Davit6174/georgian-distilbert-mlm
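The training script itself is not published here; the following is a minimal sketch of how such masked-language-model fine-tuning is commonly set up with the transformers Trainer. File names, hyperparameters, and the random-masking collator are assumptions, not the author's exact settings:

from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "Davit6174/georgian-distilbert-mlm"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical text files holding the synonym-substituted sentences.
dataset = load_dataset("text", data_files={"train": "train.txt", "test": "test.txt"})
tokenized = dataset.map(lambda x: tokenizer(x["text"], truncation=True),
                        batched=True, remove_columns=["text"])

# Random masking as in standard MLM fine-tuning; the author's exact masking
# strategy (e.g. masking only the substituted synonym) is not described here.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ka_homonym_disambiguation_FM", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
)
trainer.train()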
Usage example
from transformers import pipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained('davmel/ka_homonym_disambiguation_FM')
tokenizer = AutoTokenizer.from_pretrained('davmel/ka_homonym_disambiguation_FM')
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
answer = {'თო': "თოხი", 'დაბ': 'დაბლობი', 'კაფე': "კაფე"}
answer_to_english = {"თო": "Shovel", "დაბ": "Lowland", "კაფე": "Cafe"}
# Make sure the sentence contains exactly one [MASK] token
# (with multiple masks the pipeline returns a list of results per mask).
sentence = 'აიღეთ ხელში [MASK], იმუშავეთ მიწაზე'
result = pipe(sentence)
print("The homonym is used as: ", answer_to_english[result[0]['token_str']])
print("ომონიმი \"ბარი\" გამოყენებულია როგორც ", answer[result[0]['token_str']])