Improved Code-Mixed Sentence Translation Using Decoder-Only Transformers
Overview
This project addresses the limitations of traditional Neural Machine Translation (NMT) models in translating code-mixed sentences by using a decoder-only transformer model. Inspired by the training methodologies of models like GPT and Llama, the approach first applies self-supervised pre-training on a large text corpus so the model learns the contextual structure of the languages involved. The pre-trained model is then fine-tuned on a much smaller translation dataset, making it effective for translating both regular and code-mixed sentences.
Benefits
- Small Fine-Tuning Dataset: The model requires only a fraction of the translation data a conventional NMT system needs, which reduces data preparation overhead.
- Rich and Meaningful Translation: By understanding the underlying context of languages, the model provides more accurate and meaningful translations for both regular and code-mixed sentences.
- Multilingual Capability: A single model can potentially translate multiple languages, making it a versatile solution for diverse translation needs.
Approach
- Context Learning: Train a decoder-only transformer model on a large corpus of text using self-supervised learning. This stage allows the model to grasp the contextual nuances of different languages (a minimal sketch of this stage follows this list).
- Fine-Tuning: Fine-tune the pre-trained model on a smaller dataset specifically for translation tasks. This step adapts the model to effectively handle translation while retaining its contextual understanding.
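The sketch below illustrates the context-learning stage, assuming PyTorch and the Hugging Face transformers library; the tiny GPT-2-style configuration and the two-sentence corpus are illustrative placeholders, not the project's actual training setup.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Small decoder-only model for illustration; scale up for real pre-training.
config = GPT2Config(n_layer=4, n_head=4, n_embd=256)
model = GPT2LMHeadModel(config)

# Toy corpus standing in for a large mixed English/Hindi text collection.
corpus = [
    "The Sun is a star at the center of the Solar System.",
    "Sun ka diameter bahut bada hota hai.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for epoch in range(2):
    for text in corpus:
        batch = tokenizer(text, return_tensors="pt")
        # Self-supervised objective: predict each next token (labels = inputs).
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Because the objective is plain next-token prediction, the same loop works unchanged on monolingual, multilingual, or code-mixed text.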
Example
Here is a comparison between the traditional Google Translate and the proposed approach:
- Text: “Sun ka diameter kya hoga?”
- Google Translate: “what will happen to sun's demetre”
- Proposed Approach: “What is the diameter of the Sun?”
On this example, the proposed method outperforms the traditional system, producing a translation that respects the context and meaning of the original sentence.
Usage
- Pre-training: Train the decoder-only transformer model on a large text corpus.
- Fine-tuning: Fine-tune the model on a smaller dataset of translated sentences.
- Translation: Use the fine-tuned model to translate both regular and code-mixed sentences (a hedged end-to-end sketch follows this list).
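The sketch below ties the fine-tuning and translation steps together, again assuming PyTorch and Hugging Face transformers; the `translate: <source> => <target>` prompt format, the stock `gpt2` checkpoint standing in for the pre-trained model, and the toy sentence pairs are all assumptions for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")  # stand-in for the pre-trained model

# Small set of (code-mixed, English) pairs; real fine-tuning uses more data.
pairs = [
    ("Sun ka diameter kya hoga?", "What is the diameter of the Sun?"),
    ("Kal meeting kab hai?", "When is the meeting tomorrow?"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for src, tgt in pairs:
    # Translation pairs are serialized into a single sequence so the
    # decoder-only model learns them via next-token prediction.
    text = f"translate: {src} => {tgt}{tokenizer.eos_token}"
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Translation: prompt with the source sentence and let the model complete.
model.eval()
prompt = tokenizer("translate: Sun ka diameter kya hoga? =>", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**prompt, max_new_tokens=30,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Framing translation as ordinary next-token completion is what lets a single decoder-only model handle both regular and code-mixed inputs without a separate encoder.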
Future Work
- Evaluation: Conduct thorough evaluations and comparisons with other state-of-the-art translation models.
- Expansion: Explore additional languages and code-mixed scenarios to enhance the model's versatility.
License
This project is licensed under the MIT License.