# GreekDeBERTaV3-xsmall
GreekDeBERTaV3-xsmall is a compact version of the GreekDeBERTaV3 model, optimized for environments with limited computational resources while maintaining strong performance on Greek NLP tasks. It is based on the DeBERTaV3 architecture and uses Replaced Token Detection (RTD) during pre-training.
## Model Overview
- Model Architecture: DeBERTaV3-xsmall
- Language: Greek
- Pre-training Tasks: Replaced Token Detection (RTD)
- Tokenizer: SentencePiece Model (spm.model)
This smaller version is ideal for use cases where inference time and model size are critical, without a significant compromise in performance.
## Files
- `config.json`: Configuration file for the model.
- `pytorch_model.bin`: The PyTorch weights of the smaller model.
- `spm.model`: The SentencePiece tokenizer model.
- `vocab.txt`: A human-readable vocabulary file that contains the list of tokens used by the model.
- `tokenizer_config.json`: Tokenizer configuration file.
## How to Use
You can use this model with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("AI-team-UoA/GreekDeBERTaV3-xsmall")
model = AutoModelForTokenClassification.from_pretrained("AI-team-UoA/GreekDeBERTaV3-xsmall")
```
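Once loaded, the model can run a forward pass on tokenized Greek text. Note that the token-classification head on top of this pre-trained checkpoint is newly initialized, so the logits are only meaningful after fine-tuning on a labeled task; the sketch below (which assumes network access and PyTorch) just illustrates the input/output shapes:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Load the pre-trained checkpoint (downloads from the Hub on first use)
tokenizer = AutoTokenizer.from_pretrained("AI-team-UoA/GreekDeBERTaV3-xsmall")
model = AutoModelForTokenClassification.from_pretrained("AI-team-UoA/GreekDeBERTaV3-xsmall")

# "Good morning world" in Greek; any Greek text works here
inputs = tokenizer("Καλημέρα κόσμε", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# logits has shape (batch_size, sequence_length, num_labels)
print(logits.shape)
```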