plateer_classifier_ModernBERT_v01

This model is a fine-tuned version of x2bee/ModernBert_MLM_kotoken_v01 on x2bee/plateer_category_data.
It achieves the following results on the evaluation set:

  • Loss: 0.3379
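
The training data referenced above is hosted in the x2bee/plateer_category_data dataset repository on the Hub. A minimal sketch for inspecting it, assuming the repo loads directly with the datasets library:

from datasets import load_dataset

# Assumption: the dataset repo is in a format the datasets library can load directly.
dataset = load_dataset("x2bee/plateer_category_data")
print(dataset)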

Example Use

import joblib
import torch
from huggingface_hub import HfApi, hf_hub_download, login
from transformers import AutoTokenizer, TextClassificationPipeline, AutoModelForSequenceClassification

# A Hugging Face access token is needed to download from the x2bee repositories.
with open('./api_key/HGF_TOKEN.txt', 'r') as hgf:
    login(token=hgf.read().strip())
api = HfApi()
repo_id = "x2bee/plateer_classifier_ModernBERT_v01"
data_id = "x2bee/plateer_category_data"

# Load the tokenizer and label encoder
tokenizer = AutoTokenizer.from_pretrained(repo_id, subfolder="last-checkpoint")
label_encoder_file = hf_hub_download(repo_id=data_id, repo_type="dataset", filename="label_encoder.joblib")
label_encoder = joblib.load(label_encoder_file)
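
# Optional sanity check: the label encoder is assumed to be a scikit-learn LabelEncoder
# saved with joblib; if so, label_encoder.classes_ lists the human-readable category names.
# print(list(label_encoder.classes_))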

# Load Model
model = AutoModelForSequenceClassification.from_pretrained(repo_id, subfolder="last-checkpoint")

class TopKTextClassificationPipeline(TextClassificationPipeline):
    """TextClassificationPipeline variant that returns the top-k categories per input."""

    def __call__(self, inputs, top_k=5, **kwargs):
        # Tokenize and move the batch to the model's device.
        inputs = self.tokenizer(inputs, return_tensors="pt", truncation=True, padding=True, max_length=512, **kwargs)
        inputs = {k: v.to(self.model.device) for k, v in inputs.items()}

        with torch.no_grad():
            outputs = self.model(**inputs)

        # Convert logits to probabilities and keep the top-k classes per input.
        probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
        scores, indices = torch.topk(probs, top_k, dim=-1)

        results = []
        for batch_idx in range(indices.shape[0]):
            batch_results = []
            for score, idx in zip(scores[batch_idx], indices[batch_idx]):
                # id2label returns strings like "LABEL_2"; recover the integer id,
                # then map it back to the original category name with the label encoder.
                label_id = int(self.model.config.id2label[idx.item()].split("_")[1])
                predicted_class = label_encoder.inverse_transform([label_id])[0]

                batch_results.append({
                    "label": label_id,
                    "label_decode": predicted_class,
                    "score": score.item(),
                })
            results.append(batch_results)

        return results

classifier_model = TopKTextClassificationPipeline(tokenizer=tokenizer, model=model)

def plateer_classifier(text, top_k=3):
    result = classifier_model(text, top_k=top_k)
    return result

# run
result = plateer_classifier("겨울 등산에서 사용할 옷")[0]  # "clothes to wear for winter hiking"
for item in result:
    print(item)

# result (top-3 categories)
{'label': 2, 'label_decode': '기능성의류', 'score': 0.9214227795600891}
{'label': 8, 'label_decode': '스포츠', 'score': 0.07054771482944489}
{'label': 15, 'label_decode': '패션/의류/잡화', 'score': 0.0036312134470790625}
# 기능성의류 = functional clothing, 스포츠 = sports, 패션/의류/잡화 = fashion/clothing/accessories
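
Because the overridden __call__ tokenizes with padding and loops over the batch dimension, a list of texts can be classified in a single call as well. The second input below is only an illustrative example, not taken from the dataset.

# Batch usage: each input gets its own top-k list of categories.
texts = ["겨울 등산에서 사용할 옷", "아이들이 좋아하는 초콜릿 과자"]  # "clothes for winter hiking", "chocolate snacks kids like"
for text, preds in zip(texts, plateer_classifier(texts, top_k=2)):
    print(text, preds)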

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-4
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10000
  • num_epochs: 3
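
A minimal sketch of how these values map onto transformers.TrainingArguments. The output directory is an assumption, not taken from the original training script; the remaining arguments mirror the list above.

from transformers import TrainingArguments

# Sketch only: output_dir is an assumption; the other values mirror the hyperparameter list.
training_args = TrainingArguments(
    output_dir="plateer_classifier_ModernBERT_v01",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 x 4 = 64 effective train batch size
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_steps=10000,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)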

Framework versions

  • Transformers 4.48

Model size

  • 186M params (Safetensors, F32)