
MiniPLM-Qwen-200M

paper | code

MiniPLM-Qwen-200M is a 200M-parameter model with the Qwen architecture, pre-trained from scratch on the Pile using the MiniPLM knowledge distillation framework, with the official Qwen1.5-1.8B as the teacher model.
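
The model ships standard Qwen2-architecture Safetensors weights, so it should load directly with the Hugging Face transformers API. A minimal sketch (the prompt and generation settings below are illustrative assumptions, not recommended defaults):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and its tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("MiniLLM/MiniPLM-Qwen-200M")
model = AutoModelForCausalLM.from_pretrained("MiniLLM/MiniPLM-Qwen-200M")

# Plain continuation: the model is pre-trained from scratch on the Pile,
# so prompt it as a text completer rather than a chat assistant.
inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))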

For reproducibility, we also open-source the pre-training corpus refined by MiniPLM's Difference Sampling.
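
The refined corpus should likewise be loadable with the datasets library. A minimal sketch assuming streaming access; the repository id below is a hypothetical placeholder, so substitute the corpus repository linked from this card:

from itertools import islice
from datasets import load_dataset

# NOTE: "MiniLLM/pile-diff-sampling" is a hypothetical placeholder id;
# use the actual Difference Sampling corpus repository.
corpus = load_dataset("MiniLLM/pile-diff-sampling", split="train", streaming=True)

# Peek at a few refined pre-training documents without downloading the full corpus.
for example in islice(corpus, 3):
    print(example)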

Evaluation

MiniPLM models achieve better performance given the same computation and scale well across model sizes (see the paper for detailed results).

Baseline Models

Citation

@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models}, 
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}