PanGu-α Introduction
PanGu-α was proposed by a joint technical team headed by PCNL. It is the first large-scale Chinese pre-trained language model with 200 billion parameters, trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The entire training run was carried out on the "Peng Cheng Cloud Brain II" computing platform with the domestic deep learning framework MindSpore. The PengCheng·PanGu-α pre-trained model supports a rich range of applications, has strong few-shot learning capabilities, and performs well on text generation tasks such as knowledge question answering, knowledge retrieval, knowledge reasoning, and reading comprehension.
Key points
- The first Chinese autoregressive language model, "PengCheng·PanGu-α", with 200 billion parameters
- Code and model are being released gradually
- The first sequential autoregressive pre-training language model, ALM
- Ultra-large-scale automatic parallel training technology in MindSpore
- The model is trained on a domestic full-stack software and hardware ecosystem (MindSpore + CANN + Atlas 910 + ModelArts)
Use
from transformers import AutoTokenizer, AutoModelForCausalLM

# The tokenizer and model ship custom code, so trust_remote_code=True is required for both.
tokenizer = AutoTokenizer.from_pretrained("Hanlard/Pangu_alpha", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("imone/pangu_2_6B", trust_remote_code=True)
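Once loaded, text can be generated through the standard transformers generation API. The snippet below is a minimal sketch: the prompt and sampling parameters are illustrative only, and it assumes the custom model and tokenizer follow the usual generate/decode interface.

# Minimal generation sketch; prompt and sampling settings are illustrative, not from the original card.
inputs = tokenizer("中国的首都是", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0]))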