Tibetan-Llama2-7B

This is the full Tibetan-Llama2-7B model, which can be loaded directly for inference and full-parameter training.
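Below is a minimal loading-and-generation sketch with 🤗 Transformers, assuming the Hub repo ID ymaoj/Tibetan-Llama2-7B and that half-precision weights fit on your GPU; the prompt is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ymaoj/Tibetan-Llama2-7B"  # Hub repo ID for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 weights fit in GPU memory
    device_map="auto",
)

prompt = "..."  # replace with a Tibetan-language prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```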

Related models👇

Description of Tibetan-Llama2-Alpaca

This project is based on Llama-2. We open-source Tibetan-Llama2 (a foundation model) and Tibetan-Alpaca (an instruction-following model). Both models extend the original Llama-2 with an expanded, Tibetan-optimized vocabulary. We used a large amount of Tibetan data for incremental pre-training, further strengthening the models' basic semantic understanding of Tibetan. The models support a 4K context, which can be extended to 18K+ with the NTK method.
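As a rough sketch of NTK-based context extension through 🤗 Transformers' RoPE scaling (the "dynamic" scaling type and factor below are illustrative assumptions, not values published by the project):

```python
from transformers import AutoModelForCausalLM

# Dynamic NTK-aware RoPE scaling; factor=4.0 is an illustrative value,
# tune it for the context length you actually need.
model = AutoModelForCausalLM.from_pretrained(
    "ymaoj/Tibetan-Llama2-7B",
    rope_scaling={"type": "dynamic", "factor": 4.0},
    torch_dtype="auto",
    device_map="auto",
)
```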

The main contents of this project include:

  • 🚀 A Tibetan vocabulary extended beyond Llama2, and the open-sourced Tibetan-Llama2 and Tibetan-Alpaca LLMs.
  • 🚀 Quick deployment of the quantized LLMs on the CPU/GPU of a personal PC (see the sketch after this list).
  • 🚀 Support for the Llama ecosystem, including 🤗 Transformers, llama.cpp, text-generation-webui, LangChain, vLLM, etc.
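A minimal 4-bit quantized loading sketch using bitsandbytes through 🤗 Transformers is shown below; this is only one of several local-deployment routes (llama.cpp GGUF conversion is another), and the quantization settings are assumptions rather than the project's recommended configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization keeps the 7B model within a typical consumer GPU's memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("ymaoj/Tibetan-Llama2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "ymaoj/Tibetan-Llama2-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```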

Please refer to https://github.com/ymaoj/Tibetan-Llama2-Tibetan-Alpaca/ for details.
