---
license: creativeml-openrail-m
language:
- en
tags:
- LLM
- tensorRT
- chatGLM
---
## Model Card for lyraChatGLM
lyraChatGLM is, to the best of our knowledge, currently the **fastest ChatGLM-6B** available, as well as the **first accelerated version of ChatGLM-6B**.
Its inference speed is roughly **10x** that of the original model, and we are still working to improve the performance.
Its main features:
- weights: the original ChatGLM-6B weights released by THUDM.
- device: lyraChatGLM is mainly based on FasterTransformer compiled for SM=80 GPUs (A100, for example); a quick capability check is sketched below.
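Since the shipped kernel is compiled for SM=80, it is worth confirming the GPU's compute capability before loading it. A minimal sketch, assuming PyTorch with CUDA support is installed:

```python
# Minimal sanity check (assumption: PyTorch with CUDA support is installed).
# The FasterTransformer kernel shipped here is compiled for SM=80, so the
# GPU should report compute capability 8.0 or higher (e.g. A100).
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"GPU compute capability: {major}.{minor}")
assert (major, minor) >= (8, 0), "this kernel build targets SM=80 GPUs"
```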
## Speed
### Test environment
- device: Nvidia A100 40G
|Version|Speed|
|:-:|:-:|
|original ChatGLM-6B|30 tokens/s|
|lyraChatGLM|310 tokens/s|
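The exact benchmarking script behind these numbers is not included; a rough measurement along the following lines would produce comparable throughput figures. This sketch assumes `chat`, `input_ids`, and `MAX_OUT_LEN` are prepared as in the Uses section below:

```python
# Rough throughput sketch (assumption: `chat`, `input_ids`, and MAX_OUT_LEN
# are set up exactly as in the Uses section; this is not the script that
# produced the table above).
import time
import torch

torch.cuda.synchronize()                  # make sure prior GPU work is done
start = time.perf_counter()
output = chat.generate(inputs=input_ids, max_length=MAX_OUT_LEN)
torch.cuda.synchronize()                  # wait for generation to finish
elapsed = time.perf_counter() - start

new_tokens = output.shape[-1] - input_ids.shape[-1]  # count generated tokens only
print(f"{new_tokens / elapsed:.1f} tokens/s")
```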
## Model Sources
- **Repository:** https://huggingface.co/THUDM/chatglm-6b
## Uses
```python
from transformers import AutoTokenizer
from faster_chat_glm import GLM6B, FasterChatGLM
MAX_OUT_LEN = 100
tokenizer = AutoTokenizer.from_pretrained('./models', trust_remote_code=True)
input_str = ["为什么我们需要对深度学习模型加速?", ]
inputs = tokenizer(input_str, return_tensors="pt", padding=True)
input_ids = inputs.input_ids.to('cuda:0')
plan_path = './models/glm6b-bs8.ftm'
# FasterTransformer kernel for the chat model.
kernel = GLM6B(plan_path=plan_path,
               batch_size=1,
               num_beams=1,
               use_cache=True,
               num_heads=32,
               emb_size_per_heads=128,
               decoder_layers=28,
               vocab_size=150528,
               max_seq_len=MAX_OUT_LEN)
chat = FasterChatGLM(model_dir="./models", kernel=kernel).half().cuda()
# generate
sample_output = chat.generate(inputs=input_ids, max_length=MAX_OUT_LEN)
# de-tokenize model output to text
res = tokenizer.decode(sample_output[0], skip_special_tokens=True)
print(res)
```
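The plan file name (`glm6b-bs8.ftm`) suggests the engine was built for batch size 8, while the example above runs with `batch_size=1`. A hedged sketch of batched inference under that assumption, reusing the setup from above:

```python
# Batched inference sketch (assumption: the .ftm plan supports batch size 8,
# as the file name glm6b-bs8.ftm suggests; verify against your engine build).
prompts = ["为什么我们需要对深度学习模型加速?"] * 8
batch = tokenizer(prompts, return_tensors="pt", padding=True)
batch_ids = batch.input_ids.to('cuda:0')

kernel = GLM6B(plan_path=plan_path,
               batch_size=8,              # match the batch size the plan was built for
               num_beams=1,
               use_cache=True,
               num_heads=32,
               emb_size_per_heads=128,
               decoder_layers=28,
               vocab_size=150528,
               max_seq_len=MAX_OUT_LEN)
chat = FasterChatGLM(model_dir="./models", kernel=kernel).half().cuda()

outputs = chat.generate(inputs=batch_ids, max_length=MAX_OUT_LEN)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```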
## Demo output
### input
Why do we need to accelerate deep learning models? (为什么我们需要对深度学习模型加速?)
### output
Why do we need to accelerate deep learning models? Training deep learning models requires a great deal of computing resources; in particular, training calls for large amounts of memory, GPUs (graphics processing units), and other compute. Training a deep learning model therefore takes considerable time, and if a model cannot be trained quickly, progress may slow down or training may become infeasible.
Here are some reasons why we need to accelerate deep learning models:
1. Training deep neural networks requires large amounts of computing resources, and even more compute is needed as training proceeds, so faster training speed is required.
## Environment
- hardware: Nvidia Ampere architecture (A100) or compatible
- docker image available: https://hub.docker.com/r/bigmoyan/lyra_aigc/tags
```bash
docker pull bigmoyan/lyra_aigc:v0.1
```
## Citation
```bibtex
@Misc{lyraChatGLM2023,
  author = {Kangjian Wu and Zhengtao Wang and Bin Wu},
  title = {lyraChatGLM: Accelerating ChatGLM by 10x+},
  howpublished = {\url{https://huggingface.co/TMElyralab/lyraChatGLM}},
  year = {2023}
}
```
## Report bugs
- Start a discussion to report any bugs: https://huggingface.co/TMElyralab/lyraChatGLM/discussions
- Mark bug reports with a `[bug]` tag in the title.