---
language:
- en
- zh
library_name: transformers
tags:
- Long Context
- chatglm
- llama
datasets:
- THUDM/LongWriter-6k
pipeline_tag: text-generation
---

# LongWriter-glm4-9b

🤗 [LongWriter Dataset](https://huggingface.co/datasets/THUDM/LongWriter-6k) • 💻 [Github Repo](https://github.com/THUDM/LongWriter) • 📃 [LongWriter Paper](https://arxiv.org/abs/2408.07055)

LongWriter-glm4-9b is trained based on [glm-4-9b](https://huggingface.co/THUDM/glm-4-9b), and is capable of generating 10,000+ words at once.

A simple demo for deploying the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model; trust_remote_code is required for the ChatGLM modeling code
tokenizer = AutoTokenizer.from_pretrained("THUDM/LongWriter-glm4-9b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/LongWriter-glm4-9b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
model = model.eval()

query = "Write a 10000-word China travel guide"
# model.chat returns the generated text along with the updated conversation history
response, history = model.chat(tokenizer, query, history=[], max_new_tokens=32768, temperature=0.5)
print(response)
```

The returned `history` can be fed back in for multi-turn use; see the sketch at the end of this card.

Environment: same environment requirements as [glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat) (`transformers>=4.44.0`).

License: [glm-4-9b License](https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE)

## Citation

If you find our work useful, please consider citing LongWriter:

```
@article{bai2024longwriter,
  title={LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs},
  author={Yushi Bai and Jiajie Zhang and Xin Lv and Linzhi Zheng and Siqi Zhu and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
  journal={arXiv preprint arXiv:2408.07055},
  year={2024}
}
```
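
Because `model.chat` returns the updated `history`, multi-turn use follows directly from the demo above. A minimal sketch, assuming the demo has already been run (the follow-up prompt here is a hypothetical example, not part of the original card):

```python
# Continue the same session: pass back the history returned by the first call
follow_up = "Now condense the travel guide into a 500-word summary"  # hypothetical follow-up prompt
response, history = model.chat(tokenizer, follow_up, history=history,
                               max_new_tokens=32768, temperature=0.5)
print(response)
```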