This dataset is designed for Traditional Chinese (zh-tw) and comprises a collection of books from 好讀.
Total tokens: 1.3B
(Tokens are counted with the LLaMA2 tokenizer.)
from datasets import load_dataset

# Load the full training split (~1.3B LLaMA2 tokens of Traditional Chinese books)
dataset = load_dataset("benchang1110/Taiwan-book-1B", split="train")
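The 1.3B figure above is a total over the whole corpus. A minimal sketch of how such a total can be computed is below; a simple whitespace split stands in for the actual LLaMA2 tokenizer, and the sample strings are placeholders for the dataset's text rows:

```python
# Sketch: summing token counts over a corpus. str.split is a stand-in for
# the LLaMA2 tokenizer; swap in a real tokenizer to reproduce the reported
# 1.3B total over the full dataset.
def count_tokens(texts, tokenize=str.split):
    """Return the total token count across all texts."""
    return sum(len(tokenize(t)) for t in texts)

# Placeholder records standing in for the dataset's text column.
sample = ["a short book excerpt", "another excerpt"]
total = count_tokens(sample)  # 4 + 2 = 6 with the whitespace stand-in
print(total)
```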