
# Pile of Law Tokenizer

This tokenizer should be a drop-in replacement for the GPT2Tokenizer: it has the same vocabulary size and special tokens, but was trained on 1M random samples from the train split of the Pile of Law dataset.

Usage:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sam-mosaic/pile-of-law-tokenizer")
```
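Because the tokenizer exposes the same interface as GPT2Tokenizer, encoding and decoding work the same way. A minimal sketch (the sample sentence is illustrative; downloading the tokenizer requires access to the Hugging Face Hub):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sam-mosaic/pile-of-law-tokenizer")

# Encode a legal-style sentence into token ids, as with GPT2Tokenizer
ids = tokenizer("The lessee shall indemnify the lessor.")["input_ids"]

# Decode the ids back into text
text = tokenizer.decode(ids)
```

Any code written against GPT2Tokenizer should work unchanged after swapping in this tokenizer, since the vocabulary size and special tokens match.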