# Pile of Law Tokenizer

This tokenizer is intended as a drop-in replacement for the GPT2Tokenizer. It uses the same special tokens, but was trained on 1M randomly selected samples from the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) train split.

It has a vocabulary of exactly 52,000 tokens, which differs from GPT-2's 50,257.

Usage:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sam-mosaic/pile-of-law-tokenizer")
```
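Once loaded, the tokenizer can be used like any other `transformers` tokenizer. A minimal sketch of an encode/decode round trip (the example sentence is illustrative, and loading requires network access to the Hugging Face Hub):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sam-mosaic/pile-of-law-tokenizer")

# Encode a sentence into token IDs, then decode back to text.
text = "The court granted the motion."
ids = tokenizer.encode(text)
decoded = tokenizer.decode(ids)

# GPT-2-style byte-level BPE is lossless, so the round trip preserves the text.
assert decoded == text

# The vocabulary size matches the 52,000 tokens noted above.
assert len(tokenizer) == 52000
```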