[CodeParrot](https://huggingface.co/codeparrot/codeparrot) uses the GPT-2 architecture with a BPE tokenizer trained on Python code from the training split of the data, and a context length of 1024 tokens. The model was released as an educational tool for training large language models from scratch on code, with detailed tutorials and descriptions of the training process. It uses 🤗 [`accelerate`](https://huggingface.co/docs/accelerate/index) for distributed training and mixed precision. See this [blog post](https://huggingface.co/blog/codeparrot) and [repo](https://github.com/huggingface/transformers/tree/main/examples/research_projects/codeparrot) for more details.
|Model | # parameters |
| - | - |
| [codeparrot-small](https://huggingface.co/codeparrot/codeparrot-small) | 110M |
| [codeparrot](https://huggingface.co/codeparrot/codeparrot) | 1.5B |
You can load the model and tokenizer directly from 🤗 [`transformers`](https://huggingface.co/docs/transformers/index):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot")

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)  # forward pass returning the logits for the prompt
```
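The forward pass above only returns the logits for the prompt; to sample an actual completion you can call `generate` on the model (the decoding parameters below are illustrative, not settings prescribed by the model card):

```python
# Sample a short completion from the prompt (illustrative decoding settings)
gen_ids = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(gen_ids[0], skip_special_tokens=True))
```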
You can also use `pipeline` to generate code:
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="codeparrot/codeparrot")
outputs = pipe("def hello_world():")  # list of dicts with the key "generated_text"
```
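The pipeline also accepts the usual generation keyword arguments, so you can, for instance, sample several candidate completions in one call (the parameter values here are only illustrative):

```python
outputs = pipe(
    "def hello_world():",
    max_new_tokens=32,        # length of the generated completion
    do_sample=True,           # sample instead of greedy decoding
    num_return_sequences=3,   # return several candidate completions
)
print(outputs[0]["generated_text"])
```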