|
[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer trained with a Causal Masking objective: a left-to-right language model that learns to fill in masked token segments, with a context length of 2048.
|
<div align="center"> |
|
|
|
|Model | # parameters |
| - | - |
| Decoder |1.3B |
| Decoder |6.7B |
|
|
|
</div> |
|
|
|
The [Causal Masking objective](https://arxiv.org/abs/2201.07520) is a hybrid of causal and masked language modeling: "it combines the benefit of per-token generation with optional bi-directionality specifically tailored to prompting".
|
During the training of InCoder, spans of code were randomly masked and moved to the end of each file, which gives the model access to bidirectional context. The figure below, from the InCoder [paper](https://arxiv.org/pdf/2204.05999.pdf), illustrates the training process; a toy sketch of the transformation follows the figure.
|
|
|
<p align="center"> |
|
<img src="https://huggingface.co/datasets/loubnabnl/repo-images/raw/main/incoder.png" alt="drawing" width="750"/> |
|
</p> |
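
To make this concrete, here is a toy sketch of the transformation (an illustrative simplification, not the authors' preprocessing code; the `<|mask:0|>` and `<|endofmask|>` sentinel names follow the released InCoder checkpoints): a random span is cut out, replaced by a sentinel, and appended to the end of the document.

```python
import random

def causal_mask(document: str) -> str:
    # Pick a random span to mask out.
    start = random.randrange(len(document))
    end = random.randrange(start + 1, len(document) + 1)
    span = document[start:end]
    # Replace the span with a sentinel and move its contents to the end,
    # so a left-to-right model sees both the left and right context first.
    return (
        document[:start] + "<|mask:0|>" + document[end:]
        + "<|mask:0|>" + span + "<|endofmask|>"
    )

print(causal_mask("def count(xs):\n    return len(xs)\n"))
```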
|
|
|
So in addition to program synthesis (via left-to-right generation), InCoder can also perform editing (via infilling). The model gives promising results on several zero-shot code infilling tasks, such as type prediction, variable renaming, and comment generation.
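
As a rough sketch of how zero-shot infilling can be prompted (the sentinel-token format here is an assumption based on the released InCoder checkpoints, not part of the `transformers` API): place the mask sentinel where the missing code should go, repeat it at the end of the prompt, and let the model generate the missing span until it emits the end-of-mask token.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-6B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-6B")

# Infill the body of a function: the first sentinel marks where the missing
# code goes, the second one asks the model to produce that span next.
prompt = (
    'def count_words(filename):\n'
    '    """Count the number of words in the file."""\n'
    "<|mask:0|>\n"
    "    return word_count\n"
    "<|mask:0|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
# The infilled span is what the model generates after the second sentinel,
# up to the <|endofmask|> token.
print(tokenizer.decode(outputs[0]))
```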
|
|
|
In the code generation demo at the end of this blog post, we use InCoder 1.3B.
|
|
|
You can load the model and tokenizer directly from [`transformers`](https://huggingface.co/docs/transformers/index): |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-6B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-6B")

# Encode a prompt and generate a left-to-right completion
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
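
The 6.7B checkpoint is large, so you may want to load it in half precision on a GPU. This is a minimal sketch assuming a CUDA device is available; `torch_dtype` is a standard `from_pretrained` argument:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-6B")
# Loading in float16 roughly halves the memory footprint of the 6.7B model.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/incoder-6B", torch_dtype=torch.float16
).to("cuda")

inputs = tokenizer("def hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```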
|
|
|
Or you can use a `pipeline`: |
|
|
|
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="facebook/incoder-6B")
outputs = pipe("def hello_world():", max_new_tokens=32)
print(outputs[0]["generated_text"])
```