[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer with a Causal Masking objective to train a left-to-right language model that can also fill in masked token segments.

| Model   | # parameters |
| ------- | ------------ |
| Decoder | 6.7B         |

The [Causal Masking objective](https://arxiv.org/abs/2201.07520) is a hybrid of causal and masked language modeling: "it combines the benefit of per-token generation with optional bi-directionality specifically tailored to prompting". During the training of InCoder, spans of code were randomly masked and moved to the end of each file, giving the model bidirectional context around each masked span. Figure 1 below illustrates the training process.

In addition to program synthesis (via left-to-right generation), InCoder can therefore also perform editing (via infilling). The model shows promising zero-shot results on code infilling tasks such as type prediction, variable renaming, and comment generation.
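To make the masking step concrete, here is a minimal Python sketch of the transformation described above. The function `causal_mask` is made up for illustration; the `<|mask:0|>` and `<|endofmask|>` sentinels follow the convention used in the InCoder paper, but the real training pipeline operates on tokenized files and differs in detail.

```python
import random

def causal_mask(tokens: list[str], mask_id: int = 0) -> list[str]:
    """Sketch of the Causal Masking transform: replace a random span with a
    sentinel and move the span to the end of the sequence, so a left-to-right
    model sees both prefix and suffix before generating the masked span."""
    start = random.randrange(len(tokens))
    end = random.randrange(start + 1, len(tokens) + 1)
    sentinel = f"<|mask:{mask_id}|>"
    # prefix + sentinel + suffix, then the masked span appended at the end,
    # delimited by the same sentinel and an end-of-mask marker
    return (
        tokens[:start] + [sentinel] + tokens[end:]
        + [sentinel] + tokens[start:end] + ["<|endofmask|>"]
    )

tokens = "def add ( a , b ) : return a + b".split()
print(" ".join(causal_mask(tokens)))
# e.g. "def add ( a , b ) : <|mask:0|> a + b <|mask:0|> return <|endofmask|>"
```

At training time the model is simply trained left-to-right on the rearranged sequence, which is what lets a single causal decoder learn both generation and infilling.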
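And a simplified inference-time sketch of infilling with the released checkpoint through HuggingFace `transformers`, assuming the sentinel-based prompt format from the InCoder repository (the full decoding procedure there handles BOS tokens and stopping criteria more carefully):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# facebook/incoder-1B is a lighter alternative to the 6.7B checkpoint
tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-6B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-6B")

# Mark the region to infill with a sentinel, append the same sentinel at the
# end of the document, and let the model generate the missing span.
prefix = "def count_lines(path):\n    "
suffix = "\n    return n"
prompt = prefix + "<|mask:0|>" + suffix + "<|mask:0|>"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# The infilled span is the text generated after the final sentinel,
# up to the <|endofmask|> token.
print(tokenizer.decode(output[0], skip_special_tokens=False))
```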