update architecture
architectures/incoder.txt
CHANGED
@@ -1,5 +1,10 @@
-[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer with
+[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer with a Causal Masking objective to train a left-to-right language model to fill in masked token segments.
 
 |Model | # parameters |
 | - | - |
-| Decoder |6.7B |
+| Decoder |6.7B |
+
+The [Causal Masking objective](https://arxiv.org/abs/2201.07520) is a hybrid of causal and masked language modeling: "it combines the benefit of per-token generation with optional bi-directionality specifically tailored to prompting".
+During the training of InCoder, spans of code were randomly masked and moved to the end of each file, which allows for bidirectional context. The figure below illustrates the training process:
+![image](incoder.png)
+So in addition to program synthesis (via left-to-right generation), InCoder can also perform editing (via infilling). The model gives promising results on zero-shot code infilling tasks such as type prediction, variable renaming, and comment generation.
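
To make the Causal Masking transformation concrete, the sketch below implements the "mask a span, move it to the end of the file" step described above. The sentinel spellings (`<|mask:0|>`, `<|endofmask|>`) follow InCoder's released tokenizer, but treat them as assumptions here; they are not specified by this change.

```python
import random

def causal_mask(tokens, num_spans=1, rng=random):
    """Replace random spans with sentinels and move them to the end of the file."""
    tokens = list(tokens)
    tail = []
    for k in range(num_spans):
        # Pick a random span [start, end) to mask out.
        start = rng.randrange(len(tokens))
        end = rng.randrange(start + 1, len(tokens) + 1)
        span, sentinel = tokens[start:end], f"<|mask:{k}|>"
        # Replace the span with its sentinel in place...
        tokens[start:end] = [sentinel]
        # ...and append "<sentinel> <span> <end-of-mask>", so a left-to-right
        # model is still trained to generate the masked contents, now with
        # bidirectional context around the mask.
        tail += [sentinel] + span + ["<|endofmask|>"]
    return tokens + tail

print(" ".join(causal_mask("def add ( a , b ) : return a + b".split())))
```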
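At inference time the same format supports zero-shot infilling, e.g. type prediction. Below is a minimal sketch using the Hugging Face `transformers` library and the lighter `facebook/incoder-1B` checkpoint; the prompt layout (a sentinel in place of the missing type, plus a trailing sentinel to trigger generation of the span) is an assumption based on the paper, not something this file documents.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")

# Hypothetical infilling prompt: ask the model to predict a return type.
prompt = (
    "def count_words(path: str) -> <|mask:0|>:\n"
    '    """Count the words in the file at `path`."""\n'
    "<|mask:0|>"  # trailing sentinel: generation continues with the masked span
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(out[0]))
```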