Update contributor citation #3
opened by ZennyKenny

README.md CHANGED
@@ -12,7 +12,7 @@ and _encoder-decoder_) to support a wide range of code understanding and generat
 It is introduced in the paper:
 
 [CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
 
-by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://
+by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://scholar.google.com/citations?user=MuUhwi0AAAAJ&hl), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (*
 indicates equal contribution).
 
 Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of