---
license: apache-2.0
language:
- code
- en
datasets:
- JetBrains-Research/commit-chronicle
tags:
- code
- commit_message_generation
pipeline_tag: text2text-generation
---

# CMG/CMC: CodeT5 (without history)

This is a checkpoint for the [CodeT5](https://huggingface.co/Salesforce/codet5-base) model, fine-tuned for the commit message generation (and/or completion) task as part of the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.

## Details

> For further details, please refer to:
> * **Paper**: [https://arxiv.org/abs/2308.07655](https://arxiv.org/abs/2308.07655)
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)

* This model is based on the [`Salesforce/codet5-base`](https://huggingface.co/Salesforce/codet5-base) checkpoint from [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://aclanthology.org/2021.emnlp-main.685/).
* This model was trained with commit diffs, WITHOUT commit message history.
* This model was trained on the CommitChronicle dataset introduced in our study.
* Our hyperparameter setting is mostly based on [RACE: Retrieval-augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372/). The exact values are provided below:

| Hyperparameter | Value |
|:--------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------:|
| Encoder context max length | 512 |
| Decoder context max length | 512 |
| Number of training epochs | 1 |
| Batch size | 32 |
| Optimizer | [AdamW](https://pytorch.org/docs/1.12/generated/torch.optim.AdamW.html?highlight=adamw#torch.optim.AdamW) |
| Warmup | [Linear](https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/optimizer_schedules#transformers.get_linear_schedule_with_warmup) |
| Number of warmup steps | 100 |
| Peak learning rate | 0.00002 |
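
As a quick usage sketch, the checkpoint can be loaded with the standard `transformers` API; the toy diff below is a made-up example, and the generation settings (beam size, output length) are illustrative assumptions, not the exact configuration from the paper:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# This checkpoint takes a commit diff as input (no message history).
checkpoint = "JetBrains-Research/cmg-codet5-without-history"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# Toy diff; real inputs are truncated to the 512-token encoder context.
diff = "- print('hello')\n+ print('hello, world')\n"
inputs = tokenizer(diff, return_tensors="pt", truncation=True, max_length=512)

# Beam search settings here are illustrative, not from the paper.
outputs = model.generate(**inputs, max_length=512, num_beams=5)
message = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(message)
```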

## Available checkpoints

We also released checkpoints for other models fine-tuned as part of our study.

* Models trained *with commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-with-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-with-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-with-history`](https://huggingface.co/JetBrains-Research/cmg-race-with-history)
* Models trained *without commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-without-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-without-history) (this model)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-without-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-without-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-without-history`](https://huggingface.co/JetBrains-Research/cmg-race-without-history)

## Citation

```
TODO
```