---
license: apache-2.0
language:
- code
- en
datasets:
- saridormi/commit-chronicle
tags:
- code
- commit_message_generation
pipeline_tag: text2text-generation
---
# CMG/CMC: CodeT5 (without history)
This is a checkpoint of the [CodeT5](https://huggingface.co/Salesforce/codet5-base) model, fine-tuned for the commit message generation (and/or completion) task as part of the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.
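A minimal usage sketch with 🤗 Transformers is shown below. The exact diff preprocessing used in our experiments is defined in the repository, so treat the input format here as illustrative only:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "JetBrains-Research/cmg-codet5-without-history"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# A toy diff; the exact diff format used during fine-tuning is defined
# in the repository, so this input is illustrative only.
diff = (
    "- def add(a, b):\n"
    "-     return a + b\n"
    "+ def add(a: int, b: int) -> int:\n"
    "+     return a + b\n"
)

inputs = tokenizer(diff, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=512, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```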
## Details
> For further details, please refer to:
> * **Paper**: TODO
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)
* This model is based on the [`Salesforce/codet5-base`](https://huggingface.co/Salesforce/codet5-base) checkpoint from [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://aclanthology.org/2021.emnlp-main.685/).
* This model was trained with commit diffs, *without* commit message history.
* This model was trained on the CommitChronicle dataset introduced in our study.
* Our hyperparameter setting mostly follows [RACE: Retrieval-augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372/). The exact values are provided in the table below, followed by a short configuration sketch:
| Hyperparameter | Value |
|:---|:---|
| Encoder context max length | 512 |
| Decoder context max length | 512 |
| Number of training epochs | 1 |
| Batch size | 32 |
| Optimizer | [AdamW](https://pytorch.org/docs/1.12/generated/torch.optim.AdamW.html?highlight=adamw#torch.optim.AdamW) |
| Warmup | [Linear](https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/optimizer_schedules#transformers.get_linear_schedule_with_warmup) |
| Number of warmup steps | 100 |
| Peak learning rate | 0.00002 |
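For reference, the sketch below shows how this optimizer and warmup configuration maps onto PyTorch and 🤗 Transformers. It is a minimal illustration, not the training code from our repository; the total step count in particular is a placeholder:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, get_linear_schedule_with_warmup

model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

# Hyperparameters from the table above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# num_training_steps is a placeholder: it depends on dataset size and
# batch size (steps per epoch * number of epochs).
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=10_000
)

# In the training loop, call after each optimization step:
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()
```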
## Available checkpoints
We also released checkpoints for other models fine-tuned as part of our study.
* Models trained *with commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-with-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-with-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-with-history`](https://huggingface.co/JetBrains-Research/cmg-race-with-history)
* Models trained *without commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-without-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-without-history) (this model)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-without-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-without-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-without-history`](https://huggingface.co/JetBrains-Research/cmg-race-without-history)
## Citation
```
TODO
```