---
license: mit
---

This repo contains a low-rank adapter (LoRA) for BLOOM-7b1, fine-tuned on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in English.

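To use the adapter, load the BLOOM-7b1 base model and attach the LoRA weights with [PEFT](https://github.com/huggingface/peft). The snippet below is a minimal sketch, not an official example: the Hugging Face repository id `MBZUAI/bactrian-en-bloom-7b1-lora` and the Alpaca-style prompt template are assumptions inferred from the training setup described further down in this card.

```
# Minimal usage sketch. Assumptions: the adapter is published as
# "MBZUAI/bactrian-en-bloom-7b1-lora" and expects an Alpaca-style prompt.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "bigscience/bloom-7b1"
adapter_id = "MBZUAI/bactrian-en-bloom-7b1-lora"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
model.eval()

# Alpaca-style prompt (assumed; adjust if your copy of finetune.py uses another template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is a low-rank adapter (LoRA)?\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If you prefer standalone weights, calling `model = model.merge_and_unload()` after loading folds the adapter into the base model.
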
### Dataset Creation

1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023; see the sketch after the figure below).

<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>

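For illustration, the snippet below sketches what the output-generation step (step 3) could look like with the `openai` Python client as it existed in April 2023. It is not the authors' actual script: the prompt construction and the example record are assumptions, with only the `gpt-3.5-turbo` model name and the instruction/input fields taken from this card and the Alpaca data format.

```
# Illustrative sketch of step 3 (output generation); not the authors' script.
# Assumes the legacy openai-python (<1.0) interface available in April 2023 and
# Alpaca-style records with "instruction" and "input" fields.
import openai

openai.api_key = "YOUR_API_KEY"

def generate_output(record):
    # Concatenate instruction and optional input into a single user message.
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n\n" + record["input"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Hypothetical record, for illustration only.
example = {
    "instruction": "Summarize the following text.",
    "input": "LoRA adds low-rank update matrices to frozen weights.",
}
print(generate_output(example))
```
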
### Training Parameters

The code for training the model is provided in our [GitHub repository](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters:

- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- LoRA _r_: 16
- LoRA target modules: query_key_value

That is:

```
python finetune.py \
    --base_model='bigscience/bloom-7b1' \
    --num_epochs=8 \
    --cutoff_len=1024 \
    --group_by_length \
    --output_dir='./bactrian-en-bloom-7b1-lora' \
    --lora_target_modules='query_key_value' \
    --lora_r=16 \
    --micro_batch_size=32
```

Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.

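For readers who want the equivalent configuration directly in [PEFT](https://github.com/huggingface/peft), the hyperparameters above roughly correspond to the `LoraConfig` sketched below. Only `r` and the target modules are stated in this card; `lora_alpha` and `lora_dropout` are assumptions taken from the Alpaca-LoRA defaults.

```
# A sketch of the corresponding PEFT configuration; lora_alpha and lora_dropout
# are Alpaca-LoRA defaults and are assumptions, not values stated in this card.
from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                # LoRA rank, as listed above
    target_modules=["query_key_value"],  # BLOOM attention projection
    lora_alpha=16,                       # assumed (Alpaca-LoRA default)
    lora_dropout=0.05,                   # assumed (Alpaca-LoRA default)
    bias="none",
)
# model = get_peft_model(base_model, lora_config) would then wrap the base model.
```

With the listed batch size of 128 and `--micro_batch_size=32`, the Alpaca-LoRA training loop derives 128 // 32 = 4 gradient-accumulation steps per optimizer update.
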
### Discussion of Biases

The data and model carry at least two known biases: (1) translation bias from the machine-translation step, and (2) potential English-culture bias in the translated dataset.

### Citation Information

```
@misc{li2023bactrianx,
      title={Bactrian-X: A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
      author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
      year={2023},
      eprint={2305.15011},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```