Merge pull request #279 from NanoCode012/feat/multi-gpu-readme
README.md (changed)
```diff
@@ -36,8 +36,6 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl
 pip3 install -e .
 pip3 install -U git+https://github.com/huggingface/peft.git
 
-accelerate config
-
 # finetune lora
 accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
 
```
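Net effect of the hunk above: `accelerate config` is dropped from the quickstart, which then reads as follows (the resulting README text, reproduced for convenience):

```shell
pip3 install -e .
pip3 install -U git+https://github.com/huggingface/peft.git

# finetune lora
accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
```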
````diff
@@ -532,6 +530,21 @@ Run
 accelerate launch scripts/finetune.py configs/your_config.yml
 ```
 
+#### Multi-GPU Config
+
+- llama FSDP
+```yaml
+fsdp:
+  - full_shard
+  - auto_wrap
+fsdp_config:
+  fsdp_offload_params: true
+  fsdp_state_dict_type: FULL_STATE_DICT
+  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
+```
+
+- llama Deepspeed: append `ACCELERATE_USE_DEEPSPEED=true` in front of finetune command
+
 ### Inference
 
 Pass the appropriate flag to the train command:
````
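A sketch of how the two multi-GPU options above are invoked. Paths are the illustrative ones from the README's own examples; that the FSDP keys sit inside the training config yml is an assumption based on the hunk's surrounding context:

```shell
# FSDP: the fsdp/fsdp_config keys go in the config yml itself,
# so the usual launch command is unchanged:
accelerate launch scripts/finetune.py configs/your_config.yml

# Deepspeed: prefix the env var so accelerate enables it for this run only:
ACCELERATE_USE_DEEPSPEED=true accelerate launch scripts/finetune.py configs/your_config.yml
```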
```diff
@@ -582,6 +595,10 @@ Try set `fp16: true`
 
 Try to turn off xformers.
 
+> Message about accelerate config missing
+
+It's safe to ignore it.
+
 ## Need help? 🙋🏻‍♂️
 
 Join our [Discord server](https://discord.gg/HhrNrHJPRb) where we can help you
```