Update quickstart. Add common error and contribution section.
README.md CHANGED
@@ -22,7 +22,7 @@
 | mpt | β | β | β | β | β | β |
 
 
-##
+## Quickstart ⚡
 
 **Requirements**: Python 3.9.
 
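Since this hunk only pins Python 3.9, a quick environment check before running the install commands in the next hunk can save a failed build; a minimal sketch (the `venv` directory name is just illustrative):

```bash
# Confirm the interpreter meets the stated requirement (Python 3.9)
python3 --version

# Optionally isolate the install in a virtual environment
python3 -m venv venv
source venv/bin/activate
```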
@@ -32,12 +32,15 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl
 pip3 install -e .[int4]
 
 accelerate config
-accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml
-```
 
+# finetune
+accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml
 
+# inference
+accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml --inference
+```
 
-##
+## Installation
 
 ### Environment
 
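The quickstart hunk above passes a YAML config path to `scripts/finetune.py`, so one way to adapt it is to copy the shipped example and point the same commands at the copy; a sketch under that assumption (`my-config.yml` is an illustrative name, not a file in the repo):

```bash
# Start from the example config used in the quickstart
cp examples/4bit-lora-7b/config.yml my-config.yml
# ...edit my-config.yml as needed, then:

# finetune with the copied config
accelerate launch scripts/finetune.py my-config.yml

# inference with the same config
accelerate launch scripts/finetune.py my-config.yml --inference
```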
@@ -108,6 +111,8 @@ Have dataset(s) in one of the following format (JSONL recommended):
 {"text": "..."}
 ```
 
+> Have some new format to propose? Check if it's already defined in [data.py](src/axolotl/utils/data.py) in `dev` branch!
+
 </details>
 
 Optionally, download some datasets, see [data/README.md](data/README.md)
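For the `{"text": "..."}` JSONL format shown in this hunk, a dataset is just one JSON object per line; a minimal sketch for smoke-testing (the `data/sample.jsonl` path and the sentences are made up):

```bash
# Write two rows, one JSON object with a "text" field per line
cat > data/sample.jsonl << 'EOF'
{"text": "The axolotl is a paedomorphic salamander."}
{"text": "It keeps its external gills into adulthood."}
EOF

# Sanity check: every line should parse as JSON
python3 -c 'import json; [json.loads(l) for l in open("data/sample.jsonl")]'
```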
@@ -309,6 +314,7 @@ Configure accelerate
 ```bash
 accelerate config
 
+# Edit manually
 # nano ~/.cache/huggingface/accelerate/default_config.yaml
 ```
 
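The `# Edit manually` comment added here points at `~/.cache/huggingface/accelerate/default_config.yaml`. For orientation, a hand-written single-GPU config tends to look roughly like the sketch below; the exact fields vary by `accelerate` version, so treat the values as assumptions and prefer the interactive `accelerate config` when in doubt:

```bash
# Illustrative minimal single-GPU accelerate config (field set is an assumption,
# not the canonical file that `accelerate config` generates)
cat > ~/.cache/huggingface/accelerate/default_config.yaml << 'EOF'
compute_environment: LOCAL_MACHINE
distributed_type: "NO"
mixed_precision: fp16
num_machines: 1
num_processes: 1
use_cpu: false
EOF
```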
@@ -330,5 +336,18 @@ If you are inferencing a pretrained LORA, pass
 
 ### Merge LORA to base (Dev branch 🧠)
 
-Add
+Add below flag to train command above
+
+```bash
+--merge_lora --lora_model_dir="./completed-model"
+```
+
+## Common Errors 🧰
+
+- Cuda out of memory: Please reduce `micro_batch_size` and/or `eval_batch_size`
+
+## Contributing 🤝
+
+Bugs? Please check for open issue else create a new [Issue](https://github.com/OpenAccess-AI-Collective/axolotl/issues/new).
 
+PRs are **greatly welcome**!