Nanobit committed on
Commit 3815c05
2 Parent(s): 259262b 85326bf

Merge pull request #61 from NanoCode012/feat/update-readme

Files changed (1)
  1. README.md +12 -6
README.md CHANGED
@@ -33,12 +33,12 @@ pip3 install -e .[int4]

accelerate config

- # finetune
- accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml
+ # finetune lora
+ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml

# inference
- accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml \
- --inference --lora_model_dir="./llama-7b-lora-int4"
+ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml \
+ --inference --lora_model_dir="./lora-out"
```

## Installation

@@ -199,8 +199,7 @@ datasets:
# The type of prompt to use for training. [alpaca, sharegpt, gpteacher, oasst, reflection]
type: alpaca # format OR format:prompt_style (chat/instruct)
data_files: # path to source data files
- shards: # true if use subset data. make sure to set `shards` param also
- shards: # number of shards to split dataset into
+ shards: # number of shards to split data into

# axolotl attempts to save the dataset as an arrow after packing the data together so
# subsequent training attempts load faster, relative path

@@ -326,6 +325,9 @@ debug:

# Seed
seed:
+
+ # Allow overwrite yml config using from cli
+ strict:
```

</details>

@@ -382,6 +384,10 @@ Please reduce any below

Try set `fp16: true`

+ > NotImplementedError: No operator found for `memory_efficient_attention_forward` ...
+
+ Try to turn off xformers.
+
## Need help? 🙋‍♂️

Join our [Discord server](https://discord.gg/HhrNrHJPRb) where we can help you
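
A note on the updated quickstart: the inference command now passes `--lora_model_dir="./lora-out"`, which presumably matches the training output directory of the example config. A minimal sketch of that relationship, assuming `examples/lora-openllama-3b/config.yml` sets `output_dir` this way (the key itself is not shown in this diff):

```yaml
# Sketch only: assumes the example config writes the LoRA adapter to ./lora-out.
output_dir: ./lora-out   # finetune step saves adapter weights here
# at inference time, point the CLI at the same path:
#   --inference --lora_model_dir="./lora-out"
```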
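For readers skimming the config changes, the keys touched here would sit in an axolotl YAML roughly as sketched below; the key names and comments come from the README section shown above, while the dataset path and concrete values are illustrative assumptions only.

```yaml
# Illustrative fragment, not a recommended configuration.
datasets:
  - path: ./data/alpaca_sample.jsonl   # hypothetical local dataset
    type: alpaca                       # format OR format:prompt_style (chat/instruct)
    shards: 4                          # number of shards to split data into

seed: 42                               # seed

# Allow overwrite yml config using from cli
strict: false                          # value assumed for illustration
```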
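The new troubleshooting entry only says to turn off xformers when the `memory_efficient_attention_forward` operator is missing. One plausible way to do that is through the config, sketched below; the `xformers_attention` key is an assumption on my part and is not part of this diff, so verify it against the full config reference.

```yaml
# Hypothetical fix for the NotImplementedError above: disable xformers attention.
# The key name `xformers_attention` is assumed, not taken from this change.
xformers_attention: false
```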