dev-slx committed on
Commit 66422a7 (1 parent: 4bbc065)

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -33,18 +33,18 @@ In this version, we employed our new, improved decomposable ELM techniques on a
 
 ## 1. Run ELM Turbo models with Huggingface Transformers library.
 There are three ELM Turbo slices derived from the `Meta-Llama-3.1-8B-Instruct` model:
-1. `slicexai/Llama3.1-elm-turbo-3B-instruct` (3B params)
+1. **`slicexai/Llama3.1-elm-turbo-3B-instruct` (3B params)**
 2. `slicexai/Llama3.1-elm-turbo-4B-instruct`(4B params)
 3. `slicexai/Llama3.1-elm-turbo-6B-instruct` (6B params)
 
 Make sure to update your transformers installation via pip install --upgrade transformers.
 
-Example - To run the `slicexai/Llama3.1-elm-turbo-4B-instruct`
+Example - To run the `slicexai/Llama3.1-elm-turbo-3B-instruct`
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 import torch
 
-elm_turbo_model = "slicexai/Llama3.1-elm-turbo-4B-instruct"
+elm_turbo_model = "slicexai/Llama3.1-elm-turbo-3B-instruct"
 model = AutoModelForCausalLM.from_pretrained(
     elm_turbo_model,
     device_map="cuda",
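The README snippet in the hunk is truncated at the `device_map` argument. A minimal sketch of how the updated example might continue end to end; the `torch_dtype` setting, chat-template usage, and generation parameters below are assumptions for illustration, not taken from this diff:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

ELM_TURBO_MODEL = "slicexai/Llama3.1-elm-turbo-3B-instruct"

def generate_reply(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the ELM Turbo slice and answer one prompt (requires a CUDA GPU)."""
    model = AutoModelForCausalLM.from_pretrained(
        ELM_TURBO_MODEL,
        device_map="cuda",           # place the weights on the GPU, as in the diff
        torch_dtype=torch.bfloat16,  # assumption: bf16 to reduce memory use
    )
    tokenizer = AutoTokenizer.from_pretrained(ELM_TURBO_MODEL)

    # Llama-3.1-style instruct models expect the chat template, not raw text.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate_reply("Summarize what an ELM Turbo slice is."))
```

The same pattern works for the 4B and 6B slices by swapping the model id.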