Update README.md

README.md
## Usage

Arctic is currently supported with `transformers` by leveraging the
[custom code feature](https://huggingface.co/docs/transformers/en/custom_models#using-a-model-with-custom-code).
To use it, you simply need to add `trust_remote_code=True` to your `AutoTokenizer` and `AutoModelForCausalLM` calls.
However, we recommend that you use a `transformers` version at or above 4.39:

```bash
pip install "transformers>=4.39.0"
```

Arctic leverages several features from [DeepSpeed](https://github.com/microsoft/DeepSpeed); you will need to
install DeepSpeed 0.14.2 or higher to get all of the required features:

```bash
pip install "deepspeed>=0.14.2"
```
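
If you want to double-check your environment before pulling the large checkpoint, a quick version sanity check can help. This is a minimal sketch using only `importlib.metadata` and `packaging` (the latter ships as a dependency of `pip` and `transformers`); the pins mirror the minimums above:

```python
# Sanity-check that transformers and deepspeed meet the minimum versions
# called out above (a sketch; adjust the pins if the requirements change).
from importlib.metadata import version
from packaging.version import Version

for pkg, minimum in [("transformers", "4.39.0"), ("deepspeed", "0.14.2")]:
    installed = Version(version(pkg))
    assert installed >= Version(minimum), f"{pkg} {installed} < {minimum}"
    print(f"{pkg} {installed} OK")
```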

### Inference examples

Due to the model size we recommend using a single 8xH100 instance from your
favorite cloud provider such as: AWS [p5.48xlarge](https://aws.amazon.com/ec2/instance-types/p5/),
Azure [ND96isr_H100_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nd-h100-v5-series), etc.
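
Before loading the model, it can help to confirm that all eight GPUs are visible to PyTorch. A minimal sketch (device names and memory sizes vary by provider):

```python
import torch

# Verify the expected 8-GPU topology before loading the checkpoint.
assert torch.cuda.is_available(), "CUDA is required for this example"
print(f"visible GPUs: {torch.cuda.device_count()}")
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"  cuda:{i} {props.name} {props.total_memory / 2**30:.0f} GiB")
```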

In this example we are using FP8 quantization provided by DeepSpeed in the backend. We can also use FP6
quantization by specifying `q_bits=6` in the `QuantizationConfig` config. The `"150GiB"` setting
for max_memory is required until we can get DeepSpeed's FP quantization supported natively as a
[HFQuantizer](https://huggingface.co/docs/transformers/main/en/hf_quantizer#build-a-new-hfquantizer-class), which we
are actively working on.

```python
import os
# enable hf_transfer for faster checkpoint downloads
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from deepspeed.linear.config import QuantizationConfig

tokenizer = AutoTokenizer.from_pretrained(
    "Snowflake/snowflake-arctic-instruct",
    trust_remote_code=True
)
quant_config = QuantizationConfig(q_bits=8)

model = AutoModelForCausalLM.from_pretrained(
    "Snowflake/snowflake-arctic-instruct",
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    device_map="auto",
    ds_quantization_config=quant_config,
    max_memory={i: "150GiB" for i in range(8)},
    torch_dtype=torch.bfloat16)

content = "5x + 35 = 7x - 60 + 10. Solve for x"
messages = [{"role": "user", "content": content}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
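
To try the FP6 path mentioned above, only the quantization config changes; everything else in the example stays the same. A sketch based on the `QuantizationConfig` usage shown here:

```python
# FP6 variant of the example above: only the q_bits value changes.
from deepspeed.linear.config import QuantizationConfig

quant_config = QuantizationConfig(q_bits=6)
# Then pass ds_quantization_config=quant_config to from_pretrained as before.
```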