Update README.md

README.md (CHANGED)
language:
- es
- th
pipeline_tag: text-generation
license: apache-2.0
library_name: vllm
base_model:
- mistral-community/pixtral-12b
- mistralai/Pixtral-12B-2409

# pixtral-12b-FP8-dynamic

## Model Overview
- **Model Architecture:** Pixtral (Llava)
- **Input:** Text/Image
- **Output:** Text
- **Model Optimizations:** (see the sketch below)
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similar to [mistralai/Pixtral-12B-2409](https://huggingface.co/mistralai/Pixtral-12B-2409), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 11/1/2024
- **Version:** 1.0
- **License(s):** [Apache 2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md)
- **Model Developers:** Neural Magic

Quantized version of [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b).
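
FP8-dynamic schemes like this one typically store weights in FP8 with precomputed scales, while activation scales are derived from each tensor at runtime rather than from an offline calibration set. The following is only an illustrative sketch of that idea in plain PyTorch (it assumes a build with `torch.float8_e4m3fn` support); the deployed model relies on vLLM's fused FP8 kernels, not this code:

```python
import torch

FP8_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8_dynamic(x: torch.Tensor):
    """Per-token dynamic FP8 quantization: the scale comes from the live tensor."""
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / FP8_MAX
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale

# Quantize a mock activation tensor and check the round-trip error.
x = torch.randn(4, 16)
x_fp8, scale = quantize_fp8_dynamic(x)
x_hat = x_fp8.to(torch.float32) * scale  # dequantize
print(f"max abs error: {(x - x_hat).abs().max():.4f}")
```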

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams

# Initialize the LLM
model_name = "neuralmagic/pixtral-12b-FP8-dynamic"
llm = LLM(model=model_name, max_model_len=10000)

# Create the prompt
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the image."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

# Set up sampling parameters
sampling_params = SamplingParams(temperature=0.2, max_tokens=100)

# Generate the response
outputs = llm.chat(messages, sampling_params=sampling_params)

# Print the generated text
for output in outputs:
    print(output.outputs[0].text)
```
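
Here `max_model_len=10000` simply bounds the maximum sequence length (and the KV-cache memory reserved for it); it can be raised or lowered to match available GPU memory.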

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

```
vllm serve neuralmagic/pixtral-12b-FP8-dynamic
```
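
Once the server is up, it can be queried with any OpenAI-compatible client. A minimal sketch using the `openai` Python package (the endpoint `http://localhost:8000/v1` is vLLM's default, and the API key is a placeholder):

```python
from openai import OpenAI

# Point the client at the local vLLM server; the key is a dummy value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/pixtral-12b-FP8-dynamic",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the image."},
                {"type": "image_url", "image_url": {"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"}},
            ],
        }
    ],
    temperature=0.2,
    max_tokens=100,
)
print(response.choices[0].message.content)
```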

## Creation