Upload README.md with huggingface_hub
README.md CHANGED
@@ -15,8 +15,12 @@ tags:
# Qwen2-7B-Instruct: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks

+
Qwen2-7B-Instruct is a state-of-the-art multilingual language model with 7.07 billion parameters, excelling in language understanding, generation, coding, and mathematics. AI Hub provides four QNN context binaries (shared weights) that can be deployed on Snapdragon 8 Elite with the Genie SDK.

+This model is an implementation of Qwen2-7B-Instruct found [here](https://github.com/QwenLM/Qwen2.5).
+
+
This is based on the implementation of Qwen2-7B-Instruct found
[here]({source_repo}). More details on model performance
across various devices can be found [here](https://aihub.qualcomm.com/models/qwen2_7b_instruct_quantized).
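For context, a commit like the one above can be produced with the `huggingface_hub` Python client. The sketch below uses the library's `upload_file` API; the repo ID is a placeholder, not the actual repository this commit was pushed to.

```python
from huggingface_hub import HfApi

api = HfApi()  # uses the cached login or the HF_TOKEN environment variable by default

# Upload (or overwrite) the model card; repo_id here is a placeholder.
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="your-org/qwen2_7b_instruct_quantized",
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```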