---
license: apache-2.0
inference: false
tags: [green, llmware-chat, p14, ov, emerald]
---

# qwen2.5-14b-instruct-ov

**qwen2.5-14b-instruct-ov** is an OpenVINO int4 quantized version of [Qwen2.5-14B-Instruct](https://www.huggingface.co/Qwen/Qwen2.5-14B-Instruct), providing a very fast inference implementation optimized for AI PCs using Intel GPU, CPU and NPU.

This model is part of the latest release series from Qwen, and is one of the largest models in the collection.

This model will run on an AI PC with GPU acceleration and 32 GB of memory. Please note that loading can take a little time, but inference is still quite fast.

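The snippet below is a minimal sketch of one way to run the model locally with the OpenVINO GenAI API. It is not an official quickstart; the repo id, local folder name, device string, and prompt are illustrative assumptions that should be adjusted for your setup.

```python
# Minimal sketch: run the model locally with OpenVINO GenAI.
# Assumes the model files were downloaded first, e.g.:
#   huggingface-cli download llmware/qwen2.5-14b-instruct-ov --local-dir qwen2.5-14b-instruct-ov
# (the repo id and folder name are assumptions based on this model card)

import openvino_genai as ov_genai

# Point the pipeline at the local model folder and pick a device: "GPU", "CPU", or "NPU"
pipe = ov_genai.LLMPipeline("qwen2.5-14b-instruct-ov", "GPU")

# Generate a short chat-style completion
print(pipe.generate("What are the advantages of int4 quantization?", max_new_tokens=256))
```

Passing "GPU" keeps generation on the Intel GPU; "CPU" or "NPU" can be substituted depending on the hardware available.
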
### Model Description

- **Developed by:** Qwen
- **Quantized by:** llmware
- **Model type:** qwen2.5
- **Parameters:** 14 billion
- **Model Parent:** Qwen/Qwen2.5-14B-Instruct
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Chat, general-purpose LLM
- **Quantization:** int4

## Model Card Contact

[llmware on github](https://www.github.com/llmware-ai/llmware)

[llmware on hf](https://www.huggingface.co/llmware)

[llmware website](https://www.llmware.ai)