pooja-ganesh committed
Commit 35ef64f
1 Parent(s): 7faedd6

Update README.md

Files changed (1):
README.md (+2 −2)
README.md CHANGED
@@ -9,9 +9,9 @@ base_model:
 - ## Introduction
 This model was created using Quark Quantization, followed by OGA Model Builder, and finalized with post-processing for NPU deployment.
 - ## Quantization Strategy
-- AWQ / Group 128 / Asymmetric / BF16 activations
+- AWQ / Group 128 / Asymmetric / BF16 activations / UINT4 Weights
 - ## Quick Start
-For quickstart, refer to npu-llm-artifacts_1.3.zip available in [RyzenAI-SW-EA](https://account.amd.com/en/member/ryzenai-sw-ea.html)
+For quickstart, refer to npu-llm-artifacts_1.3.0.zip available in [RyzenAI-SW-EA](https://account.amd.com/en/member/ryzenai-sw-ea.html)
 
 #### Evaluation scores
 The perplexity measurement is run on the wikitext-2-raw-v1 (raw data) dataset provided by Hugging Face. Perplexity score measured for prompt length 2k is 12.27.
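The scheme named in the diff (AWQ / Group 128 / Asymmetric / UINT4 weights) stores each group of 128 weights as 4-bit unsigned integers plus a per-group scale and zero point. The sketch below is a minimal illustration of the asymmetric UINT4 step only; it is not the actual pipeline (the model was produced with Quark, which additionally uses the AWQ algorithm to pick activation-aware scales), and all function names here are made up for illustration.

```python
import random

def quantize_group_asym_uint4(values):
    """Asymmetric UINT4 quantization of one weight group (illustrative).

    Maps the group's [min, max] range onto the 16 levels 0..15 and
    records a per-group scale and zero point for dequantization.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 15.0 or 1.0          # 4 bits -> 2**4 - 1 = 15 steps
    zero_point = round(-lo / scale)          # integer offset so lo maps near 0
    q = [min(15, max(0, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Reconstruct approximate BF16/FP weights from UINT4 codes."""
    return [(qi - zero_point) * scale for qi in q]

# Quantize one group of 128 weights and check the reconstruction error,
# which is bounded by one quantization step (the scale).
random.seed(0)
group = [random.uniform(-0.2, 0.3) for _ in range(128)]
q, s, zp = quantize_group_asym_uint4(group)
deq = dequantize(q, s, zp)
assert all(0 <= qi <= 15 for qi in q)
assert max(abs(a - b) for a, b in zip(group, deq)) <= s
```

The asymmetric variant (separate zero point per group) is what lets a skewed weight range use all 16 levels, unlike symmetric schemes centered on zero.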
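Perplexity, the metric reported in the README, is the exponential of the mean negative log-likelihood per token. A small sketch with toy log-probabilities (not real model output):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token).

    token_logprobs: natural-log probabilities the model assigned to
    each ground-truth token (toy stand-ins here).
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigned probability 1/12.27 to every token of a 2k
# prompt would score exactly the reported perplexity of 12.27.
print(perplexity([math.log(1 / 12.27)] * 2048))  # ≈ 12.27
```

Lower is better: a perplexity of 12.27 means the model is, on average, as uncertain as a uniform choice among about 12 tokens at each step.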