---
license: llama2
language:
- en
base_model:
- meta-llama/Llama-2-7b-chat
pipeline_tag: text-generation
tags:
- amd
- meta
- facebook
- llama
- llama-2
---

# meta-llama/Llama-2-7b-chat

- ## Introduction

This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) quantization with calibration samples from the Pile dataset, and then applying the [onnxruntime-genai model builder](https://github.com/microsoft/onnxruntime-genai/tree/main/src/python/py/models) to convert the result to ONNX.
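
The exact export settings for this release are not published in this card. As a rough sketch only, a quantized Llama-2 checkpoint can be converted by invoking the onnxruntime-genai model builder; the paths, precision, and execution provider below are placeholder assumptions, not the options used to produce this model.

```python
# Minimal sketch: invoke the onnxruntime-genai model builder as a subprocess.
# All paths and options below are illustrative placeholders, not the exact
# settings used to produce this model.
import subprocess

subprocess.run(
    [
        "python", "-m", "onnxruntime_genai.models.builder",
        "-i", "./llama2-7b-chat-quark",   # hypothetical folder holding the quantized checkpoint
        "-o", "./llama2-7b-chat-onnx",    # output folder for the ONNX model
        "-p", "int4",                     # target precision (placeholder)
        "-e", "cpu",                      # execution provider (placeholder)
    ],
    check=True,
)
```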

- ## Quantization Strategy

- ***Quantized Layers***: TBD

- ***Weight***: TBD

- ## Quick Start

For a quick start, refer to AMD [RyzenAI-SW-EA](https://account.amd.com/en/member/ryzenai-sw-ea.html).
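
Once the early-access package is set up, the exported model can be exercised with the onnxruntime-genai Python API. The snippet below is a generic sketch: the model folder path and prompt are placeholders, and API details can differ slightly between onnxruntime-genai releases.

```python
# Minimal generation sketch with onnxruntime-genai.
# "./llama2-7b-chat-onnx" is a placeholder for the downloaded model folder.
import onnxruntime_genai as og

model = og.Model("./llama2-7b-chat-onnx")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

prompt = "[INST] What is the capital of France? [/INST]"
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(input_tokens)

# Stream decoded tokens until the generator reports completion.
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```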

#### Evaluation scores

The perplexity measurement is run on the wikitext-2-raw-v1 (raw data) dataset provided by Hugging Face. The perplexity score measured at a prompt length of 2k is 7.153726.
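
The evaluation script itself is not included in this card. As an illustration of the protocol only, perplexity over wikitext-2-raw-v1 can be computed by scoring non-overlapping 2k-token windows; the sketch below uses the float Hugging Face checkpoint as a stand-in, whereas the reported score was measured on the quantized ONNX model with a separate harness.

```python
# Illustrative perplexity sketch on wikitext-2-raw-v1 with 2k-token windows.
# Uses the float HF checkpoint as a stand-in; the 7.153726 quoted above was
# measured on the quantized ONNX model, not by this script.
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

window = 2048  # matches the 2k prompt length quoted above
nll_sum, n_targets = 0.0, 0
for start in range(0, ids.size(1), window):
    chunk = ids[:, start : start + window].to(model.device)
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        # loss is the mean NLL over chunk.size(1) - 1 predicted tokens
        loss = model(chunk, labels=chunk).loss
    nll_sum += loss.item() * (chunk.size(1) - 1)
    n_targets += chunk.size(1) - 1

print(f"perplexity: {math.exp(nll_sum / n_targets):.6f}")
```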

#### License

Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.