---
license: llama2
language:
- en
base_model:
- meta-llama/Llama-2-7b-chat
pipeline_tag: text-generation
tags:
- amd
- meta
- facebook
- llama
- llama-2
---
# meta-llama/Llama-2-7b-chat
- ## Introduction
This model was created by quantizing the base model with [Quark](https://quark.docs.amd.com/latest/index.html), using calibration samples from the Pile dataset, and then converting the quantized model to ONNX with the [onnxruntime-genai model builder](https://github.com/microsoft/onnxruntime-genai/tree/main/src/python/py/models).
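The Quark configuration used for this model is not published on this card, so only the ONNX conversion step is sketched below. The model ID, output directory, precision, and execution provider in this snippet are illustrative assumptions, and the builder's flags can change between onnxruntime-genai releases, so consult `python -m onnxruntime_genai.models.builder --help` for your installed version.

```python
# Hypothetical sketch of the ONNX conversion step with the onnxruntime-genai
# model builder. All arguments below are assumptions for illustration only.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "onnxruntime_genai.models.builder",
        "-m", "meta-llama/Llama-2-7b-chat-hf",  # source Hugging Face checkpoint (assumed ID)
        "-o", "./llama2-7b-chat-onnx",          # output folder for the generated ONNX model
        "-p", "int4",                           # target precision (assumed)
        "-e", "cpu",                            # execution provider (assumed)
    ],
    check=True,
)
```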
- ## Quantization Strategy
- ***Quantized Layers***: TBD
- ***Weight***: TBD
- ## Quick Start
For a quick start, refer to the AMD [RyzenAI-SW-EA](https://account.amd.com/en/member/ryzenai-sw-ea.html) page.
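The RyzenAI-SW-EA package contains the supported end-to-end flow for Ryzen AI hardware. As a rough illustration of how a model-builder ONNX folder can be driven with `onnxruntime-genai`, the sketch below assumes a local model directory named `./llama2-7b-chat-onnx` and the recent generator API (e.g. `append_tokens`); method names differ between releases, and this is not the official Ryzen AI quick start.

```python
# Minimal generation sketch with onnxruntime-genai. Paths and search options
# are assumptions; the API shown follows recent onnxruntime-genai releases.
import onnxruntime_genai as og

model = og.Model("./llama2-7b-chat-onnx")  # folder produced by the model builder (assumed path)
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)  # generation budget (illustrative)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is Ryzen AI?"))

# Decode token by token and print the streamed text.
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```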
#### Evaluation scores
Perplexity is measured on the wikitext-2-raw-v1 (raw data) dataset provided by Hugging Face. The perplexity score measured at a prompt length of 2k is 7.153726.
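The exact harness behind the reported score is not described on this card. The sketch below only illustrates the standard methodology (non-overlapping 2048-token windows over the wikitext-2-raw-v1 test split), and it uses the original PyTorch checkpoint via `transformers` as an assumption; it is not the pipeline that produced the 7.153726 figure for the quantized ONNX model.

```python
# Illustrative perplexity measurement on wikitext-2-raw-v1 with 2048-token windows.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed source checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
model.eval()

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

seq_len = 2048  # prompt length used for the reported score
nlls = []
for start in range(0, encodings.input_ids.size(1) - seq_len, seq_len):
    input_ids = encodings.input_ids[:, start : start + seq_len].to(model.device)
    with torch.no_grad():
        # Shifted-label cross-entropy gives the average negative log-likelihood per token.
        nlls.append(model(input_ids, labels=input_ids).loss)

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```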
#### License
Modifications Copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.
license: llama2