Llama-3-Taiwan-70B-Instruct - GPTQ
- Model creator: Yen-Ting Lin
- Original model: Llama-3-Taiwan-70B-Instruct
Description
This repo contains GPTQ model files for Llama-3-Taiwan-70B-Instruct.
Quantization parameters
- Bits: 4
- Group Size: 128
- Act Order: Yes
- Damp %: 0.1
- Seq Len: 2048
- Size: 37.07 GB
It took about 6.5 hours to quantize on an H100.
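As a rough illustration only, the parameters above map onto the `GPTQConfig` integration in transformers as sketched below. The calibration dataset, tokenizer handling, and output path are assumptions, not the exact script used to produce this repo.

```python
# Hedged sketch: quantizing the original model with the stated parameters
# via the transformers GPTQ integration (requires optimum and auto-gptq/gptqmodel).
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "yentinglin/Llama-3-Taiwan-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(
    bits=4,              # Bits: 4
    group_size=128,      # Group Size: 128
    desc_act=True,       # Act Order: Yes
    damp_percent=0.1,    # Damp %: 0.1
    model_seqlen=2048,   # Seq Len: 2048
    dataset="c4",        # calibration set is an assumption, not stated in this card
    tokenizer=tokenizer,
)

# Quantization runs layer by layer; this card reports roughly 6.5 hours on an H100.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)

model.save_pretrained("Llama-3-Taiwan-70B-Instruct-GPTQ")
tokenizer.save_pretrained("Llama-3-Taiwan-70B-Instruct-GPTQ")
```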
Model tree for minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ
- Base model: meta-llama/Meta-Llama-3-70B
- Finetuned: yentinglin/Llama-3-Taiwan-70B-Instruct
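For local inference, a minimal loading sketch with transformers is shown below; the prompt and generation settings are illustrative assumptions, and the GPTQ kernels (auto-gptq or gptqmodel, plus optimum) must be installed.

```python
# Hedged sketch: loading the 4-bit GPTQ checkpoint locally with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # spread the ~37 GB of weights across available GPUs
    torch_dtype=torch.float16,
)

# Illustrative chat prompt ("Hello, please introduce yourself.").
messages = [{"role": "user", "content": "你好，請自我介紹。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```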