# Mixtral-8x7B-Instruct-v0.1-FP8-v3
- Weights and activations are per-tensor quantized to float8_e4m3.
- Quantized with AutoFP8, using the updated activation scaling factor names.
- Calibration dataset: Ultrachat-200k
- Samples: 4096
- Sequence length: 8192
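To illustrate what per-tensor FP8 (float8_e4m3) quantization means, here is a minimal NumPy sketch. It is not AutoFP8's implementation; it only shows the core idea that a single scale maps the tensor's absolute maximum onto the largest finite e4m3 value (448), and that values are clamped into that range before the cast. Rounding to the actual e4m3 grid is left to the real kernel.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3


def per_tensor_scale(x: np.ndarray) -> float:
    """One scale for the whole tensor: amax is mapped to the FP8 max."""
    return float(np.max(np.abs(x)) / FP8_E4M3_MAX)


def fake_quantize(x: np.ndarray, scale: float) -> np.ndarray:
    """Scale into the FP8 range, clamp, then dequantize back.

    A real kernel would also round to the nearest representable
    e4m3 value between the clip and the rescale.
    """
    q = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q * scale


rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
s = per_tensor_scale(w)
w_dq = fake_quantize(w, s)
```

During calibration (here, 4096 UltraChat-200k samples at sequence length 8192), the activation scales are computed the same way, from the observed activation amax rather than the weights.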
## Evaluation
TBA