---
language: en
tags:
  - bert
  - rte
  - glue
  - torchdistill
  - nlp
  - int8
  - neural-compressor
  - Intel® Neural Compressor
  - text-classification
  - PostTrainingStatic
license: apache-2.0
datasets:
  - rte
metrics:
  - f1
---

# INT8 bert-large-uncased-rte-int8-static

## Post-training static quantization

### PyTorch

This is an INT8 PyTorch model quantized with Intel® Neural Compressor.

The original FP32 model comes from the fine-tuned model [yoshitomo-matsubara/bert-large-uncased-rte](https://huggingface.co/yoshitomo-matsubara/bert-large-uncased-rte).
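
The card does not include the quantization recipe. A minimal sketch of how a comparable INT8 model could be produced with optimum-intel's `INCQuantizer` is shown below; the preprocessing, maximum sequence length, number of calibration samples, and output directory are illustrative assumptions, not the settings used for this checkpoint.

```python
from functools import partial

from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "yoshitomo-matsubara/bert-large-uncased-rte"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def preprocess_function(examples, tokenizer):
    # RTE examples are premise/hypothesis pairs (sentence1, sentence2).
    return tokenizer(
        examples["sentence1"],
        examples["sentence2"],
        padding="max_length",
        max_length=128,
        truncation=True,
    )

# Static post-training quantization needs a calibration set to collect activation ranges.
quantization_config = PostTrainingQuantConfig(approach="static")
quantizer = INCQuantizer.from_pretrained(model)
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="rte",
    preprocess_function=partial(preprocess_function, tokenizer=tokenizer),
    num_samples=100,  # illustrative; not necessarily what was used for this checkpoint
    dataset_split="train",
)
quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_dataset,
    save_directory="bert-large-uncased-rte-int8-static",
)
```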

#### Test result

|                    | INT8   | FP32   |
|--------------------|--------|--------|
| Accuracy (eval-f1) | 0.7365 | 0.7401 |
| Model size (MB)    | 1244   | 1349   |

#### Load with Intel® Neural Compressor:

```python
from optimum.intel import INCModelForSequenceClassification

# Loads the INT8 checkpoint together with its quantization configuration.
model_id = "Intel/bert-large-uncased-rte-int8-static"
int8_model = INCModelForSequenceClassification.from_pretrained(model_id)
```
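
The loaded model behaves like a regular `transformers` sequence-classification model. A minimal inference sketch, with a made-up premise/hypothesis pair for illustration:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)

# RTE is a sentence-pair (textual entailment) task: encode premise and hypothesis together.
inputs = tokenizer(
    "A cat is sleeping on the sofa.",
    "There is an animal on the sofa.",
    return_tensors="pt",
)
outputs = int8_model(**inputs)
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(int8_model.config.id2label[predicted_class_id])
```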