---
tags:
  - sparse
  - sparsity
  - quantized
  - onnx
  - embeddings
  - int8
license: mit
language:
  - en
---

# gte-base-sparse

This is the sparsified ONNX variant of the gte-base embeddings model, created with DeepSparse Optimum for ONNX export/inference and Neural Magic's Sparsify for one-shot INT8 quantization and 50% unstructured pruning.
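Whichever runtime executes the ONNX graph, the raw model output is a matrix of token-level embeddings that is typically mean-pooled over real (non-padding) tokens and L2-normalized to produce one sentence embedding. A minimal NumPy sketch of that post-processing step (the shapes and toy values are illustrative assumptions, not outputs of this model):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings per sentence, ignoring padding positions.

    token_embeddings: (batch, seq_len, dim)
    attention_mask:   (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid divide-by-zero
    return summed / counts

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each row to unit length so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy batch: 1 sentence, 3 token slots (last one is padding), embedding dim 2.
tokens = np.array([[[1.0, 0.0], [3.0, 0.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
pooled = mean_pool(tokens, mask)   # padded token is excluded from the average
unit = l2_normalize(pooled)        # unit-length sentence embedding
```

Unit-normalizing here means downstream similarity search can use a plain dot product.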

Current list of sparse and quantized gte-small ONNX models:

| Links | Sparsification Method |
| --- | --- |
| zeroshot/gte-small-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/gte-small-quant | Quantization (INT8) |

BGE models using this architecture:

| Links | Sparsification Method |
| --- | --- |
| zeroshot/bge-large-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-large-en-v1.5-quant | Quantization (INT8) |
| zeroshot/bge-base-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-base-en-v1.5-quant | Quantization (INT8) |
| zeroshot/bge-small-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-small-en-v1.5-quant | Quantization (INT8) |

For general questions about these models or sparsification methods, reach out to the engineering team on our community Slack.

;)