---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
language: en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- openvino
- nncf
- 8-bit
base_model_relation: quantized
---
This model is an 8-bit quantized version of [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), converted to the OpenVINO format. It was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space using [optimum-intel](https://github.com/huggingface/optimum-intel).
First, make sure you have `optimum-intel` installed with OpenVINO support:
```bash
pip install optimum[openvino]
```
You can then load the model as follows:
```python
from optimum.intel import OVModelForFeatureExtraction

model_id = "AIFunOver/all-MiniLM-L6-v2-openvino-8bit"
# Load the quantized OpenVINO model for feature extraction
model = OVModelForFeatureExtraction.from_pretrained(model_id)
```
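To turn the model outputs into sentence embeddings, you can tokenize the input and apply mean pooling over the token embeddings, as in the standard sentence-transformers recipe for the base model. Below is a minimal sketch; the example sentences are placeholders, and it assumes the tokenizer files are available in this repository (otherwise load the tokenizer from the base model):
```python
from optimum.intel import OVModelForFeatureExtraction
from transformers import AutoTokenizer

model_id = "AIFunOver/all-MiniLM-L6-v2-openvino-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForFeatureExtraction.from_pretrained(model_id)

sentences = ["This is an example sentence", "Each sentence is converted"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)

# Mean pooling over token embeddings, ignoring padding tokens
token_embeddings = outputs.last_hidden_state
mask = inputs["attention_mask"].unsqueeze(-1).expand(token_embeddings.size()).float()
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(embeddings.shape)  # (2, 384) for all-MiniLM-L6-v2
```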