DeBERTa-v3-base-onnx-quantized
This model is a quantized ONNX export of the base model sileod/deberta-v3-base-tasksource-nli. To use this model you need to have onnxruntime installed on your machine.
To try this model out, you can check out my Hugging Face Space.
The source code for the Hugging Face application can be found on GitHub.
To run this model on your machine, use the following code. Note that this model is optimized for CPUs with AVX2 support.
- Install dependencies
pip install transformers optimum[onnxruntime]
- Run the model:
# load libraries
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline
# load model components
MODEL_ID = "pitangent-ds/deberta-v3-nli-onnx-quantized"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = ORTModelForSequenceClassification.from_pretrained(MODEL_ID)
# build the zero-shot classification pipeline on top of the ONNX model
classifier = pipeline("zero-shot-classification", tokenizer=tokenizer, model=model)
# inference
text = "The jacket that I bought is awesome"
candidate_labels = ["positive", "negative"]
results = classifier(text, candidate_labels)
print(results)
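The zero-shot pipeline returns a dict containing the input sequence plus the candidate labels and their scores, sorted by descending score. A minimal post-processing sketch is shown below; the dict uses illustrative values, not real model output, so it runs without downloading the model:

```python
# Illustrative zero-shot output; in practice this dict comes from the
# classifier call above (labels/scores are sorted by descending score).
results = {
    "sequence": "The jacket that I bought is awesome",
    "labels": ["positive", "negative"],
    "scores": [0.98, 0.02],
}

# The top prediction is always at index 0.
top_label = results["labels"][0]
top_score = results["scores"][0]
print(f"{top_label}: {top_score:.2f}")
```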