This model is an ONNX-optimized version of the original roberta_toxicity_classifier model. It has been tailored specifically for GPUs and may show different performance characteristics when run on CPUs.
Dependencies
Please install the following dependency before you begin working with the model:
pip install "optimum[onnxruntime-gpu]"
How to use
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline
# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/roberta_toxicity_classifier_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/roberta_toxicity_classifier_onnx')
# prepare the pipeline and generate inferences (device=0 selects the first GPU)
pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=0, accelerator="ort")
res = pipe(['I hate you', 'I love you'], batch_size=64, truncation="only_first")
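The pipeline returns one dict per input, each with a `label` and a `score`. A minimal post-processing sketch is shown below; the label names `toxic`/`neutral` and the score values are assumptions based on the upstream roberta_toxicity_classifier and may differ for your checkpoint:

```python
# Illustrative pipeline output for the two inputs above (scores are made up):
res = [
    {'label': 'toxic', 'score': 0.98},    # 'I hate you'
    {'label': 'neutral', 'score': 0.99},  # 'I love you'
]

# Keep only texts the classifier flags as toxic above a confidence threshold.
texts = ['I hate you', 'I love you']
THRESHOLD = 0.5
flagged = [
    text
    for text, pred in zip(texts, res)
    if pred['label'] == 'toxic' and pred['score'] >= THRESHOLD
]
print(flagged)  # ['I hate you']
```

Adjust `THRESHOLD` to trade precision against recall for your use case.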
Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.