Text Classification
Transformers
PyTorch
TensorBoard
distilbert
Generated from Trainer
Eval Results (legacy)
text-embeddings-inference
Instructions for using autoevaluate/binary-classification with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use autoevaluate/binary-classification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="autoevaluate/binary-classification")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("autoevaluate/binary-classification")
model = AutoModelForSequenceClassification.from_pretrained("autoevaluate/binary-classification")
```

- Notebooks
- Google Colab
- Kaggle
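Under the hood, `AutoModelForSequenceClassification` returns raw logits, and the pipeline converts them to class probabilities with a softmax before picking the top label. A minimal, self-contained sketch of that post-processing step (using dummy logits in place of a real forward pass, so it runs without downloading the model):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy logits standing in for model(**inputs).logits[0] from a real forward pass.
logits = [-1.2, 2.3]
probs = softmax(logits)

# The predicted class is the index with the highest probability;
# the pipeline maps this index to a label name via the model's config.
predicted = max(range(len(probs)), key=probs.__getitem__)
print(predicted, [round(p, 3) for p in probs])
```

This mirrors what `pipe("some text")` returns: a list of `{"label": ..., "score": ...}` dicts, where the score is the softmaxed probability of the predicted class.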
binary-classification / runs / May25_09-48-27_6bf2abbffa14 / events.out.tfevents.1653472536.6bf2abbffa14.72.6
- Xet hash: 2bc2c096e8e7c1d42b305bb50487700932fe659dd0217d120fc96e3edb8a6168
- Size of remote file: 363 Bytes
- SHA256: b5eed4f7dbe77a5d23b35dc824dc585f9ac9148609156a531e19c5c157705033
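The listed SHA256 can be used to verify a downloaded copy of the file. A small sketch using Python's standard `hashlib` (the filename in the commented usage is just the one from the path above; adjust to wherever you saved the file):

```python
import hashlib

def sha256_hexdigest(path, chunk_size=8192):
    # Stream the file in chunks so large files don't load fully into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: compare against the digest listed on the file page.
# expected = "b5eed4f7dbe77a5d23b35dc824dc585f9ac9148609156a531e19c5c157705033"
# assert sha256_hexdigest("events.out.tfevents.1653472536.6bf2abbffa14.72.6") == expected
```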