autoevaluate/binary-classification

Tags: Text Classification, Transformers, PyTorch, TensorBoard, distilbert, Generated from Trainer, Eval Results (legacy), text-embeddings-inference
Instructions for using autoevaluate/binary-classification with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use autoevaluate/binary-classification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="autoevaluate/binary-classification")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("autoevaluate/binary-classification")
model = AutoModelForSequenceClassification.from_pretrained("autoevaluate/binary-classification")
```

- Notebooks
- Google Colab
- Kaggle
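The `text-classification` pipeline above wraps three steps: tokenization, a forward pass through the sequence-classification head, and a softmax over the two output logits to produce a label and score. A minimal sketch of that final step, using made-up logits for a binary head (the values are illustrative, not taken from this model):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a two-class (binary) classification head
logits = [-1.2, 2.3]
probs = softmax(logits)

# The pipeline reports the argmax class and its probability as the score
label_id = max(range(len(probs)), key=probs.__getitem__)
print(label_id, round(probs[label_id], 4))  # → 1 0.9707
```

The pipeline maps `label_id` to a human-readable name via the model config's `id2label`; with the direct `AutoModelForSequenceClassification` route you apply this softmax to `model(**inputs).logits` yourself.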
- Xet hash: ca95e89dbde994aa3b8808dfcda6e8779fd2dd3de59b66ccb1ef9cd964f91f70
- Size of remote file: 3.25 kB
- SHA256: b33780b2a72efeb8dffc41dafe7bf51ae167ca18264032115544bb99b6e124f9