Tags: Summarization · Transformers · PyTorch · TensorBoard · t5 · text2text-generation · Generated from Trainer · Eval Results (legacy) · text-generation-inference
Instructions for using autoevaluate/summarization with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use autoevaluate/summarization with Transformers (a generation sketch follows the notebook links below):
```python
# Use a pipeline as a high-level helper.
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="autoevaluate/summarization")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("autoevaluate/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("autoevaluate/summarization")
```
- Notebooks
- Google Colab
- Kaggle
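As noted above, here is a minimal sketch of generating a summary with the directly loaded model. The sample text, the `summarize:` task prefix (conventional for T5-style checkpoints), and the generation parameters are illustrative assumptions, not values taken from this model card.

```python
# Minimal generation sketch for the directly loaded model.
# The sample text and generation parameters are illustrative, not from the card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("autoevaluate/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("autoevaluate/summarization")

# T5-style checkpoints conventionally expect a task prefix for summarization.
text = "summarize: " + "The quick brown fox jumped over the lazy dog. " * 10

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```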
- Xet hash: 7a08bcbcc5b096654f99e4b23ab87097fac7111675c9255b82bb4a410aeff8b3
- Size of remote file: 242 MB
- SHA256: 22de12c0c265b8f038661fd969f81f08037588a78a44d7eb8cab2be49d913cb4
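To check a downloaded copy of the weights against the SHA256 listed above, a small verification sketch; the local filename is a hypothetical placeholder for wherever your download landed:

```python
# Verify a downloaded file against the SHA256 listed on this page.
import hashlib

EXPECTED = "22de12c0c265b8f038661fd969f81f08037588a78a44d7eb8cab2be49d913cb4"
path = "pytorch_model.bin"  # hypothetical local filename; point at your download

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB pieces
        h.update(chunk)

print("OK" if h.hexdigest() == EXPECTED else "MISMATCH")
```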
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks to accelerate uploads and downloads.
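For intuition only, here is a toy content-defined chunking sketch: chunk boundaries are derived from the bytes themselves, so repeated regions of a file produce identical chunks that need to be stored only once. The window size, boundary mask, and hash below are arbitrary illustrative choices, not Xet's actual chunker or parameters.

```python
# Toy content-defined chunking: repeated data yields repeated chunks.
# Illustrative only; Xet's real algorithm and parameters differ.
import hashlib
import random

def chunk(data: bytes, window: int = 16, mask: int = (1 << 6) - 1):
    """Cut a chunk wherever a hash of the trailing `window` bytes hits the mask."""
    out, start = [], 0
    for i in range(window, len(data)):
        h = int.from_bytes(hashlib.sha256(data[i - window:i]).digest()[:4], "big")
        if (h & mask) == 0 and i - start >= window:
            out.append(data[start:i])
            start = i
    out.append(data[start:])
    return out

random.seed(0)
block = bytes(random.randrange(256) for _ in range(4096))
data = block * 8  # a file with heavy internal repetition

parts = chunk(data)
unique = {hashlib.sha256(p).hexdigest() for p in parts}
print(f"{len(parts)} chunks, {len(unique)} unique")  # far fewer unique chunks
```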