Zero-Shot Image Classification
Transformers
PyTorch
altclip
bilingual
English
Chinese
Instructions for using BAAI/AltCLIP with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use BAAI/AltCLIP with Transformers (a fuller inference sketch follows the notebook links below):

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="BAAI/AltCLIP")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
model = AutoModelForZeroShotImageClassification.from_pretrained("BAAI/AltCLIP")
```

- Notebooks
- Google Colab
- Kaggle
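The "load model directly" snippet above stops after loading. Below is a minimal sketch of the remaining inference step, reusing the parrots image and candidate labels from the pipeline example; it assumes the standard CLIP-style zero-shot pattern, where the model's `logits_per_image` similarity scores are softmaxed over the candidate labels. Since AltCLIP is bilingual, Chinese candidate labels should work the same way.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
model = AutoModelForZeroShotImageClassification.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# The processor tokenizes the label texts and resizes/normalizes the image
# (using the preprocessing settings shown in the config below).
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores;
# softmax turns them into per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.4f}")
```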
The image preprocessing configuration (preprocessor_config.json):

```json
{
  "crop_size": {
    "height": 224,
    "width": 224
  },
  "do_center_crop": true,
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "feature_extractor_type": "CLIPFeatureExtractor",
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "CLIPImageProcessor",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "processor_class": "AltCLIPProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "shortest_edge": 224
  }
}
```
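Reading the config: images are resized so the shortest edge is 224 (`resample: 3` is PIL's bicubic filter), center-cropped to 224×224, rescaled by `0.00392156862745098` (exactly 1/255), and normalized with the usual CLIP mean/std. A quick sketch to confirm these settings at runtime; the `image_processor` attribute name assumes a recent transformers release:

```python
import numpy as np
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
ip = processor.image_processor

# These attributes mirror the JSON above.
print(ip.crop_size)       # {'height': 224, 'width': 224}
print(ip.image_mean)      # [0.48145466, 0.4578275, 0.40821073]
print(ip.rescale_factor)  # 0.00392156862745098 == 1/255

# A synthetic test image stands in for real data here.
image = Image.fromarray(np.random.randint(0, 256, (320, 480, 3), dtype=np.uint8))
pixel_values = processor(images=image, return_tensors="pt").pixel_values
print(pixel_values.shape)  # torch.Size([1, 3, 224, 224]), per crop_size above
```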