Image Feature Extraction

Tags: Transformers, JAX, Safetensors, MLX, PyTorch, aimv2_vision_model, vision, custom_code
Instructions to use apple/aimv2-large-patch14-native with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use apple/aimv2-large-patch14-native with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-feature-extraction", model="apple/aimv2-large-patch14-native", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("apple/aimv2-large-patch14-native", trust_remote_code=True)
model = AutoModel.from_pretrained("apple/aimv2-large-patch14-native", trust_remote_code=True)
```

- MLX
How to use apple/aimv2-large-patch14-native with MLX:
```shell
# Download the model from the Hub
pip install huggingface_hub[hf_xet]
huggingface-cli download --local-dir aimv2-large-patch14-native apple/aimv2-large-patch14-native
```
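Once the model is loaded via Transformers, it can be run on an image to obtain patch-level features. A minimal sketch, assuming an RGB image at the hypothetical path `example.jpg` and that the model's remote code returns a standard `last_hidden_state` output (both are assumptions, not guaranteed by this card):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "apple/aimv2-large-patch14-native"
processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")  # hypothetical local image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Per-patch features; shape is (batch, num_patches, hidden_dim)
features = outputs.last_hidden_state
```

Because this checkpoint operates at native resolution (no fixed resize), the number of patches varies with the input image size.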
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
The repository also ships the image processor configuration (638 bytes), which pins CLIP-style normalization statistics:

```json
{
  "crop_size": {
    "height": 224,
    "width": 224
  },
  "data_format": "channels_first",
  "default_to_square": false,
  "device": null,
  "disable_grouping": null,
  "do_center_crop": false,
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": false,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "CLIPImageProcessorFast",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "input_data_format": null,
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "return_tensors": null,
  "size": {
    "shortest_edge": 224
  }
}
```

Note that `do_resize` and `do_center_crop` are both `false`, consistent with the "native" resolution variant of the model, and `rescale_factor` is 1/255.
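The enabled steps in this config can be traced through by hand. Below is a minimal NumPy sketch (not the library implementation) of the preprocessing the config describes: rescale by 1/255, normalize with the CLIP mean/std, and convert to channels-first layout:

```python
import numpy as np

# Constants copied from the config above (CLIP normalization statistics).
IMAGE_MEAN = np.array([0.48145466, 0.4578275, 0.40821073])
IMAGE_STD = np.array([0.26862954, 0.26130258, 0.27577711])
RESCALE_FACTOR = 1 / 255  # == 0.00392156862745098 in the config


def preprocess(image: np.ndarray) -> np.ndarray:
    """Apply the config's rescale -> normalize -> channels_first steps.

    `image` is an (H, W, 3) uint8 RGB array. Resizing and center-cropping
    are skipped because do_resize and do_center_crop are false.
    """
    pixels = image.astype(np.float64) * RESCALE_FACTOR  # do_rescale
    pixels = (pixels - IMAGE_MEAN) / IMAGE_STD          # do_normalize
    return np.transpose(pixels, (2, 0, 1))              # data_format: channels_first


# Example: a 224x224 mid-gray image.
gray = np.full((224, 224, 3), 128, dtype=np.uint8)
out = preprocess(gray)
print(out.shape)  # (3, 224, 224)
```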