MetaCLIP2ImageConverter class

```python
keras_hub.layers.MetaCLIP2ImageConverter(
    image_size=None,
    scale=None,
    offset=None,
    crop_to_aspect_ratio=True,
    pad_to_aspect_ratio=False,
    interpolation="bilinear",
    antialias=False,
    bounding_box_format="yxyx",
    data_format=None,
    **kwargs
)
```
Image converter for MetaCLIP 2 models.
This converter handles image preprocessing for MetaCLIP 2, including resizing and normalization to match the model's expected input format.
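The normalization step can be sketched in plain NumPy. The mean/std constants below are the standard CLIP values, assumed here purely for illustration; the actual `scale` and `offset` for a MetaCLIP 2 checkpoint are loaded automatically via `from_preset`.

```python
import numpy as np

# Standard CLIP channel means/stds -- an assumption for illustration,
# not necessarily the exact constants shipped with MetaCLIP 2 presets.
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073])
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711])

def normalize(images):
    """Mimic the converter's rescale step: uint8 [0, 255] inputs are
    scaled to [0, 1], then shifted and scaled per channel."""
    x = images.astype("float32") / 255.0
    return (x - CLIP_MEAN) / CLIP_STD

batch = np.random.randint(0, 256, size=(2, 224, 224, 3), dtype="uint8")
out = normalize(batch)
print(out.shape)  # (2, 224, 224, 3)
```

Equivalently, the converter expresses this as a per-channel `scale` of `1 / (255 * std)` and `offset` of `-mean / std`.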
from_preset method

```python
MetaCLIP2ImageConverter.from_preset(preset, **kwargs)
```
Instantiate a keras_hub.layers.ImageConverter from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The `preset` can be passed as
one of:

1. a built-in preset identifier like `'pali_gemma_3b_224'`
2. a Kaggle Models handle like `'kaggle://user/paligemma/keras/pali_gemma_3b_224'`
3. a Hugging Face handle like `'hf://user/pali_gemma_3b_224'`
4. a path to a local preset directory like `'./pali_gemma_3b_224'`

You can run `cls.presets.keys()` to list all built-in presets available
on the class.
Arguments

- **preset**: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
- **load_weights**: bool. If `True`, the weights will be loaded into the model architecture. If `False`, the weights will be randomly initialized.

Examples
```python
import numpy as np
import keras_hub

batch = np.random.randint(0, 256, size=(2, 512, 512, 3))

# Resize images for `"pali_gemma_3b_224"`.
converter = keras_hub.layers.ImageConverter.from_preset(
    "pali_gemma_3b_224"
)
converter(batch)  # Output shape (2, 224, 224, 3)

# Resize images for `"pali_gemma_3b_448"` without cropping.
converter = keras_hub.layers.ImageConverter.from_preset(
    "pali_gemma_3b_448",
    crop_to_aspect_ratio=False,
)
converter(batch)  # Output shape (2, 448, 448, 3)
```
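The `crop_to_aspect_ratio` flag controls whether a non-square input is center-cropped to the target aspect ratio before resizing (the default) or simply squashed to the target size. For a square `image_size`, the crop path can be sketched in NumPy; this is an illustrative approximation, not the converter's actual implementation.

```python
import numpy as np

def center_crop_to_square(image):
    """Center-crop an (H, W, C) image to its shorter side, roughly
    what `crop_to_aspect_ratio=True` does before the final resize
    when the target image size is square."""
    h, w, _ = image.shape
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return image[top:top + side, left:left + side]

image = np.zeros((512, 640, 3), dtype="uint8")
print(center_crop_to_square(image).shape)  # (512, 512, 3)
```

With `crop_to_aspect_ratio=False`, no pixels are discarded, but the image's aspect ratio is distorted by the resize.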
| Preset | Parameters | Description |
|---|---|---|
| metaclip_2_vit_huge_patch14_224 | 1.86B | 1.86 billion parameter model with 32 vision layers and 24 text layers, patch size 14, image resolution 224x224. MetaCLIP 2 worldwide huge model (ViT-H-14-quickgelu-worldwide) trained on 29B seen pairs with QuickGELU activation. |
| metaclip_2_vit_huge_patch14_378 | 1.86B | 1.86 billion parameter model with 32 vision layers and 24 text layers, patch size 14, image resolution 378x378. MetaCLIP 2 worldwide huge model (ViT-H-14-378-worldwide) trained on 29B seen pairs. |
| metaclip_2_vit_giant_patch14_224 | 3.63B | 3.63 billion parameter model with 40 vision layers and 24 text layers, patch size 14, image resolution 224x224. MetaCLIP 2 worldwide giant model (ViT-bigG-14-worldwide) trained on 29B seen pairs. |
| metaclip_2_vit_giant_patch14_378 | 3.63B | 3.63 billion parameter model with 40 vision layers and 24 text layers, patch size 14, image resolution 378x378. MetaCLIP 2 worldwide giant model (ViT-bigG-14-378-worldwide) trained on 29B seen pairs. |