Qwen3_5ImageConverter class

keras_hub.layers.Qwen3_5ImageConverter(
    patch_size=16,
    temporal_patch_size=2,
    spatial_merge_size=2,
    min_pixels=65536,
    max_pixels=16777216,
    **kwargs
)
Image pre-processor for Qwen3.5.
Converts images to the patch tensor format expected by
Qwen3_5VisionEncoder and also returns grid_thw metadata.
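To make the patch layout concrete, here is a minimal numpy-free sketch of the arithmetic implied by the defaults above. The variable names and the still-image handling (a single image filling one temporal patch) are assumptions for illustration, not the library's code:

```python
# Hypothetical sketch of the patching arithmetic, not the keras_hub
# implementation: a (height, width, 3) image is split into
# patch_size x patch_size spatial patches, frames are grouped in
# temporal_patch_size, and grid_thw records (frames, rows, cols).
patch_size = 16
temporal_patch_size = 2
spatial_merge_size = 2

height, width = 512, 768  # assumed example image size
grid_t = 1                # assume a still image fills one temporal patch
grid_h = height // patch_size  # 32 rows of patches
grid_w = width // patch_size   # 48 columns of patches
grid_thw = (grid_t, grid_h, grid_w)

# Each flattened patch covers 3 channels over a temporal group of frames.
num_patches = grid_t * grid_h * grid_w
patch_dim = 3 * temporal_patch_size * patch_size * patch_size
print(grid_thw, num_patches, patch_dim)  # (1, 32, 48) 1536 1536
```

Under these assumptions, the converter's patch tensor for this image would have shape `(num_patches, patch_dim)`, with `grid_thw` returned alongside it so the encoder can recover the spatial layout.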
Arguments

- patch_size: int. The size of each square spatial patch. Defaults to 16.
- temporal_patch_size: int. The number of frames grouped into each
  temporal patch. Defaults to 2.
- spatial_merge_size: int. The factor by which adjacent spatial patches
  are merged. Defaults to 2.
- min_pixels: int. The minimum total number of pixels an input image is
  resized to (lower bound on the shortest_edge). Defaults to 65536.
- max_pixels: int. The maximum total number of pixels an input image is
  resized to (upper bound on the longest_edge). Defaults to 16777216.

from_preset method

Qwen3_5ImageConverter.from_preset(preset, **kwargs)
Instantiate a keras_hub.layers.ImageConverter from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset can be passed as
one of:
1. a built-in preset identifier like 'pali_gemma_3b_224'
2. a Kaggle Models handle like 'kaggle://user/paligemma/keras/pali_gemma_3b_224'
3. a Hugging Face handle like 'hf://user/pali_gemma_3b_224'
4. a path to a local preset directory like './pali_gemma_3b_224'

You can run cls.presets.keys() to list all built-in presets available
on the class.
Arguments

- preset: string. A built-in preset identifier, a Kaggle Models handle,
  a Hugging Face handle, or a path to a local preset directory.
- load_weights: bool. If True, the weights will be loaded into the
  model architecture. If False, the weights will be randomly
  initialized.

Examples
import numpy as np

batch = np.random.randint(0, 256, size=(2, 512, 512, 3))
# Resize images for `"pali_gemma_3b_224"`.
converter = keras_hub.layers.ImageConverter.from_preset(
"pali_gemma_3b_224"
)
converter(batch) # Output shape (2, 224, 224, 3)
# Resize images for `"pali_gemma_3b_448"` without cropping.
converter = keras_hub.layers.ImageConverter.from_preset(
"pali_gemma_3b_448",
crop_to_aspect_ratio=False,
)
converter(batch) # Output shape (2, 448, 448, 3)
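The min_pixels and max_pixels bounds can be illustrated with a short sketch, assuming the converter follows a Qwen2-VL-style "smart resize" rule: each side is rounded to a multiple of patch_size * spatial_merge_size while the total pixel count is kept within the bounds. `smart_resize` is a hypothetical helper written for illustration, not a keras_hub API:

```python
import math

def smart_resize(height, width, factor=32,
                 min_pixels=65536, max_pixels=16777216):
    # Round each side to the nearest multiple of `factor`
    # (assumed: factor = patch_size * spatial_merge_size = 16 * 2).
    h = round(height / factor) * factor
    w = round(width / factor) * factor
    if h * w > max_pixels:
        # Shrink to fit under the pixel budget, rounding down.
        beta = math.sqrt((height * width) / max_pixels)
        h = math.floor(height / beta / factor) * factor
        w = math.floor(width / beta / factor) * factor
    elif h * w < min_pixels:
        # Grow to meet the minimum, rounding up.
        beta = math.sqrt(min_pixels / (height * width))
        h = math.ceil(height * beta / factor) * factor
        w = math.ceil(width * beta / factor) * factor
    return h, w

print(smart_resize(100, 100))  # (256, 256): upscaled to reach min_pixels
print(smart_resize(512, 512))  # (512, 512): already within bounds
```

A 100 x 100 image falls below the default min_pixels of 65536 and is scaled up, while a 512 x 512 image passes through with only the rounding to patch-aligned dimensions.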
| Preset | Parameters | Description |
|---|---|---|
| qwen3_5_0.8b_base | 852.99M | Ultra-lightweight foundation model. Ideal for edge devices and efficient, task-specific fine-tuning. Supports text, multimodal, and video processing tasks. |
| qwen3_5_0.8b | 852.99M | Instruction-tuned ultra-lightweight model. Best for simple chat and basic NLP tasks on resource-constrained devices. Supports text, multimodal, and video processing tasks. |
| qwen3_5_2b_base | 2.21B | Lightweight foundation model. Balances speed and capability; great for mobile deployment and domain-specific fine-tuning. Supports text, multimodal, and video processing tasks. |
| qwen3_5_2b | 2.21B | Instruction-tuned lightweight model. Optimized for fast chat applications and general assistance on consumer hardware. Supports text, multimodal, and video processing tasks. |
| qwen3_5_4b_base | 4.54B | Mid-small foundation model. Offers improved reasoning and context understanding for custom fine-tuning tasks. |
| qwen3_5_4b | 4.54B | Instruction-tuned mid-small model. A capable assistant for general text generation and conversational tasks on standard GPUs. Supports multimodal and video processing tasks. |
| qwen3_5_9b_base | 9.41B | Mid-sized foundation model. Delivers strong reasoning, coding, and math baseline capabilities for advanced fine-tuning. Supports multimodal and video processing tasks. |
| qwen3_5_9b | 9.41B | Instruction-tuned mid-sized model. Highly capable chatbot offering strong logic, coding assistance, and multilingual support. Supports multimodal and video processing tasks. |
| qwen3_5_27b | 27.36B | Instruction-tuned large model. Delivers high-tier performance for complex reasoning, coding, and extensive contextual tasks. Supports multimodal and video processing tasks. |