Qwen3_5ImageConverter

[source]

Qwen3_5ImageConverter class

keras_hub.layers.Qwen3_5ImageConverter(
    patch_size=16,
    temporal_patch_size=2,
    spatial_merge_size=2,
    min_pixels=65536,
    max_pixels=16777216,
    **kwargs
)

Image pre-processor for Qwen3.5.

Converts images to the patch tensor format expected by `Qwen3_5VisionEncoder` and also returns `grid_thw` metadata.
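As a rough illustration of the output shapes (inferred from the arguments below; the library's exact layout may differ): a resized `H x W` image produces a patch grid of `(1, H / patch_size, W / patch_size)`, and each patch flattens `temporal_patch_size * patch_size * patch_size * channels` values.

```python
patch_size = 16
temporal_patch_size = 2
channels = 3

def patch_shapes(height, width):
    # grid_thw: one temporal slot for a still image, spatial grid in patches.
    grid_thw = (1, height // patch_size, width // patch_size)
    num_patches = grid_thw[0] * grid_thw[1] * grid_thw[2]
    # Each patch flattens temporal x spatial x channel values.
    patch_dim = temporal_patch_size * patch_size * patch_size * channels
    return grid_thw, (num_patches, patch_dim)

print(patch_shapes(512, 512))  # ((1, 32, 32), (1024, 1536))
```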

Arguments

  • patch_size: int. Spatial size of each patch in pixels. Default 16.
  • temporal_patch_size: int. Temporal patch size. Still images occupy a single frame, which is repeated to fill the temporal dimension, so the effective temporal grid size for an image is 1. Default 2 (matches the HF config).
  • spatial_merge_size: int. Spatial merge downsampling factor. Default 2.
  • min_pixels: int. Minimum pixel budget for the resized image. Images smaller than this will be upscaled. Default 65536 (= 256×256, from HF preprocessor_config.json shortest_edge).
  • max_pixels: int. Maximum pixel budget. Images larger than this will be downscaled. Default 16777216 (= 4096×4096, longest_edge).
  • scale: float or list of floats. Per-channel scale for normalisation.
  • offset: float or list of floats. Per-channel offset for normalisation.
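The pixel-budget logic can be sketched in plain Python (an illustration of the rounding described above, not the library's exact implementation): height and width are rounded to multiples of `patch_size * spatial_merge_size = 32`, then rescaled so the total pixel count lands inside `[min_pixels, max_pixels]`.

```python
import math

def smart_resize(height, width, factor=32, min_pixels=65536, max_pixels=16777216):
    """Round (height, width) to multiples of `factor`, then rescale so the
    total pixel count falls within [min_pixels, max_pixels].

    A sketch of the budget logic only; the converter's exact rounding
    may differ.
    """
    h = round(height / factor) * factor
    w = round(width / factor) * factor
    if h * w > max_pixels:
        # Too large: shrink both sides by the same ratio, rounding down.
        beta = math.sqrt((height * width) / max_pixels)
        h = math.floor(height / beta / factor) * factor
        w = math.floor(width / beta / factor) * factor
    elif h * w < min_pixels:
        # Too small: grow both sides by the same ratio, rounding up.
        beta = math.sqrt(min_pixels / (height * width))
        h = math.ceil(height * beta / factor) * factor
        w = math.ceil(width * beta / factor) * factor
    return h, w

# A 512x512 image already fits the budget and is a multiple of 32:
print(smart_resize(512, 512))  # (512, 512)
```

With the defaults, a 100×100 image is upscaled to 256×256 (exactly `min_pixels`), and an 8192×8192 image is downscaled to 4096×4096 (exactly `max_pixels`).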

[source]

from_preset method

Qwen3_5ImageConverter.from_preset(preset, **kwargs)

Instantiate a keras_hub.layers.ImageConverter from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:

  1. a built-in preset identifier like 'pali_gemma_3b_224'
  2. a Kaggle Models handle like 'kaggle://user/paligemma/keras/pali_gemma_3b_224'
  3. a Hugging Face handle like 'hf://user/pali_gemma_3b_224'
  4. a path to a local preset directory like './pali_gemma_3b_224'

You can run cls.presets.keys() to list all built-in presets available on the class.

Arguments

  • preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
  • load_weights: bool. If True, the weights will be loaded into the model architecture. If False, the weights will be randomly initialized.

Examples

import numpy as np

batch = np.random.randint(0, 256, size=(2, 512, 512, 3))

# Resize images for `"pali_gemma_3b_224"`.
converter = keras_hub.layers.ImageConverter.from_preset(
    "pali_gemma_3b_224"
)
converter(batch)  # Output shape (2, 224, 224, 3)

# Resize images for `"pali_gemma_3b_448"` without cropping.
converter = keras_hub.layers.ImageConverter.from_preset(
    "pali_gemma_3b_448",
    crop_to_aspect_ratio=False,
)
converter(batch)  # Output shape (2, 448, 448, 3)

Preset            | Parameters | Description
------------------|------------|------------
qwen3_5_0.8b_base | 852.99M    | Ultra-lightweight foundation model. Ideal for edge devices and efficient, task-specific fine-tuning. Supports text, multimodal, and video processing tasks.
qwen3_5_0.8b      | 852.99M    | Instruction-tuned ultra-lightweight model. Best for simple chat and basic NLP tasks on resource-constrained devices. Supports text, multimodal, and video processing tasks.
qwen3_5_2b_base   | 2.21B      | Lightweight foundation model. Balances speed and capability; great for mobile deployment and domain-specific fine-tuning. Supports text, multimodal, and video processing tasks.
qwen3_5_2b        | 2.21B      | Instruction-tuned lightweight model. Optimized for fast chat applications and general assistance on consumer hardware. Supports text, multimodal, and video processing tasks.
qwen3_5_4b_base   | 4.54B      | Mid-small foundation model. Offers improved reasoning and context understanding for custom fine-tuning tasks.
qwen3_5_4b        | 4.54B      | Instruction-tuned mid-small model. A capable assistant for general text generation and conversational tasks on standard GPUs. Supports multimodal and video processing tasks.
qwen3_5_9b_base   | 9.41B      | Mid-sized foundation model. Delivers strong reasoning, coding, and math baseline capabilities for advanced fine-tuning. Supports multimodal and video processing tasks.
qwen3_5_9b        | 9.41B      | Instruction-tuned mid-sized model. Highly capable chatbot offering strong logic, coding assistance, and multilingual support. Supports multimodal and video processing tasks.
qwen3_5_27b       | 27.36B     | Instruction-tuned large model. Delivers high-tier performance for complex reasoning, coding, and extensive contextual tasks. Supports multimodal and video processing tasks.