# Qwen3_5VideoConverter class

```python
keras_hub.layers.Qwen3_5VideoConverter(
    patch_size=16,
    temporal_patch_size=2,
    spatial_merge_size=2,
    min_pixels=65536,
    max_pixels=16777216,
    interpolation="bicubic",
    antialias=True,
    **kwargs
)
```
Video pre-processor for Qwen3.5.

Converts videos to the patch tensor format expected by
`Qwen3_5VisionEncoder` and also returns `grid_thw` metadata.
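The patching arguments map a raw video to `grid_thw` and a flattened patch sequence by simple integer arithmetic. A plain-Python sketch of that bookkeeping (the frame count and resolution are made-up example values, and the layout follows Qwen2-VL-style patching rather than this converter's verified internals):

```python
# Hypothetical input: 8 RGB frames at 448x448 (example values only).
frames, height, width = 8, 448, 448
patch_size, temporal_patch_size, spatial_merge_size = 16, 2, 2

# grid_thw records the patch counts along the time, height, and width axes.
grid_t = frames // temporal_patch_size   # 8 // 2    -> 4
grid_h = height // patch_size            # 448 // 16 -> 28
grid_w = width // patch_size             # 448 // 16 -> 28
grid_thw = (grid_t, grid_h, grid_w)

# Each row of the patch tensor holds one temporal-spatial patch of pixels:
# 3 channels x 2 frames x 16 x 16 pixels.
patch_dim = 3 * temporal_patch_size * patch_size * patch_size   # 1536
num_patches = grid_t * grid_h * grid_w                          # 3136

# Spatial merging later collapses each 2x2 patch neighborhood into one
# token, shrinking the sequence by spatial_merge_size ** 2.
num_tokens = num_patches // spatial_merge_size**2               # 784
```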
**Arguments**

- **patch_size**: int. Side length, in pixels, of each square spatial patch. Defaults to `16`.
- **temporal_patch_size**: int. Number of consecutive frames grouped into each temporal patch. Defaults to `2`.
- **spatial_merge_size**: int. Factor by which adjacent spatial patches are merged into a single token. Defaults to `2`.
- **min_pixels**: int. Minimum number of pixels per frame; smaller frames are upscaled before patching. Defaults to `65536`.
- **max_pixels**: int. Maximum number of pixels per frame; larger frames are downscaled before patching. Defaults to `16777216`.
- **interpolation**: string. Interpolation method to use when resizing frames. Defaults to `"bicubic"`.
- **antialias**: bool. Whether to apply antialiasing when resizing. Defaults to `True`.

## from_preset method

```python
Qwen3_5VideoConverter.from_preset(preset, **kwargs)
```
Instantiate a `keras_hub.layers.VideoConverter` from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset can be passed as
one of:
1. a built-in preset identifier like `'gemma4_2b_it'`
2. a Kaggle Models handle like `'kaggle://user/gemma4/keras/gemma4_2b_it'`
3. a Hugging Face handle like `'hf://google/gemma-4-2b-it'`
4. a path to a local preset directory like `'./gemma4_2b_it'`

You can run `cls.presets.keys()` to list all built-in presets available
on the class.
This constructor can be called in one of two ways: either from the base
class, like `keras_hub.models.VideoConverter.from_preset()`, or from a
model class, like `keras_hub.models.Gemma4VideoConverter.from_preset()`.
If calling from the base class, the subclass of the returned object
will be inferred from the config in the preset directory.
**Arguments**

- **preset**: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
**Examples**

```python
# Load a video converter from a preset.
converter = keras_hub.layers.VideoConverter.from_preset(
    "hf://google/gemma-4-2b-it"
)
```
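The `min_pixels`/`max_pixels` bounds imply a resizing step before patching: each frame is rescaled so its pixel count falls inside that range while staying divisible by the patch grid. A minimal sketch of that logic, assuming Qwen2-VL-style "smart resize" (the function name and rounding details are illustrative, not this converter's verified implementation):

```python
import math

def smart_resize(height, width, factor=32,
                 min_pixels=65536, max_pixels=16777216):
    """Pick output dims divisible by `factor` with area in [min_pixels, max_pixels].

    `factor` corresponds to patch_size * spatial_merge_size (16 * 2 = 32
    with this converter's defaults).
    """
    h = round(height / factor) * factor
    w = round(width / factor) * factor
    if h * w > max_pixels:
        # Shrink both sides by the same ratio, rounding down to the grid.
        beta = math.sqrt((height * width) / max_pixels)
        h = math.floor(height / beta / factor) * factor
        w = math.floor(width / beta / factor) * factor
    elif h * w < min_pixels:
        # Grow both sides by the same ratio, rounding up to the grid.
        beta = math.sqrt(min_pixels / (height * width))
        h = math.ceil(height * beta / factor) * factor
        w = math.ceil(width * beta / factor) * factor
    return h, w

# A 1080p frame already fits the pixel bounds; only grid rounding applies.
print(smart_resize(1080, 1920))  # (1088, 1920)
```

Note that both dimensions are scaled by the same ratio, so the aspect ratio is approximately preserved while the output stays on the patch grid.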
| Preset | Parameters | Description |
|---|---|---|
| qwen3_5_0.8b_base | 852.99M | Ultra-lightweight foundation model. Ideal for edge devices and efficient, task-specific fine-tuning. Supports Text, Multimodal, video processing tasks. |
| qwen3_5_0.8b | 852.99M | Instruction-tuned ultra-lightweight model. Best for simple chat and basic NLP tasks on resource-constrained devices. Supports Text, Multimodal, video processing tasks. |
| qwen3_5_2b_base | 2.21B | Lightweight foundation model. Balances speed and capability; great for mobile deployment and domain-specific fine-tuning. Supports Text, Multimodal, video processing tasks. |
| qwen3_5_2b | 2.21B | Instruction-tuned lightweight model. Optimized for fast chat applications and general assistance on consumer hardware. Supports Text, Multimodal, video processing tasks. |
| qwen3_5_4b_base | 4.54B | Mid-small foundation model. Offers improved reasoning and context understanding for custom fine-tuning tasks. |
| qwen3_5_4b | 4.54B | Instruction-tuned mid-small model. A capable assistant for general text generation and conversational tasks on standard GPUs. Supports Multimodal, video processing tasks. |
| qwen3_5_9b_base | 9.41B | Mid-sized foundation model. Delivers strong reasoning, coding, and math baseline capabilities for advanced fine-tuning. Supports Multimodal, video processing tasks. |
| qwen3_5_9b | 9.41B | Instruction-tuned mid-sized model. Highly capable chatbot offering strong logic, coding assistance, and multi-lingual support. Supports Multimodal, video processing tasks. |
| qwen3_5_27b | 27.36B | Instruction-tuned large model. Delivers high-tier performance for complex reasoning, coding, and extensive contextual tasks. Supports Multimodal, video processing tasks. |