Qwen3_5CausalLMPreprocessor class

```python
keras_hub.models.Qwen3_5CausalLMPreprocessor(
    tokenizer,
    image_converter=None,
    video_converter=None,
    sequence_length=1024,
    add_start_token=False,
    add_end_token=True,
    video_fps=2.0,
    **kwargs
)
```
Qwen3.5 Causal LM preprocessor with multimodal support.
For text-only usage this behaves identically to the base
CausalLMPreprocessor. When an image_converter is provided,
the preprocessor also:

- Runs input images through the Qwen3_5ImageConverter.
- Replaces <|image_pad|> and <|video_pad|> placeholder tokens in the token sequence with the correct number of vision tokens.
- Computes vision_indices for scattering visual embeddings into the text sequence.
- Computes position_ids for spatial awareness.

A usage sketch follows the argument list below.

Arguments
- tokenizer: A Qwen3_5Tokenizer instance. Vision special token IDs (image_token, video_token, etc.) are resolved from the tokenizer's vocabulary automatically.
- image_converter: A Qwen3_5ImageConverter instance, or None for text-only mode.
- video_converter: A Qwen3_5VideoConverter instance, or None.
- sequence_length: The length of the packed inputs. Defaults to 1024.
- add_start_token: If True, prepend the tokenizer start token to each input sequence. Defaults to False.
- add_end_token: If True, append the tokenizer end token to each input sequence. Defaults to True.
- video_fps: Frames per second used when sampling video inputs. Defaults to 2.0.
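A minimal usage sketch, assuming one of the preset names listed below and, for the multimodal case, an input dict layout matching other KerasHub multimodal causal LM preprocessors; the "prompts"/"responses"/"images" keys are an assumption, not confirmed by this page:

```python
import keras_hub

# Build the preprocessor from a preset (see the preset table below).
preprocessor = keras_hub.models.Qwen3_5CausalLMPreprocessor.from_preset(
    "qwen3_5_0.8b",
    sequence_length=1024,
)

# Text-only mode: identical behavior to the base `CausalLMPreprocessor`.
x, y, sample_weight = preprocessor("What is Keras?")

# Multimodal mode (hypothetical input keys, shown for illustration only):
# x, y, sample_weight = preprocessor({
#     "prompts": "<|image_pad|> Describe this image.",
#     "responses": "A cat sitting on a mat.",
#     "images": images,  # e.g. a batch of uint8 image arrays
# })
```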
from_preset method

```python
Qwen3_5CausalLMPreprocessor.from_preset(
    preset, config_file="preprocessor.json", **kwargs
)
```
Instantiate a keras_hub.models.Preprocessor from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset can be passed as
one of:
1. a built-in preset identifier like 'bert_base_en'
2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
3. a Hugging Face handle like 'hf://user/bert_base_en'
4. a path to a local preset directory like './bert_base_en'

For any Preprocessor subclass, you can run cls.presets.keys() to
list all built-in presets available on the class.
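For example (a small sketch; the preset names returned should match the table below):

```python
import keras_hub

# List every built-in preset registered on this preprocessor class.
print(keras_hub.models.Qwen3_5CausalLMPreprocessor.presets.keys())
```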
As there are usually multiple preprocessing classes for a given model,
this method should be called on a specific subclass like
keras_hub.models.BertTextClassifierPreprocessor.from_preset().
Arguments

- preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
- config_file: string. The name of the config file to load from the preset directory. Defaults to "preprocessor.json".
Examples
```python
# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.CausalLMPreprocessor.from_preset(
    "gemma_2b_en",
)

# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset(
    "bert_base_en",
)
```
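The same pattern applies to this class, using one of the preset names from the table below:

```python
# Load a preprocessor for Qwen3.5 generation.
preprocessor = keras_hub.models.Qwen3_5CausalLMPreprocessor.from_preset(
    "qwen3_5_0.8b",
)
```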
| Preset | Parameters | Description |
|---|---|---|
| qwen3_5_0.8b_base | 852.99M | Ultra-lightweight foundation model. Ideal for edge devices and efficient, task-specific fine-tuning. Supports text, multimodal, and video processing tasks. |
| qwen3_5_0.8b | 852.99M | Instruction-tuned ultra-lightweight model. Best for simple chat and basic NLP tasks on resource-constrained devices. Supports text, multimodal, and video processing tasks. |
| qwen3_5_2b_base | 2.21B | Lightweight foundation model. Balances speed and capability; great for mobile deployment and domain-specific fine-tuning. Supports text, multimodal, and video processing tasks. |
| qwen3_5_2b | 2.21B | Instruction-tuned lightweight model. Optimized for fast chat applications and general assistance on consumer hardware. Supports text, multimodal, and video processing tasks. |
| qwen3_5_4b_base | 4.54B | Mid-small foundation model. Offers improved reasoning and context understanding for custom fine-tuning tasks. |
| qwen3_5_4b | 4.54B | Instruction-tuned mid-small model. A capable assistant for general text generation and conversational tasks on standard GPUs. Supports multimodal and video processing tasks. |
| qwen3_5_9b_base | 9.41B | Mid-sized foundation model. Delivers strong reasoning, coding, and math baseline capabilities for advanced fine-tuning. Supports multimodal and video processing tasks. |
| qwen3_5_9b | 9.41B | Instruction-tuned mid-sized model. Highly capable chatbot offering strong logic, coding assistance, and multilingual support. Supports multimodal and video processing tasks. |
| qwen3_5_27b | 27.36B | Instruction-tuned large model. Delivers high-tier performance for complex reasoning, coding, and extensive contextual tasks. Supports multimodal and video processing tasks. |
tokenizer property

```python
keras_hub.models.Qwen3_5CausalLMPreprocessor.tokenizer
```
The tokenizer used to tokenize strings.
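A short sketch of using the property directly; it assumes Qwen3_5Tokenizer follows the usual callable tokenize/detokenize API of KerasHub tokenizers:

```python
preprocessor = keras_hub.models.Qwen3_5CausalLMPreprocessor.from_preset(
    "qwen3_5_0.8b",
)

# Tokenize a string to token IDs, then map the IDs back to text.
token_ids = preprocessor.tokenizer("The quick brown fox")
text = preprocessor.tokenizer.detokenize(token_ids)
```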