
Qwen3_5CausalLMPreprocessor layer

[source]

Qwen3_5CausalLMPreprocessor class

keras_hub.models.Qwen3_5CausalLMPreprocessor(
    tokenizer,
    image_converter=None,
    video_converter=None,
    sequence_length=1024,
    add_start_token=False,
    add_end_token=True,
    video_fps=2.0,
    **kwargs
)

Qwen3.5 Causal LM preprocessor with multimodal support.

For text-only usage, this behaves identically to the base CausalLMPreprocessor. When an image_converter is provided, the preprocessor additionally:

  1. Converts images to patch tensors via Qwen3_5ImageConverter.
  2. Replaces <|image_pad|> and <|video_pad|> placeholder tokens in the token sequence with the correct number of vision tokens.
  3. Computes flat vision_indices for scattering visual embeddings into the text sequence.
  4. Builds 4-channel M-RoPE position_ids for spatial awareness.
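Steps 2 and 3 can be sketched in pure Python. This is an illustrative sketch only, not the actual KerasHub implementation: the placeholder ID and the `expand_image_placeholders` helper are assumptions, and in practice the token IDs are resolved from the tokenizer's vocabulary.

```python
# Hypothetical sketch of steps 2-3: expand each image placeholder into
# the right number of vision tokens, and record the flat positions
# (vision_indices) where visual embeddings will be scattered.

IMAGE_PAD = 151655  # assumed placeholder id; resolved from the tokenizer in practice

def expand_image_placeholders(token_ids, tokens_per_image):
    """Replace each IMAGE_PAD with `n` vision tokens.

    `tokens_per_image` lists how many patch tokens each image produced
    (one entry per placeholder, in order of appearance).
    """
    expanded, vision_indices = [], []
    image_idx = 0
    for tok in token_ids:
        if tok == IMAGE_PAD:
            n = tokens_per_image[image_idx]
            image_idx += 1
            # Flat positions of the vision tokens in the final sequence.
            vision_indices.extend(range(len(expanded), len(expanded) + n))
            expanded.extend([IMAGE_PAD] * n)
        else:
            expanded.append(tok)
    return expanded, vision_indices

ids, idx = expand_image_placeholders([1, IMAGE_PAD, 2], tokens_per_image=[3])
# ids -> [1, IMAGE_PAD, IMAGE_PAD, IMAGE_PAD, 2]
# idx -> [1, 2, 3]
```

The real preprocessor performs the same expansion for `<|video_pad|>` tokens, using the frame count and spatial patch grid from the video converter to determine each placeholder's token count.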

Arguments

  • tokenizer: A Qwen3_5Tokenizer instance. Vision special token IDs (image_token, video_token, etc.) are resolved from the tokenizer's vocabulary automatically.
  • image_converter: A Qwen3_5ImageConverter instance, or None for text-only mode.
  • video_converter: A Qwen3_5VideoConverter instance, or None.
  • sequence_length: int. Total padded sequence length. Default 1024.
  • add_start_token: bool. Prepend BOS token. Default False.
  • add_end_token: bool. Append EOS token. Default True.
  • video_fps: float. Default video sampling rate for timestamp computation. Default 2.0.
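To make the interaction of `sequence_length`, `add_start_token`, and `add_end_token` concrete, here is a minimal sketch of the packing behavior. The BOS/EOS/PAD ids and the `pack` helper are illustrative assumptions; the real preprocessor delegates this to the tokenizer and a packing layer.

```python
# Illustrative sketch of how the packing arguments interact.
BOS, EOS, PAD = 1, 2, 0  # assumed special-token ids

def pack(token_ids, sequence_length=8, add_start_token=False, add_end_token=True):
    ids = list(token_ids)
    if add_start_token:
        ids = [BOS] + ids          # prepend BOS (off by default)
    if add_end_token:
        ids = ids + [EOS]          # append EOS (on by default)
    ids = ids[:sequence_length]    # truncate to the padded length
    mask = [1] * len(ids) + [0] * (sequence_length - len(ids))
    ids = ids + [PAD] * (sequence_length - len(ids))  # right-pad
    return ids, mask

ids, mask = pack([5, 6, 7])  # defaults mirror this preprocessor's
# ids  -> [5, 6, 7, EOS, 0, 0, 0, 0]
# mask -> [1, 1, 1, 1, 0, 0, 0, 0]
```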

[source]

from_preset method

Qwen3_5CausalLMPreprocessor.from_preset(
    preset, config_file="preprocessor.json", **kwargs
)

Instantiate a keras_hub.models.Preprocessor from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:

  1. a built-in preset identifier like 'bert_base_en'
  2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
  3. a Hugging Face handle like 'hf://user/bert_base_en'
  4. a path to a local preset directory like './bert_base_en'

For any Preprocessor subclass, you can run cls.presets.keys() to list all built-in presets available on the class.

As there are usually multiple preprocessing classes for a given model, this method should be called on a specific subclass like keras_hub.models.BertTextClassifierPreprocessor.from_preset().

Arguments

  • preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.

Examples

# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.CausalLMPreprocessor.from_preset(
    "gemma_2b_en",
)

# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset(
    "bert_base_en",
)
| Preset | Parameters | Description |
|---|---|---|
| qwen3_5_0.8b_base | 852.99M | Ultra-lightweight foundation model. Ideal for edge devices and efficient, task-specific fine-tuning. Supports text, multimodal, and video processing tasks. |
| qwen3_5_0.8b | 852.99M | Instruction-tuned ultra-lightweight model. Best for simple chat and basic NLP tasks on resource-constrained devices. Supports text, multimodal, and video processing tasks. |
| qwen3_5_2b_base | 2.21B | Lightweight foundation model. Balances speed and capability; great for mobile deployment and domain-specific fine-tuning. Supports text, multimodal, and video processing tasks. |
| qwen3_5_2b | 2.21B | Instruction-tuned lightweight model. Optimized for fast chat applications and general assistance on consumer hardware. Supports text, multimodal, and video processing tasks. |
| qwen3_5_4b_base | 4.54B | Mid-small foundation model. Offers improved reasoning and context understanding for custom fine-tuning tasks. |
| qwen3_5_4b | 4.54B | Instruction-tuned mid-small model. A capable assistant for general text generation and conversational tasks on standard GPUs. Supports multimodal and video processing tasks. |
| qwen3_5_9b_base | 9.41B | Mid-sized foundation model. Delivers strong reasoning, coding, and math baseline capabilities for advanced fine-tuning. Supports multimodal and video processing tasks. |
| qwen3_5_9b | 9.41B | Instruction-tuned mid-sized model. Highly capable chatbot offering strong logic, coding assistance, and multi-lingual support. Supports multimodal and video processing tasks. |
| qwen3_5_27b | 27.36B | Instruction-tuned large model. Delivers high-tier performance for complex reasoning, coding, and extensive contextual tasks. Supports multimodal and video processing tasks. |

tokenizer property

keras_hub.models.Qwen3_5CausalLMPreprocessor.tokenizer

The tokenizer used to tokenize strings.