## GptOssCausalLMPreprocessor class

```python
keras_hub.models.GptOssCausalLMPreprocessor(
    tokenizer, sequence_length=1024, add_start_token=True, add_end_token=True, **kwargs
)
```
GptOss Causal LM preprocessor.
This preprocessing layer is meant for use with
`keras_hub.models.GptOssCausalLM`. By default, it will take in batches of
strings, and return outputs in a `(x, y, sample_weight)` format, where the
`y` label is the next token id in the `x` sequence.
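Concretely, the `(x, y, sample_weight)` split can be sketched in plain Python. This is a schematic, framework-free illustration of the next-token shift, not the layer's actual implementation; the token ids and pad id below are made up:

```python
# Schematic of causal-LM preprocessing: given packed token ids, the
# features are all but the last token, the labels are all but the first
# (i.e. each label is the next token in the sequence), and the sample
# weights mask out padding positions in the labels.
PAD_ID = 0  # illustrative pad token id

def split_features_labels(token_ids):
    x = token_ids[:-1]  # input tokens
    y = token_ids[1:]   # next-token labels, shifted by one
    sample_weight = [1 if t != PAD_ID else 0 for t in y]
    return x, y, sample_weight

x, y, w = split_features_labels([2, 15, 8, 4, 3, 0, 0])
print(x)  # [2, 15, 8, 4, 3, 0]
print(y)  # [15, 8, 4, 3, 0, 0]
print(w)  # [1, 1, 1, 1, 0, 0]
```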
For use with generation, the layer also exposes two methods
`generate_preprocess()` and `generate_postprocess()`. When this preprocessor
is attached to a `keras_hub.models.GptOssCausalLM` instance, these methods
will be called implicitly in `generate()`. They can also be called
standalone (e.g. to precompute preprocessing inputs for generation in a
separate process).
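The division of labor between the two methods can be illustrated with a toy stand-in for the layer. The function names, vocabulary, and pad id below are invented for the demo; the real methods operate on the GptOss tokenizer and return tensors rather than lists:

```python
# Toy illustration of the preprocess/postprocess round trip for
# generation: preprocessing tokenizes and pads to a fixed length,
# returning token ids plus a padding mask; postprocessing drops padded
# positions and detokenizes back to a string.
VOCAB = {"<pad>": 0, "hello": 1, "world": 2}
ID_TO_TOKEN = {i: t for t, i in VOCAB.items()}
PAD_ID = VOCAB["<pad>"]

def toy_generate_preprocess(text, sequence_length=5):
    ids = [VOCAB[t] for t in text.split()]
    padding = [PAD_ID] * (sequence_length - len(ids))
    return {
        "token_ids": ids + padding,
        "padding_mask": [1] * len(ids) + [0] * len(padding),
    }

def toy_generate_postprocess(outputs):
    ids = [t for t, m in zip(outputs["token_ids"], outputs["padding_mask"]) if m]
    return " ".join(ID_TO_TOKEN[i] for i in ids)

batch = toy_generate_preprocess("hello world")
print(batch["token_ids"])               # [1, 2, 0, 0, 0]
print(toy_generate_postprocess(batch))  # hello world
```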
## Arguments

- **tokenizer**: A `keras_hub.models.GptOssTokenizer` instance.
- **sequence_length**: The length of the packed inputs. Defaults to `1024`.
- **add_start_token**: If `True`, the preprocessor will prepend the tokenizer
  start token to each input sequence. Defaults to `True`.
- **add_end_token**: If `True`, the preprocessor will append the tokenizer
  end token to each input sequence. Defaults to `True`.

## Call arguments

- **x**: A string, `tf.Tensor` or list of python strings.
- **y**: Label data. Should always be `None` as the layer generates labels.
- **sample_weight**: Label weights. Should always be `None` as the layer
  generates label weights.
- **sequence_length**: Pass to override the configured `sequence_length` of
  the layer.

## Examples
```python
import tensorflow as tf
import keras_hub

# Load the preprocessor from a preset.
preprocessor = keras_hub.models.GptOssCausalLMPreprocessor.from_preset(
    "gpt_oss_base_en"
)

# Tokenize and pack a single sentence.
sentence = tf.constant("League of legends")
preprocessor(sentence)
# Same output.
preprocessor("League of legends")

# Tokenize a batch of sentences.
sentences = tf.constant(["Taco tuesday", "Fish taco please!"])
preprocessor(sentences)
# Same output.
preprocessor(["Taco tuesday", "Fish taco please!"])

# Map a dataset to preprocess a single sentence.
features = tf.constant(
    [
        "Avatar 2 is amazing!",
        "Well, I am not sure.",
    ]
)
labels = tf.constant([1, 0])
ds = tf.data.Dataset.from_tensor_slices((features, labels))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map a dataset to preprocess unlabeled sentences.
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
```
## from_preset method

```python
GptOssCausalLMPreprocessor.from_preset(
    preset, config_file="preprocessor.json", **kwargs
)
```
Instantiate a `keras_hub.models.Preprocessor` from a model preset.

A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The `preset` can be passed as
one of:

1. a built-in preset identifier like `'bert_base_en'`
2. a Kaggle Models handle like `'kaggle://user/bert/keras/bert_base_en'`
3. a Hugging Face handle like `'hf://user/bert_base_en'`
4. a path to a local preset directory like `'./bert_base_en'`

For any `Preprocessor` subclass, you can run `cls.presets.keys()` to
list all built-in presets available on the class.

As there are usually multiple preprocessing classes for a given model,
this method should be called on a specific subclass like
`keras_hub.models.BertTextClassifierPreprocessor.from_preset()`.
## Arguments

- **preset**: string. A built-in preset identifier, a Kaggle Models
  handle, a Hugging Face handle, or a path to a local directory.

## Examples
```python
# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.CausalLMPreprocessor.from_preset(
    "gemma_2b_en",
)

# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset(
    "bert_base_en",
)
```
| Preset | Parameters | Description |
|---|---|---|
| gpt_oss_20b_en | 20.91B | This preset has 21 billion total parameters, with 3.6 billion active parameters, a 128k context length, and is de-quantized from MXFP4. |
| gpt_oss_safeguard_20b_en | 20.91B | Open-weight safety reasoning model with 21 billion total parameters, 3.6 billion active parameters, and a context length of over 128k; de-quantized from MXFP4. |
| gpt_oss_120b_en | 116.83B | This preset has 117 billion total parameters, with 5.1 billion active parameters, a 128k context length, and is de-quantized from MXFP4. |
| gpt_oss_safeguard_120b_en | 116.83B | Open-weight safety reasoning model with 117 billion total parameters, 5.1 billion active parameters, and a 128k context length; de-quantized from MXFP4. |
## tokenizer property

```python
keras_hub.models.GptOssCausalLMPreprocessor.tokenizer
```

The tokenizer used to tokenize strings.