Seq2SeqLMPreprocessor class

```python
keras_hub.models.Seq2SeqLMPreprocessor(
    tokenizer, encoder_sequence_length=1024, decoder_sequence_length=1024, **kwargs
)
```
Base class for seq2seq language modeling preprocessing layers.
`Seq2SeqLMPreprocessor` tasks wrap a `keras_hub.tokenizers.Tokenizer` to create a preprocessing layer for seq2seq language modeling tasks. It is intended to be paired with a `keras_hub.models.Seq2SeqLM` task.

All `Seq2SeqLMPreprocessor` layers take inputs as a dictionary with the keys `"encoder_text"` and `"decoder_text"`.
This layer will always output a `(x, y, sample_weight)` tuple, where `x` is a dictionary with the tokenized inputs, `y` contains the tokens from `x` offset by 1, and `sample_weight` marks where `y` contains padded values. The exact contents of `x` will vary depending on the model being used.
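As a concrete illustration, the comment-only sketch below shows the plausible structure of this tuple. The key names follow the BART convention and may differ for other models.

```python
# Hedged sketch of (x, y, sample_weight) for a BART-style preset.
# x: dict of tokenized, padded inputs.
#   x["encoder_token_ids"]     # int ids, shape (encoder_sequence_length,)
#   x["encoder_padding_mask"]  # marks which encoder positions hold real tokens
#   x["decoder_token_ids"]     # int ids, shape (decoder_sequence_length,)
#   x["decoder_padding_mask"]  # marks which decoder positions hold real tokens
# y: x["decoder_token_ids"] offset by one position (next-token targets).
# sample_weight: zero on padded positions of y, one elsewhere.
```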
A `Seq2SeqLMPreprocessor` also exposes two extra methods, `generate_preprocess()` and `generate_postprocess()`, for use with generation. See the examples below.
All `Seq2SeqLMPreprocessor` tasks include a `from_preset()` constructor which can be used to load a pre-trained config and vocabularies. You can call the `from_preset()` constructor directly on this base class, in which case the correct class for your model will be automatically instantiated.
Examples
```python
preprocessor = keras_hub.models.Seq2SeqLMPreprocessor.from_preset(
    "bart_base_en",
    encoder_sequence_length=256,
    decoder_sequence_length=256,
)

# Tokenize, mask and pack a single sentence.
x = {
    "encoder_text": "The fox was sleeping.",
    "decoder_text": "The fox was awake.",
}
x, y, sample_weight = preprocessor(x)

# Tokenize and pad/truncate a batch of labeled sentences.
x = {
    "encoder_text": ["The fox was sleeping."],
    "decoder_text": ["The fox was awake."],
}
x, y, sample_weight = preprocessor(x)

# With a `tf.data.Dataset`.
ds = tf.data.Dataset.from_tensor_slices(x)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Generate preprocess and postprocess.
x = preprocessor.generate_preprocess(x)  # Tokenized numeric inputs.
x = preprocessor.generate_postprocess(x)  # Detokenized string outputs.
```
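In practice, the preprocessor is attached to a `keras_hub.models.Seq2SeqLM` task, whose `generate()` method calls `generate_preprocess()` and `generate_postprocess()` for you. A minimal sketch, assuming the `bart_base_en` preset:

```python
seq_2_seq_lm = keras_hub.models.Seq2SeqLM.from_preset("bart_base_en")
# `generate()` tokenizes the prompt, decodes token ids, then detokenizes
# the result back to strings via the attached preprocessor.
seq_2_seq_lm.generate({
    "encoder_text": "The fox was sleeping.",
    "decoder_text": "The fox",
})
```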
from_preset method

```python
Seq2SeqLMPreprocessor.from_preset(preset, config_file="preprocessor.json", **kwargs)
```
Instantiate a `keras_hub.models.Preprocessor` from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The `preset` can be passed as one of:

1. a built-in preset identifier like `'bert_base_en'`
2. a Kaggle Models handle like `'kaggle://user/bert/keras/bert_base_en'`
3. a Hugging Face handle like `'hf://user/bert_base_en'`
4. a path to a local preset directory like `'./bert_base_en'`

For any `Preprocessor` subclass, you can run `cls.presets.keys()` to list all built-in presets available on the class.
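For example:

```python
# List every built-in preset registered for this base class.
print(keras_hub.models.Seq2SeqLMPreprocessor.presets.keys())
```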
As there are usually multiple preprocessing classes for a given model, this method should be called on a specific subclass like `keras_hub.models.BertTextClassifierPreprocessor.from_preset()`.
Arguments

- **preset**: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local preset directory.
- **config_file**: string. The config file to load from the preset directory. Defaults to `"preprocessor.json"`.
Examples
```python
# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.CausalLMPreprocessor.from_preset(
    "gemma_2b_en",
)

# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset(
    "bert_base_en",
)
```
| Preset | Parameters | Description |
|---|---|---|
| bart_base_en | 139.42M | 6-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl. |
| bart_large_en | 406.29M | 12-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl. |
| bart_large_en_cnn | 406.29M | The bart_large_en backbone model fine-tuned on the CNN+DM summarization dataset. |
| moonshine_tiny_en | 27.09M | Moonshine tiny model for English speech recognition. Developed by Useful Sensors for real-time transcription. |
| moonshine_base_en | 61.51M | Moonshine base model for English speech recognition. Developed by Useful Sensors for real-time transcription. |
| t5gemma_s_s_ul2 | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a UL2 model. |
| t5gemma_s_s_prefixlm | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a prefix language model. |
| t5gemma_s_s_ul2_it | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_s_s_prefixlm_it | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a prefix language model and fine-tuned for instruction following. |
| t5gemma_b_b_ul2 | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a UL2 model. |
| t5gemma_b_b_prefixlm | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a prefix language model. |
| t5gemma_b_b_ul2_it | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_b_b_prefixlm_it | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a prefix language model and fine-tuned for instruction following. |
| t5gemma_l_l_ul2 | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a UL2 model. |
| t5gemma_l_l_prefixlm | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a prefix language model. |
| t5gemma_l_l_ul2_it | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_l_l_prefixlm_it | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a prefix language model and fine-tuned for instruction following. |
| t5gemma_ml_ml_ul2 | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a UL2 model. |
| t5gemma_ml_ml_prefixlm | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a prefix language model. |
| t5gemma_ml_ml_ul2_it | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_ml_ml_prefixlm_it | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a prefix language model and fine-tuned for instruction following. |
| t5gemma_xl_xl_ul2 | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a UL2 model. |
| t5gemma_xl_xl_prefixlm | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a prefix language model. |
| t5gemma_xl_xl_ul2_it | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_xl_xl_prefixlm_it | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a prefix language model and fine-tuned for instruction following. |
| t5gemma_2b_2b_ul2 | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model. |
| t5gemma_2b_2b_prefixlm | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model. |
| t5gemma_2b_2b_ul2_it | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_2b_2b_prefixlm_it | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model and fine-tuned for instruction following. |
| t5gemma_9b_2b_ul2 | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model. |
| t5gemma_9b_2b_prefixlm | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model. |
| t5gemma_9b_2b_ul2_it | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_9b_2b_prefixlm_it | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model and fine-tuned for instruction following. |
| t5gemma_9b_9b_ul2 | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a UL2 model. |
| t5gemma_9b_9b_prefixlm | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a prefix language model. |
| t5gemma_9b_9b_ul2_it | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a UL2 model and fine-tuned for instruction following. |
| t5gemma_9b_9b_prefixlm_it | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a prefix language model and fine-tuned for instruction following. |
save_to_preset method

```python
Seq2SeqLMPreprocessor.save_to_preset(preset_dir)
```
Save preprocessor to a preset directory.
Arguments

- **preset_dir**: The path to the local model preset directory.
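A minimal save-and-reload sketch (the directory name `./my_bart_preset` is illustrative):

```python
preprocessor = keras_hub.models.Seq2SeqLMPreprocessor.from_preset("bart_base_en")
preprocessor.save_to_preset("./my_bart_preset")
# A saved preset directory can be reloaded with `from_preset()`.
restored = keras_hub.models.Seq2SeqLMPreprocessor.from_preset("./my_bart_preset")
```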
tokenizer property

```python
keras_hub.models.Seq2SeqLMPreprocessor.tokenizer
```

The tokenizer used to tokenize strings.
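For example, a hedged sketch of calling the tokenizer directly:

```python
tokenizer = preprocessor.tokenizer
token_ids = tokenizer("The fox was sleeping.")  # strings -> token ids
text = tokenizer.detokenize(token_ids)          # token ids -> strings
```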