T5GemmaTokenizer

[source]

T5GemmaTokenizer class

keras_hub.tokenizers.T5GemmaTokenizer(proto, **kwargs)

The same class is also exported as keras_hub.models.T5GemmaTokenizer.

T5Gemma tokenizer layer based on SentencePiece.

This tokenizer class tokenizes raw strings into integer sequences and is based on keras_hub.tokenizers.SentencePieceTokenizer. Unlike the underlying tokenizer, it checks for all special tokens needed by T5Gemma models and provides a from_preset() method to automatically download a matching vocabulary for a T5Gemma preset.

If input is a batch of strings (rank > 0), the layer will output a tf.RaggedTensor where the last dimension of the output is ragged.

If input is a scalar string (rank == 0), the layer will output a dense tf.Tensor with static shape [None].
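
As a minimal sketch of the two output shapes (assuming the t5gemma_b_b_prefixlm_it preset used in the examples below, and a tf.data pipeline, where the tokenizer is most commonly applied):

import keras_hub
import tensorflow as tf

tokenizer = keras_hub.models.T5GemmaTokenizer.from_preset(
    "t5gemma_b_b_prefixlm_it"
)

# A scalar string tokenizes to a dense 1D tensor of token ids.
single = tokenizer("The quick brown fox jumped.")

# A batch of strings inside a tf.data pipeline tokenizes to a
# tf.RaggedTensor, since the two sequences have different lengths.
ds = tf.data.Dataset.from_tensor_slices(
    ["The quick brown fox jumped.", "The fox slept."]
)
ds = ds.batch(2).map(tokenizer)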

Arguments

  • proto: Either a string path to a SentencePiece proto file, or a bytes object with a serialized SentencePiece proto. See the SentencePiece repository for more details on the format.
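
The examples below build a proto in memory with the SentencePiece trainer; as a minimal sketch of the other accepted form, the tokenizer can also be pointed at a serialized proto file on disk (the path here is a hypothetical placeholder, and the file must define the <pad>, <bos>, <eos>, and <unk> pieces the model expects):

import keras_hub

# Hypothetical path to a trained SentencePiece proto file.
tokenizer = keras_hub.models.T5GemmaTokenizer(
    proto="path/to/t5gemma_vocabulary.spm",
)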

Examples

import io
import tensorflow as tf
import sentencepiece
import keras_hub

# Unbatched input.
tokenizer = keras_hub.models.T5GemmaTokenizer.from_preset(
    "t5gemma_b_b_prefixlm_it"
)
tokenizer("The quick brown fox jumped.")

# Batched input.
tokenizer(["The quick brown fox jumped.", "The fox slept."])

# Detokenization.
tokenizer.detokenize(tokenizer("The quick brown fox jumped."))

# Custom vocabulary.
bytes_io = io.BytesIO()
ds = tf.data.Dataset.from_tensor_slices(["The quick brown fox jumped."])
sentencepiece.SentencePieceTrainer.train(
    sentence_iterator=ds.as_numpy_iterator(),
    model_writer=bytes_io,
    vocab_size=8,
    model_type="WORD",
    pad_id=0,
    bos_id=1,
    eos_id=2,
    unk_id=3,
    pad_piece="<pad>",
    bos_piece="<bos>",
    eos_piece="<eos>",
    unk_piece="<unk>",
)
tokenizer = keras_hub.models.T5GemmaTokenizer(
    proto=bytes_io.getvalue(),
)
tokenizer("The quick brown fox jumped.")

[source]

from_preset method

T5GemmaTokenizer.from_preset(preset, config_file="tokenizer.json", **kwargs)

Instantiate a keras_hub.models.Tokenizer from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:

  1. a built-in preset identifier like 'bert_base_en'
  2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
  3. a Hugging Face handle like 'hf://user/bert_base_en'
  4. a path to a local preset directory like './bert_base_en'

For any Tokenizer subclass, you can run cls.presets.keys() to list all built-in presets available on the class.

This constructor can be called in one of two ways: either from the base class, like keras_hub.models.Tokenizer.from_preset(), or from a model class, like keras_hub.models.GemmaTokenizer.from_preset(). If calling from the base class, the subclass of the returned object will be inferred from the config in the preset directory.
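
As a small sketch of these calling patterns (the local directory path is a hypothetical placeholder):

import keras_hub

# List every built-in preset registered on the class.
print(keras_hub.models.T5GemmaTokenizer.presets.keys())

# Load from a built-in preset identifier.
tokenizer = keras_hub.models.T5GemmaTokenizer.from_preset(
    "t5gemma_b_b_prefixlm_it"
)

# Load from a local preset directory (hypothetical path).
tokenizer = keras_hub.models.T5GemmaTokenizer.from_preset(
    "./t5gemma_b_b_prefixlm_it"
)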

Arguments

  • preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
  • config_file: string. The name of the tokenizer config file to load from the preset directory. Defaults to "tokenizer.json".

Examples

# Load a preset tokenizer.
tokenizer = keras_hub.tokenizers.Tokenizer.from_preset("bert_base_en")

# Tokenize some input.
tokenizer("The quick brown fox tripped.")

# Detokenize some input.
tokenizer.detokenize([5, 6, 7, 8, 9])

Preset | Parameters | Description
--- | --- | ---
t5gemma_s_s_ul2 | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a UL2 model.
t5gemma_s_s_prefixlm | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a prefix language model.
t5gemma_s_s_ul2_it | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_s_s_prefixlm_it | 312.52M | T5Gemma S/S model with a small encoder and small decoder, adapted as a prefix language model and fine-tuned for instruction following.
t5gemma_b_b_ul2 | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a UL2 model.
t5gemma_b_b_prefixlm | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a prefix language model.
t5gemma_b_b_ul2_it | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_b_b_prefixlm_it | 591.49M | T5Gemma B/B model with a base encoder and base decoder, adapted as a prefix language model and fine-tuned for instruction following.
t5gemma_l_l_ul2 | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a UL2 model.
t5gemma_l_l_prefixlm | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a prefix language model.
t5gemma_l_l_ul2_it | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_l_l_prefixlm_it | 1.24B | T5Gemma L/L model with a large encoder and large decoder, adapted as a prefix language model and fine-tuned for instruction following.
t5gemma_ml_ml_ul2 | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a UL2 model.
t5gemma_ml_ml_prefixlm | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a prefix language model.
t5gemma_ml_ml_ul2_it | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_ml_ml_prefixlm_it | 2.20B | T5Gemma ML/ML model with a medium-large encoder and medium-large decoder, adapted as a prefix language model and fine-tuned for instruction following.
t5gemma_xl_xl_ul2 | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a UL2 model.
t5gemma_xl_xl_prefixlm | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a prefix language model.
t5gemma_xl_xl_ul2_it | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_xl_xl_prefixlm_it | 3.77B | T5Gemma XL/XL model with an extra-large encoder and extra-large decoder, adapted as a prefix language model and fine-tuned for instruction following.
t5gemma_2b_2b_ul2 | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model.
t5gemma_2b_2b_prefixlm | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model.
t5gemma_2b_2b_ul2_it | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_2b_2b_prefixlm_it | 5.60B | T5Gemma 2B/2B model with a 2-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model and fine-tuned for instruction following.
t5gemma_9b_2b_ul2 | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model.
t5gemma_9b_2b_prefixlm | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model.
t5gemma_9b_2b_ul2_it | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_9b_2b_prefixlm_it | 12.29B | T5Gemma 9B/2B model with a 9-billion-parameter encoder and 2-billion-parameter decoder, adapted as a prefix language model and fine-tuned for instruction following.
t5gemma_9b_9b_ul2 | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a UL2 model.
t5gemma_9b_9b_prefixlm | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a prefix language model.
t5gemma_9b_9b_ul2_it | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a UL2 model and fine-tuned for instruction following.
t5gemma_9b_9b_prefixlm_it | 20.33B | T5Gemma 9B/9B model with a 9-billion-parameter encoder and 9-billion-parameter decoder, adapted as a prefix language model and fine-tuned for instruction following.
