FNetTokenizer

FNetTokenizer class

keras_nlp.models.FNetTokenizer(proto, **kwargs)

FNet tokenizer layer based on SentencePiece.

This tokenizer class will tokenize raw strings into integer sequences and is based on keras_nlp.tokenizers.SentencePieceTokenizer. Unlike the underlying tokenizer, it will check for all special tokens needed by FNet models and provides a from_preset() method to automatically download a matching vocabulary for an FNet preset.

This tokenizer does not provide truncation or padding of inputs. It can be combined with a keras_nlp.models.FNetPreprocessor layer for input packing.
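For instance, a preprocessor built from the same preset can pad and pack tokenizer output to a fixed length. A minimal sketch (the sequence_length value is illustrative):

import keras_nlp

preprocessor = keras_nlp.models.FNetPreprocessor.from_preset(
    "f_net_base_en",
    sequence_length=128,  # Illustrative; pick a length for your task.
)
# Returns a dict of dense tensors (e.g. "token_ids" and "segment_ids")
# padded to sequence_length.
preprocessor("The quick brown fox jumped.")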

If input is a batch of strings (rank > 0), the layer will output a tf.RaggedTensor where the last dimension of the output is ragged.

If input is a scalar string (rank == 0), the layer will output a dense tf.Tensor with static shape [None].
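A quick sketch checking both output types described above (assuming TensorFlow is available as the tensor library):

import tensorflow as tf
import keras_nlp

tokenizer = keras_nlp.models.FNetTokenizer.from_preset("f_net_base_en")

# Scalar input: dense tensor with static shape [None].
dense = tokenizer("The quick brown fox jumped.")
print(isinstance(dense, tf.Tensor))  # True

# Batched input: ragged tensor; rows may differ in length.
ragged = tokenizer(["The quick brown fox jumped.", "The fox slept."])
print(isinstance(ragged, tf.RaggedTensor))  # True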

Arguments

  • proto: Either a string path to a SentencePiece proto file, or a bytes object with a serialized SentencePiece proto. See the SentencePiece repository for more details on the format.
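For example, a tokenizer can be built from a freshly trained SentencePiece model. A minimal sketch, assuming the sentencepiece package; the tiny corpus and vocab_size are illustrative, and the special-token pieces follow the convention the FNet special-token check described above expects:

import io
import sentencepiece
import keras_nlp

# Train a toy SentencePiece model and capture the serialized proto.
bytes_io = io.BytesIO()
sentencepiece.SentencePieceTrainer.train(
    sentence_iterator=iter(["The quick brown fox jumped."]),
    model_writer=bytes_io,
    vocab_size=10,  # Illustrative.
    model_type="WORD",
    pad_id=0,
    unk_id=1,
    bos_id=2,
    eos_id=3,
    pad_piece="<pad>",
    unk_piece="<unk>",
    bos_piece="[CLS]",
    eos_piece="[SEP]",
    user_defined_symbols="[MASK]",
)
# Pass the serialized proto bytes directly.
tokenizer = keras_nlp.models.FNetTokenizer(proto=bytes_io.getvalue())
tokenizer("The quick brown fox jumped.")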

Examples

# Unbatched input.
tokenizer = keras_nlp.models.FNetTokenizer.from_preset(
    "f_net_base_en",
)
tokenizer("The quick brown fox jumped.")

# Batched input.
tokenizer(["The quick brown fox jumped.", "The fox slept."])

# Detokenization.
tokenizer.detokenize(tokenizer("The quick brown fox jumped."))

from_preset method

FNetTokenizer.from_preset(preset, **kwargs)

Instantiate an FNetTokenizer from a preset vocabulary.

Arguments

  • preset: string. Must be one of "f_net_base_en", "f_net_large_en".

Examples

# Load a preset tokenizer.
tokenizer = FNetTokenizer.from_preset("f_net_base_en")

# Tokenize some input.
tokenizer("The quick brown fox tripped.")

# Detokenize some input.
tokenizer.detokenize([5, 6, 7, 8, 9])

Preset name      Parameters   Description
f_net_base_en    82.86M       12-layer FNet model where case is maintained. Trained on the C4 dataset.
f_net_large_en   236.95M      24-layer FNet model where case is maintained. Trained on the C4 dataset.