SentencePieceTokenizer

SentencePieceTokenizer class

keras_nlp.tokenizers.SentencePieceTokenizer(
    proto=None, sequence_length=None, dtype="int32", **kwargs
)

A SentencePiece tokenizer layer.

This layer provides an implementation of SentencePiece tokenization as described in the SentencePiece paper and the SentencePiece package. Tokenization runs entirely within the TensorFlow graph and can be saved inside a keras.Model.

By default, the layer will output a tf.RaggedTensor whose last dimension is ragged after whitespace splitting and sub-word tokenizing. If sequence_length is set, the layer will instead output a dense tf.Tensor where all inputs have been padded or truncated to sequence_length. The output dtype can be controlled via the dtype argument, which should be either an integer or a string type.
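
For instance, a minimal sketch of the two output modes, assuming proto holds a trained SentencePiece proto such as the one produced in the Examples section below:

import keras_nlp

# Ragged output by default: one variable-length row of token ids per input.
tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto=proto)
tokenizer(["the quick brown fox.", "the fox."])
# <tf.RaggedTensor [[...], [...]]>

# Dense output with `sequence_length`: every row padded or truncated to 10.
tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(
    proto=proto, sequence_length=10
)
tokenizer(["the quick brown fox."])
# <tf.Tensor: shape=(1, 10), dtype=int32, ...>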

Arguments

  • proto: Either a string path to a SentencePiece proto file, or a bytes object with a serialized SentencePiece proto. See the SentencePiece repository for more details on the format.
  • sequence_length: If set, the output will be converted to a dense tensor and padded or truncated so that all outputs have exactly sequence_length tokens.
  • dtype: The output dtype; an integer type for token ids, or "string" for string tokens.

References

  • Kudo, Taku and Richardson, John. "SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing." EMNLP 2018 (System Demonstrations).

Examples

From bytes.

import io

import sentencepiece
import tensorflow as tf

import keras_nlp

def train_sentence_piece_bytes(ds, size):
    """Train a SentencePiece proto and return its serialized bytes."""
    bytes_io = io.BytesIO()
    sentencepiece.SentencePieceTrainer.train(
        sentence_iterator=ds.as_numpy_iterator(),
        model_writer=bytes_io,
        vocab_size=size,
    )
    return bytes_io.getvalue()

# Train a sentencepiece proto.
ds = tf.data.Dataset.from_tensor_slices(["the quick brown fox."])
proto = train_sentence_piece_bytes(ds, 20)
# Tokenize inputs.
tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto=proto)
ds = ds.map(tokenizer)

From a file.

def train_sentence_piece_file(ds, path, size):
    """Train a SentencePiece proto and write it to `path`."""
    with open(path, "wb") as model_file:
        sentencepiece.SentencePieceTrainer.train(
            sentence_iterator=ds.as_numpy_iterator(),
            model_writer=model_file,
            vocab_size=size,
        )

# Train a sentencepiece proto and write it to disk.
ds = tf.data.Dataset.from_tensor_slices(["the quick brown fox."])
train_sentence_piece_file(ds, "model.spm", 20)
# Tokenize inputs, loading the proto from its file path.
tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto="model.spm")
ds = ds.map(tokenizer)

tokenize method

SentencePieceTokenizer.tokenize(inputs, *args, **kwargs)

Transform input tensors of strings into output tokens.

Arguments

  • inputs: Input tensor, or dict/list/tuple of input tensors.
  • *args: Additional positional arguments.
  • **kwargs: Additional keyword arguments.
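
A minimal usage sketch, reusing the tokenizer built in the examples above; calling the layer directly routes through this method:

tokens = tokenizer.tokenize(["the quick brown fox."])
# Equivalent to calling the layer on its inputs.
tokens = tokenizer(["the quick brown fox."])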

detokenize method

SentencePieceTokenizer.detokenize(inputs, *args, **kwargs)

Transform tokens back into strings.

Arguments

  • inputs: Input tensor, or dict/list/tuple of input tensors.
  • *args: Additional positional arguments.
  • **kwargs: Additional keyword arguments.
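
For example, a round trip using the tokenizer trained in the examples above (a sketch; exact outputs depend on the trained proto):

token_ids = tokenizer.tokenize(["the quick brown fox."])
tokenizer.detokenize(token_ids)
# <tf.Tensor: shape=(1,), dtype=string, numpy=array([b'the quick brown fox.'], ...)>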

get_vocabulary method

SentencePieceTokenizer.get_vocabulary()

Get the tokenizer vocabulary as a list of string tokens.
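
For example, with the 20-token proto trained above (a sketch; the special token shown assumes SentencePiece's default training options):

vocab = tokenizer.get_vocabulary()
# A list of string tokens, indexable by token id, e.g. vocab[0] == "<unk>".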


vocabulary_size method

SentencePieceTokenizer.vocabulary_size()

Get the integer size of the tokenizer vocabulary.
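
For example, for the proto trained with vocab_size=20 in the examples above:

tokenizer.vocabulary_size()
# 20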


token_to_id method

SentencePieceTokenizer.token_to_id(token)

Convert a string token to an integer id.
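
A small sketch; "<unk>" is assigned id 0 by SentencePiece's default training options:

tokenizer.token_to_id("<unk>")
# 0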


id_to_token method

SentencePieceTokenizer.id_to_token(id)

Convert an integer id to a string token.
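
And the inverse lookup, again assuming default SentencePiece special tokens:

tokenizer.id_to_token(0)
# '<unk>'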