
GPT2Tokenizer


GPT2Tokenizer class

keras_nlp.models.GPT2Tokenizer(vocabulary=None, merges=None, **kwargs)

A GPT-2 tokenizer using Byte-Pair Encoding subword segmentation.

This tokenizer class tokenizes raw strings into integer sequences and is based on keras_nlp.tokenizers.BytePairTokenizer. Unlike the underlying tokenizer, it checks for all special tokens needed by GPT-2 models and provides a from_preset() method to automatically download a matching vocabulary for a GPT-2 preset.

This tokenizer does not provide truncation or padding of inputs.

If input is a batch of strings (rank > 0), the layer will output a tf.RaggedTensor where the last dimension of the output is ragged.

If input is a scalar string (rank == 0), the layer will output a dense tf.Tensor with static shape [None].
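
A minimal sketch of these two cases (this assumes the TensorFlow backend and the gpt2_base_en preset; the variable names are illustrative):

import keras_nlp

tokenizer = keras_nlp.models.GPT2Tokenizer.from_preset("gpt2_base_en")

# Scalar string input (rank == 0) -> dense tf.Tensor with shape [None].
dense_ids = tokenizer("The quick brown fox jumped.")

# Batch of strings (rank > 0) -> tf.RaggedTensor, ragged in the last dimension.
ragged_ids = tokenizer(["The quick brown fox jumped.", "The fox slept."])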

Arguments

  • vocabulary: string or dict, maps tokens to integer ids. If it is a string, it should be the file path to a JSON file containing the vocabulary.
  • merges: string or list, contains the merge rules. If it is a string, it should be the file path to the merge rules file. The merge rules file should have one merge rule per line. Every merge rule contains merge entities separated by a space.

Examples

# Unbatched input.
tokenizer = keras_nlp.models.GPT2Tokenizer.from_preset("gpt2_base_en")
tokenizer("The quick brown fox jumped.")

# Batched input.
tokenizer(["The quick brown fox jumped.", "The fox slept."])

# Detokenization.
tokenizer.detokenize(tokenizer("The quick brown fox jumped."))

# Custom vocabulary.
vocab = {"<|endoftext|>": 0, "a": 4, "Ġquick": 5, "Ġfox": 6}
merges = ["Ġ q", "u i", "c k", "ui ck", "Ġq uick"]
merges += ["Ġ f", "o x", "Ġf ox"]
tokenizer = keras_nlp.models.GPT2Tokenizer(vocabulary=vocab, merges=merges)
tokenizer("a quick fox.")


from_preset method

GPT2Tokenizer.from_preset(preset, **kwargs)

Instantiate a GPT2Tokenizer from a preset vocabulary.

Arguments

  • preset: string. Must be one of "gpt2_base_en", "gpt2_medium_en", "gpt2_large_en", "gpt2_extra_large_en", "gpt2_base_en_cnn_dailymail".

Examples

# Load a preset tokenizer.
tokenizer = GPT2Tokenizer.from_preset("gpt2_base_en")

# Tokenize some input.
tokenizer("The quick brown fox tripped.")

# Detokenize some input.
tokenizer.detokenize([5, 6, 7, 8, 9])

Preset name                   Parameters   Description
gpt2_base_en                  124.44M      12-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_medium_en                354.82M      24-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_large_en                 774.03M      36-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_extra_large_en           1.56B        48-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_base_en_cnn_dailymail    124.44M      12-layer GPT-2 model where case is maintained. Finetuned on the CNN/DailyMail summarization dataset.