GPT2CausalLMPreprocessor layer


GPT2CausalLMPreprocessor class

keras_nlp.models.GPT2CausalLMPreprocessor(
    tokenizer, sequence_length=1024, add_start_token=True, add_end_token=True, **kwargs
)

GPT2 Causal LM preprocessor.

This preprocessing layer is meant for use with keras_nlp.models.GPT2CausalLM. By default, it takes in batches of strings and returns outputs in an (x, y, sample_weight) format, where the y label is the next token id in the x sequence.

For use with generation, the layer also exposes two methods generate_preprocess() and generate_postprocess(). When this preprocessor is attached to a keras_nlp.models.GPT2CausalLM instance, these methods will be called implicitly in generate(). They can also be called standalone (e.g. to precompute preprocessing inputs for generation in a separate process).
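
A minimal sketch of that (x, y, sample_weight) output structure (assuming the "gpt2_base_en" preset is available):

# Inspect the (x, y, sample_weight) structure produced by the layer.
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en"
)
x, y, sample_weight = preprocessor("The quick brown fox")
# x is a dict with "token_ids" and "padding_mask" entries.
# y holds the same token ids shifted one position to the left, so each
# label is the next token id, and sample_weight masks out padded positions.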

Arguments

  • tokenizer: A keras_nlp.models.GPT2Tokenizer instance.
  • sequence_length: The length of the packed inputs.
  • add_start_token: If True, the preprocessor will prepend the tokenizer start token to each input sequence.
  • add_end_token: If True, the preprocessor will append the tokenizer end token to each input sequence.

Call arguments

  • x: A string, tf.Tensor or list of python strings.
  • y: Label data. Should always be None as the layer generates labels.
  • sample_weight: Label weights. Should always be None as the layer generates label weights.
  • sequence_length: Pass to override the configured sequence_length of the layer, as shown in the sketch after this list.
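
For example, the configured sequence length can be overridden for a single call (a minimal sketch, assuming the "gpt2_base_en" preset):

# Override the layer's configured sequence_length for a single call.
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en"
)
# Pack this batch to length 128 instead of the configured default.
x, y, sample_weight = preprocessor(
    ["Taco tuesday", "Fish taco please!"],
    sequence_length=128,
)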

Examples

# Load the preprocessor from a preset.
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en"
)

# Tokenize and pack a single sentence.
sentence = tf.constant("League of legends")
preprocessor(sentence)
# Same output.
preprocessor("League of legends")

# Tokenize a batch of sentences.
sentences = tf.constant(["Taco tuesday", "Fish taco please!"])
preprocessor(sentences)
# Same output.
preprocessor(["Taco tuesday", "Fish taco please!"])

# Map a dataset to preprocess labeled sentences.
features = tf.constant(
    [
        "Avatar 2 is amazing!",
        "Well, I am not sure.",
    ]
)
labels = tf.constant([1, 0])
ds = tf.data.Dataset.from_tensor_slices((features, labels))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map a dataset to preprocess unlabeled sentences.
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)


from_preset method

GPT2CausalLMPreprocessor.from_preset(preset, **kwargs)

Instantiate GPT2CausalLMPreprocessor from a preset architecture.

Arguments

  • preset: string. Must be one of "gpt2_base_en", "gpt2_medium_en", "gpt2_large_en", "gpt2_extra_large_en", "gpt2_base_en_cnn_dailymail".

Examples

# Load a preprocessor layer from a preset.
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en",
)
Preset name                  Parameters   Description
gpt2_base_en                 124.44M      12-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_medium_en               354.82M      24-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_large_en                774.03M      36-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_extra_large_en          1.56B        48-layer GPT-2 model where case is maintained. Trained on WebText.
gpt2_base_en_cnn_dailymail   124.44M      12-layer GPT-2 model where case is maintained. Finetuned on the CNN/DailyMail summarization dataset.


generate_preprocess method

GPT2CausalLMPreprocessor.generate_preprocess(x, sequence_length=None)

Convert strings to integer token inputs for generation.

Similar to calling the layer for training, this method takes in strings or string tensors, tokenizes and packs the input, and computes a padding mask that marks which positions contain real input rather than padding.

Unlike calling the layer for training, this method does not compute labels and will never append a tokenizer.end_token_id to the end of the sequence (as generation is expected to continue at the end of the input prompt).
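
For example (a minimal sketch, assuming the "gpt2_base_en" preset):

# Precompute generation inputs outside of generate().
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en"
)
x = preprocessor.generate_preprocess(
    ["the quick brown fox"], sequence_length=16
)
# x is a dict with dense "token_ids" and "padding_mask" entries; positions
# after the prompt are padded and masked out.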



generate_postprocess method

GPT2CausalLMPreprocessor.generate_postprocess(x)

Convert integer token output to strings for generation.

This method reverses generate_preprocess(), by first removing all padding and start/end tokens, and then converting the integer sequence back to a string.
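
For example, round-tripping a prompt through both generation methods (a minimal sketch, assuming the "gpt2_base_en" preset):

# Reverse generate_preprocess() to recover the prompt text.
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en"
)
x = preprocessor.generate_preprocess(["the quick brown fox"])
preprocessor.generate_postprocess(x)
# Recovers the prompt text, with padding and start/end tokens removed.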


tokenizer property

keras_nlp.models.GPT2CausalLMPreprocessor.tokenizer

The tokenizer used to tokenize strings.
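
For example (a minimal sketch, assuming the "gpt2_base_en" preset):

# Use the underlying tokenizer directly.
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en"
)
token_ids = preprocessor.tokenizer("The quick brown fox")
# Convert the token ids back to a string.
preprocessor.tokenizer.detokenize(token_ids)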