OPTCausalLMPreprocessor layer

OPTCausalLMPreprocessor class

keras_nlp.models.OPTCausalLMPreprocessor(
    tokenizer, sequence_length=2048, add_start_token=True, add_end_token=True, **kwargs
)

OPT Causal LM preprocessor.

This preprocessing layer is meant to be used with keras_nlp.models.OPTCausalLM. By default, it takes in batches of strings and returns outputs in an (x, y, sample_weight) format, where the y label is the next token id in the x sequence. For use with generation, pass return_labels=False, in which case the output will simply be the encoded string features.
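As a rough sketch of that (x, y, sample_weight) contract (toy token ids, not values produced by the real tokenizer): the layer packs the sequence to sequence_length + 1 tokens, then slices it so that y is the "token_ids" feature shifted left by one, with sample_weight mirroring the padding mask.

```python
import numpy as np

# Toy illustration of the (x, y, sample_weight) contract. The ids below
# are hypothetical; the real layer produces them from its tokenizer.
packed_ids = np.array([2, 133, 48, 9, 2, 0])   # start, ..., end, pad
packed_mask = np.array([1, 1, 1, 1, 1, 0])

x = {"token_ids": packed_ids[:-1], "padding_mask": packed_mask[:-1]}
y = packed_ids[1:]               # label = next token id in the sequence
sample_weight = packed_mask[1:]  # padded positions contribute no loss
```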

Arguments

  • tokenizer: A keras_nlp.models.OPTTokenizer instance.
  • sequence_length: The length of the packed inputs.
  • add_start_token: If True, the preprocessor will prepend the tokenizer start token to each input sequence.
  • add_end_token: If True, the preprocessor will append the tokenizer end token to each input sequence.

Call arguments

  • x: A string, tf.Tensor, or list of Python strings.
  • y: Label data. Should always be None as the layer generates labels.
  • sample_weight: Label weights. Should always be None as the layer generates label weights.
  • sequence_length: Pass to override the configured sequence_length of the layer.
  • add_start_token: Pass to override the configured value of add_start_token on the layer.
  • add_end_token: Pass to override the configured value of add_end_token on the layer.
  • return_labels: If True, the output "token_ids" will be offset by one and returned as labels. If False, only the features will be returned.
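To make the interplay of these call arguments concrete, here is a toy stand-in for the packing step (plain Python; the special-token and pad ids are assumptions for illustration, not the real layer's behavior). It shows how add_start_token, add_end_token, and sequence_length shape the output:

```python
def pack_sketch(ids, sequence_length, add_start_token=True,
                add_end_token=True, start_id=2, end_id=2, pad_id=1):
    # Toy stand-in for the real packer (hypothetical special-token ids).
    # Packs to sequence_length + 1 tokens so labels can be offset by one.
    if add_start_token:
        ids = [start_id] + ids
    if add_end_token:
        ids = ids + [end_id]
    ids = ids[: sequence_length + 1]        # truncate if too long
    mask = [1] * len(ids)
    pad = sequence_length + 1 - len(ids)    # pad if too short
    ids, mask = ids + [pad_id] * pad, mask + [0] * pad
    x = {"token_ids": ids[:-1], "padding_mask": mask[:-1]}
    return x, ids[1:], mask[1:]             # features, labels, weights

x, y, sw = pack_sketch([10, 11, 12], sequence_length=5)
```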

Examples

import keras_nlp
import tensorflow as tf

# Load the preprocessor from a preset.
preprocessor = keras_nlp.models.OPTCausalLMPreprocessor.from_preset(
    "opt_125m_en"
)

# Tokenize and pack a single sentence.
sentence = tf.constant("League of legends")
preprocessor(sentence)
# Same output.
preprocessor("League of legends")

# Tokenize a batch of sentences.
sentences = tf.constant(["Taco tuesday", "Fish taco please!"])
preprocessor(sentences)
# Same output.
preprocessor(["Taco tuesday", "Fish taco please!"])

# Map a dataset to preprocess a single sentence.
features = tf.constant(
    [
        "Avatar 2 is amazing!",
        "Well, I am not sure.",
    ]
)
labels = tf.constant([1, 0])
ds = tf.data.Dataset.from_tensor_slices((features, labels))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map a dataset to preprocess unlabeled sentences.
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

from_preset method

OPTCausalLMPreprocessor.from_preset()

Instantiate OPTCausalLMPreprocessor from a preset architecture.

Arguments

  • preset: string. Must be one of "opt_125m_en", "opt_1.3b_en", "opt_2.7b_en", "opt_6.7b_en".

Examples

# Load a preprocessor layer from a preset.
preprocessor = keras_nlp.models.OPTCausalLMPreprocessor.from_preset(
    "opt_125m_en",
)
Preset name   Parameters  Description
opt_125m_en   125.24M     12-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora.
opt_1.3b_en   1.32B       24-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora.
opt_2.7b_en   2.70B       32-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora.
opt_6.7b_en   6.70B       32-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora.

tokenizer property

keras_nlp.models.OPTCausalLMPreprocessor.tokenizer

The tokenizer used to tokenize strings.