LlamaPreprocessor layer

LlamaPreprocessor class

keras_nlp.models.LlamaPreprocessor(
    tokenizer, sequence_length=1024, add_start_token=True, add_end_token=False, **kwargs
)

A Llama preprocessing layer which tokenizes and packs inputs.

This preprocessing layer will do three things:

  1. Tokenize any number of input segments using the tokenizer.
  2. Pack the inputs together using a keras_nlp.layers.StartEndPacker with the appropriate tokens.
  3. Construct a dictionary with keys "token_ids" and "padding_mask" that can be passed directly to keras_nlp.models.LlamaBackbone (see the sketch just below).
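
A minimal sketch of what the constructed dictionary looks like, assuming the llama2_7b_en preset from the table further below has been downloaded:

import keras_nlp

preprocessor = keras_nlp.models.LlamaPreprocessor.from_preset("llama2_7b_en")
features = preprocessor("The quick brown fox jumped.")
# features is a dict with two entries, each of length sequence_length (1024 by default):
# features["token_ids"]    -> integer token ids padded out to the full length
# features["padding_mask"] -> True for real tokens, False for padding positions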

This layer can be used directly with tf.data.Dataset.map to preprocess string data in the (x, y, sample_weight) format used by keras.Model.fit.

Arguments

  • tokenizer: A keras_nlp.models.LlamaTokenizer instance.
  • sequence_length: The length of the packed inputs.
  • add_start_token: If True, the preprocessor will prepend the tokenizer start token to each input sequence. Defaults to True.
  • add_end_token: If True, the preprocessor will append the tokenizer end token to each input sequence. Defaults to False.
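
A short sketch of constructing the layer directly with non-default options; the tokenizer preset name is taken from the table further below:

import keras_nlp

tokenizer = keras_nlp.models.LlamaTokenizer.from_preset("llama2_7b_en")
preprocessor = keras_nlp.models.LlamaPreprocessor(
    tokenizer=tokenizer,
    sequence_length=512,  # pack or truncate to 512 tokens instead of 1024
    add_end_token=True,   # append the end token in addition to the start token
)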

Call arguments

  • x: A tensor of single string sequences, or a tuple of multiple tensor sequences to be packed together. Inputs may be batched or unbatched. For single sequences, raw python inputs will be converted to tensors. For multiple sequences, pass tensors directly.
  • y: Any label data. Will be passed through unaltered.
  • sample_weight: Any label weight data. Will be passed through unaltered.
  • sequence_length: Pass to override the configured sequence_length of the layer.
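
For example, a one-off override of the packed length at call time (a sketch, reusing the preprocessor from above):

# Pack this call to 64 tokens instead of the configured sequence_length.
features = preprocessor("The quick brown fox jumped.", sequence_length=64)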

Examples

Directly calling the layer on data.

import tensorflow as tf
import keras_nlp

preprocessor = keras_nlp.models.LlamaPreprocessor.from_preset(
    "llama2_7b_en"
)

# Tokenize and pack a single sentence.
preprocessor("The quick brown fox jumped.")

# Tokenize and pack a batch of single sentences.
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])

# Preprocess a batch of sentence pairs.
# When handling multiple sequences, always convert to tensors first!
first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
preprocessor((first, second))

Mapping with tf.data.Dataset.

preprocessor = keras_nlp.models.LlamaPreprocessor.from_preset(
    "llama2_7b_en"
)
first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
label = tf.constant([1, 1])

# Map labeled single sentences.
ds = tf.data.Dataset.from_tensor_slices((first, label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map unlabeled single sentences.
ds = tf.data.Dataset.from_tensor_slices(first)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map labeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices(((first, second), label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map unlabeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices((first, second))

# Watch out for tf.data's default unpacking of tuples here!
# Best to invoke the `preprocessor` directly in this case.
ds = ds.map(
    lambda first, second: preprocessor(x=(first, second)),
    num_parallel_calls=tf.data.AUTOTUNE,
)

from_preset method

LlamaPreprocessor.from_preset(preset, **kwargs)

Instantiate a keras_nlp.models.Preprocessor from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:

  1. a built-in preset identifier like 'bert_base_en'
  2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
  3. a Hugging Face handle like 'hf://user/bert_base_en'
  4. a path to a local preset directory like './bert_base_en'
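
All four forms go through the same single string argument; a sketch with an illustrative local directory path:

# Loading from a hypothetical local preset directory.
preprocessor = keras_nlp.models.LlamaPreprocessor.from_preset("./llama2_7b_en")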

For any Preprocessor subclass, you can run cls.presets.keys() to list all built-in presets available on the class.
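
For example, a quick sketch of listing this class's built-in presets:

print(keras_nlp.models.LlamaPreprocessor.presets.keys())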

As there are usually multiple preprocessing classes for a given model, this method should be called on a specific subclass like keras_nlp.models.BertPreprocessor.from_preset().

Arguments

  • preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.

Examples

# Load a preprocessor for Gemma generation.
preprocessor = keras_nlp.models.GemmaCausalLMPreprocessor.from_preset(
    "gemma_2b_en",
)

# Load a preprocessor for BERT classification.
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
    "bert_base_en",
)

Preset name             Parameters   Description
llama2_7b_en            6.74B        LLaMA 2 7B Base model
llama2_instruct_7b_en   6.74B        LLaMA 2 7B Chat model
llama3_8b_en            8.03B        LLaMA 3 8B Base model
llama3_instruct_8b_en   8.03B        LLaMA 3 8B Instruct model

tokenizer property

keras_nlp.models.LlamaPreprocessor.tokenizer

The tokenizer used to tokenize strings.
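
A short sketch of reading the property, assuming a preprocessor loaded from a preset as above:

# Use the underlying LlamaTokenizer on its own, without packing.
tokenizer = preprocessor.tokenizer
token_ids = tokenizer("The quick brown fox jumped.")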