
DistilBertPreprocessor layer


DistilBertPreprocessor class

keras_nlp.models.DistilBertPreprocessor(
    tokenizer, sequence_length=512, truncate="round_robin", **kwargs
)

A DistilBERT preprocessing layer which tokenizes and packs inputs.

This preprocessing layer will do three things:

  1. Tokenize any number of input segments using the tokenizer.
  2. Pack the inputs together using a keras_nlp.layers.MultiSegmentPacker with the appropriate "[CLS]", "[SEP]" and "[PAD]" tokens.
  3. Construct a dictionary with keys "token_ids" and "padding_mask" that can be passed directly to a DistilBERT model (see the sketch after this list).
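
For illustration, here is a minimal sketch of that output structure (it assumes the "distil_bert_base_en_uncased" preset is available; the keys shown in the comment are the documented ones, not a captured run):

import keras_nlp

preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)
features = preprocessor("The quick brown fox jumped.")
# The result is a dict of dense tensors keyed by "token_ids" and "padding_mask".
print(sorted(features.keys()))  # ['padding_mask', 'token_ids']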

This layer can be used directly with tf.data.Dataset.map to preprocess string data in the (x, y, sample_weight) format used by keras.Model.fit.

Arguments

  • tokenizer: A keras_nlp.models.DistilBertTokenizer instance.
  • sequence_length: The length of the packed inputs.
  • truncate: string. The algorithm to truncate a list of batched segments to fit within sequence_length. The value can be either "round_robin" or "waterfall":
      • "round_robin": Available space is assigned one token at a time in a round-robin fashion to the inputs that still need some, until the limit is reached.
      • "waterfall": The allocation of the budget is done using a "waterfall" algorithm that allocates quota in a left-to-right manner and fills up the buckets until we run out of budget. It supports an arbitrary number of segments.
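
A short sketch comparing the two truncation strategies on a sentence pair (this assumes from_preset forwards constructor overrides such as sequence_length and truncate; the exact token splits depend on the preset vocabulary):

import keras_nlp
import tensorflow as tf

round_robin_preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
    sequence_length=8,
    truncate="round_robin",
)
waterfall_preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
    sequence_length=8,
    truncate="waterfall",
)
first = tf.constant("The quick brown fox jumped over the lazy dog.")
second = tf.constant("Call me Ishmael.")
# "round_robin" shares the leftover budget between the two segments,
# while "waterfall" fills the first segment before giving room to the second.
print(round_robin_preprocessor((first, second))["token_ids"])
print(waterfall_preprocessor((first, second))["token_ids"])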

Call arguments

  • x: A tensor of single string sequences, or a tuple of multiple tensor sequences to be packed together. Inputs may be batched or unbatched. For single sequences, raw python inputs will be converted to tensors. For multiple sequences, pass tensors directly.
  • y: Any label data. Will be passed through unaltered.
  • sample_weight: Any label weight data. Will be passed through unaltered.
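
As a sketch of the pass-through behavior (assuming the layer packs its outputs into an (x, y, sample_weight) tuple when labels are supplied, as keras_nlp preprocessors generally do):

import keras_nlp

preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)
# The label and sample weight are returned unchanged alongside the features.
x, y, sample_weight = preprocessor(
    x="The quick brown fox jumped.", y=1, sample_weight=0.5
)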

Examples

Directly calling the layer on data.

import keras_nlp

preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])

# Custom vocabulary.
vocab = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]
vocab += ["The", "quick", "brown", "fox", "jumped", "."]
tokenizer = keras_nlp.models.DistilBertTokenizer(vocabulary=vocab)
preprocessor = keras_nlp.models.DistilBertPreprocessor(tokenizer)
preprocessor("The quick brown fox jumped.")

Mapping with tf.data.Dataset.

import tensorflow as tf

preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)

first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
label = tf.constant([1, 1])
# Map labeled single sentences.
ds = tf.data.Dataset.from_tensor_slices((first, label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)


# Map unlabeled single sentences.
ds = tf.data.Dataset.from_tensor_slices(first)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map labeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices(((first, second), label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices((first, second))

# Watch out for tf.data's default unpacking of tuples here!
# Best to invoke the `preprocessor` directly in this case.
ds = ds.map(
    lambda first, second: preprocessor(x=(first, second)),
    num_parallel_calls=tf.data.AUTOTUNE,
)


from_preset method

DistilBertPreprocessor.from_preset()

Instantiate a DistilBertPreprocessor from a preset architecture.

Arguments

  • preset: string. Must be one of "distil_bert_base_en_uncased", "distil_bert_base_en", "distil_bert_base_multi".

Examples

# Load a preprocessor layer from a preset.
preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
)
Preset name | Parameters | Description
distil_bert_base_en_uncased | 66.36M | 6-layer DistilBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus using BERT as the teacher model.
distil_bert_base_en | 65.19M | 6-layer DistilBERT model where case is maintained. Trained on English Wikipedia + BooksCorpus using BERT as the teacher model.
distil_bert_base_multi | 134.73M | 6-layer DistilBERT model where case is maintained. Trained on Wikipedias of 104 languages.

tokenizer property

keras_nlp.models.DistilBertPreprocessor.tokenizer

The tokenizer used to tokenize strings.
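
For example, the underlying tokenizer can be used on its own (a minimal sketch, assuming the same preset as above):

import keras_nlp

preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)
# Tokenize a raw string directly, without any packing or special tokens.
token_ids = preprocessor.tokenizer("The quick brown fox jumped.")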