DistilBertMaskedLMPreprocessor layer

[source]

DistilBertMaskedLMPreprocessor class

keras_nlp.models.DistilBertMaskedLMPreprocessor(
    tokenizer,
    sequence_length=512,
    truncate="round_robin",
    mask_selection_rate=0.15,
    mask_selection_length=96,
    mask_token_rate=0.8,
    random_token_rate=0.1,
    **kwargs
)

DistilBERT preprocessing for the masked language modeling task.

This preprocessing layer will prepare inputs for a masked language modeling task. It is primarily intended for use with the keras_nlp.models.DistilBertMaskedLM task model. Preprocessing will occur in multiple steps.

  1. Tokenize any number of input segments using the tokenizer.
  2. Pack the inputs together using a keras_nlp.layers.MultiSegmentPacker with the appropriate "[CLS]", "[SEP]" and "[PAD]" tokens.
  3. Randomly select non-special tokens to mask, controlled by mask_selection_rate.
  4. Construct a (x, y, sample_weight) tuple suitable for training with a keras_nlp.models.DistilBertMaskedLM task model (see the sketch after this list).
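For reference, the sketch below shows roughly what the (x, y, sample_weight) tuple looks like. The feature names follow the usual KerasNLP masked language modeling convention ("token_ids", "padding_mask", "mask_positions") and should be verified against your installed version.

import keras_nlp

preprocessor = keras_nlp.models.DistilBertMaskedLMPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)
x, y, sample_weight = preprocessor("The quick brown fox jumped.")

# x is a dict of model inputs, roughly:
#   x["token_ids"]      - ids with the selected positions replaced by the "[MASK]" id
#   x["padding_mask"]   - marks real tokens vs. "[PAD]" positions
#   x["mask_positions"] - indices of the tokens selected for masking
# y holds the original ids at the masked positions (the prediction targets), and
# sample_weight marks which of the mask_selection_length slots are actually used.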

Arguments

  • tokenizer: A keras_nlp.models.DistilBertTokenizer instance.
  • sequence_length: int. The length of the packed inputs.
  • truncate: string. The algorithm to truncate a list of batched segments to fit within sequence_length. The value can be either "round_robin" or "waterfall":
    - "round_robin": Available space is assigned one token at a time in a round-robin fashion to the inputs that still need some, until the limit is reached.
    - "waterfall": The allocation of the budget is done using a "waterfall" algorithm that allocates quota in a left-to-right manner and fills up the buckets until we run out of budget. It supports an arbitrary number of segments.
  • mask_selection_rate: float. The probability an input token will be dynamically masked.
  • mask_selection_length: int. The maximum number of masked tokens in a given sample.
  • mask_token_rate: float. The probability that a selected token will be replaced with the mask token.
  • random_token_rate: float. The probability that a selected token will be replaced with a random token from the vocabulary. A selected token will be left as is with probability 1 - mask_token_rate - random_token_rate. A construction sketch using these arguments follows this list.
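The snippet below is a minimal sketch of constructing the layer directly with custom masking arguments. The vocabulary is illustrative only (not a real DistilBERT vocabulary) and exists just to make the example self-contained.

import keras_nlp

# Illustrative word-piece vocabulary including the required special tokens.
vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]
vocab += ["the", "quick", "brown", "fox", "jumped", "."]
tokenizer = keras_nlp.models.DistilBertTokenizer(vocabulary=vocab)

preprocessor = keras_nlp.models.DistilBertMaskedLMPreprocessor(
    tokenizer=tokenizer,
    sequence_length=128,
    mask_selection_rate=0.15,   # ~15% of non-special tokens are candidates for masking
    mask_selection_length=32,   # at most 32 masked tokens per sample
    mask_token_rate=0.8,        # 80% of selected tokens become "[MASK]"
    random_token_rate=0.1,      # 10% become a random token, 10% are left unchanged
)
x, y, sample_weight = preprocessor("The quick brown fox jumped.")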

Call arguments

  • x: A tensor of single string sequences, or a tuple of multiple tensor sequences to be packed together. Inputs may be batched or unbatched. For single sequences, raw python inputs will be converted to tensors. For multiple sequences, pass tensors directly.
  • y: Label data. Should always be None as the layer generates labels.
  • sample_weight: Label weights. Should always be None as the layer generates label weights.

Examples

Directly calling the layer on data.

import keras_nlp
import tensorflow as tf

preprocessor = keras_nlp.models.DistilBertMaskedLMPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)

# Tokenize and mask a single sentence.
preprocessor("The quick brown fox jumped.")

# Tokenize and mask a batch of single sentences.
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])

# Tokenize and mask sentence pairs.
# In this case, always convert input to tensors before calling the layer.
first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
preprocessor((first, second))

Mapping with tf.data.Dataset.

preprocessor = keras_nlp.models.DistilBertMaskedLMPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)

first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])

# Map single sentences.
ds = tf.data.Dataset.from_tensor_slices(first)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)

# Map sentence pairs.
ds = tf.data.Dataset.from_tensor_slices((first, second))
# Watch out for tf.data's default unpacking of tuples here!
# Best to invoke the `preprocessor` directly in this case.
ds = ds.map(
    lambda first, second: preprocessor(x=(first, second)),
    num_parallel_calls=tf.data.AUTOTUNE,
)
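Once mapped, the dataset can be fed to a masked LM task model. The following is a sketch, assuming the matching keras_nlp.models.DistilBertMaskedLM preset; because the data is already preprocessed by the mapping above, the task model's built-in preprocessing is disabled.

masked_lm = keras_nlp.models.DistilBertMaskedLM.from_preset(
    "distil_bert_base_en_uncased",
    preprocessor=None,  # inputs are already preprocessed by the mapped dataset
)
masked_lm.fit(ds.batch(2))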

[source]

from_preset method

DistilBertMaskedLMPreprocessor.from_preset(preset, **kwargs)

Instantiate DistilBertMaskedLMPreprocessor from preset architecture.

Arguments

  • preset: string. Must be one of "distil_bert_base_en_uncased", "distil_bert_base_en", "distil_bert_base_multi".

Examples

# Load a preprocessor layer from a preset.
preprocessor = keras_nlp.models.DistilBertMaskedLMPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
)
Preset name | Parameters | Description
distil_bert_base_en_uncased | 66.36M | 6-layer DistilBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus using BERT as the teacher model.
distil_bert_base_en | 65.19M | 6-layer DistilBERT model where case is maintained. Trained on English Wikipedia + BooksCorpus using BERT as the teacher model.
distil_bert_base_multi | 134.73M | 6-layer DistilBERT model where case is maintained. Trained on Wikipedias of 104 languages.
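Constructor arguments can generally be overridden when loading from a preset. The example below is a sketch, assuming keyword arguments are forwarded to the layer's constructor, that shortens the packed sequence length.

preprocessor = keras_nlp.models.DistilBertMaskedLMPreprocessor.from_preset(
    "distil_bert_base_en_uncased",
    sequence_length=128,
)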

tokenizer property

keras_nlp.models.DistilBertMaskedLMPreprocessor.tokenizer

The tokenizer used to tokenize strings.
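For example, the underlying tokenizer can be pulled off the layer and used on its own (a short sketch):

preprocessor = keras_nlp.models.DistilBertMaskedLMPreprocessor.from_preset(
    "distil_bert_base_en_uncased"
)
# Tokenize a string without any packing or masking.
token_ids = preprocessor.tokenizer("The quick brown fox jumped.")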