BertBackbone model

BertBackbone class

keras_nlp.models.BertBackbone(
    vocabulary_size,
    num_layers,
    num_heads,
    hidden_dim,
    intermediate_dim,
    dropout=0.1,
    max_sequence_length=512,
    num_segments=2,
    dtype=None,
    **kwargs
)

A BERT encoder network.

This class implements a bi-directional Transformer-based encoder as described in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". It includes the embedding lookups and transformer layers, but not the masked language model or next sentence prediction heads.
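
Because the backbone stops at the encoder outputs, a task-specific head can be attached on top of it. The sketch below is illustrative rather than part of this API reference: it assumes the backbone call returns a dict with "sequence_output" and "pooled_output" keys, and uses the "bert_tiny_en_uncased" preset only as an example.

import keras
import keras_nlp

# Sketch: a two-class sentence classifier on the backbone's pooled output.
backbone = keras_nlp.models.BertBackbone.from_preset("bert_tiny_en_uncased")
# "pooled_output" is the [CLS] representation after the pooler layer.
pooled = backbone.output["pooled_output"]
outputs = keras.layers.Dense(2, activation="softmax")(pooled)
classifier = keras.Model(backbone.input, outputs)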

The default constructor gives a fully customizable, randomly initialized BERT encoder with any number of layers, heads, and embedding dimensions. To load preset architectures and weights, use the from_preset() constructor.

Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind.

Arguments

  • vocabulary_size: int. The size of the token vocabulary.
  • num_layers: int. The number of transformer layers.
  • num_heads: int. The number of attention heads for each transformer. The hidden size must be divisible by the number of attention heads.
  • hidden_dim: int. The size of the transformer encoding and pooler layers.
  • intermediate_dim: int. The output dimension of the first Dense layer in a two-layer feedforward network for each transformer.
  • dropout: float. Dropout probability for the Transformer encoder.
  • max_sequence_length: int. The maximum sequence length that this encoder can consume. If None, max_sequence_length is inferred from the input sequence length. This determines the shape of the position embeddings.
  • num_segments: int. The number of types that the 'segment_ids' input can take.
  • dtype: string or keras.mixed_precision.DTypePolicy. The dtype to use for model computations and weights. Note that some computations, such as softmax and layer normalization, will always be done at float32 precision regardless of dtype.

Examples

import numpy as np
import keras_nlp

input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]]),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}

# Pretrained BERT encoder.
model = keras_nlp.models.BertBackbone.from_preset("bert_base_en_uncased")
model(input_data)

# Randomly initialized BERT encoder with a custom config.
model = keras_nlp.models.BertBackbone(
    vocabulary_size=30552,
    num_layers=4,
    num_heads=4,
    hidden_dim=256,
    intermediate_dim=512,
    max_sequence_length=128,
)
model(input_data)
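
The dtype argument described above can also be passed when loading a preset, since extra keyword arguments are forwarded to the constructor. A minimal sketch, assuming your backend supports the "bfloat16" dtype string:

# Pretrained BERT encoder running reduced-precision compute; softmax and
# layer normalization still run in float32, per the dtype note above.
model = keras_nlp.models.BertBackbone.from_preset(
    "bert_base_en_uncased",
    dtype="bfloat16",
)
model(input_data)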

from_preset method

BertBackbone.from_preset(preset, load_weights=True, **kwargs)

Instantiate a BertBackbone model from a preset architecture and weights.

Arguments

  • preset: string. Must be one of "bert_tiny_en_uncased", "bert_small_en_uncased", "bert_medium_en_uncased", "bert_base_en_uncased", "bert_base_en", "bert_base_zh", "bert_base_multi", "bert_large_en_uncased", "bert_large_en".
  • load_weights: bool. Whether to load pre-trained weights into the model. Defaults to True.

Examples

# Load architecture and weights from preset
model = keras_nlp.models.BertBackbone.from_preset(
    "bert_tiny_en_uncased"
)

# Load randomly initialized model from preset architecture
model = keras_nlp.models.BertBackbone.from_preset(
    "bert_tiny_en_uncased",
    load_weights=False
)

Preset name               Parameters   Description
bert_tiny_en_uncased      4.39M        2-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus.
bert_small_en_uncased     28.76M       4-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus.
bert_medium_en_uncased    41.37M       8-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus.
bert_base_en_uncased      109.48M      12-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus.
bert_base_en              108.31M      12-layer BERT model where case is maintained. Trained on English Wikipedia + BooksCorpus.
bert_base_zh              102.27M      12-layer BERT model. Trained on Chinese Wikipedia.
bert_base_multi           177.85M      12-layer BERT model where case is maintained. Trained on Wikipedias of 104 languages.
bert_large_en_uncased     335.14M      24-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus.
bert_large_en             333.58M      24-layer BERT model where case is maintained. Trained on English Wikipedia + BooksCorpus.

token_embedding property

keras_nlp.models.BertBackbone.token_embedding

A keras.layers.Embedding instance for embedding token ids.

This layer maps integer token ids to dense vectors of size hidden_dim.
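
For example, the layer can be called directly to look up embeddings for a batch of token ids. A minimal sketch (the preset name is illustrative):

import numpy as np
import keras_nlp

model = keras_nlp.models.BertBackbone.from_preset("bert_tiny_en_uncased")
token_ids = np.ones(shape=(1, 12), dtype="int32")
# Output shape: (1, 12, hidden_dim), i.e. one vector per input token.
embeddings = model.token_embedding(token_ids)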