Keras 3 API documentation / KerasNLP / Models / FNet / FNetBackbone model

FNetBackbone model

[source]

FNetBackbone class

keras_nlp.models.FNetBackbone(
    vocabulary_size,
    num_layers,
    hidden_dim,
    intermediate_dim,
    dropout=0.1,
    max_sequence_length=512,
    num_segments=4,
    dtype=None,
    **kwargs
)

An FNet encoder network.

This class implements a bi-directional Fourier Transform-based encoder as described in "FNet: Mixing Tokens with Fourier Transforms". It includes the embedding lookups and keras_nlp.layers.FNetEncoder layers, but not the masked language model or next sentence prediction heads.

The default constructor gives a fully customizable, randomly initialized FNet encoder with any number of layers and embedding dimensions. To load preset architectures and weights, use the from_preset() constructor.

Note: unlike other models, FNet does not take in a "padding_mask" input; the "<pad>" token is handled equivalently to all other tokens in the input sequence.

Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind.

Arguments

  • vocabulary_size: int. The size of the token vocabulary.
  • num_layers: int. The number of FNet layers.
  • hidden_dim: int. The size of the FNet encoding and pooler layers.
  • intermediate_dim: int. The output dimension of the first Dense layer in a two-layer feedforward network for each FNet layer.
  • dropout: float. Dropout probability for the embeddings and FNet encoder.
  • max_sequence_length: int. The maximum sequence length that this encoder can consume. If None, the maximum sequence length is inferred from the input sequence length. This determines the variable shape for positional embeddings.
  • num_segments: int. The number of types that the "segment_ids" input can take.
  • dtype: string or keras.mixed_precision.DTypePolicy. The dtype to use for model computations and weights. Note that some computations, such as softmax and layer normalization, will always be done at float32 precision regardless of dtype.
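Each FNet layer replaces self-attention with a parameter-free Fourier transform over the token and hidden dimensions, as described in the FNet paper. The following is a hedged sketch of that token-mixing step using NumPy; it is an illustration of the idea, not the keras_nlp implementation (the shapes and the `fourier_mix` helper name are illustrative).

```python
import numpy as np

def fourier_mix(x):
    """Sketch of FNet's mixing step: a 2D DFT over the sequence and
    hidden axes, keeping only the real part (per the FNet paper)."""
    # x has shape (batch, sequence_length, hidden_dim).
    return np.fft.fft2(x, axes=(-2, -1)).real

batch, seq_len, hidden_dim = 1, 12, 256
x = np.random.rand(batch, seq_len, hidden_dim).astype("float32")
mixed = fourier_mix(x)
# The mixing step preserves the input shape, so it can slot in where
# a self-attention block would otherwise sit.
assert mixed.shape == (batch, seq_len, hidden_dim)
```

Because the mixing step has no learned parameters, every token (including "<pad>") participates in it identically, which is why the backbone takes no "padding_mask" input.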

Examples

import numpy as np
import keras_nlp

input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]]),
}

# Pretrained FNet encoder.
model = keras_nlp.models.FNetBackbone.from_preset("f_net_base_en")
model(input_data)

# Randomly initialized FNet encoder with a custom config.
model = keras_nlp.models.FNetBackbone(
    vocabulary_size=32000,
    num_layers=4,
    hidden_dim=256,
    intermediate_dim=512,
    max_sequence_length=128,
)
model(input_data)

[source]

from_preset method

FNetBackbone.from_preset()

Instantiate an FNetBackbone model from a preset architecture and weights.

Arguments

  • preset: string. Must be one of "f_net_base_en", "f_net_large_en".
  • load_weights: Whether to load pre-trained weights into the model. Defaults to True.

Examples

# Load architecture and weights from preset
model = keras_nlp.models.FNetBackbone.from_preset(
    "f_net_base_en"
)

# Load randomly initialized model from preset architecture
model = keras_nlp.models.FNetBackbone.from_preset(
    "f_net_base_en",
    load_weights=False
)
Preset name      Parameters   Description
f_net_base_en    82.86M       12-layer FNet model where case is maintained. Trained on the C4 dataset.
f_net_large_en   236.95M     24-layer FNet model where case is maintained. Trained on the C4 dataset.

token_embedding property

keras_nlp.models.FNetBackbone.token_embedding

A keras.layers.Embedding instance for embedding token ids.

This layer embeds integer token ids to the hidden dim of the model.
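An embedding layer like this is, in essence, a learned lookup table indexed by token id. The following hedged NumPy sketch shows the lookup semantics; the table values, vocabulary size, and hidden dimension here are illustrative, not the actual pretrained weights.

```python
import numpy as np

# Illustrative sizes matching the custom-config example above.
vocabulary_size, hidden_dim = 32000, 256

# Stand-in for the embedding matrix; in the real model these weights
# are learned during training.
table = np.random.rand(vocabulary_size, hidden_dim).astype("float32")

# Embedding integer token ids is a row lookup into the table.
token_ids = np.array([[2, 15, 7, 0]])   # shape (batch, sequence_length)
embedded = table[token_ids]             # shape (batch, sequence_length, hidden_dim)
assert embedded.shape == (1, 4, hidden_dim)
```

The real `token_embedding` property returns the model's `keras.layers.Embedding` instance, so its weights can also be reused, e.g. to tie an output projection to the input embeddings.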