SinePositionEncoding class

keras_hub.layers.SinePositionEncoding(max_wavelength=10000, **kwargs)
Sinusoidal positional encoding layer.
This layer calculates the position encoding as a mix of sine and cosine functions with geometrically increasing wavelengths. Defined and formalized in Attention is All You Need.
Takes as input an embedded token tensor. The input must have shape [batch_size, sequence_length, feature_size]. This layer will return a positional encoding of the same size as the embedded token tensor, which can be added directly to it.
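For intuition, here is a minimal NumPy sketch of the encoding produced for a single sequence (assuming the standard formulation from Attention is All You Need, with sine on even feature indices and cosine on odd ones; the actual layer additionally handles batching, dtypes, and the call arguments described below):

import numpy as np

def sine_position_encoding(sequence_length, hidden_dim, max_wavelength=10000):
    # hypothetical reference sketch, not the layer's actual implementation
    positions = np.arange(sequence_length)[:, None].astype("float32")
    dims = np.arange(hidden_dim)
    # each (2i, 2i + 1) pair of features shares one geometrically increasing wavelength
    angular_rates = 1.0 / np.power(max_wavelength, 2 * (dims // 2) / hidden_dim)
    angles = positions * angular_rates  # shape: (sequence_length, hidden_dim)
    encoding = np.zeros((sequence_length, hidden_dim), dtype="float32")
    encoding[:, 0::2] = np.sin(angles[:, 0::2])  # sine on even indices
    encoding[:, 1::2] = np.cos(angles[:, 1::2])  # cosine on odd indices
    return encoding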
Arguments
max_wavelength: The maximum angular wavelength of the sine/cosine curves, as described in Attention is All You Need. Defaults to 10000.
**kwargs: other keyword arguments passed to keras.layers.Layer, including name, trainable, dtype etc.

Call arguments

inputs: The tensor inputs to compute an embedding for, with shape (batch_size, sequence_length, hidden_dim).
start_index: An integer or integer tensor. The starting position to compute the encoding from. This is useful during cached decoding, where each position is predicted separately in a loop. Defaults to 0.
positions: Tensor of shape (sequence_length,) or (batch_size, sequence_length). Custom positions for the input sequence. If specified, this tensor will be used to compute the position embedding, and the start_index argument will be ignored. This is useful for cases with non-standard positions (see the second snippet under Example below).

Example
import keras
import keras_hub

# create a simple embedding layer with sinusoidal positional encoding
seq_len = 100
vocab_size = 1000
embedding_dim = 32
inputs = keras.Input((seq_len,), dtype="float32")
embedding = keras.layers.Embedding(
    input_dim=vocab_size, output_dim=embedding_dim
)(inputs)
positional_encoding = keras_hub.layers.SinePositionEncoding()(embedding)
outputs = embedding + positional_encoding
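Continuing from the snippet above, a hedged sketch of the positions call argument described earlier (the tensor values and shapes here are illustrative assumptions; when positions is passed, start_index is ignored):

# call the layer eagerly on a random embedded batch with custom positions
token_embeddings = keras.random.normal((2, 4, embedding_dim))  # (batch, seq_len, dim)
custom_positions = keras.ops.convert_to_tensor([[0, 10, 20, 30], [0, 1, 2, 3]])
encoded = keras_hub.layers.SinePositionEncoding()(
    token_embeddings, positions=custom_positions
)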
References

Vaswani et al., 2017. Attention Is All You Need.