ReversibleEmbedding
keras_hub.layers.ReversibleEmbedding(
    input_dim,
    output_dim,
    tie_weights=True,
    embeddings_initializer="uniform",
    embeddings_regularizer=None,
    embeddings_constraint=None,
    mask_zero=False,
    reverse_dtype=None,
    logit_soft_cap=None,
    **kwargs
)
An embedding layer which can project backwards to the input dim.
This layer is an extension of keras.layers.Embedding for language models. This
layer can be called "in reverse" with reverse=True, in which case the layer will
linearly project from output_dim back to input_dim.

By default, the reverse projection will use the transpose of the embeddings
weights to project to input_dim (weights are "tied"). If tie_weights=False, the
model will use a separate, trainable variable for the reverse projection.

This layer has no bias terms.
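To make the tying concrete, here is a minimal sketch (assuming the embeddings
weight attribute inherited from keras.layers.Embedding) showing that a reverse
call with tied weights matches a plain matmul against the transposed embedding
matrix:

import numpy as np
import keras
import keras_hub

embedding = keras_hub.layers.ReversibleEmbedding(input_dim=100, output_dim=32)
hidden_states = np.random.rand(4, 10, 32).astype("float32")

# Reverse call: project hidden states back to the vocabulary dimension.
logits = embedding(hidden_states, reverse=True)
# With tie_weights=True (the default), this should match a matmul with the
# transposed shared embedding matrix, up to floating-point precision.
manual = keras.ops.matmul(hidden_states, keras.ops.transpose(embedding.embeddings))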
Arguments

- input_dim: Integer. Maximum vocabulary size, i.e. maximum integer index + 1.
- output_dim: Integer. Dimension of the dense embedding.
- tie_weights: Boolean, whether or not the matrix for embedding and the matrix
  for the reverse projection should share the same weights.
- embeddings_initializer: Initializer for the embeddings matrix
  (see keras.initializers).
- embeddings_regularizer: Regularizer function applied to the embeddings matrix
  (see keras.regularizers).
- embeddings_constraint: Constraint function applied to the embeddings matrix
  (see keras.constraints).
- mask_zero: Boolean, whether or not the input value 0 is a special "padding"
  value that should be masked out.
- reverse_dtype: The dtype for the reverse projection computation. Defaults to
  the compute_dtype of the layer.
- logit_soft_cap: If logit_soft_cap is set and reverse=True, the output logits
  will be scaled by tanh(logits / logit_soft_cap) * logit_soft_cap. This narrows
  the range of output logits and can improve training (a short numeric sketch
  follows this list).
- **kwargs: other keyword arguments passed to keras.layers.Embedding, including
  name, trainable, dtype etc.
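As a hypothetical illustration of the soft-cap scaling above (plain NumPy, not
part of the layer's API):

import numpy as np

logit_soft_cap = 30.0
logits = np.array([-100.0, -10.0, 0.0, 10.0, 100.0])
# tanh squashes the scaled logits into (-1, 1); multiplying back by the cap
# bounds the outputs smoothly to (-30, 30) while leaving small values nearly
# unchanged.
capped = np.tanh(logits / logit_soft_cap) * logit_soft_cap
# capped ≈ [-29.9, -9.6, 0.0, 9.6, 29.9]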
Call arguments

- inputs: The tensor inputs to the layer.
- reverse: Boolean. If True, the layer will perform a linear projection from
  output_dim to input_dim instead of a normal embedding call. Defaults to
  False.

Example
import numpy as np
import keras_hub

batch_size = 16
vocab_size = 100
hidden_dim = 32
seq_length = 50
# Generate random inputs.
token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))
embedding = keras_hub.layers.ReversibleEmbedding(vocab_size, hidden_dim)
# Embed tokens to shape `(batch_size, seq_length, hidden_dim)`.
hidden_states = embedding(token_ids)
# Project hidden states to shape `(batch_size, seq_length, vocab_size)`.
logits = embedding(hidden_states, reverse=True)
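As a follow-up sketch (assuming a standard Keras 3 functional model; the Dense
layer is only a stand-in for a transformer stack), the same layer can serve as
both the input embedding and the tied output projection of a small language
model:

import keras
import keras_hub

vocab_size = 100
hidden_dim = 32

embedding = keras_hub.layers.ReversibleEmbedding(vocab_size, hidden_dim)

token_ids = keras.Input(shape=(None,), dtype="int32")
x = embedding(token_ids)                                  # (batch, seq, hidden_dim)
x = keras.layers.Dense(hidden_dim, activation="relu")(x)  # stand-in for a transformer stack
next_token_logits = embedding(x, reverse=True)            # (batch, seq, vocab_size)
model = keras.Model(token_ids, next_token_logits)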