```python
tf.keras.layers.Lambda(
    function, output_shape=None, mask=None, arguments=None, **kwargs
)
```
Wraps arbitrary expressions as a Layer object.

The Lambda layer exists so that arbitrary expressions can be used as a Layer when constructing Sequential and Functional API models.
Lambda layers are best suited for simple operations or quick experimentation. For more advanced use cases, follow the guide on subclassing tf.keras.layers.Layer.
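For instance, a minimal sketch of wrapping an expression in a Functional API model (the layer sizes here are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
# Wrap an arbitrary expression as a Layer.
halved = tf.keras.layers.Lambda(lambda x: x / 2.0)(inputs)
outputs = tf.keras.layers.Dense(1)(halved)
model = tf.keras.Model(inputs, outputs)
```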
WARNING: tf.keras.layers.Lambda layers have (de)serialization limitations!
The main reason to subclass tf.keras.layers.Layer instead of using a Lambda layer is saving and inspecting a Model. Lambda layers are saved by serializing the Python bytecode, which is fundamentally non-portable. They should only be loaded in the same environment where they were saved. Subclassed layers can be saved in a more portable way by overriding their get_config method. Models that rely on subclassed Layers are also often easier to visualize and reason about.
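As an illustration of the more portable pattern, here is a sketch of a subclassed layer that exposes its configuration through get_config (the class name and its factor argument are illustrative, not part of this API):

```python
import tensorflow as tf

class Scale(tf.keras.layers.Layer):
    """Multiplies its input by a fixed factor (illustrative example)."""

    def __init__(self, factor=2.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):
        # Returning the constructor arguments makes the layer
        # reconstructable without serializing Python bytecode.
        config = super().get_config()
        config.update({"factor": self.factor})
        return config
```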
Examples:

```python
# add a x -> x^2 layer
model.add(Lambda(lambda x: x ** 2))
```

```python
# add a layer that returns the concatenation
# of the positive part of the input and
# the opposite of the negative part

def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

model.add(Lambda(antirectifier))
```
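Both snippets assume Lambda, the backend module K, and a Sequential model are already in scope. A quick, self-contained way to try the idea, assuming TensorFlow 2.x eager execution, is to call a Lambda layer directly on a tensor:

```python
import tensorflow as tf

square = tf.keras.layers.Lambda(lambda x: x ** 2)
print(square(tf.constant([1.0, 2.0, 3.0])))  # tf.Tensor([1. 4. 9.], ...)
```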
Variables: While it is possible to use Variables with Lambda layers, this practice is discouraged as it can easily lead to bugs. For instance, consider the following layer:
```python
scale = tf.Variable(1.)
scale_layer = tf.keras.layers.Lambda(lambda x: x * scale)
```
Because scale_layer does not directly track the scale variable, it will not appear in scale_layer.trainable_weights and will therefore not be trained if scale_layer is used in a Model.
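One way to see the problem, assuming the snippet above has been run, is to inspect the layer's tracked weights:

```python
print(scale_layer.trainable_weights)  # -> []  (the scale variable is not tracked)
```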
A better pattern is to write a subclassed Layer:
```python
class ScaleLayer(tf.keras.layers.Layer):
  def __init__(self):
    super(ScaleLayer, self).__init__()
    self.scale = tf.Variable(1.)

  def call(self, inputs):
    return inputs * self.scale
```
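For contrast with the Lambda version, a quick check (assuming the class above and tf in scope) shows that the variable is tracked:

```python
layer = ScaleLayer()
layer(tf.constant([1.0, 2.0]))   # call the layer once
print(layer.trainable_weights)   # -> [<tf.Variable ... numpy=1.0>]
```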
In general, Lambda layers can be convenient for simple stateless computation, but anything more complex should use a subclassed Layer instead.
output_shape: Expected output shape from the function. Can be a tuple or a function. If a tuple, it only specifies the first dimension onward; the sample dimension is assumed either the same as the input: `output_shape = (input_shape[0], ) + output_shape`, or the input is `None` and the sample dimension is also `None`: `output_shape = (None, ) + output_shape`. If a function, it specifies the entire shape as a function of the input shape: `output_shape = f(input_shape)`.
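For instance, a small sketch of both forms (the shape values here are arbitrary and assume a known last dimension):

```python
import tensorflow as tf

# Tuple form: only the dimensions after the sample axis are given.
doubled = tf.keras.layers.Lambda(
    lambda x: tf.concat([x, x], axis=-1),
    output_shape=(8,),  # assumes inputs of shape (batch, 4)
)

# Function form: the entire output shape is computed from the input shape.
def concat_shape(input_shape):
    return (input_shape[0], input_shape[1] * 2)

doubled_fn = tf.keras.layers.Lambda(
    lambda x: tf.concat([x, x], axis=-1),
    output_shape=concat_shape,
)
```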
mask: Either None (indicating no masking), a callable with the same signature as the compute_mask layer method, or a tensor that will be returned as output mask regardless of what the input is.
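As a small illustration of the callable form (a sketch that simply propagates the incoming mask unchanged; the surrounding model is assumed to produce a mask):

```python
passthrough = tf.keras.layers.Lambda(
    lambda x: x * 2.0,
    mask=lambda inputs, mask: mask,  # same signature as compute_mask
)
```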
Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.
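For example, a minimal Sequential model with Lambda as the first layer (assuming tensorflow is imported as tf; the shape is illustrative):

```python
model = tf.keras.Sequential([
    tf.keras.layers.Lambda(lambda x: x ** 2, input_shape=(3,)),
    tf.keras.layers.Dense(1),
])
model.summary()
```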