```python
keras.layers.Layer(
    activity_regularizer=None,
    trainable=True,
    dtype=None,
    autocast=True,
    name=None,
    **kwargs
)
```
This is the class from which all layers inherit.
A layer is a callable object that takes as input one or more tensors and
that outputs one or more tensors. It involves computation, defined in the
call() method, and a state (weight variables). State can be created:

- in __init__(), for instance via self.add_weight();
- in the optional build() method, which is invoked by the first
__call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time.
Layers are recursively composable: If you assign a Layer instance as an
attribute of another Layer, the outer layer will start tracking the weights
created by the inner layer. Nested layers should be instantiated in the
__init__() method or the build() method.
Users will just instantiate a layer and then treat it as a callable.
The dtype argument can also be a keras.mixed_precision.DTypePolicy, which allows the computation and weight dtype to differ. Defaults to None. None means to use keras.mixed_precision.dtype_policy(), which is a float32 policy unless set to a different value (via keras.mixed_precision.set_dtype_policy()). If a keras.mixed_precision.DTypePolicy is provided, the compute dtype will be different than the variable dtype.
The input_spec attribute is an optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer.
We recommend that descendants of
Layer implement the following methods:
__init__(): Defines custom layer attributes, and creates layer weights that do not depend on input shapes, using
add_weight(), or other state.
build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using
add_weight(), or other state.
__call__() will automatically build the layer (if it has not been built yet) by calling build().
call(self, *args, **kwargs): Called in
__call__after making sure
build()has been called.
call() performs the logic of applying the layer to the input arguments. Two reserved keyword arguments you can optionally use in call() are: 1.
training (boolean, whether the call is in inference mode or training mode). 2.
mask (boolean tensor encoding masked timesteps in the input, used e.g. in RNN layers). A typical signature for this method is
call(self, inputs), and the user could optionally add
training and mask if the layer needs them.
get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in
__init__(), then override
from_config(self)as well. This method is used when saving the layer or a model that contains this layer.
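For instance, when the config keys mirror the __init__() arguments, the default from_config() can rebuild the layer from get_config() (ScaledUnits is a hypothetical layer name):

```python
from keras import layers

class ScaledUnits(layers.Layer):
    def __init__(self, units=8, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def get_config(self):
        # Keys match the __init__() arguments, so the inherited
        # from_config() can reconstruct the layer unchanged.
        config = super().get_config()
        config.update({"units": self.units})
        return config

layer = ScaledUnits(units=16)
clone = ScaledUnits.from_config(layer.get_config())
```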
Here's a basic example: a layer with two variables, w and b,
that returns y = w . x + b.
It shows how to implement build() and call().
Variables set as attributes of a layer are tracked as weights
of the layers (in layer.weights).
```python
class SimpleDense(Layer):

    def __init__(self, units=32):
        super().__init__()
        self.units = units

    # Create the state of the layer (weights)
    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
            name="kernel",
        )
        self.bias = self.add_weight(
            shape=(self.units,),
            initializer="zeros",
            trainable=True,
            name="bias",
        )

    # Defines the computation
    def call(self, inputs):
        return ops.matmul(inputs, self.kernel) + self.bias

# Instantiates the layer.
linear_layer = SimpleDense(4)

# This will also call `build(input_shape)` and create the weights.
y = linear_layer(ops.ones((2, 2)))
assert len(linear_layer.weights) == 2

# These weights are trainable, so they're listed in `trainable_weights`:
assert len(linear_layer.trainable_weights) == 2
```
Besides trainable weights, updated via backpropagation during training,
layers can also have non-trainable weights. These weights are meant to
be updated manually during
call(). Here's an example layer that computes
the running sum of its inputs:
```python
class ComputeSum(Layer):

    def __init__(self, input_dim):
        super(ComputeSum, self).__init__()
        # Create a non-trainable weight.
        self.total = self.add_weight(
            shape=(),
            initializer="zeros",
            trainable=False,
            name="total",
        )

    def call(self, inputs):
        self.total.assign(self.total + ops.sum(inputs))
        return self.total

my_sum = ComputeSum(2)
x = ops.ones((2, 2))
y = my_sum(x)

assert my_sum.weights == [my_sum.total]
assert my_sum.non_trainable_weights == [my_sum.total]
assert my_sum.trainable_weights == []
```
List of all weight variables of the layer.
Unlike layer.variables, this excludes metric state and random seeds.
List of all trainable weight variables of the layer.
These are the weights that get updated by the optimizer during training.
List of all non-trainable weight variables of the layer.
These are the weights that should not be updated by the optimizer during
training. Unlike layer.non_trainable_variables, this excludes metric
state and random seeds.
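To illustrate how these lists partition layer.weights, the built-in keras.layers.BatchNormalization keeps gamma and beta as trainable weights, and its moving statistics as non-trainable weights:

```python
from keras import layers

layer = layers.BatchNormalization()
layer.build((None, 4))

# gamma and beta are trainable; moving_mean and moving_variance are not.
n_trainable = len(layer.trainable_weights)
n_non_trainable = len(layer.non_trainable_weights)
n_total = len(layer.weights)
```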
```python
Layer.add_weight(
    shape=None,
    initializer=None,
    dtype=None,
    trainable=True,
    regularizer=None,
    constraint=None,
    name=None,
)
```
Add a weight variable to the layer.
- shape: Shape tuple for the variable. Must be fully-defined (no None entries). Defaults to () (scalar) if unspecified.
- initializer: Initializer object to use to populate the initial variable value, or string name of a built-in initializer (e.g. "random_normal"). If unspecified, defaults to "glorot_uniform" for floating-point variables and to "zeros" for all other types (e.g. int, bool).
- dtype: Dtype of the variable to create, e.g. "float32". If unspecified, defaults to the layer's variable dtype (which itself defaults to "float32").
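A small sketch of these defaults (MinimalLayer is a hypothetical name; only shape and name are passed, so initializer and dtype fall back to their defaults):

```python
from keras import layers

class MinimalLayer(layers.Layer):
    def build(self, input_shape):
        # initializer unspecified: "glorot_uniform" for float variables;
        # dtype unspecified: the layer's variable dtype ("float32" here).
        self.w = self.add_weight(shape=(3, 2), name="w")

layer = MinimalLayer()
layer.build(None)
w_shape = tuple(layer.w.shape)
w_dtype = str(layer.w.dtype)
```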
Settable boolean, whether this layer should be trainable or not.
Return the values of
layer.weights as a list of NumPy arrays.
Sets the values of
layer.weights from a list of NumPy arrays.
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
Can be called inside of the
call() method to add a scalar loss.
```python
class MyLayer(Layer):
    ...
    def call(self, x):
        self.add_loss(ops.sum(x))
        return x
```
List of scalar losses from
add_loss, regularizers and sublayers.