tf.keras.layers.RNN( cell, return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, time_major=False, **kwargs )
Base class for recurrent layers.
See the Keras RNN API guide for details about the usage of the RNN API.
Arguments:
- cell: An RNN cell instance or a list of RNN cell instances. An RNN cell is a class that has:
  - A call(input_at_t, states_at_t) method, returning (output_at_t, states_at_t_plus_1). The call method of the cell can also take the optional argument constants; see the section "Note on passing external constants" below.
  - A state_size attribute. This can be a single integer (single state), in which case it is the size of the recurrent state. It can also be a list/tuple of integers (one size per state). state_size can also be a TensorShape or a tuple/list of TensorShapes, to represent a high-dimension state.
  - An output_size attribute. This can be a single integer or a TensorShape, which represents the shape of the output. For backward-compatibility reasons, if this attribute is not available on the cell, the value will be inferred from the first element of state_size.
  - A get_initial_state(inputs=None, batch_size=None, dtype=None) method that creates a tensor meant to be fed to call() as the initial state, if the user didn't specify any initial state via other means. The returned initial state should have shape [batch_size, cell.state_size]. The cell might choose to create a tensor full of zeros, or of other values, depending on its implementation (see the sketch after this list). inputs is the input tensor to the RNN layer, whose shape should contain the batch size, and whose dtype is also used. Note that the shape might be None during graph construction. Either inputs or the pair of batch_size and dtype must be provided. batch_size is a scalar tensor that represents the batch size of the inputs. dtype is a tf.DType that represents the dtype of the inputs. For backward compatibility, if this method is not implemented by the cell, the RNN layer will create a zero-filled tensor of shape [batch_size, cell.state_size].

  In the case that cell is a list of RNN cell instances, the cells will be stacked on top of each other in the RNN, resulting in an efficient stacked RNN.
- return_sequences: Boolean (default False). Whether to return the last output in the output sequence, or the full sequence.
- return_state: Boolean (default False). Whether to return the last state in addition to the output.
- go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
- stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as the initial state for the sample of index i in the following batch.
- unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
- time_major: The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case they will be (batch, timesteps, ...). Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this layer accepts input and emits output in batch-major form.
- zero_output_for_mask: Boolean (default False). Whether the output should use zeros for the masked timesteps. Note that this field is only used when return_sequences is True and a mask is provided. It can be useful if you want to reuse the raw output sequence of the RNN without interference from the masked timesteps, e.g. when merging bidirectional RNNs.
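As a rough illustration of get_initial_state, the sketch below overrides it on the built-in SimpleRNNCell to start from a ones-filled state instead of the default zeros. This is a minimal sketch, not part of the API: the class name OnesInitialStateCell is hypothetical, and it assumes SimpleRNNCell's single integer state_size.

```python
import tensorflow as tf

# Hypothetical cell: identical to SimpleRNNCell, except the initial state
# is filled with ones rather than zeros.
class OnesInitialStateCell(tf.keras.layers.SimpleRNNCell):

  def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
    # SimpleRNNCell has a single integer state_size (its number of units).
    return [tf.ones([batch_size, self.state_size], dtype=dtype)]

layer = tf.keras.layers.RNN(OnesInitialStateCell(4))
y = layer(tf.random.normal([2, 3, 5]))  # output shape: (2, 4)
```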
Call arguments:
- mask: Binary tensor of shape [batch_size, timesteps] indicating whether a given timestep should be masked. An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored.
Input shape:
N-D tensor with shape [batch_size, timesteps, ...], or [timesteps, batch_size, ...] when time_major is True.
Output shape:
- If return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each with shape [batch_size, state_size], where state_size could be a high-dimension tensor shape.
- If return_sequences: N-D tensor with shape [batch_size, timesteps, output_size], where output_size could be a high-dimension tensor shape, or [timesteps, batch_size, output_size] when time_major is True.
- Else: N-D tensor with shape [batch_size, output_size], where output_size could be a high-dimension tensor shape.
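As a quick illustration of these output shapes, here is a sketch using the built-in LSTM layer, which follows the same conventions; the sizes (a batch of 8 sequences, 10 timesteps, 16 features) are arbitrary placeholders.

```python
import tensorflow as tf

x = tf.random.normal([8, 10, 16])  # batch-major input: (batch, timesteps, features)

# Default: only the last output, shape (batch_size, output_size).
print(tf.keras.layers.LSTM(4)(x).shape)                         # (8, 4)

# return_sequences=True: full sequence, shape (batch_size, timesteps, output_size).
print(tf.keras.layers.LSTM(4, return_sequences=True)(x).shape)  # (8, 10, 4)

# return_state=True: the output followed by the final state tensors.
output, state_h, state_c = tf.keras.layers.LSTM(4, return_state=True)(x)
```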
Masking:
This layer supports masking for input data with a variable number of timesteps. To introduce masks to your data, use a [tf.keras.layers.Embedding] layer with the mask_zero parameter set to True.
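For instance, a minimal sketch (the vocabulary size and layer widths are arbitrary placeholders):

```python
import tensorflow as tf

# Sequences of token ids, padded with 0; mask_zero=True marks index 0 as padding.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16, mask_zero=True),
    tf.keras.layers.LSTM(8),  # receives the mask; padded timesteps are skipped
])
```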
Note on using statefulness in RNNs: You can set RNN layers to be 'stateful', which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches.
To enable statefulness:
- Specify stateful=True in the layer constructor.
- Specify a fixed batch size for your model, by passing:
  - if sequential model: batch_input_shape=(...) to the first layer in your model;
  - else, for a functional model with 1 or more Input layers: batch_shape=(...) to all the first layers in your model. This is the expected shape of your inputs, including the batch size. It should be a tuple of integers, e.g. (32, 10, 100).
- Specify shuffle=False when calling fit().
To reset the states of your model, call
.reset_states() on either
a specific layer, or on your entire model.
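Putting these pieces together, a sketch of a stateful setup (the layer sizes and batch shape are arbitrary placeholders, and the fit call is shown only as a comment):

```python
import tensorflow as tf

# Fixed batch of 32 sequences, 10 timesteps each, 100 features per timestep.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, stateful=True, batch_input_shape=(32, 10, 100)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
# model.fit(x, y, batch_size=32, shuffle=False)  # keep sample order across batches
model.reset_states()  # clear carried-over states, e.g. between epochs
```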
Note on specifying the initial state of RNNs:
You can specify the initial state of RNN layers symbolically by
calling them with the keyword argument
initial_state. The value of
initial_state should be a tensor or list of tensors representing
the initial state of the RNN layer.
You can specify the initial state of RNN layers numerically by
calling reset_states with the keyword argument
states. The value of
states should be a numpy array or list of numpy arrays representing
the initial state of the RNN layer.
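A sketch of the symbolic form, assuming an LSTM layer (which carries two state tensors, h and c); the sizes are arbitrary placeholders:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(None, 16))
init_h = tf.keras.Input(shape=(4,))
init_c = tf.keras.Input(shape=(4,))
# Pass the initial state via the initial_state keyword argument.
outputs = tf.keras.layers.LSTM(4)(inputs, initial_state=[init_h, init_c])
model = tf.keras.Model([inputs, init_h, init_c], outputs)
```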
Note on passing external constants to RNNs:
You can pass "external" constants to the cell using the
keyword argument of
RNN.__call__ (as well as
RNN.call) method. This
requires that the
cell.call method accepts the same keyword argument
constants. Such constants can be used to condition the cell
transformation on additional static inputs (not changing over time),
a.k.a. an attention mechanism.
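A sketch of such a cell follows. It is hedged, not a reference implementation: the class name ConstantConditionedCell is hypothetical, and it assumes that when constants are passed, Keras builds the cell with a list of shapes, [inputs_shape, constants_shape].

```python
import tensorflow as tf
from tensorflow.keras import backend

# A hypothetical minimal cell whose transformation is conditioned on a
# static "constants" tensor at every timestep.
class ConstantConditionedCell(tf.keras.layers.Layer):

  def __init__(self, units, **kwargs):
    self.units = units
    self.state_size = units
    super(ConstantConditionedCell, self).__init__(**kwargs)

  def build(self, input_shape):
    # Assumption: with constants, build receives [inputs_shape, constants_shape].
    input_shape, constant_shape = input_shape
    self.kernel = self.add_weight(
        shape=(input_shape[-1], self.units), initializer='uniform', name='kernel')
    self.recurrent_kernel = self.add_weight(
        shape=(self.units, self.units), initializer='uniform',
        name='recurrent_kernel')
    self.constant_kernel = self.add_weight(
        shape=(constant_shape[-1], self.units), initializer='uniform',
        name='constant_kernel')
    self.built = True

  def call(self, inputs, states, constants):
    [prev_output] = states
    [constant] = constants  # the same static tensor at every timestep
    output = (backend.dot(inputs, self.kernel)
              + backend.dot(prev_output, self.recurrent_kernel)
              + backend.dot(constant, self.constant_kernel))
    return output, [output]

x = tf.keras.Input((None, 5))
c = tf.keras.Input((8,))
y = tf.keras.layers.RNN(ConstantConditionedCell(32))(x, constants=c)
```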
Examples:

```python
from tensorflow import keras
from tensorflow.keras import backend
from tensorflow.keras.layers import RNN

# First, let's define a RNN Cell, as a layer subclass.
class MinimalRNNCell(keras.layers.Layer):

  def __init__(self, units, **kwargs):
    self.units = units
    self.state_size = units
    super(MinimalRNNCell, self).__init__(**kwargs)

  def build(self, input_shape):
    self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                  initializer='uniform',
                                  name='kernel')
    self.recurrent_kernel = self.add_weight(
        shape=(self.units, self.units),
        initializer='uniform',
        name='recurrent_kernel')
    self.built = True

  def call(self, inputs, states):
    prev_output = states[0]  # states is a list with a single state tensor
    h = backend.dot(inputs, self.kernel)
    output = h + backend.dot(prev_output, self.recurrent_kernel)
    return output, [output]

# Let's use this cell in a RNN layer:
cell = MinimalRNNCell(32)
x = keras.Input((None, 5))
layer = RNN(cell)
y = layer(x)

# Here's how to use the cell to build a stacked RNN:
cells = [MinimalRNNCell(32), MinimalRNNCell(64)]
x = keras.Input((None, 5))
layer = RNN(cells)
y = layer(x)
```