SimpleRNN class

tf.keras.layers.SimpleRNN(
    units,
    activation="tanh",
    use_bias=True,
    kernel_initializer="glorot_uniform",
    recurrent_initializer="orthogonal",
    bias_initializer="zeros",
    kernel_regularizer=None,
    recurrent_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    recurrent_constraint=None,
    bias_constraint=None,
    dropout=0.0,
    recurrent_dropout=0.0,
    return_sequences=False,
    return_state=False,
    go_backwards=False,
    stateful=False,
    unroll=False,
    **kwargs
)
Fully-connected RNN where the output is to be fed back to input.
See the Keras RNN API guide for details about the usage of the RNN API.
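Conceptually, the layer applies the same fully connected cell at every timestep, feeding the previous output back in alongside the current input. The NumPy sketch below illustrates that recurrence; it is only an illustration, not the layer's actual implementation, and the variable names kernel, recurrent_kernel, and bias are chosen to mirror the weight names described under Arguments.

import numpy as np

# Illustrative recurrence only (not the library implementation):
#   h_t = tanh(x_t @ kernel + h_{t-1} @ recurrent_kernel + bias)
batch, timesteps, features, units = 32, 10, 8, 4
x = np.random.random((batch, timesteps, features)).astype(np.float32)

kernel = np.random.random((features, units)).astype(np.float32)
recurrent_kernel = np.random.random((units, units)).astype(np.float32)
bias = np.zeros(units, dtype=np.float32)

h = np.zeros((batch, units), dtype=np.float32)
for t in range(timesteps):
    # The previous output h is fed back in together with the current input x[:, t].
    h = np.tanh(x[:, t] @ kernel + h @ recurrent_kernel + bias)
# h now plays the role of the layer's last output, with shape (32, 4).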
Arguments

units: Positive integer, dimensionality of the output space.
activation: Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x).
use_bias: Boolean (default True), whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.
recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal.
bias_initializer: Initializer for the bias vector. Default: zeros.
kernel_regularizer: Regularizer function applied to the kernel weights matrix. Default: None.
recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix. Default: None.
bias_regularizer: Regularizer function applied to the bias vector. Default: None.
activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). Default: None.
kernel_constraint: Constraint function applied to the kernel weights matrix. Default: None.
recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix. Default: None.
bias_constraint: Constraint function applied to the bias vector. Default: None.
dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.
recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.
return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.
return_state: Boolean. Whether to return the last state in addition to the output. Default: False.
go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as the initial state for the sample of index i in the following batch.
unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
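As a further illustration of how these constructor arguments combine, here is a short sketch; the specific values below (units, regularizer strength, dropout rates) are arbitrary choices for demonstration, not recommendations.

import tensorflow as tf

# Arbitrary example configuration, for illustration only.
layer = tf.keras.layers.SimpleRNN(
    16,
    activation="tanh",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    dropout=0.2,             # fraction of input connections dropped during training
    recurrent_dropout=0.1,   # fraction of recurrent connections dropped during training
    return_sequences=True,   # emit the output at every timestep, not just the last one
)
x = tf.random.normal([32, 10, 8])
y = layer(x, training=True)  # dropout is only active when training=True
print(y.shape)               # (32, 10, 16)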
Call arguments

inputs: A 3D tensor, with shape [batch, timesteps, feature].
mask: Binary tensor of shape [batch, timesteps] indicating whether a given timestep should be masked. An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored.
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used.
initial_state: List of initial state tensors to be passed to the first call of the cell.

Examples
import numpy as np
import tensorflow as tf

inputs = np.random.random([32, 10, 8]).astype(np.float32)
simple_rnn = tf.keras.layers.SimpleRNN(4)
output = simple_rnn(inputs)  # The output has shape `[32, 4]`.

simple_rnn = tf.keras.layers.SimpleRNN(
    4, return_sequences=True, return_state=True)

# whole_sequence_output has shape `[32, 10, 4]`.
# final_state has shape `[32, 4]`.
whole_sequence_output, final_state = simple_rnn(inputs)
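The mask and initial_state call arguments can be passed in the same way. A short additional sketch follows; the mask pattern and initial state below are arbitrary illustrations.

import numpy as np
import tensorflow as tf

inputs = np.random.random([32, 10, 8]).astype(np.float32)

# Ignore the last two timesteps of every sample (True = use, False = ignore).
mask = np.ones([32, 10], dtype=bool)
mask[:, 8:] = False

# Start the recurrence from an explicit initial state instead of zeros.
initial_state = tf.zeros([32, 4])

simple_rnn = tf.keras.layers.SimpleRNN(4)
output = simple_rnn(inputs, mask=tf.constant(mask), initial_state=initial_state)
# output has shape `[32, 4]`.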