tf.keras.layers.LocallyConnected1D(
    filters,
    kernel_size,
    strides=1,
    padding="valid",
    data_format=None,
    activation=None,
    use_bias=True,
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    implementation=1,
    **kwargs
)
Locally-connected layer for 1D inputs.
The LocallyConnected1D layer works similarly to the Conv1D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
Note: layer attributes cannot be modified after the layer has been called once (except the trainable attribute).
Example:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LocallyConnected1D

# apply an unshared weight convolution 1d of length 3 to a sequence with
# 10 timesteps, with 64 output filters
model = Sequential()
model.add(LocallyConnected1D(64, 3, input_shape=(10, 32)))
# now model.output_shape == (None, 8, 64)
# add a new conv1d on top
model.add(LocallyConnected1D(32, 3))
# now model.output_shape == (None, 6, 32)
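Because the weights are unshared, the layer stores a separate kernel for every output position, so its parameter count scales with the output length. Below is a minimal sketch of that difference (not part of the original example; it assumes the default use_bias=True and a TensorFlow build that still ships LocallyConnected1D):

import tensorflow as tf

conv = tf.keras.layers.Conv1D(64, 3)
local = tf.keras.layers.LocallyConnected1D(64, 3)

x = tf.zeros((1, 10, 32))     # (batch, length=10, channels=32)
conv(x)                       # build both layers on the same input
local(x)

print(conv.count_params())    # 64*3*32 + 64 = 6208: one shared kernel
print(local.count_params())   # 8*(64*3*32) + 8*64 = 49664: one kernel per output step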
"same"may be supported in the future.
"valid"means no padding.
data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
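As a quick illustration of the two orderings (a sketch, not from the original docs; the length-10, 32-channel sequence is an arbitrary choice):

import tensorflow as tf

# channels_last (default): input is (batch, length, channels)
last = tf.keras.layers.LocallyConnected1D(64, 3)
print(last(tf.zeros((1, 10, 32))).shape)    # (1, 8, 64)

# channels_first: input is (batch, channels, length)
first = tf.keras.layers.LocallyConnected1D(64, 3, data_format="channels_first")
print(first(tf.zeros((1, 32, 10))).shape)   # (1, 64, 8)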
activation: Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
implementation: Implementation mode, either 1, 2, or 3.
1 loops over input spatial locations to perform the forward pass. It is memory-efficient but performs a lot of (small) ops.
2 stores layer weights in a dense but sparsely-populated 2D matrix and implements the forward pass as a single matrix-multiply. It uses a lot of RAM but performs few (large) ops.
3 stores layer weights in a sparse tensor and implements the forward pass as a single sparse matrix-multiply.
How to choose: 1: large, dense models; 2: small models; 3: large, sparse models, where "large" stands for large input/output activations (i.e. many filters, input_filters, large input_size, output_size), and "sparse" stands for few connections between inputs and outputs, i.e. a small ratio filters * input_filters * kernel_size / (input_size * strides), where inputs to and outputs of the layer are assumed to have shapes (input_size, input_filters), (output_size, filters) respectively.
It is recommended to benchmark each in the setting of interest to pick the most efficient one (in terms of speed and memory usage). Correct choice of implementation can lead to dramatic speed improvements (e.g. 50X), potentially at the expense of RAM. Also, only padding="valid" is supported by implementation=1.
Input shape:
3D tensor with shape: (batch_size, steps, input_dim)
Output shape:
3D tensor with shape: (batch_size, new_steps, filters)
The steps value might have changed due to padding or strides.
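With padding="valid", new_steps follows the usual convolution arithmetic, new_steps = (steps - kernel_size) // strides + 1. A small sketch with illustrative numbers (kernel_size=3, strides=2, and the filter count are arbitrary choices, not values from the docs):

import tensorflow as tf

steps, kernel_size, strides = 10, 3, 2
new_steps = (steps - kernel_size) // strides + 1            # 4

layer = tf.keras.layers.LocallyConnected1D(7, kernel_size, strides=strides)
y = layer(tf.zeros((1, steps, 5)))                          # input_dim=5
print(y.shape, new_steps)                                   # (1, 4, 7) 4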