Dense class

keras.layers.Dense(
    units,
    activation=None,
    use_bias=True,
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    lora_rank=None,
    lora_alpha=None,
    **kwargs
)
Just your regular densely-connected NN layer.
Dense implements the operation:
output = activation(dot(input, kernel) + bias)
where activation is the element-wise activation function
passed as the activation argument, kernel is a weights matrix
created by the layer, and bias is a bias vector created by the layer
(only applicable if use_bias is True).
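As a concrete illustration of the operation above, here is a minimal sketch (assuming the Keras 3 API and a NumPy float32 input) that reproduces the layer's output by hand from its weights:

import numpy as np
import keras

# Dense computes activation(dot(input, kernel) + bias).
layer = keras.layers.Dense(units=4, activation="relu")
x = np.random.random((2, 3)).astype("float32")  # (batch_size=2, input_dim=3)
y = keras.ops.convert_to_numpy(layer(x))        # (2, 4)

# Reproduce the same output manually from the layer's weights.
kernel, bias = layer.get_weights()              # kernel: (3, 4), bias: (4,)
manual = np.maximum(x @ kernel + bias, 0.0)     # relu(dot(x, kernel) + bias)
np.testing.assert_allclose(y, manual, rtol=1e-5)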
Note: If the input to the layer has a rank greater than 2, Dense
computes the dot product between the inputs and the kernel along the
last axis of the inputs and axis 0 of the kernel (using tf.tensordot).
For example, if input has dimensions (batch_size, d0, d1), then we create
a kernel with shape (d1, units), and the kernel operates along axis 2
of the input, on every sub-tensor of shape (1, 1, d1) (there are
batch_size * d0 such sub-tensors). The output in this case will have
shape (batch_size, d0, units).
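The shape arithmetic in this note can be checked directly; a short sketch, again assuming the Keras 3 API:

import numpy as np
import keras

# Rank-3 input (batch_size=4, d0=5, d1=6): the kernel is built from the
# last input axis, so it has shape (d1, units) = (6, 8).
layer = keras.layers.Dense(units=8)
x = np.random.random((4, 5, 6)).astype("float32")
y = layer(x)

print(layer.kernel.shape)  # (6, 8)
print(y.shape)             # (4, 5, 8), i.e. (batch_size, d0, units)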
Arguments
units: Positive integer, dimensionality of the output space.
activation: Activation function to use. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel weights matrix.
bias_constraint: Constraint function applied to the bias vector.
lora_rank: Optional integer. If set, the layer's forward pass will implement LoRA (Low-Rank Adaptation) with the provided rank. LoRA sets the layer's kernel to non-trainable and replaces it with a delta over the original kernel, obtained via multiplying two lower-rank trainable matrices. This can be useful to reduce the computation cost of fine-tuning large dense layers. You can also enable LoRA on an existing Dense layer by calling layer.enable_lora(rank) (see the sketch at the end of this page).
lora_alpha: Optional integer. If set, the LoRA adaptation will be scaled by lora_alpha / lora_rank, allowing you to fine-tune the strength of the LoRA adjustment independently of lora_rank.
Input shape
N-D tensor with shape: (batch_size, ..., input_dim).
The most common situation would be
a 2D input with shape (batch_size, input_dim).
Output shape
N-D tensor with shape: (batch_size, ..., units).
For instance, for a 2D input with shape (batch_size, input_dim),
the output would have shape (batch_size, units).
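A hedged sketch of the LoRA workflow referenced in the lora_rank argument above (assuming the Keras 3 API; the layer size and rank here are illustrative):

import keras

# Build a Dense layer, then switch it to LoRA fine-tuning mode.
layer = keras.layers.Dense(units=16)
layer.build((None, 32))    # creates the (32, 16) kernel and the (16,) bias

# Freezes the original kernel and adds two low-rank trainable matrices
# of shapes (32, 4) and (4, 16), whose product is the kernel delta.
layer.enable_lora(rank=4)

print([w.shape for w in layer.trainable_weights])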