
Layer activation functions

Usage of activations

Activations can either be used through an Activation layer, or through the activation argument supported by all forward layers:

from tensorflow.keras import layers
from tensorflow.keras import activations

model.add(layers.Dense(64, activation=activations.relu))

This is equivalent to:

model.add(layers.Dense(64))
model.add(layers.Activation(activations.relu))

All built-in activations may also be passed via their string identifier:

model.add(layers.Dense(64, activation='relu'))

Available activations

relu function

tf.keras.activations.relu(x, alpha=0.0, max_value=None, threshold=0)

Applies the rectified linear unit activation function.

With default values, this returns the standard ReLU activation: max(x, 0), the element-wise maximum of 0 and the input tensor.

Modifying default parameters allows you to use non-zero thresholds, change the max value of the activation, and to use a non-zero multiple of the input for values below the threshold.

For example:

>>> foo = tf.constant([-10, -5, 0.0, 5, 10], dtype = tf.float32)
>>> tf.keras.activations.relu(foo).numpy()
array([ 0.,  0.,  0.,  5., 10.], dtype=float32)
>>> tf.keras.activations.relu(foo, alpha=0.5).numpy()
array([-5. , -2.5,  0. ,  5. , 10. ], dtype=float32)
>>> tf.keras.activations.relu(foo, max_value=5).numpy()
array([0., 0., 0., 5., 5.], dtype=float32)
>>> tf.keras.activations.relu(foo, threshold=5).numpy()
array([-0., -0.,  0.,  0., 10.], dtype=float32)

Arguments

  • x: Input tensor or variable.
  • alpha: A float that governs the slope for values lower than the threshold.
  • max_value: A float that sets the saturation threshold (the largest value the function will return).
  • threshold: A float giving the threshold value of the activation function below which values will be damped or set to zero.

Returns

A Tensor representing the input tensor, transformed by the ReLU activation function. The returned tensor has the same shape and dtype as the input x.


sigmoid function

tf.keras.activations.sigmoid(x)

Sigmoid activation function, sigmoid(x) = 1 / (1 + exp(-x)).

Applies the sigmoid activation function. For small values (<-5), sigmoid returns a value close to zero, and for large values (>5) the result of the function gets close to 1.

Sigmoid is equivalent to a 2-element Softmax, where the second element is assumed to be zero. The sigmoid function always returns a value between 0 and 1.

For example:

>>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)
>>> b = tf.keras.activations.sigmoid(a)
>>> b.numpy()
array([2.0611537e-09, 2.6894143e-01, 5.0000000e-01, 7.3105860e-01,
         1.0000000e+00], dtype=float32)

Arguments

  • x: Input tensor.

Returns

Tensor with the sigmoid activation: 1 / (1 + exp(-x)).


softmax function

tf.keras.activations.softmax(x, axis=-1)

Softmax converts a real vector to a vector of categorical probabilities.

The elements of the output vector are in range (0, 1) and sum to 1.

Each vector is handled independently. The axis argument sets which axis of the input the function is applied along.

Softmax is often used as the activation for the last layer of a classification network because the result could be interpreted as a probability distribution.

The softmax of each vector x is computed as exp(x) / tf.reduce_sum(exp(x)).

The input values are the log-odds of the resulting probabilities.
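
For example, applying softmax along the last axis of a 2-D input turns each row into a probability distribution (a minimal sketch; the printed values are rounded and may differ in the final digits):

>>> x = tf.constant([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
>>> tf.keras.activations.softmax(x).numpy()
array([[0.09003057, 0.24472848, 0.66524094],
       [0.33333334, 0.33333334, 0.33333334]], dtype=float32)

Each row is non-negative and sums to 1. For a 2-D input, axis=1 gives the same result as the default axis=-1, while axis=0 would normalize down the columns instead.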

Arguments

  • x: Input tensor.
  • axis: Integer, axis along which the softmax normalization is applied.

Returns

Tensor, output of softmax transformation (all values are non-negative and sum to 1).

Raises

  • ValueError: In case dim(x) == 1.

softplus function

tf.keras.activations.softplus(x)

Softplus activation function, softplus(x) = log(exp(x) + 1).

Example Usage:

>>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)
>>> b = tf.keras.activations.softplus(a) 
>>> b.numpy()
array([2.0611537e-09, 3.1326166e-01, 6.9314718e-01, 1.3132616e+00,
         2.0000000e+01], dtype=float32)

Arguments

  • x: Input tensor.

Returns

The softplus activation: log(exp(x) + 1).


softsign function

tf.keras.activations.softsign(x)

Softsign activation function, softsign(x) = x / (abs(x) + 1).

Example Usage:

>>> a = tf.constant([-1.0, 0.0, 1.0], dtype = tf.float32)
>>> b = tf.keras.activations.softsign(a)
>>> b.numpy()
array([-0.5,  0. ,  0.5], dtype=float32)

Arguments

  • x: Input tensor.

Returns

The softsign activation: x / (abs(x) + 1).


tanh function

tf.keras.activations.tanh(x)

Hyperbolic tangent activation function.

For example:

>>> a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype = tf.float32)
>>> b = tf.keras.activations.tanh(a)
>>> b.numpy()
array([-0.9950547, -0.7615942,  0.,  0.7615942,  0.9950547], dtype=float32)

Arguments

  • x: Input tensor.

Returns

Tensor of same shape and dtype of input x, with tanh activation: tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x))).


selu function

tf.keras.activations.selu(x)

Scaled Exponential Linear Unit (SELU).

The Scaled Exponential Linear Unit (SELU) activation function is defined as:

  • if x > 0: return scale * x
  • if x < 0: return scale * alpha * (exp(x) - 1)

where alpha and scale are pre-defined constants (alpha=1.67326324 and scale=1.05070098).

Basically, the SELU activation function multiplies scale (> 1) with the output of the tf.keras.activations.elu function to ensure a slope larger than one for positive inputs.

The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see tf.keras.initializers.LecunNormal initializer) and the number of input units is "large enough" (see reference paper for more information).

Example Usage:

>>> num_classes = 10  # 10-class problem
>>> model = tf.keras.Sequential()
>>> model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
...                                 activation='selu'))
>>> model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',
...                                 activation='selu'))
>>> model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',
...                                 activation='selu'))
>>> model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))

Arguments

  • x: A tensor or variable to compute the activation function for.

Returns

The scaled exponential unit activation: scale * elu(x, alpha).

Notes:

  • To be used together with the tf.keras.initializers.LecunNormal initializer.
  • To be used together with the dropout variant tf.keras.layers.AlphaDropout (not regular dropout).

References:

  • Klambauer et al., 2017


elu function

tf.keras.activations.elu(x, alpha=1.0)

Exponential linear unit.
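
For example (a brief sketch; the negative outputs equal alpha * (exp(x) - 1), and the printed values are rounded, so the last digits may differ):

>>> a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype = tf.float32)
>>> tf.keras.activations.elu(a).numpy()
array([-0.95021296, -0.63212055,  0.,  1.,  3.], dtype=float32)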

Arguments

  • x: Input tensor.
  • alpha: A scalar, slope of negative section.

Returns

The exponential linear activation: x if x > 0 and alpha * (exp(x)-1) if x < 0.

Reference

  • Clevert et al., 2016


exponential function

tf.keras.activations.exponential(x)

Exponential activation function.

For example:

>>> a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype = tf.float32)
>>> b = tf.keras.activations.exponential(a)
>>> b.numpy()
array([0.04978707,  0.36787945,  1.,  2.7182817 , 20.085537], dtype=float32)

Arguments

  • x: Input tensor.

Returns

Tensor with exponential activation: exp(x).



Creating custom activations

You can also use a TensorFlow callable as an activation (in this case it should take a tensor and return a tensor of the same shape and dtype):

model.add(layers.Dense(64, activation=tf.nn.tanh))
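
You can likewise pass your own Python function, as long as it maps a tensor to a tensor of the same shape and dtype. For instance (the scaled_tanh name and the scaling factor are purely illustrative):

def scaled_tanh(x):
    # Hypothetical custom activation: tanh stretched to the range (-2, 2).
    return 2.0 * tf.math.tanh(x)

model.add(layers.Dense(64, activation=scaled_tanh))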

About "advanced activation" layers

Activations that are more complex than a simple TensorFlow function (e.g. learnable activations, which maintain a state) are available as Advanced Activation layers, and can be found in the module tf.keras.layers.advanced_activations. These include PReLU and LeakyReLU. If you need a custom activation that requires a state, you should implement it as a custom layer.

Note that you should not pass activation layer instances as the activation argument of a layer. They're meant to be used just like regular layers, e.g.:

x = layers.Dense(10)(x)
x = layers.LeakyReLU()(x)