set_random_seed function

keras.utils.set_random_seed(seed)
Sets all random seeds (Python, NumPy, and backend framework, e.g. TF).
You can use this utility to make almost any Keras program fully deterministic. Some limitations apply in cases where network communications are involved (e.g. parameter server distribution), which create additional sources of randomness, or when certain non-deterministic cuDNN ops are involved.
Calling this utility does the following:
import random
random.seed(seed)
import numpy as np
np.random.seed(seed)
import tensorflow as tf # Only if TF is installed
tf.random.set_seed(seed)
import torch # Only if the backend is 'torch'
torch.manual_seed(seed)
Additionally, it resets the global Keras SeedGenerator, which is used by
keras.random functions when the seed is not provided.
Note that the TensorFlow seed is set even if you're not using TensorFlow
as your backend framework, since many workflows leverage tf.data
pipelines (which feature random shuffling). Likewise many workflows
might leverage NumPy APIs.
Arguments

seed: Integer, the random seed to use.
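The manual equivalent of the calls listed above can be verified without Keras installed; a minimal sketch covering the Python and NumPy seeds only:

```python
import random

import numpy as np

def set_seeds(seed):
    # Seed Python's and NumPy's global generators, mirroring what
    # keras.utils.set_random_seed does for these two sources.
    random.seed(seed)
    np.random.seed(seed)

# Identical seeds yield identical draws across runs.
set_seeds(1337)
first = (random.random(), np.random.random(size=3).tolist())
set_seeds(1337)
second = (random.random(), np.random.random(size=3).tolist())
assert first == second
```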
split_dataset function

keras.utils.split_dataset(
    dataset,
    left_size=None,
    right_size=None,
    shuffle=False,
    seed=None,
    preferred_backend=None,
)
Splits a dataset into a left half and a right half (e.g. train / test).
Arguments

dataset: A tf.data.Dataset, a torch.utils.data.Dataset object, or a list/tuple of arrays with the same length.
left_size: If float (in the range [0, 1]), it signifies the fraction of the data to pack in the left dataset. If integer, it signifies the number of samples to pack in the left dataset. If None, defaults to the complement to right_size. Defaults to None.
right_size: If float (in the range [0, 1]), it signifies the fraction of the data to pack in the right dataset. If integer, it signifies the number of samples to pack in the right dataset. If None, defaults to the complement to left_size. Defaults to None.
shuffle: Boolean, whether to shuffle the data before splitting it.
seed: A random seed for shuffling.
preferred_backend: Optional backend to use for the returned datasets. If None, the backend is inferred from the type of dataset: if dataset is a tf.data.Dataset, the "tensorflow" backend is used; if dataset is a torch.utils.data.Dataset, the "torch" backend is used; and if dataset is a list/tuple/np.array, the current Keras backend is used. Defaults to None.

Returns
A tuple of two dataset objects, the left and right splits. The exact
type of the returned objects depends on the preferred_backend.
For example, with a "tensorflow" backend,
tf.data.Dataset objects are returned. With a "torch" backend,
torch.utils.data.Dataset objects are returned.
Example
>>> data = np.random.random(size=(1000, 4))
>>> left_ds, right_ds = keras.utils.split_dataset(data, left_size=0.8)
>>> # For a tf.data.Dataset, you can use .cardinality():
>>> # int(left_ds.cardinality())  -> 800
>>> # For a torch.utils.data.Dataset, you can use len():
>>> # len(left_ds)  -> 800
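The size bookkeeping described by left_size and right_size can be sketched in plain NumPy (a hypothetical helper for illustration, not the library implementation):

```python
import numpy as np

def split_arrays(data, left_size=None, right_size=None):
    # Resolve float fractions / integer counts as the documented
    # arguments describe; each side defaults to the complement
    # of the other.
    n = len(data)
    if left_size is None and right_size is None:
        raise ValueError("At least one of left_size/right_size is required")
    if left_size is not None:
        n_left = int(n * left_size) if isinstance(left_size, float) else left_size
    else:
        n_right = int(n * right_size) if isinstance(right_size, float) else right_size
        n_left = n - n_right
    return data[:n_left], data[n_left:]

data = np.random.random(size=(1000, 4))
left, right = split_arrays(data, left_size=0.8)
assert len(left) == 800 and len(right) == 200
```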
pack_x_y_sample_weight function

keras.utils.pack_x_y_sample_weight(x, y=None, sample_weight=None)
Packs user-provided data into a tuple.
This is a convenience utility for packing data into the tuple formats
that Model.fit() uses.
Example
>>> x = ops.ones((10, 1))
>>> data = pack_x_y_sample_weight(x)
>>> isinstance(data, ops.Tensor)
True
>>> y = ops.ones((10, 1))
>>> data = pack_x_y_sample_weight(x, y)
>>> isinstance(data, tuple)
True
>>> x, y = data
Arguments

x: Features to pass to Model.
y: Ground-truth targets to pass to Model.
sample_weight: Sample weight for each element.

Returns
Tuple in the format used in Model.fit().
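The packing rule is simple enough to sketch in plain Python (a hypothetical stand-in assuming the documented behavior, not the library code):

```python
def pack(x, y=None, sample_weight=None):
    # Return the shortest form that Model.fit() accepts:
    # bare x, (x, y), or (x, y, sample_weight).
    if sample_weight is None:
        if y is None:
            return x
        return (x, y)
    return (x, y, sample_weight)

assert pack("x") == "x"
assert pack("x", "y") == ("x", "y")
assert pack("x", "y", "w") == ("x", "y", "w")
```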
unpack_x_y_sample_weight function

keras.utils.unpack_x_y_sample_weight(data)
Unpacks user-provided data tuple.
This is a convenience utility to be used when overriding
Model.train_step, Model.test_step, or Model.predict_step.
This utility makes it easy to support data of the form (x,),
(x, y), or (x, y, sample_weight).
Example
>>> features_batch = ops.ones((10, 5))
>>> labels_batch = ops.zeros((10, 5))
>>> data = (features_batch, labels_batch)
>>> # `y` and `sample_weight` will default to `None` if not provided.
>>> x, y, sample_weight = unpack_x_y_sample_weight(data)
>>> sample_weight is None
True
Arguments

data: A tuple of the form (x,), (x, y), or (x, y, sample_weight).

Returns
The unpacked tuple, with Nones for y and sample_weight if they are
not provided.
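The unpacking side can be sketched the same way (an illustrative helper, assuming the documented tuple forms):

```python
def unpack(data):
    # Normalize (x,), (x, y), or (x, y, sample_weight) to a
    # 3-tuple, filling missing entries with None.
    if isinstance(data, tuple):
        padded = data + (None,) * (3 - len(data))
        return padded[:3]
    return (data, None, None)

x, y, sample_weight = unpack(("features", "labels"))
assert y == "labels" and sample_weight is None
```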
get_file function

keras.utils.get_file(
    fname=None,
    origin=None,
    untar=False,
    md5_hash=None,
    file_hash=None,
    cache_subdir="datasets",
    hash_algorithm="auto",
    extract=False,
    archive_format="auto",
    cache_dir=None,
    force_download=False,
)
Downloads a file from a URL if it is not already in the cache.
By default the file at the url origin is downloaded to the
cache_dir ~/.keras, placed in the cache_subdir datasets,
and given the filename fname. The final location of a file
example.txt would therefore be ~/.keras/datasets/example.txt.
Files in .tar, .tar.gz, .tar.bz, and .zip formats can
also be extracted.
Passing a hash will verify the file after download. The command line
programs shasum and sha256sum can compute the hash.
Example

path_to_downloaded_file = get_file(
    origin="https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz",
    extract=True,
)
Arguments
None, the name of the file at origin will be used.
If downloading and extracting a directory archive,
the provided fname will be used as extraction directory
name (only if it doesn't have an extension).extract argument.
Boolean, whether the file is a tar archive that should
be extracted.file_hash argument.
md5 hash of the file for file integrity verification."/path/to/folder" is
specified, the file will be saved at that location."md5', "sha256', and "auto'.
The default 'auto' detects the hash algorithm in use.True, extracts the archive. Only applicable to compressed
archive files like tar or zip."auto', "tar', "zip', and None.
"tar" includes tar, tar.gz, and tar.bz files.
The default "auto" corresponds to ["tar", "zip"].
None or an empty list will return no matches found.$KERAS_HOME if the KERAS_HOME environment
variable is set or ~/.keras/.True, the file will always be re-downloaded
regardless of the cache state.Returns
Path to the downloaded file.
⚠️ Warning on malicious downloads ⚠️
Downloading something from the Internet carries a risk.
NEVER download a file/archive if you do not trust the source.
We recommend that you specify the file_hash argument
(if the hash of the source file is known) to make sure that the file you
are getting is the one you expect.
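To obtain a value for file_hash, you can hash a local copy of the file with Python's standard-library hashlib, equivalent to running sha256sum on it (demonstrated here on a small temporary file):

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    # Stream the file in chunks so large archives don't need to
    # fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstrate on a throwaway file with known contents.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    path = tmp.name
digest = sha256_of(path)
os.remove(path)
assert digest == hashlib.sha256(b"hello").hexdigest()
```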
Progbar class

keras.utils.Progbar(
    target, width=20, verbose=1, interval=0.05, stateful_metrics=None, unit_name="step"
)
Displays a progress bar.
Arguments

target: Total number of steps expected, None if unknown.
width: Progress bar width on screen.
verbose: Verbosity mode, 0 (silent), 1 (verbose), 2 (semi-verbose).
interval: Minimum visual progress update interval (in seconds).
stateful_metrics: Iterable of string names of metrics that should not be averaged over time. Metrics in this list will be displayed as-is. All others will be averaged by the progbar before display.
unit_name: Display name for step counts (usually "step" or "sample").
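The core idea can be sketched without Keras as a minimal in-place text bar (a simplified illustration, not the Progbar implementation):

```python
import sys

def render_bar(current, target, width=20):
    # Build the "[=====>.....] current/target" string that a
    # progress bar reprints in place via carriage returns.
    filled = int(width * current / target)
    bar = "=" * filled + ">" * (filled < width) + "." * (width - filled - 1)
    return f"[{bar[:width]}] {current}/{target}"

for step in (1, 5, 10):
    sys.stdout.write("\r" + render_bar(step, 10))
sys.stdout.write("\n")
```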
to_categorical function

keras.utils.to_categorical(x, num_classes=None)
Converts a class vector (integers) to binary class matrix.
E.g. for use with categorical_crossentropy.
Arguments

x: Array-like with class values to be converted into a matrix (integers from 0 to num_classes - 1).
num_classes: Total number of classes. If None, this would be inferred as max(x) + 1. Defaults to None.

Returns
A binary matrix representation of the input as a NumPy array. The class axis is placed last.
Example
>>> a = keras.utils.to_categorical([0, 1, 2, 3], num_classes=4)
>>> print(a)
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
>>> b = np.array([.9, .04, .03, .03,
... .3, .45, .15, .13,
... .04, .01, .94, .05,
... .12, .21, .5, .17]).reshape(4,4)
>>> loss = keras.ops.categorical_crossentropy(a, b)
>>> print(np.around(loss, 5))
[0.10536 0.82807 0.1011 1.77196]
>>> loss = keras.ops.categorical_crossentropy(a, a)
>>> print(np.around(loss, 5))
[0. 0. 0. 0.]
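For 1-D integer input, the conversion is equivalent to indexing an identity matrix in NumPy (a sketch of the idea, not the library code, which also handles higher-rank inputs):

```python
import numpy as np

def to_one_hot(x, num_classes=None):
    # Infer the class count as max(x) + 1 when not given, then
    # index an identity matrix to build the one-hot rows.
    x = np.asarray(x, dtype="int64")
    if num_classes is None:
        num_classes = int(x.max()) + 1
    return np.eye(num_classes)[x]

a = to_one_hot([0, 1, 2, 3])
assert a.shape == (4, 4)
assert (a == np.eye(4)).all()
```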
normalize function

keras.utils.normalize(x, axis=-1, order=2)
Normalizes an array.
If the input is a NumPy array, a NumPy array will be returned. If it's a backend tensor, a backend tensor will be returned.
Arguments

x: Array to normalize.
axis: axis along which to normalize.
order: Normalization order (e.g. order=2 for L2 norm).

Returns
A normalized copy of the array.
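For NumPy inputs the operation amounts to dividing each slice by its norm along the chosen axis; a sketch under that assumption (not the library implementation):

```python
import numpy as np

def l_normalize(x, axis=-1, order=2):
    # Divide each slice along `axis` by its order-norm, guarding
    # against division by zero for all-zero slices. Assumes a
    # floating-point input array.
    norms = np.linalg.norm(x, ord=order, axis=axis, keepdims=True)
    return x / np.maximum(norms, np.finfo(x.dtype).tiny)

v = np.array([[3.0, 4.0]])
out = l_normalize(v)
assert np.allclose(out, [[0.6, 0.8]])
assert np.allclose(np.linalg.norm(out, axis=-1), 1.0)
```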