device function

keras.device(device_name)
Context manager for backend-agnostic device placement.
Use this context manager to control the device on which operations run and tensors are allocated. It works across all backends (TensorFlow, JAX, PyTorch) and is useful for memory management, data preprocessing, and multi-device setups.
Arguments

- device_name: The device name to use, in the format
  "device_type:device_index". For example: "cpu:0", "gpu:0",
  "gpu:1". For the PyTorch backend, "gpu" is automatically
  converted to "cuda".

Example
Basic usage with CPU and GPU:
```python
# Allocate tensors on CPU
with keras.device("cpu:0"):
    cpu_tensor = keras.ops.ones((2, 2))

# Allocate tensors on GPU (if available)
with keras.device("gpu:0"):
    gpu_tensor = keras.ops.ones((2, 2))
```
Practical example with CPU preprocessing and GPU training:
```python
import numpy as np
import keras

# Create dummy data and model
x_raw = np.random.rand(128, 784)
y_train = np.random.randint(0, 10, size=(128,))
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(10)
])
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True)
)

# Preprocess data on CPU
with keras.device("cpu:0"):
    x_processed = keras.ops.cast(x_raw, "float32")

# Train on GPU (if available)
with keras.device("gpu:0"):
    model.fit(x_processed, y_train, epochs=2)
```
Use cases:

- Memory management: keeping large tensors in CPU (host) memory
- Preprocessing data on CPU while training runs on GPU
- Multi-device setups

Device naming conventions:

- "cpu:0" - First CPU
- "gpu:0" - First GPU (works across all backends)
- "gpu:1" - Second GPU

Note: For distributed training across multiple devices, see the distributed training guides.
name_scope class

keras.name_scope(name, **kwargs)
Creates a sub-namespace for variable paths.
Arguments

- name: Name of the current scope (string).
- deduplicate: If True, and if caller was passed,
  and the previous caller matches the current caller,
  and the previous name matches the current name,
  do not reenter a new namespace.