Distribution utilities

set_distribution function

keras.distribution.set_distribution(value)

Set the distribution as the global distribution setting.

Arguments

  • value: a Distribution instance.
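
For example (a minimal sketch, assuming a backend with more than one accelerator attached; DataParallel is one of the built-in Distribution subclasses):

import keras

# Shard input data across all visible devices and make that the
# global distribution for subsequently created models and layers.
devices = keras.distribution.list_devices()
keras.distribution.set_distribution(
    keras.distribution.DataParallel(devices=devices))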

distribution function

keras.distribution.distribution()

Retrieve the current distribution from global context.
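
For example, to read back the instance installed above (a minimal sketch):

# Returns the Distribution set via set_distribution(), or None if no
# global distribution has been configured yet.
current = keras.distribution.distribution()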


list_devices function

keras.distribution.list_devices(device_type=None)

Return all the available devices based on the device type.

Note: in a distributed setting, global devices are returned.

Arguments

  • device_type: string, one of "cpu", "gpu" or "tpu". When device_type is not provided, defaults to "gpu" or "tpu" if available; otherwise the "cpu" devices are returned.

Returns: A list of devices that are available for distributed computation.
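
For example (a short sketch; the exact device strings depend on the backend and the attached hardware):

# All accelerators if any are present, otherwise the CPU devices.
accelerators = keras.distribution.list_devices()
# Explicitly request CPU devices, e.g. ["cpu:0"].
cpus = keras.distribution.list_devices(device_type="cpu")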


initialize function

keras.distribution.initialize(
    job_addresses=None, num_processes=None, process_id=None
)

Initialize the distribution system for a multi-host/multi-process setting.

Calling initialize will prepare the backend for execution on multi-host GPUs or TPUs. It should be called before any computation.

Note that the parameters can also be injected via environment variables, which can be better controlled by the launch script at startup time. For backends that also rely on environment variables for their own configuration, Keras will forward them properly.

Arguments

  • job_addresses: string. Comma-separated IP addresses for all the jobs that will form the whole computation cluster. Note that for the JAX backend, only the address of job 0 (the coordinator) is needed. For certain runtimes like Cloud TPU, this value can be None, and the backend will figure it out from the TPU environment variables. You can also configure this value via the environment variable KERAS_DISTRIBUTION_JOB_ADDRESSES.
  • num_processes: int. The number of workers/processes that will form the whole computation cluster. For certain runtimes like Cloud TPU, this value can be None, and the backend will figure it out from the TPU environment variables. You can also configure this value via the environment variable KERAS_DISTRIBUTION_NUM_PROCESSES.
  • process_id: int. The ID number of the current worker/process. The value should range from 0 to num_processes - 1, where 0 indicates that the current worker/process is the master/coordinator job. You can also configure this value via the environment variable KERAS_DISTRIBUTION_PROCESS_ID.

Example

Suppose there are two GPU processes, and process 0 is running at address 10.0.0.1:1234, and process 1 is running at address 10.0.0.2:2345. To configure such a cluster, you can run:

On process 0:

keras.distribution.initialize(
  job_addresses="10.0.0.1:1234,10.0.0.2:2345",
  num_processes=2,
  process_id=0)

On process 1:

keras.distribution.initialize(
  job_addresses="10.0.0.1:1234,10.0.0.2:2345",
  num_processes=2,
  process_id=1)

Or via the environment variables, on process 0:

import os

os.environ[
  "KERAS_DISTRIBUTION_JOB_ADDRESSES"] = "10.0.0.1:1234,10.0.0.2:2345"
os.environ["KERAS_DISTRIBUTION_NUM_PROCESSES"] = "2"
os.environ["KERAS_DISTRIBUTION_PROCESS_ID"] = "0"
keras.distribution.initialize()

On process 1:

os.environ[
  "KERAS_DISTRIBUTION_JOB_ADDRESSES"] = "10.0.0.1:1234,10.0.0.2:2345"
os.environ["KERAS_DISTRIBUTION_NUM_PROCESSES"] = "2"
os.environ["KERAS_DISTRIBUTION_PROCESS_ID"] = "1"
keras.distribution.initialize()

Also note that for the JAX backend, job_addresses can be further reduced to just the master/coordinator address: 10.0.0.1:1234.
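
For example, on process 1 with the JAX backend (a minimal sketch of the coordinator-only form):

keras.distribution.initialize(
  job_addresses="10.0.0.1:1234",
  num_processes=2,
  process_id=1)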