Inpaint class
keras_hub.models.Inpaint()
Base class for image-to-image tasks.
Inpaint tasks wrap a keras_hub.models.Backbone and a keras_hub.models.Preprocessor to create a model that can be used for generation and generative fine-tuning.
Inpaint tasks provide an additional, high-level generate() function, which can be used to generate images with an (image, mask, string) in, image out signature.
All Inpaint tasks include a from_preset() constructor which can be used to load a pre-trained config and weights.
Example
import numpy as np
import keras_hub

reference_image = np.ones((1024, 1024, 3), dtype="float32")
reference_mask = np.ones((1024, 1024), dtype="float32")

# Load a Stable Diffusion 3 backbone with pre-trained weights.
inpaint = keras_hub.models.Inpaint.from_preset(
    "stable_diffusion_3_medium",
)
inpaint.generate(
    reference_image,
    reference_mask,
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
)
# Load a Stable Diffusion 3 backbone at bfloat16 precision.
inpaint = keras_hub.models.Inpaint.from_preset(
    "stable_diffusion_3_medium",
    dtype="bfloat16",
)
inpaint.generate(
    reference_image,
    reference_mask,
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
)
from_preset method
Inpaint.from_preset(preset, load_weights=True, **kwargs)
Instantiate a keras_hub.models.Task from a model preset.
A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:
'bert_base_en'
'kaggle://user/bert/keras/bert_base_en'
'hf://user/bert_base_en'
'./bert_base_en'
For any Task subclass, you can run cls.presets.keys() to list all built-in presets available on the class.
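For example, to list the presets available for the Inpaint task:
keras_hub.models.Inpaint.presets.keys()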
This constructor can be called in one of two ways: either from a task-specific base class like keras_hub.models.CausalLM.from_preset(), or from a model class like keras_hub.models.BertTextClassifier.from_preset(). If calling from a base class, the subclass of the returned object will be inferred from the config in the preset directory.
Arguments
preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
load_weights: bool. If True, saved weights will be loaded into the model architecture. If False, all weights will be randomly initialized.
Examples
# Load a Gemma generative task.
causal_lm = keras_hub.models.CausalLM.from_preset(
    "gemma_2b_en",
)

# Load a Bert classification task.
model = keras_hub.models.TextClassifier.from_preset(
    "bert_base_en",
    num_classes=2,
)
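The same constructor accepts the handle and path forms listed above; for example (a sketch; the local path below is illustrative):

# Load an Inpaint task from a local preset directory (illustrative path).
inpaint = keras_hub.models.Inpaint.from_preset("./stable_diffusion_3_medium")

# Load the architecture only, with randomly initialized weights.
inpaint = keras_hub.models.Inpaint.from_preset(
    "stable_diffusion_3_medium",
    load_weights=False,
)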
Preset name | Parameters | Description |
---|---|---|
stable_diffusion_3_medium | 2.99B | 3 billion parameters, including CLIP L and CLIP G text encoders, MMDiT generative model, and VAE autoencoder. Developed by Stability AI. |
compile method
Inpaint.compile(optimizer="auto", loss="auto", metrics="auto", **kwargs)
Configures the Inpaint task for training.
The Inpaint task extends the default compilation signature of keras.Model.compile with defaults for optimizer, loss, and metrics. To override these defaults, pass any value to these arguments during compilation.
Arguments
optimizer: "auto", an optimizer name, or a keras.Optimizer instance. Defaults to "auto", which uses the default optimizer for the given model and task. See keras.Model.compile and keras.optimizers for more info on possible optimizer values.
loss: "auto", a loss name, or a keras.losses.Loss instance. Defaults to "auto", where a keras.losses.MeanSquaredError loss will be applied. See keras.Model.compile and keras.losses for more info on possible loss values.
metrics: "auto", or a list of metrics to be evaluated by the model during training and testing. Defaults to "auto", where a keras.metrics.MeanSquaredError will be applied to track the loss of the model during training. See keras.Model.compile and keras.metrics for more info on possible metrics values.
**kwargs: See keras.Model.compile for a full list of arguments supported by the compile method.
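For example, to override the default optimizer and loss (a minimal sketch, assuming the inpaint task from the examples above; any keras.Optimizer and keras.losses.Loss can be substituted):

import keras

# Replace the "auto" defaults with an explicit optimizer and loss.
inpaint.compile(
    optimizer=keras.optimizers.AdamW(learning_rate=1e-5),
    loss=keras.losses.MeanSquaredError(),
)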
save_to_preset method
Inpaint.save_to_preset(preset_dir)
Save task to a preset directory.
Arguments
preset_dir: The path to the local model preset directory.
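For example (a sketch; the directory name is illustrative):

# Save the task to a local preset directory, then reload it.
inpaint.save_to_preset("./inpaint_preset")
restored = keras_hub.models.Inpaint.from_preset("./inpaint_preset")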
preprocessor property
keras_hub.models.Inpaint.preprocessor
A keras_hub.models.Preprocessor layer used to preprocess input.
backbone property
keras_hub.models.Inpaint.backbone
A keras_hub.models.Backbone model with the core architecture.
generate method
Inpaint.generate(inputs, num_steps, guidance_scale, strength, seed=None)
Generate an image based on the provided inputs.
Typically, inputs is a dict with "images", "masks" and "prompts" keys. "images" are reference images within a value range of [-1.0, 1.0], which will be resized to self.backbone.height and self.backbone.width, then encoded into latent space by the VAE encoder. "masks" are mask images with a boolean dtype, where white pixels are repainted while black pixels are preserved. "prompts" are strings that will be tokenized and encoded by the text encoder.
Some models support a "negative_prompts" key, which helps steer the model away from generating certain styles and elements. To enable this, add "negative_prompts" to the input dict.
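For example (a sketch; the prompt strings and sampling values are illustrative):

inpaint.generate(
    {
        "images": reference_image,
        "masks": reference_mask,
        "prompts": "Astronaut in a jungle, cold color palette, 8k",
        "negative_prompts": "blurry, low resolution, deformed",
    },
    num_steps=50,
    guidance_scale=7.0,
    strength=0.6,
)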
If inputs are a tf.data.Dataset, outputs will be generated "batch-by-batch" and concatenated. Otherwise, all inputs will be processed as batches.
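For example, batched generation over a tf.data.Dataset might look like this (a minimal sketch; shapes, prompts, and sampling values are illustrative):

import numpy as np
import tensorflow as tf

reference_images = np.ones((4, 1024, 1024, 3), dtype="float32")
reference_masks = np.ones((4, 1024, 1024), dtype="float32")
prompts = ["Astronaut in a jungle, detailed, 8k"] * 4
ds = tf.data.Dataset.from_tensor_slices(
    {"images": reference_images, "masks": reference_masks, "prompts": prompts}
).batch(2)
# Each batch of two is generated in turn and the outputs are concatenated.
outputs = inpaint.generate(ds, num_steps=50, guidance_scale=7.0, strength=0.6)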
Arguments
inputs: python data, tensor data, or a tf.data.Dataset. The format must be one of the following:
- A dict with "images", "masks", "prompts" and/or "negative_prompts" keys.
- A tf.data.Dataset with "images", "masks", "prompts" and/or "negative_prompts" keys.
num_steps: int. The number of diffusion steps to take.
guidance_scale: float. The classifier-free guidance scale defined in Classifier-Free Diffusion Guidance (https://arxiv.org/abs/2207.12598). A higher scale encourages generating images more closely related to the prompts, typically at the cost of lower image quality.
strength: float. Indicates the extent to which the reference images are transformed. Must be between 0.0 and 1.0. When strength=1.0, images is essentially ignored, the added noise is maximum, and the denoising process runs for the full number of iterations specified in num_steps.
seed: optional int. Used as a random seed.
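For example, a lower strength preserves more of the reference image, while strength=1.0 repaints the masked region from maximum noise (a sketch; values are illustrative):

# Light touch-up: keep most of the reference image.
inpaint.generate(
    {
        "images": reference_image,
        "masks": reference_mask,
        "prompts": "Astronaut in a jungle, detailed, 8k",
    },
    num_steps=50,
    guidance_scale=7.0,
    strength=0.3,
)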