SAMImageSegmenterPreprocessor class

keras_hub.models.SAMImageSegmenterPreprocessor(
    image_converter=None, resize_output_mask=False, **kwargs
)
Base class for image segmentation preprocessing layers.

ImageSegmenterPreprocessor wraps a keras_hub.layers.ImageConverter to create a
preprocessing layer for image segmentation tasks. It is intended to be paired
with a keras_hub.models.ImageSegmenter task.

All ImageSegmenterPreprocessor instances take three inputs: x, y, and
sample_weight.
- x: The first input, should always be included. It can be an image or a
  batch of images.
- y: (Optional) Usually the segmentation mask(s). If resize_output_mask is set
  to True, this will be resized to the input image shape; otherwise it will be
  passed through unaltered.
- sample_weight: (Optional) Will be passed through unaltered.
- resize_output_mask: bool. If set to True, the output mask will be resized to
  the same size as the input image. Defaults to False.
The layer will output either x, an (x, y) tuple if labels were provided, or an
(x, y, sample_weight) tuple if labels and sample weight were provided. x will
be the input images after all model preprocessing has been applied.
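As a concrete illustration of resize_output_mask, here is a minimal sketch. It
assumes that a plain keras_hub.layers.ImageConverter configured with an
image_size argument can stand in for a preset's converter; the mask handling
follows the behavior described above.

import numpy as np
import keras_hub

# Minimal sketch (assumption: a generic ImageConverter stands in for the
# preset's converter).
converter = keras_hub.layers.ImageConverter(image_size=(1024, 1024))
preprocessor = keras_hub.models.SAMImageSegmenterPreprocessor(
    image_converter=converter,
    resize_output_mask=True,
)

image = np.ones((512, 512, 3))   # input image
mask = np.zeros((512, 512, 1))   # segmentation mask
x, y = preprocessor(image, mask)
# x is the image after the converter's resizing/rescaling. Because
# resize_output_mask=True, y is resized as described above; with the
# default False it would pass through unaltered.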
All ImageSegmenterPreprocessor tasks include a from_preset() constructor which
can be used to load a pre-trained config. You can call the from_preset()
constructor directly on this base class, in which case the correct class for
your model will be automatically instantiated.
Examples

import numpy as np
import tensorflow as tf
import keras_hub

preprocessor = keras_hub.models.ImageSegmenterPreprocessor.from_preset(
    "deeplabv3_resnet50",
)

# Resize a single image for the model.
x = np.ones((512, 512, 3))
x = preprocessor(x)

# Resize an image and its mask.
x, y = np.ones((512, 512, 3)), np.zeros((512, 512, 1))
x, y = preprocessor(x, y)

# Resize a batch of images and masks.
x, y = (
    [np.ones((512, 512, 3)), np.zeros((512, 512, 3))],
    [np.ones((512, 512, 1)), np.zeros((512, 512, 1))],
)
x, y = preprocessor(x, y)

# Use a tf.data.Dataset.
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(2)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
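Since from_preset() can also be called on the base class, loading one of the
SAM presets listed further down this page should return this class
automatically. A minimal sketch, using the sam_base_sa1b preset from the table
below:

import keras_hub

preprocessor = keras_hub.models.ImageSegmenterPreprocessor.from_preset(
    "sam_base_sa1b",
)
# The base class dispatches to the model-specific subclass for this preset.
print(isinstance(preprocessor, keras_hub.models.SAMImageSegmenterPreprocessor))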
from_preset method

SAMImageSegmenterPreprocessor.from_preset(
    preset, config_file="preprocessor.json", **kwargs
)
Instantiate a keras_hub.models.Preprocessor from a model preset.

A preset is a directory of configs, weights and other file assets used to save
and load a pre-trained model. The preset can be passed as one of:

- a built-in preset identifier like 'bert_base_en'
- a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
- a Hugging Face handle like 'hf://user/bert_base_en'
- a path to a local preset directory like './bert_base_en'

For any Preprocessor subclass, you can run cls.presets.keys() to list all
built-in presets available on the class.

As there are usually multiple preprocessing classes for a given model, this
method should be called on a specific subclass like
keras_hub.models.BertTextClassifierPreprocessor.from_preset().
Arguments

- preset: string. A built-in preset identifier, a Kaggle Models handle, a
  Hugging Face handle, or a path to a local directory.
- config_file: string. The name of the config file to load from the preset
  directory. Defaults to "preprocessor.json".
Examples
import keras_hub

# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.GemmaCausalLMPreprocessor.from_preset(
    "gemma_2b_en",
)

# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.BertTextClassifierPreprocessor.from_preset(
    "bert_base_en",
)
| Preset | Parameters | Description |
|---|---|---|
| sam_base_sa1b | 93.74M | The base SAM model trained on the SA1B dataset. |
| sam_large_sa1b | 312.34M | The large SAM model trained on the SA1B dataset. |
| sam_huge_sa1b | 641.09M | The huge SAM model trained on the SA1B dataset. |
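To check which presets are registered for this class at runtime, the presets
mapping mentioned above can be queried; a minimal sketch:

import keras_hub

# List the built-in presets available for this preprocessor class.
print(keras_hub.models.SAMImageSegmenterPreprocessor.presets.keys())
# Expected to include the presets tabulated above, e.g. "sam_base_sa1b".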
image_converter property

keras_hub.models.SAMImageSegmenterPreprocessor.image_converter

The image converter used to preprocess image data.
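The converter can also be used on its own, outside the full preprocessing
pipeline. A minimal sketch, assuming a preprocessor loaded from the
sam_base_sa1b preset listed above:

import numpy as np
import keras_hub

preprocessor = keras_hub.models.SAMImageSegmenterPreprocessor.from_preset(
    "sam_base_sa1b",
)
# Pull out the underlying image converter and apply it directly to an image.
converter = preprocessor.image_converter
image = np.ones((512, 512, 3))
resized = converter(image)  # resized and rescaled per the preset's configuration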