DeepLabV3Backbone class

keras_hub.models.DeepLabV3Backbone(
    image_encoder,
    spatial_pyramid_pooling_key,
    upsampling_size,
    dilation_rates,
    low_level_feature_key=None,
    projection_filters=48,
    image_shape=(None, None, 3),
    **kwargs
)
DeepLabV3 & DeepLabV3Plus architecture for semantic segmentation.
This class implements a DeepLabV3 & DeepLabV3Plus architecture as described in Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation(ECCV 2018) and Rethinking Atrous Convolution for Semantic Image Segmentation(CVPR 2017)
Arguments
- image_encoder: keras.Model. An instance that is used as a feature
  extractor for the Encoder. Should either be a keras_hub.models.Backbone
  or a keras.Model that implements the pyramid_outputs property with
  keys "P2", "P3" etc as values. A somewhat sensible backbone to use in
  many cases is keras_hub.models.ResNetBackbone.from_preset("resnet_v2_50").
- spatial_pyramid_pooling_key: str. A layer level to extract and perform
  spatial_pyramid_pooling, one of the keys from the image_encoder
  pyramid_outputs property such as "P4", "P5" etc.
- upsampling_size: int or tuple of 2 integers. The upsampling factors for
  rows and columns of the spatial_pyramid_pooling layer. If
  low_level_feature_key is given, the spatial_pyramid_pooling layer's
  resolution should match the low_level_feature layer's resolution so
  that both layers can be concatenated for the combined encoder outputs.
- dilation_rates: list. A list of integers for the parallel dilated
  convolutions applied in SpatialPyramidPooling. A common choice of
  rates is [6, 12, 18].
- low_level_feature_key: str, optional. A layer level to extract the
  feature from, one of the keys from the image_encoder's pyramid_outputs
  property such as "P2", "P3" etc, which will be used in the Decoder
  block. Required only when the DeepLabV3Plus architecture needs to be
  applied.
- projection_filters: int. Number of filters in the convolution layer
  projecting the low-level features from the image_encoder. Defaults to 48.
- image_shape: tuple. The input shape without the batch size. Defaults to
  (None, None, 3).
Example
# Load a trained backbone to extract features from its `pyramid_outputs`.
image_encoder = keras_hub.models.ResNetBackbone.from_preset("resnet_50_imagenet")
model = keras_hub.models.DeepLabV3Backbone(
    image_encoder=image_encoder,
    projection_filters=48,
    low_level_feature_key="P2",
    spatial_pyramid_pooling_key="P5",
    upsampling_size=8,
    dilation_rates=[6, 12, 18],
)
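The constructed backbone is a regular keras.Model, so it can be called on a batch of images to produce dense feature maps. Below is a minimal sketch, not part of the reference API: the 96x96 input size is an arbitrary choice that is divisible by the encoder strides, and the random images are placeholders for real data.

import numpy as np

# Dummy batch of 4 RGB images; 96x96 is an arbitrary size for illustration.
images = np.random.uniform(0.0, 1.0, size=(4, 96, 96, 3)).astype("float32")

# The backbone returns a dense feature map; a segmentation head (such as
# keras_hub.models.DeepLabV3ImageSegmenter) turns it into per-pixel
# class predictions.
features = model(images)
print(features.shape)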
from_preset method

DeepLabV3Backbone.from_preset(preset, load_weights=True, **kwargs)
Instantiate a keras_hub.models.Backbone
from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset
can be passed as one of:

- a built-in preset identifier like 'bert_base_en'
- a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
- a Hugging Face handle like 'hf://user/bert_base_en'
- a path to a local preset directory like './bert_base_en'
This constructor can be called in one of two ways. Either from the base
class like keras_hub.models.Backbone.from_preset()
, or from
a model class like keras_hub.models.GemmaBackbone.from_preset()
.
If calling from the base class, the subclass of the returning object
will be inferred from the config in the preset directory.
For any Backbone
subclass, you can run cls.presets.keys()
to list
all built-in presets available on the class.
Arguments
- preset: string. A built-in preset identifier, a Kaggle Models handle, a
  Hugging Face handle, or a path to a local directory.
- load_weights: bool. If True, the weights will be loaded into the model
  architecture. If False, the weights will be randomly initialized.

Examples
# Load a Gemma backbone with pre-trained weights.
model = keras_hub.models.Backbone.from_preset(
    "gemma_2b_en",
)

# Load a Bert backbone with a pre-trained config and random weights.
model = keras_hub.models.Backbone.from_preset(
    "bert_base_en",
    load_weights=False,
)
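As noted above, the built-in presets registered for a class can be listed from the class itself. A small sketch using the presets property on DeepLabV3Backbone:

# List every built-in preset available for DeepLabV3Backbone.
print(keras_hub.models.DeepLabV3Backbone.presets.keys())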
Preset | Parameters | Description |
---|---|---|
deeplab_v3_plus_resnet50_pascalvoc | 39.19M | DeepLabV3+ model with ResNet50 as the image encoder, trained on the augmented Pascal VOC dataset from the Semantic Boundaries Dataset (SBD); it reaches 90.01 categorical accuracy and 0.63 Mean IoU. |
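The preset in the table above can be loaded the same way. A sketch, with the preset name taken from the table (weights are downloaded on first use):

# Load the DeepLabV3+ backbone pre-trained on augmented Pascal VOC (SBD).
backbone = keras_hub.models.DeepLabV3Backbone.from_preset(
    "deeplab_v3_plus_resnet50_pascalvoc"
)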