ConvNeXtTiny function

tf_keras.applications.ConvNeXtTiny(
model_name="convnext_tiny",
include_top=True,
include_preprocessing=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
)
Instantiates the ConvNeXtTiny architecture.
References

A ConvNet for the 2020s (CVPR 2022)

For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.

The base, large, and xlarge models were first pre-trained on the ImageNet-21k dataset and then fine-tuned on the ImageNet-1k dataset. The pre-trained parameters of the models were assembled from the official repository. To get a sense of how these parameters were converted to TF-Keras compatible parameters, please refer to this repository.
Note: Each TF-Keras Application expects a specific kind of input preprocessing. For ConvNeXt, preprocessing is included in the model using a Normalization layer. ConvNeXt models expect their inputs to be float or uint8 tensors of pixels with values in the [0-255] range.
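For instance, a minimal sketch of feeding raw pixels straight to the model, assuming TensorFlow with the tf_keras package is installed and the "imagenet" weights can be downloaded; the random uint8 batch below merely stands in for real images:

import numpy as np
import tf_keras

# Preprocessing (the Normalization layer) is bundled in the model, so raw
# [0, 255] pixels can be passed without manual rescaling.
model = tf_keras.applications.ConvNeXtTiny(weights="imagenet")

# Dummy uint8 batch standing in for real images (default input size is 224x224).
images = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype="uint8")
preds = model.predict(images)
print(preds.shape)  # (1, 1000)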
When calling the summary() method after instantiating a ConvNeXt model, prefer setting the expand_nested argument of summary() to True to better investigate the instantiated model.
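As a quick illustration (weights=None is used here only to skip the weight download):

import tf_keras

# Randomly initialized model; pass weights="imagenet" for the pre-trained one.
model = tf_keras.applications.ConvNeXtTiny(weights=None)

# expand_nested=True also lists the layers inside each nested ConvNeXt block.
model.summary(expand_nested=True)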
Arguments

include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True.
weights: One of None (random initialization), "imagenet" (pre-training on ImageNet-1k), or the path to the weights file to be loaded. Defaults to "imagenet".
input_tensor: Optional TF-Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 input channels.
pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional layer. avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. Defaults to None.
classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. 1000 is how many ImageNet classes there are. Defaults to 1000.
classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Defaults to "softmax".

Returns

A keras.Model instance.
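A hedged end-to-end sketch of classifying an image with the returned model; "elephant.jpg" is a placeholder path, and decode_predictions is assumed to live in the tf_keras.applications.convnext module as it does for the other applications:

import numpy as np
import tf_keras

model = tf_keras.applications.ConvNeXtTiny(weights="imagenet")

# "elephant.jpg" is a placeholder; substitute any local image file.
img = tf_keras.utils.load_img("elephant.jpg", target_size=(224, 224))
x = np.expand_dims(tf_keras.utils.img_to_array(img), axis=0)  # floats in [0, 255]

preds = model.predict(x)
# Map the 1000 ImageNet probabilities back to human-readable labels.
print(tf_keras.applications.convnext.decode_predictions(preds, top=3)[0])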
ConvNeXtSmall function

tf_keras.applications.ConvNeXtSmall(
model_name="convnext_small",
include_top=True,
include_preprocessing=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
)
Instantiates the ConvNeXtSmall architecture.
References

A ConvNet for the 2020s (CVPR 2022)

For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.

The base, large, and xlarge models were first pre-trained on the ImageNet-21k dataset and then fine-tuned on the ImageNet-1k dataset. The pre-trained parameters of the models were assembled from the official repository. To get a sense of how these parameters were converted to TF-Keras compatible parameters, please refer to this repository.
Note: Each TF-Keras Application expects a specific kind of input preprocessing. For ConvNeXt, preprocessing is included in the model using a Normalization layer. ConvNeXt models expect their inputs to be float or uint8 tensors of pixels with values in the [0-255] range.

When calling the summary() method after instantiating a ConvNeXt model, prefer setting the expand_nested argument of summary() to True to better investigate the instantiated model.
Arguments

include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True.
weights: One of None (random initialization), "imagenet" (pre-training on ImageNet-1k), or the path to the weights file to be loaded. Defaults to "imagenet".
input_tensor: Optional TF-Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 input channels.
pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional layer. avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. Defaults to None.
classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. 1000 is how many ImageNet classes there are. Defaults to 1000.
classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Defaults to "softmax".

Returns

A keras.Model instance.
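For example, a minimal feature-extraction sketch (assuming the "imagenet" weights can be downloaded): with include_top=False and pooling="avg", the returned model yields a 2D tensor of pooled features.

import numpy as np
import tf_keras

backbone = tf_keras.applications.ConvNeXtSmall(
    include_top=False, pooling="avg", weights="imagenet"
)

# Two dummy uint8 images in the [0, 255] range; preprocessing is built in.
images = np.random.randint(0, 256, size=(2, 224, 224, 3), dtype="uint8")
features = backbone.predict(images)
print(features.shape)  # (2, 768): globally averaged features from the last stage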
ConvNeXtBase function

tf_keras.applications.ConvNeXtBase(
model_name="convnext_base",
include_top=True,
include_preprocessing=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
)
Instantiates the ConvNeXtBase architecture.
References

A ConvNet for the 2020s (CVPR 2022)

For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.

The base, large, and xlarge models were first pre-trained on the ImageNet-21k dataset and then fine-tuned on the ImageNet-1k dataset. The pre-trained parameters of the models were assembled from the official repository. To get a sense of how these parameters were converted to TF-Keras compatible parameters, please refer to this repository.
Note: Each TF-Keras Application expects a specific kind of input preprocessing. For ConvNeXt, preprocessing is included in the model using a Normalization layer. ConvNeXt models expect their inputs to be float or uint8 tensors of pixels with values in the [0-255] range.

When calling the summary() method after instantiating a ConvNeXt model, prefer setting the expand_nested argument of summary() to True to better investigate the instantiated model.
Arguments

include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True.
weights: One of None (random initialization), "imagenet" (pre-training on ImageNet-1k), or the path to the weights file to be loaded. Defaults to "imagenet".
input_tensor: Optional TF-Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 input channels.
pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional layer. avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. Defaults to None.
classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. 1000 is how many ImageNet classes there are. Defaults to 1000.
classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Defaults to "softmax".

Returns

A keras.Model instance.
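A transfer-learning sketch in the spirit of the guide referenced above; num_classes is a placeholder for your own dataset, and the frozen-backbone recipe shown here is the generic TF-Keras one rather than anything ConvNeXt-specific.

import tf_keras
from tf_keras import layers

num_classes = 10  # placeholder: number of classes in your dataset

backbone = tf_keras.applications.ConvNeXtBase(
    include_top=False, pooling="avg", weights="imagenet"
)
backbone.trainable = False  # freeze the pre-trained weights

inputs = tf_keras.Input(shape=(224, 224, 3))
x = backbone(inputs, training=False)  # run the frozen backbone in inference mode
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = tf_keras.Model(inputs, outputs)

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your own tf.data pipelines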
ConvNeXtLarge function

tf_keras.applications.ConvNeXtLarge(
model_name="convnext_large",
include_top=True,
include_preprocessing=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
)
Instantiates the ConvNeXtLarge architecture.
References

A ConvNet for the 2020s (CVPR 2022)

For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.

The base, large, and xlarge models were first pre-trained on the ImageNet-21k dataset and then fine-tuned on the ImageNet-1k dataset. The pre-trained parameters of the models were assembled from the official repository. To get a sense of how these parameters were converted to TF-Keras compatible parameters, please refer to this repository.
Note: Each TF-Keras Application expects a specific kind of input preprocessing. For ConvNeXt, preprocessing is included in the model using a Normalization layer. ConvNeXt models expect their inputs to be float or uint8 tensors of pixels with values in the [0-255] range.

When calling the summary() method after instantiating a ConvNeXt model, prefer setting the expand_nested argument of summary() to True to better investigate the instantiated model.
Arguments

include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True.
weights: One of None (random initialization), "imagenet" (pre-training on ImageNet-1k), or the path to the weights file to be loaded. Defaults to "imagenet".
input_tensor: Optional TF-Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 input channels.
pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional layer. avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. Defaults to None.
classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. 1000 is how many ImageNet classes there are. Defaults to 1000.
classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Defaults to "softmax".

Returns

A keras.Model instance.
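A sketch of obtaining raw logits via the classifier_activation argument described above; the manual softmax at the end is plain NumPy and only shows how to recover probabilities when needed.

import numpy as np
import tf_keras

# classifier_activation=None keeps the final Dense layer linear, which is
# allowed together with the pre-trained "imagenet" weights.
model = tf_keras.applications.ConvNeXtLarge(
    weights="imagenet", classifier_activation=None
)

images = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype="uint8")
logits = model.predict(images)  # unnormalized scores, shape (1, 1000)

# Numerically stable softmax, if probabilities are needed downstream.
shifted = logits - logits.max(axis=-1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)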
ConvNeXtXLarge function

tf_keras.applications.ConvNeXtXLarge(
model_name="convnext_xlarge",
include_top=True,
include_preprocessing=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
)
Instantiates the ConvNeXtXLarge architecture.
References

A ConvNet for the 2020s (CVPR 2022)

For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.

The base, large, and xlarge models were first pre-trained on the ImageNet-21k dataset and then fine-tuned on the ImageNet-1k dataset. The pre-trained parameters of the models were assembled from the official repository. To get a sense of how these parameters were converted to TF-Keras compatible parameters, please refer to this repository.
Note: Each TF-Keras Application expects a specific kind of input preprocessing. For ConvNeXt, preprocessing is included in the model using a Normalization layer. ConvNeXt models expect their inputs to be float or uint8 tensors of pixels with values in the [0-255] range.

When calling the summary() method after instantiating a ConvNeXt model, prefer setting the expand_nested argument of summary() to True to better investigate the instantiated model.
Arguments

include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True.
weights: One of None (random initialization), "imagenet" (pre-training on ImageNet-1k), or the path to the weights file to be loaded. Defaults to "imagenet".
input_tensor: Optional TF-Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 input channels.
pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional layer. avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. Defaults to None.
classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. 1000 is how many ImageNet classes there are. Defaults to 1000.
classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Defaults to "softmax".

Returns

A keras.Model instance.
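Finally, a sketch of wiring the model to a custom input tensor; the 384x384 resolution is only an illustration of how input_tensor behaves when include_top is False, not a requirement of the architecture.

import tf_keras

# A custom input resolution is possible once the classification head is dropped.
inputs = tf_keras.layers.Input(shape=(384, 384, 3))
backbone = tf_keras.applications.ConvNeXtXLarge(
    include_top=False,
    input_tensor=inputs,
    pooling="max",
    weights="imagenet",
)
print(backbone.output_shape)  # (None, 2048): max-pooled features of the XLarge variant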