A model grouping layers into an object with training/inference features.
Args:
    inputs: The input(s) of the model: a keras.Input object or a combination of keras.Input objects in a dict, list or tuple.
    outputs: The output(s) of the model: a tensor that originated from keras.Input objects or a combination of such tensors in a dict, list or tuple. See the Functional API example below.
There are two ways to instantiate a Model:

1 - With the "Functional API", where you start from keras.Input, you chain layer calls to specify the model's forward pass, and finally you create your model from inputs and outputs:
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
Note: Only dicts, lists, and tuples of input tensors are supported. Nested inputs are not supported (e.g. lists of lists or dicts of dicts).
A new Functional API model can also be created by using the intermediate tensors. This enables you to quickly extract sub-components of the model.
inputs = keras.Input(shape=(None, None, 3))
processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
pooling = keras.layers.GlobalAveragePooling2D()(conv)
feature = keras.layers.Dense(10)(pooling)

full_model = keras.Model(inputs, feature)
backbone = keras.Model(processed, conv)
activations = keras.Model(conv, feature)
Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originated from keras.Input objects. Under the hood, the layers and weights are shared across these models, so that users can train the full_model, and use backbone or activations to do feature extraction.
The inputs and outputs of the model can be nested structures of tensors as
well, and the created models are standard Functional API models that support
all the existing APIs.
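As a minimal sketch of nested input structures (the input names "x1" and "x2" and the layer sizes here are arbitrary, chosen for illustration), a Functional model can accept a dict of keras.Input objects and be called with a matching dict of tensors:

```python
import tensorflow as tf

# Two named inputs passed as a dict; the output is a single tensor.
inputs = {
    "x1": tf.keras.Input(shape=(4,), name="x1"),
    "x2": tf.keras.Input(shape=(4,), name="x2"),
}
concat = tf.keras.layers.Concatenate()([inputs["x1"], inputs["x2"]])
outputs = tf.keras.layers.Dense(1)(concat)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Call the model with a dict whose keys match the input structure.
y = model({"x1": tf.ones((2, 4)), "x2": tf.zeros((2, 4))})
print(y.shape)  # (2, 1)
```

Such a model works with all the standard APIs (compile, fit, predict), provided the data is passed with the same nesting.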
2 - By subclassing the Model class: in that case, you should define your layers in __init__() and you should implement the model's forward pass in call():
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        if training:
            x = self.dropout(x, training=training)
        return self.dense2(x)

model = MyModel()
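To make the training-flag behavior concrete, here is a minimal self-contained sketch (the DropoutModel class and the 0.5 rate are illustrative, not part of the API): dropout is the identity in inference mode, while in training mode it zeroes roughly half the units and scales the rest by 1 / (1 - rate).

```python
import tensorflow as tf

# A minimal subclassed model whose call() honors the training flag.
class DropoutModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        return self.dropout(inputs, training=training)

model = DropoutModel()
x = tf.ones((1, 10))

# training=False: dropout is a no-op, the input passes through unchanged.
inference_out = model(x, training=False)
# training=True: each unit is either dropped (0.0) or scaled to 2.0.
training_out = model(x, training=True)
print(bool(tf.reduce_all(inference_out == x)))  # True
```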
Once the model is created, you can configure the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict().
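A minimal end-to-end sketch of this workflow, reusing the Functional model from the first example and training on random data (the optimizer, loss, and batch size here are arbitrary choices for illustration):

```python
import numpy as np
import tensorflow as tf

# Build the Functional model from the first example above.
inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Configure losses and metrics, train briefly, then predict.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
data = np.random.random((32, 3))
labels = np.random.randint(5, size=(32,))
model.fit(data, labels, epochs=1, verbose=0)
predictions = model.predict(data, verbose=0)
print(predictions.shape)  # (32, 5)
```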
Model.summary(
    line_length=None,
    positions=None,
    print_fn=None,
    expand_nested=False,
    show_trainable=False,
    layer_range=None,
)
Prints a string summary of the network.
Args:
    line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).
    positions: Relative or absolute positions of log elements in each line. If not provided, defaults to [.33, .55, .67, 1.].
    print_fn: Print function to use. By default, prints to stdout. If stdout doesn't work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.
    expand_nested: Whether to expand the nested models. Defaults to False.
    show_trainable: Whether to show if a layer is trainable. Defaults to False.
    layer_range: A list or tuple of 2 strings: the starting layer name and the ending layer name (both inclusive), indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names, in which case the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:
    ValueError: if summary() is called before the model is built.
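A short sketch of the print_fn and layer_range arguments (the layer names "in", "hidden", and "out" are arbitrary): print_fn lets you capture the summary as a string instead of writing to stdout, and layer_range restricts which layers are listed.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,), name="in")
x = tf.keras.layers.Dense(4, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(5, name="out")(x)
model = tf.keras.Model(inputs, outputs)

# Capture the summary lines instead of printing them to stdout.
lines = []
model.summary(print_fn=lines.append)
summary_text = "\n".join(str(line) for line in lines)

# Restrict the printed rows to the layers between "hidden" and "out".
model.summary(layer_range=["hidden", "out"])
```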
Model.get_layer(name=None, index=None)

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Returns:
    A layer instance.
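A small sketch of both lookup modes (the layer names are arbitrary; note that for a Functional model the InputLayer occupies index 0, so the first Dense layer is at index 1):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,), name="in")
x = tf.keras.layers.Dense(4, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(5, name="out")(x)
model = tf.keras.Model(inputs, outputs)

# Look up the same layer by its unique name and by traversal index.
by_name = model.get_layer(name="hidden")
by_index = model.get_layer(index=1)
print(by_name is by_index)  # True
```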