Layer nodes¶
In this reading, we will be looking at the concept of layer nodes when creating a computational graph with shared layers.
import tensorflow as tf
print(tf.__version__)
Creating a simple computational graph¶
You have previously seen how to construct multiple input or output models, and also how to access model layers. Let's start by creating two inputs:
# Create the input layers
from tensorflow.keras.layers import Input
a = Input(shape=(128, 128, 3), name="input_a")
b = Input(shape=(64, 64, 3), name="input_b")
Now, we create a 2D convolutional layer, and call it on one of the inputs.
# Create and use the convolutional layer
from tensorflow.keras.layers import Conv2D
conv = Conv2D(32, (6, 6), padding='same')
conv_out_a = conv(a)
print(type(conv_out_a))
The output of the layer is now a new Tensor, which captures the operation of calling the layer conv on the input a.
By defining this new operation in our computational graph, we have added a node to the conv layer. This node relates the input tensor to the output tensor.
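We can take a quick look at this node list. The following is a hedged check, assuming tf.keras exposes the list through the layer's inbound_nodes attribute (an implementation detail that may change between versions):
# Inspect the layer's node list (assumes the inbound_nodes attribute is available)
print(len(conv.inbound_nodes))  # 1 node so far, created by calling conv on a
print(conv.inbound_nodes[0])    # the Node object linking input a to conv_out_a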
Layer input and outputs¶
We can retrieve the output of a layer using the output attribute, and we can also get the input by using the input attribute.
Similarly, we can retrieve the input/output shape using the input_shape and output_shape attributes.
# Print the input and output tensors
print(conv.input)
print(conv.output)
# Verify the input and output shapes
assert conv.input_shape == (None, 128, 128, 3)
assert conv.output_shape == (None, 128, 128, 32)
Creating a new layer node¶
Now, let's call this layer again on a different input:
# Call the layer a second time
conv_out_b = conv(b)
When we call the same layer multiple times, that layer owns multiple nodes indexed as 0, 1, 2...
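Assuming the same inbound_nodes attribute is available, we can confirm that the layer now owns two nodes:
# Confirm the layer now has two nodes, one per call (assumes inbound_nodes is available)
assert len(conv.inbound_nodes) == 2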
Now, what happens if we call input and output for this layer?
# Check the input and output attributes
assert conv.input.name == a.name
assert conv.output.name == conv_out_a.name
As you can see, the layer's input is identified as a and its output as conv_out_a, even though we have since called the layer on b: something is not quite right here. As long as a layer is connected to only one input, there is no confusion about what .input should return, and .output will return the one output of the layer. But when the layer is called on multiple inputs, we end up in an ambiguous situation.
Let's try to get the input/output shape:
# Try accessing the input_shape (raises an AttributeError: the layer has two nodes with different input shapes)
print(conv.input_shape)
# Try accessing the output_shape (raises an AttributeError for the same reason)
print(conv.output_shape)
input_shape and output_shape did not return the shapes of the two inputs and outputs; instead, they raised an error.
Indexing layer nodes¶
We have applied the same Conv2D layer to an input of shape (128, 128, 3), and then to an input of shape (64, 64, 3). The layer therefore has multiple input/output shapes, and we have to retrieve them by specifying the index of the node they belong to.
To get the input/output shapes, we now have to use get_input_shape_at and get_output_shape_at with the correct node index:
# Check the input and output shapes for each layer node
assert conv.get_input_shape_at(0) == (None, 128, 128, 3) # Tensor a
assert conv.get_input_shape_at(1) == (None, 64, 64, 3) # Tensor b
assert conv.get_output_shape_at(0) == (None, 128, 128, 32) # Tensor conv_out_a
assert conv.get_output_shape_at(1) == (None, 64, 64, 32) # Tensor conv_out_b
Likewise, we use get_input_at and get_output_at to fetch the inputs/outputs:
# Check the input and output tensors for each layer node
assert conv.get_input_at(0).name == a.name
assert conv.get_input_at(1).name == b.name
assert conv.get_output_at(0).name == conv_out_a.name
assert conv.get_output_at(1).name == conv_out_b.name
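This node indexing is what makes shared layers practical. As a minimal sketch using the inputs and layer calls defined above, the two calls of conv can be collected into a single functional model:
# Sketch: a functional model that shares the conv layer across both inputs
from tensorflow.keras.models import Model

shared_model = Model(inputs=[a, b], outputs=[conv_out_a, conv_out_b])
shared_model.summary()
Because conv is shared, its weights are created once and counted once in the parameter totals, while both of its nodes appear in the model's connectivity.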
Further reading and resources¶