The logs dictionary

Using the logs dictionary

In this reading, we will learn how to take advantage of the logs dictionary in Keras to define our own callbacks and check the progress of a model.

In [1]:
import tensorflow as tf
print(tf.__version__)
2.0.0

The logs dictionary stores the loss value, along with all of the metrics we are using, at the end of each batch or epoch.

We can incorporate information from the logs dictionary into our own custom callbacks.
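
As a quick illustrative sketch (this snippet is not part of the original notebook), the callback below simply prints which keys are available in the logs dictionary at the end of each epoch; the exact keys depend on the loss and metrics passed to compile.

# Hypothetical example: inspect the keys exposed by the logs dictionary

class LogsInspector(tf.keras.callbacks.Callback):

    # Print the keys available in the logs dictionary at the end of each epoch.
    # For a model compiled with loss='mse' and metrics=['mae'], this would
    # typically show ['loss', 'mae'].
    def on_epoch_end(self, epoch, logs=None):
        print('Epoch {}: logs keys are {}'.format(epoch, sorted((logs or {}).keys())))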

Let's see this in action in the context of a model we will construct and fit to the sklearn diabetes dataset that we have been using in this module.

Let's first import the dataset and split it into training and test sets.

In [2]:
# Load the diabetes dataset

from sklearn.datasets import load_diabetes

diabetes_dataset = load_diabetes()
In [3]:
# Save the input and target variables

from sklearn.model_selection import train_test_split

data = diabetes_dataset['data']
targets = diabetes_dataset['target']
In [4]:
# Split the data set into training and test sets

train_data, test_data, train_targets, test_targets = train_test_split(data, targets, test_size=0.1)

Now we construct our model.

In [5]:
# Build the model

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization

model = Sequential([
    Dense(128, activation='relu', input_shape=(train_data.shape[1],)),
    Dense(64, activation='relu'),
    BatchNormalization(),
    Dense(64, activation='relu'),
    Dense(64, activation='relu'),
    Dense(1)
])

We now compile the model, with

  • mean squared error as the loss function,
  • the Adam optimizer, and
  • mean absolute error (mae) as a metric.
In [6]:
# Compile the model
    
model.compile(loss='mse', optimizer="adam", metrics=['mae'])

Defining a custom callback

Now we define our custom callback using the logs dictionary to access the loss and metric values.

In [7]:
# Create the custom callback

class LossAndMetricCallback(tf.keras.callbacks.Callback):

    # Print the loss after every second batch in the training set
    def on_train_batch_end(self, batch, logs=None):
        if batch % 2 == 0:
            print('\n After batch {}, the loss is {:7.2f}.'.format(batch, logs['loss']))
    
    # Print the loss after each batch in the test set
    def on_test_batch_end(self, batch, logs=None):
        print('\n After batch {}, the loss is {:7.2f}.'.format(batch, logs['loss']))

    # Print the loss and mean absolute error after each epoch
    def on_epoch_end(self, epoch, logs=None):
        print('Epoch {}: Average loss is {:7.2f}, mean absolute error is {:7.2f}.'.format(epoch, logs['loss'], logs['mae']))
    
    # Notify the user when prediction has finished on each batch
    def on_predict_batch_end(self, batch, logs=None):
        print("Finished prediction on batch {}!".format(batch))

We now fit the model to the data, and specify that we would like to use our custom callback LossAndMetricCallback().

In [8]:
# Train the model

history = model.fit(train_data, train_targets, epochs=20,
                    batch_size=100, callbacks=[LossAndMetricCallback()], verbose=False)
 After batch 0, the loss is 28007.27.

 After batch 2, the loss is 24777.04.
Epoch 0: Average loss is 28753.72, mean absolute error is  151.11.

 After batch 0, the loss is 22700.81.

 After batch 2, the loss is 28438.73.
Epoch 1: Average loss is 28635.25, mean absolute error is  150.76.

 After batch 0, the loss is 32815.19.

 After batch 2, the loss is 29867.44.
Epoch 2: Average loss is 28488.39, mean absolute error is  150.31.

 After batch 0, the loss is 26781.34.

 After batch 2, the loss is 27277.63.
Epoch 3: Average loss is 28270.64, mean absolute error is  149.65.

 After batch 0, the loss is 30090.87.

 After batch 2, the loss is 26736.91.
Epoch 4: Average loss is 27956.76, mean absolute error is  148.68.

 After batch 0, the loss is 29314.98.

 After batch 2, the loss is 24041.90.
Epoch 5: Average loss is 27515.78, mean absolute error is  147.30.

 After batch 0, the loss is 32484.00.

 After batch 2, the loss is 26737.73.
Epoch 6: Average loss is 26906.38, mean absolute error is  145.38.

 After batch 0, the loss is 25736.35.

 After batch 2, the loss is 24852.72.
Epoch 7: Average loss is 26091.10, mean absolute error is  142.81.

 After batch 0, the loss is 29195.69.

 After batch 2, the loss is 24360.26.
Epoch 8: Average loss is 25068.26, mean absolute error is  139.43.

 After batch 0, the loss is 25008.46.

 After batch 2, the loss is 22417.87.
Epoch 9: Average loss is 23758.47, mean absolute error is  135.03.

 After batch 0, the loss is 21949.68.

 After batch 2, the loss is 19568.85.
Epoch 10: Average loss is 22133.12, mean absolute error is  129.38.

 After batch 0, the loss is 20758.70.

 After batch 2, the loss is 20782.68.
Epoch 11: Average loss is 20259.41, mean absolute error is  122.55.

 After batch 0, the loss is 19671.01.

 After batch 2, the loss is 18059.38.
Epoch 12: Average loss is 18193.77, mean absolute error is  114.37.

 After batch 0, the loss is 13395.71.

 After batch 2, the loss is 16549.19.
Epoch 13: Average loss is 15842.08, mean absolute error is  104.87.

 After batch 0, the loss is 15596.23.

 After batch 2, the loss is 10588.64.
Epoch 14: Average loss is 13519.03, mean absolute error is   94.83.

 After batch 0, the loss is 12555.92.

 After batch 2, the loss is 10305.87.
Epoch 15: Average loss is 11242.22, mean absolute error is   84.33.

 After batch 0, the loss is 11025.24.

 After batch 2, the loss is 7438.04.
Epoch 16: Average loss is 9296.98, mean absolute error is   76.07.

 After batch 0, the loss is 7764.90.

 After batch 2, the loss is 7354.85.
Epoch 17: Average loss is 7666.56, mean absolute error is   67.18.

 After batch 0, the loss is 7910.00.

 After batch 2, the loss is 5206.54.
Epoch 18: Average loss is 6574.40, mean absolute error is   61.75.

 After batch 0, the loss is 5699.69.

 After batch 2, the loss is 6489.23.
Epoch 19: Average loss is 5968.41, mean absolute error is   59.42.
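
Note that the per-epoch values printed by the callback are the same quantities that Keras records in the History object returned by fit. As a quick check (not shown in the original notebook):

# The History object stores the per-epoch loss and metric values
print(history.history.keys())       # dict_keys(['loss', 'mae'])
print(history.history['loss'][-1])  # loss from the final epoch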

We can also use our callback in the evaluate function...

In [9]:
# Evaluate the model

model_eval = model.evaluate(test_data, test_targets, batch_size=10, 
                            callbacks=[LossAndMetricCallback()], verbose=False)
 After batch 0, the loss is 16130.74.

 After batch 1, the loss is 13773.17.

 After batch 2, the loss is 21062.39.

 After batch 3, the loss is 29929.51.

 After batch 4, the loss is 23903.98.

...and also in the predict function.

In [10]:
# Get predictions from the model

model_pred = model.predict(test_data, batch_size=10,
                           callbacks=[LossAndMetricCallback()], verbose=False)
Finished prediction on batch 0!
Finished prediction on batch 1!
Finished prediction on batch 2!
Finished prediction on batch 3!
Finished prediction on batch 4!
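
The predictions themselves come back as a NumPy array with one row per test example. As a quick check (not part of the original notebook):

# model.predict returns an array of predictions, one per test example
print(model_pred.shape)   # (number of test examples, 1)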

Application - learning rate scheduler

Let's now look at a more sophisticated custom callback.

We are going to define a callback to change the learning rate of the model's optimizer during training. We will do this by specifying the epochs at which we would like the learning rate to change, along with the new values.

First we define the auxiliary function that returns the learning rate for each epoch based on our schedule.

In [11]:
# Define the learning rate schedule. The tuples below are (start_epoch, new_learning_rate)

lr_schedule = [
    (4, 0.03), (7, 0.02), (11, 0.005), (15, 0.007)
]

def get_new_epoch_lr(epoch, lr):
    # Checks to see if the input epoch is listed in the learning rate schedule 
    # and if so, returns index in lr_schedule
    epoch_in_sched = [i for i in range(len(lr_schedule)) if lr_schedule[i][0] == int(epoch)]
    if len(epoch_in_sched) > 0:
        # If it is, return the learning rate corresponding to the epoch
        return lr_schedule[epoch_in_sched[0]][1]
    else:
        # Otherwise, return the existing learning rate
        return lr
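
As a quick check (not part of the original notebook), we can call this function directly: for an epoch listed in the schedule it returns the new rate, and for any other epoch it returns the rate it was given.

# Epoch 4 appears in lr_schedule, so the scheduled rate 0.03 is returned
print(get_new_epoch_lr(4, 0.001))   # 0.03

# Epoch 5 is not in the schedule, so the current rate is returned unchanged
print(get_new_epoch_lr(5, 0.03))    # 0.03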

Let's now define the callback itself.

In [12]:
# Define the custom callback

class LRScheduler(tf.keras.callbacks.Callback):
    
    def __init__(self, new_lr):
        super(LRScheduler, self).__init__()
        # Add the new learning rate function to our callback
        self.new_lr = new_lr

    def on_epoch_begin(self, epoch, logs=None):
        # Make sure that the optimizer we have chosen has a learning rate, and raise an error if not
        if not hasattr(self.model.optimizer, 'lr'):
            raise ValueError('Error: Optimizer does not have a learning rate.')
                
        # Get the current learning rate
        curr_rate = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        
        # Call the auxiliary function to get the scheduled learning rate for the current epoch
        scheduled_rate = self.new_lr(epoch, curr_rate)

        # Set the learning rate to the scheduled learning rate
        tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_rate)
        print('Learning rate for epoch {} is {:7.3f}'.format(epoch, scheduled_rate))
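
As an aside (not covered in the original notebook), Keras also provides a built-in callback, tf.keras.callbacks.LearningRateScheduler, which accepts a scheduling function of the form (epoch, current_lr) such as get_new_epoch_lr defined above; the custom class here is written out in full to show how the mechanism works.

# A roughly equivalent built-in alternative (not used in the rest of this notebook)
builtin_lr_callback = tf.keras.callbacks.LearningRateScheduler(get_new_epoch_lr)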

Let's now train the model again with our new callback.

In [16]:
# Build the same model as before

new_model = Sequential([
    Dense(128, activation='relu', input_shape=(train_data.shape[1],)),
    Dense(64, activation='relu'),
    BatchNormalization(),
    Dense(64, activation='relu'),
    Dense(64, activation='relu'),
    Dense(1)
])
In [17]:
# Compile the model

new_model.compile(loss='mse',
                optimizer="adam",
                metrics=['mae', 'mse'])
In [19]:
# Fit the model with our learning rate scheduler callback

new_history = new_model.fit(train_data, train_targets, epochs=20,
                            batch_size=100, callbacks=[LRScheduler(get_new_epoch_lr)], verbose=False)
Learning rate for epoch 0 is   0.001
Learning rate for epoch 1 is   0.001
Learning rate for epoch 2 is   0.001
Learning rate for epoch 3 is   0.001
Learning rate for epoch 4 is   0.030
Learning rate for epoch 5 is   0.030
Learning rate for epoch 6 is   0.030
Learning rate for epoch 7 is   0.020
Learning rate for epoch 8 is   0.020
Learning rate for epoch 9 is   0.020
Learning rate for epoch 10 is   0.020
Learning rate for epoch 11 is   0.005
Learning rate for epoch 12 is   0.005
Learning rate for epoch 13 is   0.005
Learning rate for epoch 14 is   0.005
Learning rate for epoch 15 is   0.007
Learning rate for epoch 16 is   0.007
Learning rate for epoch 17 is   0.007
Learning rate for epoch 18 is   0.007
Learning rate for epoch 19 is   0.007
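
As a final check (not part of the original notebook), we can read the optimizer's learning rate back after training and confirm that it has been left at the last scheduled value:

# The learning rate set at the start of epoch 19 remains in place after training
print(float(tf.keras.backend.get_value(new_model.optimizer.lr)))   # approximately 0.007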