Capstone Project¶
Neural translation model¶
Instructions¶
In this notebook, you will create a neural network that translates from English to German. You will use concepts from throughout this course, including building more flexible model architectures, freezing layers, data processing pipelines and sequence modelling.
This project is peer-assessed. Within this notebook you will find instructions in each section for how to complete the project. Pay close attention to the instructions as the peer review will be carried out according to a grading rubric that checks key parts of the project instructions. Feel free to add extra cells into the notebook as required.
How to submit¶
When you have completed the Capstone project notebook, you will submit a pdf of the notebook for peer review. First ensure that the notebook has been fully executed from beginning to end, and all of the cell outputs are visible. This is important, as the grading rubric depends on the reviewer being able to view the outputs of your notebook. Save the notebook as a pdf (File -> Download as -> PDF via LaTeX). You should then submit this pdf for review.
Let's get started!¶
We'll start by running some imports, and loading the dataset. For this project you are free to make further imports throughout the notebook as you wish.
import tensorflow as tf
import tensorflow_hub as hub
import unicodedata
import re
For the capstone project, you will use a language dataset from http://www.manythings.org/anki/ to build a neural translation model. This dataset consists of over 200,000 pairs of sentences in English and German. In order to make the training quicker, we will restrict our dataset to 20,000 pairs. Feel free to change this if you wish - the size of the dataset used is not part of the grading rubric.
Your goal is to develop a neural translation model from English to German, making use of a pre-trained English word embedding module.
# Download the dataset file
!gdown --id 1KczOciG7sYY7SB9UlBeRP1T9659b121Q
# Run this cell to load the dataset
NUM_EXAMPLES = 20000
data_examples = []
with open('/content/deu.txt', 'r', encoding='utf8') as f:
    for line in f.readlines():
        if len(data_examples) < NUM_EXAMPLES:
            data_examples.append(line)
        else:
            break
# These functions preprocess English and German sentences

def unicode_to_ascii(s):
    return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn')

def preprocess_sentence(sentence):
    sentence = sentence.lower().strip()
    sentence = re.sub(r"ü", 'ue', sentence)
    sentence = re.sub(r"ä", 'ae', sentence)
    sentence = re.sub(r"ö", 'oe', sentence)
    sentence = re.sub(r'ß', 'ss', sentence)
    sentence = unicode_to_ascii(sentence)
    sentence = re.sub(r"([?.!,])", r" \1 ", sentence)
    sentence = re.sub(r"[^a-z?.!,']+", " ", sentence)
    sentence = re.sub(r'[" "]+', " ", sentence)
    return sentence.strip()
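For example, running the preprocessing on a sample German greeting shows the umlaut rewriting, lowercasing and punctuation spacing in action (the input string here is just an illustration, not a line from the dataset):

# Illustrative check of the preprocessing
print(preprocess_sentence("Grüß dich! Wie geht's?"))
# Expected output: "gruess dich ! wie geht's ?"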
The custom translation model¶
The following is a schematic of the custom translation model architecture you will develop in this project.
The custom model consists of an encoder RNN and a decoder RNN. The encoder takes words of an English sentence as input, and uses a pre-trained word embedding to embed the words into a 128-dimensional space. To indicate the end of the input sentence, a special end token (in the same 128-dimensional space) is passed in as an input. This token is a TensorFlow Variable that is learned in the training phase (unlike the pre-trained word embedding, which is frozen).
The decoder RNN takes the internal state of the encoder network as its initial state. A start token is passed in as the first input, which is embedded using a learned German word embedding. The decoder RNN then makes a prediction for the next German word, which during inference is then passed in as the following input, and this process is repeated until the special "<end>" token is emitted from the decoder.
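As a rough orientation before building it, the tensor shapes flowing through the model are as follows (assuming the defaults used later in this notebook: input length 13, embedding dimension 128, LSTM width 512):

# English input:          (batch, 13, 128)   pre-trained word embeddings
# after end token layer:  (batch, 14, 128)   learned 128-d end token appended
# encoder LSTM states:    hidden and cell, each (batch, 512)
# German input:           (batch, T) token ids -> learned Embedding -> (batch, T, 128)
# decoder LSTM:           initialised with the encoder states -> (batch, T, 512)
# Dense layer:            (batch, T, vocab_size) logits over German tokens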
1. Text preprocessing¶
- Create separate lists of English and German sentences, and preprocess them using the preprocess_sentence function provided for you above.
- Add a special "<start>" and "<end>" token to the beginning and end of every German sentence.
- Use the Tokenizer class from the tf.keras.preprocessing.text module to tokenize the German sentences, ensuring that no character filters are applied. Hint: use the Tokenizer's "filters" keyword argument.
- Print out at least 5 randomly chosen examples of (preprocessed) English and German sentence pairs. For the German sentence, print out the text (with start and end tokens) as well as the tokenized sequence.
- Pad the end of the tokenized German sequences with zeros, and batch the complete set of sequences into one numpy array.
english_sentences = []
german_sentences = []

for line in data_examples:
    sentence = re.split("CC-BY", line)[0]
    sentence_preprocessed = preprocess_sentence(sentence)
    # The first terminating punctuation mark separates the English
    # sentence from its German translation
    pos = re.search("[?!.]", sentence_preprocessed).span()[1]
    english = sentence_preprocessed[:pos]
    german = sentence_preprocessed[pos:]
    german = "<start>" + german + " <end>"
    english_sentences.append(english)
    german_sentences.append(german)
def get_sequences(sentences):
    tokenizer = tf.keras.preprocessing.text.Tokenizer(filters="")
    tokenizer.fit_on_texts(sentences)
    sequences = tokenizer.texts_to_sequences(sentences)
    return sequences

german_sentences_sequences = get_sequences(german_sentences)
german_sentences_sequences
import numpy as np

indx = np.random.choice(len(english_sentences), 5)
for i in indx:
    print(english_sentences[i])
    print(german_sentences[i])
    print(german_sentences_sequences[i])
german_sentences_sequences_pad = tf.keras.preprocessing.sequence.pad_sequences(sequences=german_sentences_sequences,
padding="post",
value=0)
german_sentences_sequences_pad
2. Prepare the data with tf.data.Dataset objects¶
Load the embedding layer¶
As part of the dataset preprocessing for this project, you will use a pre-trained English word embedding module from TensorFlow Hub. The URL for the module is https://tfhub.dev/google/tf2-preview/nnlm-en-dim128-with-normalization/1. This module has also been made available as a complete saved model in the folder './models/tf2-preview_nnlm-en-dim128_1'.
This embedding takes a batch of text tokens in a 1-D tensor of strings as input. It then embeds the separate tokens into a 128-dimensional space.
The code to load and test the embedding layer is provided for you below.
NB: this model can also be used as a sentence embedding module. The module will process each token by removing punctuation and splitting on spaces. It then averages the word embeddings over a sentence to give a single embedding vector. However, we will use it only as a word embedding module, and will pass each word in the input sentence as a separate token.
# Load the embedding module from TensorFlow Hub
embedding_layer = hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1",
output_shape=[128], input_shape=[], dtype=tf.string)
# Test the layer
embedding_layer(tf.constant(["these", "aren't", "the", "droids", "you're", "looking", "for"])).shape
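To see the difference between the two usages described above, you can compare the output shapes when passing a whole sentence as a single token versus one word per token (an optional check, not required by the instructions):

# One multi-word string is averaged into a single 128-d vector...
print(embedding_layer(tf.constant(["these aren't the droids"])).shape)  # (1, 128)
# ...whereas one word per token gives one vector per word
print(embedding_layer(tf.constant(["these", "aren't", "the", "droids"])).shape)  # (4, 128)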
You should now prepare the training and validation Datasets.
- Create a random training and validation set split of the data, reserving e.g. 20% of the data for validation (NB: each English dataset example is a single sentence string, and each German dataset example is a sequence of padded integer tokens).
- Load the training and validation sets into a tf.data.Dataset object, passing in a tuple of English and German data for both training and validation sets.
- Create a function to map over the datasets that splits each English sentence at spaces. Apply this function to both Dataset objects using the map method. Hint: look at the tf.strings.split function.
- Create a function to map over the datasets that embeds each sequence of English words using the loaded embedding layer/model. Apply this function to both Dataset objects using the map method.
- Create a function to filter out dataset examples where the English sentence is more than 13 (embedded) tokens in length. Apply this function to both Dataset objects using the filter method.
- Create a function to map over the datasets that pads each English sequence of embeddings with some distinct padding value before the sequence, so that each sequence is length 13. Apply this function to both Dataset objects using the map method. Hint: look at the tf.pad function. You can extract a Tensor shape using tf.shape; you might also find the tf.math.maximum function useful.
- Batch both training and validation Datasets with a batch size of 16.
- Print the element_spec property for the training and validation Datasets.
- Using the Dataset .take(1) method, print the shape of the English data example from the training Dataset.
- Using the Dataset .take(1) method, print the German data example Tensor from the validation Dataset.
from sklearn.model_selection import train_test_split

train_english, val_english, train_german, val_german = train_test_split(english_sentences, german_sentences_sequences_pad, test_size=0.2)

train_dataset = tf.data.Dataset.from_tensor_slices((train_english, train_german))
val_dataset = tf.data.Dataset.from_tensor_slices((val_english, val_german))

# Split each English sentence at spaces
def map_split_aux(english, german):
    return tf.strings.split(english, sep=" "), german

train_dataset_split = train_dataset.map(map_split_aux)
val_dataset_split = val_dataset.map(map_split_aux)

# Embed each sequence of English words
def map_embed_aux(english, german):
    return embedding_layer(english), german

train_dataset_embed = train_dataset_split.map(map_embed_aux)
val_dataset_embed = val_dataset_split.map(map_embed_aux)

# Keep only examples whose English sentence has at most 13 (embedded) tokens
def filter_len_aux(english, german):
    return tf.shape(english)[0] <= 13

train_dataset_len = train_dataset_embed.filter(filter_len_aux)
val_dataset_len = val_dataset_embed.filter(filter_len_aux)
# Pre-pad each English sequence of embeddings with zeros to length 13
def map_pad_aux(english, german):
    paddings = tf.constant([[13, 0], [0, 0]])
    new_english = tf.pad(english, paddings)
    return new_english[-13:, :], german

train_dataset_pad = train_dataset_len.map(map_pad_aux)
val_dataset_pad = val_dataset_len.map(map_pad_aux)
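To check what this mapping does, here is the same pre-padding applied to a dummy sequence of four embedded tokens (illustrative values only):

dummy = tf.ones([4, 128])                           # a "sentence" of 4 embedded words
padded = tf.pad(dummy, [[13, 0], [0, 0]])[-13:, :]  # prepend zeros, keep the last 13 rows
print(padded.shape)                                 # (13, 128); the first 9 rows are zeros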
train_dataset_pad.element_spec[0]
train_dataset_batch = train_dataset_pad.batch(16)
val_dataset_batch = val_dataset_pad.batch(16)
temp = train_dataset_batch.take(1)
temp = list(temp)[0]
english_1 = temp[0]
temp = val_dataset_batch.take(1)
temp = list(temp)[0]
german_1 = temp[1]
print(english_1.shape)
print(german_1)
print(train_dataset_batch.element_spec)
print(val_dataset_batch.element_spec)
3. Create the custom layer¶
You will now create a custom layer to add the learned end token embedding to the encoder model.
You should now build the custom layer.
- Using layer subclassing, create a custom layer that takes a batch of English data examples from one of the Datasets, and adds a learned embedded ‘end’ token to the end of each sequence.
- This layer should create a TensorFlow Variable (that will be learned during training) that is 128-dimensional (the size of the embedding space). Hint: you may find it helpful in the call method to use the tf.tile function to replicate the end token embedding across every element in the batch.
- Using the Dataset .take(1) method, extract a batch of English data examples from the training Dataset and print the shape. Test the custom layer by calling the layer on the English data batch Tensor and print the resulting Tensor shape (the layer should increase the sequence length by one).
from tensorflow.keras.layers import Layer

class EndTokenLayer(Layer):

    def __init__(self, embedding_dim=128, **kwargs):
        super(EndTokenLayer, self).__init__(**kwargs)
        self.end_token_embedding = self.add_weight(shape=(embedding_dim,),
                                                   initializer="random_uniform",
                                                   trainable=True)

    def call(self, inputs):
        # Replicate the learned end token across the batch: (batch, 1, embedding_dim)
        end_token = tf.tile(tf.reshape(self.end_token_embedding, shape=(1, 1, self.end_token_embedding.shape[0])),
                            [tf.shape(inputs)[0], 1, 1])
        return tf.keras.layers.concatenate([inputs, end_token], axis=1)
temp = train_dataset_batch.take(1)
for x, y in temp:
    print(x.shape)
    print(y.shape)

layer = EndTokenLayer(128)
output = layer(list(temp)[0][0])
output.shape
4. Build the encoder network¶
The encoder network follows the schematic diagram above. You should now build the RNN encoder model.
- Using the functional API, build the encoder network according to the following spec:
  - The model will take a batch of sequences of embedded English words as input, as given by the Dataset objects.
  - The next layer in the encoder will be the custom layer you created previously, to add a learned end token embedding to the end of the English sequence.
  - This is followed by a Masking layer, with the mask_value set to the distinct padding value you used when you padded the English sequences with the Dataset preprocessing above.
  - The final layer is an LSTM layer with 512 units, which also returns the hidden and cell states.
- The encoder is a multi-output model. There should be two output Tensors of this model: the hidden state and cell states of the LSTM layer. The output of the LSTM layer is unused.
- Using the Dataset .take(1) method, extract a batch of English data examples from the training Dataset and test the encoder model by calling it on the English data Tensor, and print the shape of the resulting Tensor outputs.
- Print the model summary for the encoder network.
from tensorflow.keras.models import Model

def get_encoder(input_shape):
    inputs = tf.keras.layers.Input(shape=input_shape)
    x = EndTokenLayer(128)(inputs)
    # Mask the zero padding added in the Dataset preprocessing
    x = tf.keras.layers.Masking(mask_value=0)(x)
    # The LSTM output sequence is unused; only the states are returned
    sequence, hidden, cell = tf.keras.layers.LSTM(units=512, return_state=True)(x)
    model = Model(inputs=inputs, outputs=[hidden, cell])
    return model
temp = train_dataset_batch.take(1)
temp = list(temp)[0][0]

encoder_model = get_encoder((13, 128))
encoder_model.summary()

hidden_state, cell_state = encoder_model(temp)
print(hidden_state.shape)
print(cell_state.shape)
5. Build the decoder network¶
The decoder network follows the schematic diagram below.
You should now build the RNN decoder model.
- Using Model subclassing, build the decoder network according to the following spec:
  - The initializer should create the following layers:
    - An Embedding layer with vocabulary size set to the number of unique German tokens, embedding dimension 128, and set to mask zero values in the input.
    - An LSTM layer with 512 units, that returns its hidden and cell states, and also returns sequences.
    - A Dense layer with number of units equal to the number of unique German tokens, and no activation function.
  - The call method should include the usual inputs argument, as well as the additional keyword arguments hidden_state and cell_state. The default value for these keyword arguments should be None.
  - The call method should pass the inputs through the Embedding layer, and then through the LSTM layer. If the hidden_state and cell_state arguments are provided, these should be used for the initial state of the LSTM layer. Hint: use the initial_state keyword argument when calling the LSTM layer on its input.
  - The call method should pass the LSTM output sequence through the Dense layer, and return the resulting Tensor, along with the hidden and cell states of the LSTM layer.
- Using the Dataset .take(1) method, extract a batch of English and German data examples from the training Dataset. Test the decoder model by first calling the encoder model on the English data Tensor to get the hidden and cell states, and then call the decoder model on the German data Tensor and hidden and cell states, and print the shape of the resulting decoder Tensor outputs.
- Print the model summary for the decoder network.
# Re-fit the tokenizer on the German sentences to get the vocabulary size
tokenizer = tf.keras.preprocessing.text.Tokenizer(filters="")
tokenizer.fit_on_texts(german_sentences)
max_index = len(tokenizer.word_index)

class Decoder(Model):

    def __init__(self):
        super(Decoder, self).__init__()
        self.embedding = tf.keras.layers.Embedding(input_dim=max_index + 1, output_dim=128, mask_zero=True)
        self.lstm = tf.keras.layers.LSTM(units=512, return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(units=max_index + 1)

    def call(self, inputs, hidden_state=None, cell_state=None):
        x = self.embedding(inputs)
        # Use the encoder states as the initial LSTM state when provided
        if hidden_state is not None and cell_state is not None:
            sequences, hidden, cell = self.lstm(x, initial_state=[hidden_state, cell_state])
        else:
            sequences, hidden, cell = self.lstm(x)
        output = self.dense(sequences)
        return output, hidden, cell
decoder_model = Decoder()

temp = train_dataset_batch.take(1)
dataset_english = list(temp)[0][0]
dataset_german = list(temp)[0][1]
print(dataset_english.shape)
print(dataset_german.shape)

hidden_states, cell_states = encoder_model(dataset_english)
output, hidden, cell = decoder_model(inputs=dataset_german, hidden_state=hidden_states, cell_state=cell_states)
print(output.shape)

decoder_model.summary()
6. Make a custom training loop¶
You should now write a custom training loop to train your custom neural translation model.
- Define a function that takes a Tensor batch of German data (as extracted from the training Dataset), and returns a tuple containing German inputs and outputs for the decoder model (refer to schematic diagram above).
- Define a function that computes the forward and backward pass for your translation model. This function should take an English input, German input and German output as arguments, and should do the following:
  - Pass the English input into the encoder, to get the hidden and cell states of the encoder LSTM.
  - These hidden and cell states are then passed into the decoder, along with the German inputs, which returns a sequence of outputs (the hidden and cell state outputs of the decoder LSTM are unused in this function).
  - The loss should then be computed between the decoder outputs and the German output function argument.
  - The function returns the loss and gradients with respect to the encoder and decoder’s trainable variables.
- Decorate the function with @tf.function.
- Define and run a custom training loop for a number of epochs (for you to choose) that does the following:
  - Iterates through the training dataset, and creates decoder inputs and outputs from the German sequences.
  - Updates the parameters of the translation model using the gradients of the function above and an optimizer object.
  - Every epoch, compute the validation loss on a number of batches from the validation dataset, and save the epoch training and validation losses.
- Plot the learning curves for loss vs epoch for both training and validation sets.
Hint: This model is computationally demanding to train. The quality of the model or length of training is not a factor in the grading rubric. However, to obtain a better model we recommend using the GPU accelerator hardware on Colab.
# Shift the German sequences to create teacher-forcing inputs and targets
def get_german_decoder(german_inputs):
    return german_inputs[:, :-1], german_inputs[:, 1:]
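To make the shift concrete, here is the function applied to a toy batch of token ids (the ids are made up for illustration; 1 and 2 stand in for the start and end tokens):

demo = tf.constant([[1, 5, 6, 7, 2]])
demo_in, demo_out = get_german_decoder(demo)
print(demo_in.numpy())   # [[1 5 6 7]] - decoder input starts with the start token
print(demo_out.numpy())  # [[5 6 7 2]] - target ends with the end token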
@tf.function
def forward_backward_pass(english_inputs, german_inputs, german_outputs):
    with tf.GradientTape() as tape:
        hidden_states, cell_states = encoder_model(english_inputs)
        decoder_outputs, _, _ = decoder_model(german_inputs, hidden_states, cell_states)
        loss_value = tf.keras.losses.sparse_categorical_crossentropy(y_true=german_outputs, y_pred=decoder_outputs, from_logits=True)
    return loss_value, tape.gradient(loss_value, encoder_model.trainable_variables + decoder_model.trainable_variables)
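One optional refinement, not required by the rubric: the German targets are zero-padded, so the loss above also averages over padding positions. A minimal sketch of a masked variant (assuming padding id 0, as produced by pad_sequences above):

def masked_loss(y_true, y_pred):
    # Per-token cross-entropy, with padded (id 0) positions masked out
    loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y_true, y_pred=y_pred, from_logits=True)
    mask = tf.cast(tf.not_equal(y_true, 0), loss.dtype)
    return tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)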
def train_model(epochs):
    training_loss = []
    val_loss = []
    optimizer = tf.keras.optimizers.Adam()

    for epoch in range(epochs):
        epoch_train_loss_avg = tf.keras.metrics.Mean()
        epoch_val_loss_avg = tf.keras.metrics.Mean()

        # Training: one full pass over the training dataset
        for english, german in train_dataset_batch:
            german_inputs, german_outputs = get_german_decoder(german)
            loss_value, grads = forward_backward_pass(english, german_inputs, german_outputs)
            epoch_train_loss_avg(loss_value)
            optimizer.apply_gradients(zip(grads, encoder_model.trainable_variables + decoder_model.trainable_variables))

        # Validation loss on a number of batches from the validation dataset
        for english, german in val_dataset_batch.take(20):
            german_inputs, german_outputs = get_german_decoder(german)
            loss_value, _ = forward_backward_pass(english, german_inputs, german_outputs)
            epoch_val_loss_avg(loss_value)

        print("Epoch: {:03d}, loss: {:.3f}, val_loss: {:.3f}".format(epoch, epoch_train_loss_avg.result(), epoch_val_loss_avg.result()))
        training_loss.append(epoch_train_loss_avg.result())
        val_loss.append(epoch_val_loss_avg.result())

    return training_loss, val_loss
training_loss, val_loss = train_model(10)
import matplotlib.pyplot as plt
plt.plot(training_loss, label="training")
plt.plot(val_loss, label="validation")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Loss")
7. Use the model to translate¶
Now it's time to put your model into practice! You should run your translation for five randomly sampled English sentences from the dataset. For each sentence, the process is as follows:
- Preprocess and embed the English sentence according to the model requirements.
- Pass the embedded sentence through the encoder to get the encoder hidden and cell states.
- Starting with the special "<start>" token, use this token and the final encoder hidden and cell states to get the one-step prediction from the decoder, as well as the decoder’s updated hidden and cell states.
- Create a loop to get the next step prediction and updated hidden and cell states from the decoder, using the most recent hidden and cell states. Terminate the loop when the "<end>" token is emitted, or when the sentence has reached a maximum length.
- Decode the output token sequence into German text and print the English text and the model's German translation.
# Look up the ids of the special start and end tokens
tokenizer = tf.keras.preprocessing.text.Tokenizer(filters="")
tokenizer.fit_on_texts(german_sentences)
first_token = tokenizer.texts_to_sequences([["<start>"]])
end_token = tokenizer.texts_to_sequences([["<end>"]])[0][0]
first_token_sequence = tf.convert_to_tensor(first_token)
first_token_sequence
for i in np.random.choice(len(english_sentences), 5):
    output_sequences = []
    max_num = 15
    print("---- example #", i, sep="")

    test_english = english_sentences[i]
    print(test_english)

    # Split, embed and pre-pad the English sentence, as in the Dataset pipeline
    test_english_split = tf.strings.split(test_english, sep=" ")
    test_english_embed = embedding_layer(test_english_split)
    test_english_embed_padded = tf.pad(test_english_embed, [[13, 0], [0, 0]])[-13:, :]

    # Encoder states initialise the decoder; start decoding from the start token
    hidden, cell = encoder_model(test_english_embed_padded[None, ...])
    outputs, hidden, cell = decoder_model(first_token_sequence, hidden, cell)
    outputs_token = tf.argmax(outputs[0][0]).numpy()

    # Greedy decoding: feed each predicted token back in until <end> or max length
    num = 1
    while (outputs_token != end_token and num < max_num):
        output_sequences.append(outputs_token)
        outputs = tf.convert_to_tensor([[outputs_token]])
        outputs, hidden, cell = decoder_model(outputs, hidden, cell)
        outputs_token = tf.argmax(outputs[0][0]).numpy()
        num += 1

    print(output_sequences)
    print(tokenizer.sequences_to_texts([output_sequences]))
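Finally, the same greedy decoding can be wrapped into a small helper for translating arbitrary raw English sentences. This is an optional sketch, not part of the rubric, and it assumes all of the cells above have been run (it reuses preprocess_sentence, embedding_layer, encoder_model, decoder_model, tokenizer, first_token_sequence and end_token):

def translate(sentence, max_len=15):
    # Preprocess, split, embed and pre-pad the sentence to length 13
    sentence = preprocess_sentence(sentence)
    embedded = embedding_layer(tf.strings.split(sentence, sep=" "))
    embedded = tf.pad(embedded, [[13, 0], [0, 0]])[-13:, :]

    # Encode, then greedily decode one token at a time
    hidden, cell = encoder_model(embedded[None, ...])
    next_input = first_token_sequence
    output_tokens = []
    for _ in range(max_len):
        logits, hidden, cell = decoder_model(next_input, hidden, cell)
        token = tf.argmax(logits[0][0]).numpy()
        if token == end_token:
            break
        output_tokens.append(token)
        next_input = tf.convert_to_tensor([[token]])
    return tokenizer.sequences_to_texts([output_tokens])[0]

print(translate("I love you."))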