Style Transferring in TensorFlow

Last updated on Oct 25 2021
Ashutosh Wakiroo

Table of Contents

Style Transferring in TensorFlow

Neural Style Transfer (NST) refers to a class of software algorithms that manipulate digital images or videos so that they adopt the appearance or visual style of another image. When we implement the algorithm, we define two distances: one for the content (Dc) and one for the style (Ds).
In this topic, we will implement an artificial system based on a deep neural network that creates images of high perceptual quality. The system uses neural representations to separate and recombine the content and style of arbitrary images: it takes a content image and a style image as input and returns the content image rendered in the artistic style of the style image.
Neural style transfer is an optimization technique that takes two images, a content image and a style reference image, and blends them so that the output image looks like the content image while matching the style statistics of the style reference image. These statistics are extracted from the images using a convolutional network.


Working of the neural style transfer algorithm

When we implement this algorithm, we define two distances: one for the style (Ds) and one for the content (Dc). Dc measures how different the content is between two images, and Ds measures how different the style is between two images. We then take a third image as input and transform it to minimize both its content distance from the content image and its style distance from the style image.
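As a rough illustration of these two distances, here is a small NumPy sketch. The arrays and the alpha/beta weights are made-up toy values; the real losses operate on CNN feature maps, as shown later in this post:

```python
import numpy as np

# Toy "activations" standing in for the content image (C), the style
# image (S), and the generated image (G). Real NST uses CNN feature maps.
C = np.array([[1.0, 2.0], [3.0, 4.0]])
S = np.array([[0.0, 1.0], [1.0, 0.0]])
G = np.array([[2.0, 2.0], [2.0, 2.0]])

def content_distance(c, g):
    # Dc: how different the content is between two images.
    return 0.5 * np.mean((c - g) ** 2)

def style_distance(s, g):
    # Ds: how different the style is, measured via Gram matrices
    # (correlations between feature channels).
    gram = lambda x: x.T @ x
    return np.mean((gram(s) - gram(g)) ** 2)

# The generated image is optimized to keep both distances small;
# alpha and beta (hypothetical values) trade content off against style.
alpha, beta = 1.0, 10.0
total_loss = alpha * content_distance(C, G) + beta * style_distance(S, G)
```

Raising beta relative to alpha pushes the result toward the style image's texture at the expense of content fidelity.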
Libraries Required
import os
from collections import OrderedDict

# We will load the pretrained VGG weights from a NumPy archive,
# so only TensorFlow and a few utility libraries are needed.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

VGG-19 model

The VGG-19 model is similar to the VGG-16 model. Simonyan and Zisserman introduced the VGG model. VGG-19 is trained on more than a million images from the ImageNet database. The network is 19 layers deep and can classify images into 1000 object categories.
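The "19" counts the layers that carry trainable weights. As a quick sketch (layer names follow the usual VGG naming convention):

```python
# VGG-19's 19 weight layers: 16 convolutional layers in five blocks,
# followed by 3 fully connected layers. Pooling layers carry no weights
# and are not counted.
conv_layers = [
    'conv1_1', 'conv1_2',
    'conv2_1', 'conv2_2',
    'conv3_1', 'conv3_2', 'conv3_3', 'conv3_4',
    'conv4_1', 'conv4_2', 'conv4_3', 'conv4_4',
    'conv5_1', 'conv5_2', 'conv5_3', 'conv5_4',
]
fc_layers = ['fc6', 'fc7', 'fc8']
depth = len(conv_layers) + len(fc_layers)
```

For style transfer only the convolutional (and pooling) part is used; the fully connected classifier layers are discarded.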


High-level architecture
Neural style transfer uses a pretrained convolutional neural network and then defines a loss function that blends the two images seamlessly to create visually appealing art. NST works with the following inputs:
• A content image (c) – the image we want to transfer a style to
• A style image (s) – the image we want to transfer the style from
• An input image (g) – the image that contains the final result
The architecture of the model, as well as how the loss is computed, is shown below. We do not need a profound understanding of every detail at this point, as we will look at each component in detail in the coming sections. The idea is to give a high-level understanding of the workflow behind style transfer.


Downloading and loading the pretrained VGG-16

We will borrow the VGG-16 weights from this webpage. Download the vgg16_weights.npz file and place it in a folder called vgg in the project home directory. We only need the convolution and pooling layers; specifically, we will load the first seven convolutional layers to use as the NST network. We can do this using the load_weights(...) function given in the notebook.
Note: Feel free to try more layers, but beware of the memory limits of your CPU and GPU.
# This function takes in a file path to the file containing the weights
# and an integer that denotes how many layers should be loaded.
vgg_layers = load_weights(os.path.join('vgg', 'vgg16_weights.npz'), 7)
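The post does not show load_weights(...) itself. Below is a hedged sketch of what such a function might look like, assuming the .npz archive stores arrays under keys like 'conv1_1_W' and 'conv1_1_b' (an assumption about the file layout); a tiny in-memory archive stands in for the real file so the sketch is self-contained:

```python
import io
import numpy as np

# Build a tiny stand-in for vgg16_weights.npz with two conv layers.
buf = io.BytesIO()
np.savez(buf,
         conv1_1_W=np.zeros((3, 3, 3, 64)), conv1_1_b=np.zeros(64),
         conv1_2_W=np.zeros((3, 3, 64, 64)), conv1_2_b=np.zeros(64))
buf.seek(0)

def load_weights(file, n_layers):
    # Read the archive and keep the first n_layers conv layers,
    # returning {layer_name: {'weights': ..., 'bias': ...}} in the
    # shape the rest of this post expects.
    data = np.load(file)
    names = sorted({k[:-2] for k in data.files})[:n_layers]
    return {name: {'weights': data[name + '_W'], 'bias': data[name + '_b']}
            for name in names}

vgg_layers = load_weights(buf, 2)
```

In the real project, buf would simply be the path to vgg16_weights.npz.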
Define the functions to build the style transfer network
We define several functions that will help us later to fully define the computational graph of the CNN given an input.
Creating TensorFlow variables
We load the NumPy arrays into TensorFlow variables, creating the following:
• Content image (tf.placeholder)
• Style image (tf.placeholder)
• Generated image (tf.Variable with trainable=True)
• Pretrained weights and biases (tf.Variable with trainable=False)
Make sure the generated image stays trainable while the pretrained weights and biases remain frozen. Below are two functions that define the inputs and the network weights.

def define_inputs(input_shape):
    """
    This function defines the inputs (placeholders) and the image to be generated (a variable).
    """
    content = tf.placeholder(name='content', shape=input_shape, dtype=tf.float32)
    style = tf.placeholder(name='style', shape=input_shape, dtype=tf.float32)
    generated = tf.get_variable(name='generated', initializer=tf.random_normal_initializer(),
                                shape=input_shape, dtype=tf.float32, trainable=True)
    return {'content': content, 'style': style, 'generated': generated}

def define_tf_weights():
    """
    This function defines the TensorFlow variables for the VGG weights and biases.
    """
    for k, w_dict in vgg_layers.items():
        w, b = w_dict['weights'], w_dict['bias']
        with tf.variable_scope(k):
            tf.get_variable(name='weights', initializer=tf.constant(w, dtype=tf.float32), trainable=False)
            tf.get_variable(name='bias', initializer=tf.constant(b, dtype=tf.float32), trainable=False)

Computing the VGG net output

def build_vggnet(inp, layer_ids, pool_inds, on_cpu=False):
    """This function computes the output of the full VGG net."""
    outputs = OrderedDict()

    out = inp

    for lid in layer_ids:
        with tf.variable_scope(lid, reuse=tf.AUTO_REUSE):
            print('Computing outputs for the layer {}'.format(lid))
            w, b = tf.get_variable('weights'), tf.get_variable('bias')
            out = tf.nn.conv2d(input=out, filter=w, strides=[1, 1, 1, 1], padding='SAME')
            out = tf.nn.relu(tf.nn.bias_add(value=out, bias=b))
            outputs[lid] = out

        if lid in pool_inds:
            with tf.name_scope(lid.replace('conv', 'pool')):
                out = tf.nn.avg_pool(value=out, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
                outputs[lid.replace('conv', 'pool')] = out

    return outputs

Loss functions
In this section, we define two loss functions: the style loss function and the content loss function. The content loss function ensures that the activations of the higher layers are similar between the content image and the generated image.

Content cost function

The content cost function makes sure that the content present in the content image is captured in the generated image. It has been found that CNNs capture information about content in the higher layers, while the lower layers focus more on individual pixel values.
Let A^l_{ij}(I) be the activation of the l-th layer at the i-th feature map and j-th position, obtained using the image I. Then the content loss is defined as

L_content(C, G, l) = 1/2 Σ_{i,j} (A^l_{ij}(C) − A^l_{ij}(G))²

where C is the content image and G is the generated image.
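In plain NumPy, this content loss amounts to the following minimal sketch on random arrays (the real version operates on VGG activations):

```python
import numpy as np

def content_loss(a_c, a_g, weight=1.0):
    # Half the mean squared difference between the content activations
    # and the generated activations at the chosen layer.
    return weight * np.mean(0.5 * (a_c - a_g) ** 2)

rng = np.random.default_rng(42)
a_c = rng.normal(size=(4, 4, 8))   # stand-in for layer-l activations of C
a_g = np.zeros_like(a_c)           # stand-in for activations of G
```

The loss is zero exactly when the generated image reproduces the content activations, and grows as they diverge.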

The intuition behind the content loss
If we visualize what a neural network learns, there is evidence suggesting that, in higher layers, different feature maps are activated in the presence of different objects. So, if two images have the same content, they will have similar activations in the top layers.
We define the content cost as follows.

def define_content_loss(inputs, layer_ids, pool_inds, c_weight):
    c_outputs = build_vggnet(inputs["content"], layer_ids, pool_inds)
    g_outputs = build_vggnet(inputs["generated"], layer_ids, pool_inds)
    content_loss = c_weight * tf.reduce_mean(
        0.5 * (list(c_outputs.values())[-1] - list(g_outputs.values())[-1]) ** 2)
    return content_loss

Style Loss function

Defining the style loss function requires more work. To extract the style information from the VGG network, we use all of the chosen layers of the CNN. Style information is measured as the amount of correlation present between the feature maps of a layer. Mathematically, the style loss is defined as,

M^l_{ij}(I) = Σ_k A^l_{ik}(I) A^l_{jk}(I)

(the style, or Gram, matrix of layer l: the correlation between feature maps i and j, summed over all positions k)

L_style(S, G) = Σ_l w_l · mean_{i,j} (M^l_{ij}(S) − M^l_{ij}(G))²

where S is the style image, G is the generated image, and w_l weights the contribution of layer l (1/L for L layers by default).

Intuition behind the style loss

The idea behind the above equations is simple. The main goal is to compute a style matrix for the generated image and the style image.
Then, the style loss is defined as the mean squared difference between the two style matrices.
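A small NumPy experiment makes the intuition concrete: the style (Gram) matrix only records which channels fire together, not where, so shuffling spatial positions leaves it unchanged:

```python
import numpy as np

def gram_matrix(features):
    # features: (positions, channels). Entry (i, j) sums the product of
    # channels i and j over all positions, i.e. their correlation.
    return features.T @ features

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 3))          # 8 spatial positions, 3 channels
shuffled = f[rng.permutation(8)]     # same statistics, positions scrambled
```

This position-invariance is exactly why the Gram matrix captures "style" (texture, color correlations) rather than the spatial arrangement that defines content.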

def define_style_matrix(layer_out):
    """
    This function computes the style matrix, which essentially measures
    how correlated the activations of a given filter are with all the other filters.
    Therefore, if there are C channels, the matrix will be of size C x C.
    """
    n_channels = layer_out.get_shape().as_list()[-1]
    unwrapped_out = tf.reshape(layer_out, [-1, n_channels])
    style_matrix = tf.matmul(unwrapped_out, unwrapped_out, transpose_a=True)
    return style_matrix

def define_style_loss(inputs, layer_ids, pool_inds, s_weight, layer_weights=None):
    """
    This function computes the style loss using the style matrices computed for
    the style image and the generated image.
    """
    s_outputs = build_vggnet(inputs["style"], layer_ids, pool_inds)
    g_outputs = build_vggnet(inputs["generated"], layer_ids, pool_inds)

    s_grams = [define_style_matrix(v) for v in list(s_outputs.values())]
    g_grams = [define_style_matrix(v) for v in list(g_outputs.values())]

    if layer_weights is None:
        # Weight every layer equally
        style_loss = s_weight * \
            tf.reduce_sum([(1.0 / len(layer_ids)) * tf.reduce_mean((s - g) ** 2)
                           for s, g in zip(s_grams, g_grams)])
    else:
        # Use the user-supplied per-layer weights
        style_loss = s_weight * \
            tf.reduce_sum([w * tf.reduce_mean((s - g) ** 2)
                           for w, s, g in zip(layer_weights, s_grams, g_grams)])
    return style_loss
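The excerpt stops before the optimization step. In the full pipeline, the total loss (content plus style) is minimized with respect to the generated image, typically with an optimizer such as Adam inside a TensorFlow session. As a rough NumPy illustration of the idea for the content term alone (toy values, hand-derived gradient):

```python
import numpy as np

# Minimizing 0.5 * mean((C - G)^2) by gradient descent on G:
# the gradient with respect to G is (G - C) / N, so each step
# pulls the generated image towards the content image.
C = np.array([1.0, 2.0, 3.0])   # toy "content image"
G = np.zeros_like(C)            # toy "generated image", updated in place
lr = 0.5
for _ in range(200):
    grad = (G - C) / C.size
    G -= lr * grad
```

With the style term included, the same procedure pulls G toward a compromise between the content activations and the style statistics, which is what produces the stylized result.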

So, this brings us to the end of this blog. This Tecklearn 'Style Transferring in TensorFlow' blog helps you with commonly asked questions if you are looking for a job in Artificial Intelligence. If you wish to learn Artificial Intelligence and build a career in the AI or Machine Learning domain, then check out our interactive Artificial Intelligence and Deep Learning with TensorFlow Training, which comes with 24*7 support to guide you throughout your learning period. Please find the link for course details:

https://www.tecklearn.com/course/artificial-intelligence-and-deep-learning-with-tensorflow/

Artificial Intelligence and Deep Learning with TensorFlow Training

About the Course

Tecklearn’s Artificial Intelligence and Deep Learning with Tensor Flow course is curated by industry professionals as per the industry requirements & demands and aligned with the latest best practices. You’ll master convolutional neural networks (CNN), TensorFlow, TensorFlow code, transfer learning, graph visualization, recurrent neural networks (RNN), Deep Learning libraries, GPU in Deep Learning, Keras and TFLearn APIs, backpropagation, and hyperparameters via hands-on projects. The trainee will learn AI by mastering natural language processing, deep neural networks, predictive analytics, reinforcement learning, and more programming languages needed to shine in this field.

Why Should you take Artificial Intelligence and Deep Learning with Tensor Flow Training?

• According to Paysa.com, an Artificial Intelligence Engineer earns an average of $171,715, ranging from $124,542 at the 25th percentile to $201,853 at the 75th percentile, with top earners earning more than $257,530.
• Worldwide spending on Artificial Intelligence systems will be nearly $98 billion in 2023, according to the new IDC Spending Guide, growing at a CAGR of 28.5%.
• IBM, Amazon, Apple, Google, Facebook, Microsoft, Oracle and almost all the leading companies are working on Artificial Intelligence to innovate future technologies.

What you will Learn in this Course?

Introduction to Deep Learning and AI
• What is Deep Learning?
• Advantage of Deep Learning over Machine learning
• Real-Life use cases of Deep Learning
• Review of Machine Learning: Regression, Classification, Clustering, Reinforcement Learning, Underfitting and Overfitting, Optimization
• Pre-requisites for AI & DL
• Python Programming Language
• Installation & IDE
Environment Set Up and Essentials
• Installation
• Python – NumPy
• Python for Data Science and AI
• Python Language Essentials
• Python Libraries – Numpy and Pandas
• Numpy for Mathematical Computing
More Prerequisites for Deep Learning and AI
• Pandas for Data Analysis
• Machine Learning Basic Concepts
• Normalization
• Data Set
• Machine Learning Concepts
• Regression
• Logistic Regression
• SVM – Support Vector Machines
• Decision Trees
• Python Libraries for Data Science and AI
Introduction to Neural Networks
• Creating Module
• Neural Network Equation
• Sigmoid Function
• Multi-layered Perceptron
• Weights, Biases
• Activation Functions
• Gradient Descent and Error Function
• Epoch, Forward & Backward Propagation
• What is TensorFlow?
• TensorFlow code-basics
• Graph Visualization
• Constants, Placeholders, Variables
Multi-layered Neural Networks
• Error Back propagation issues
• Drop outs
Regularization techniques in Deep Learning
Deep Learning Libraries
• Tensorflow
• Keras
• OpenCV
• SkImage
• PIL
Building of Simple Neural Network from Scratch from Simple Equation
• Training the model
Dual Equation Neural Network
• TensorFlow
• Predicting Algorithm
Introduction to Keras API
• Define Keras
• How to compose Models in Keras
• Sequential Composition
• Functional Composition
• Predefined Neural Network Layers
• What is Batch Normalization
• Saving and Loading a model with Keras
• Customizing the Training Process
• Using TensorBoard with Keras
• Use-Case Implementation with Keras
GPU in Deep Learning
• Introduction to GPUs and how they differ from CPUs
• Importance of GPUs in training Deep Learning Networks
• The GPU constituent with simpler core and concurrent hardware
• Keras Model Saving and Reusing
• Deploying Keras with TensorBoard
Keras Cat Vs Dog Modelling
• Activation Functions in Neural Network
Optimization Techniques
• Some Examples for Neural Network
Convolutional Neural Networks (CNN)
• Introduction to CNNs
• CNNs Application
• Architecture of a CNN
• Convolution and Pooling layers in a CNN
• Understanding and Visualizing a CNN
RNN: Recurrent Neural Networks
• Introduction to RNN Model
• Application use cases of RNN
• Modelling sequences
• Training RNNs with Backpropagation
• Long Short-Term memory (LSTM)
• Recursive Neural Tensor Network Theory
• Recurrent Neural Network Model
Application of Deep Learning in image recognition, NLP and more
Real world projects in recommender systems and others
Got a question for us? Please mention it in the comments section and we will get back to you.

 
