Time Series in RNN

Last updated on Oct 24 2021
Ashutosh Wakiroo


In this blog, we will use an RNN with time-series data. A time series depends on its previous values: past observations carry significant information that the network can learn from. Time series prediction means estimating the future value of a series, say, a stock price, temperature, GDP, and so on.
Data preparation for an RNN with time series is a little tricky. The objective is to predict the next value of the series: we use past information to estimate the value at t+1. The label is therefore equal to the input sequence shifted one period ahead.
Secondly, the number of inputs is set to 1, i.e., one observation per time step. Finally, the time step is the length of the numerical sequence: if we set the time step to 10, each input sequence contains ten consecutive values, as in the toy illustration below.
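As a minimal sketch (toy data, not from the original code), a single input window and its label look like this:

import numpy as np

series = np.arange(100)                      # a toy series: 0, 1, 2, ...
t, time_step = 42, 10
window = series[t : t + time_step]           # ten consecutive inputs
label = series[t + 1 : t + time_step + 1]    # the same window, one period ahead
print(window)   # [42 43 44 45 46 47 48 49 50 51]
print(label)    # [43 44 45 46 47 48 49 50 51 52]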
Look at the graph below: the time series data is shown on the left and an example input sequence on the right. We create a function that returns a dataset with a random value for each month from January 2001 to December 2016.

# Plotting setup
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def create_ts(start = '2001', n = 201, freq = 'M'):
    rng = pd.date_range(start=start, periods=n, freq=freq)
    ts = pd.Series(np.random.uniform(-18, 18, size=len(rng)), rng).cumsum()
    return ts

ts = create_ts(start = '2001', n = 192, freq = 'M')
ts.tail(5)
Output:
2016-08-31 -93.459631
2016-09-30 -95.264791
2016-10-31 -95.551935
2016-11-30 -105.879611
2016-12-31 -123.729319
Freq: M, dtype: float64

# Regenerate a longer series (222 monthly points) for the rest of the tutorial
ts = create_ts(start = '2001', n = 222)

# Left diagram: the full series
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(ts.index, ts)
plt.plot(ts.index[90:100], ts[90:100], "b-", linewidth=3, label="A training instance")
plt.title("A time series (generated)", fontsize=14)

# Right diagram: one training instance and its target
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(ts.index[90:100], ts[90:100], "b-", markersize=8, label="instance")
plt.plot(ts.index[91:101], ts[91:101], "bo", markersize=10, label="target", markerfacecolor='red')
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()

The left part of the graph shows the full series. It starts in 2001 and finishes in 2019. It makes no sense to feed all the data into the network at once; instead, we create batches of data with a length equal to the time step. These batches form the X variable. The Y variable is the same as X but shifted by one period (i.e., we want to forecast t+1).
Both vectors have the same length. We can see this in the right part of the graph above: the line represents the ten values of the X input, while the red dots are the ten values of the label, y. Note that the label starts one period after X and ends one period later.

Build an RNN to analyze Time Series in TensorFlow

It is time to build our first RNN to predict the series. We have to specify some hyperparameters (the parameters of the model, i.e., number of neurons, etc.) for the model:
• Number of inputs: 1
• Time step (the window in the time series): 20
• Number of neurons: 120
• Number of outputs: 1
Our network will learn from sequences of 20 periods and contains 120 recurrent neurons. We feed the model one input per time step.
Before constructing the model, we need to split the dataset into a train set and a test set. The full dataset has 222 data points; we will use the first 201 points to train the model and the last 21 points to test it.
After we define the train and test sets, we need to create an object containing the batches. In these batches, we have the X values and the Y values. Remember that the Y values are the X values shifted one period ahead. We use the first 200 observations with a window of 20, so the X_batches object must contain 10 batches of shape 20×1. The y_batches object has the same shape as X_batches, but holds the values one period ahead. (A quick sanity check of this arithmetic follows below.)
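As a quick sanity check (a sketch on dummy data, not part of the original code), 200 observations reshape cleanly into 10 windows of length 20 with 1 input each:

import numpy as np

# 200 observations, windows of length 20, 1 input feature per step
print(np.arange(200).reshape(-1, 20, 1).shape)   # (10, 20, 1)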
Step 1) Create the train and test sets
First, we convert the series into a numpy array; then we define the window size (the number of time steps the network will learn from), the number of inputs and outputs, and the size of the train set.

series = np.array(ts)
n_windows = 20
n_input = 1
n_output = 1
size_train = 201
After that, we split the array into two datasets.
# Split data
train = series[:size_train]
test = series[size_train:]
print(train.shape, test.shape)
(201,) (21,)

Step 2) Create the function that returns X_batches and y_batches
To make things easier, we can create a function that returns two different arrays, one for X_batches and one for y_batches.
Let's write a function to construct the batches.
Note that the X_batches are lagged by one period (we take the value at t-1). The output of the function has three dimensions: the first is the number of batches, the second is the size of the window, and the last is the number of inputs.
The tricky part of time series is selecting the data points correctly. For the X data points, we choose the observations from t = 1 to t = 200, while for the Y data points, we return the observations from t = 2 to t = 201. Once we have the correct data points, it is effortless to reshape the series.
To construct the object with the batches, we split the dataset into ten batches of the same length. We can use the reshape method and pass -1 so that the series matches the batch dimension. The value 20 is the number of observations per window, and 1 is the number of inputs.
We need to do the same step for the label.
Note that we shift the data by the number of steps we want to forecast. For instance, to predict one step ahead, we shift the series by 1; to forecast two periods ahead, we shift the data by 2 points. A toy example follows, and then the actual function.
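Here is a minimal sketch on hypothetical toy data (not from the original post) showing what shifting by two periods means:

import numpy as np

toy = np.arange(10)
n_ahead = 2
x = toy[:-n_ahead]    # inputs: [0 1 2 3 4 5 6 7]
y = toy[n_ahead:]     # labels: [2 3 4 5 6 7 8 9], each two steps ahead of its input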

def create_batches(df, windows, input, output):
    ## Create X: every observation except the last output point(s)
    x_data = df[:-output]                             # select the data
    X_batches = x_data.reshape(-1, windows, input)    # reshape into (batches, window, input)
    ## Create y: the same series shifted output period(s) ahead
    y_data = df[output:]
    y_batches = y_data.reshape(-1, windows, output)
    return X_batches, y_batches
Now that the function is defined, we can call it to create the batches.
X_batches, y_batches = create_batches(df = train,
                                      windows = n_windows,
                                      input = n_input,
                                      output = n_output)
We can print the shape to make sure the dimensions are correct.
print(X_batches.shape, y_batches.shape)
(10, 20, 1) (10, 20, 1)

We need to create the test set with only one batch of data and 20 observations.
Note that we forecast day after day: the second predicted value is based on the actual value of the first day (t+1) of the test dataset, since the true value becomes known.
If you want to forecast t+2, you need to use the predicted value at t+1; to predict t+3, you need the predicted values at t+1 and t+2. This is what makes it difficult to predict "t+n" steps ahead accurately. A sketch of this recursive procedure is shown below.
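As a hedged sketch (not part of the original tutorial), recursive multi-step forecasting with the graph built in Step 3 could look like the following; outputs and X are the graph nodes defined there, and forecast_n_steps is a hypothetical helper:

import numpy as np

def forecast_n_steps(sess, outputs, X, last_window, n_steps):
    """last_window: array of shape (1, n_windows, 1) holding the latest observations."""
    window = last_window.copy()
    preds = []
    for _ in range(n_steps):
        out = sess.run(outputs, feed_dict={X: window})   # shape (1, n_windows, 1)
        next_val = out[0, -1, 0]                         # the model's t+1 estimate
        preds.append(next_val)
        # Slide the window: drop the oldest value, append the new prediction
        window = np.append(window[:, 1:, :], [[[next_val]]], axis=1)
    return preds

Each prediction is fed back into the window, so errors compound as n_steps grows, which is exactly why "t+n" forecasts degrade.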

X_test, y_test = create_batches(df = test, windows = 20, input = 1, output = 1)
print(X_test.shape, y_test.shape)
(1, 20, 1) (1, 20, 1)

Our batches are ready, so we can build the RNN architecture. Remember, we have 120 recurrent neurons.
Step 3) Build the model
To create the model, we need to define three parts:
1. The variable with the tensors
2. The RNN
3. The loss and optimization
1. Variables
We need to specify the X and y variables with the appropriate shape. This step is trivial: the tensors have the same dimensions as the X_batches and y_batches objects.
For instance, the tensor X is a placeholder with three dimensions:
• None: size of the batch
• n_windows: length of the window
• n_input: number of inputs
The result is:

import tensorflow as tf   # TensorFlow 1.x API

## 1. Construct the tensors
X = tf.placeholder(tf.float32, [None, n_windows, n_input])
y = tf.placeholder(tf.float32, [None, n_windows, n_output])
2. Create the RNN
In the second part, we need to define the architecture of the network. As before, we use the BasicRNNCell object and tf.nn.dynamic_rnn from TensorFlow.
## 2. Create the model
r_neuron = 120
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=r_neuron, activation=tf.nn.relu)
rnn_output, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
The next part is a bit trickier but allows faster computation. We reshape the RNN output, pass it through a dense layer, and then reshape the result back to the same dimensions as the input. Because the single dense layer is applied to the stacked outputs, the same weights are shared across all time steps.
stacked_rnn_output = tf.reshape(rnn_output, [-1, r_neuron])        # flatten the time steps
stacked_outputs = tf.layers.dense(stacked_rnn_output, n_output)    # one shared dense layer
outputs = tf.reshape(stacked_outputs, [-1, n_windows, n_output])   # back to (batch, window, output)
3. Create the loss and optimization
The model optimization depends on the task we are performing. This matters because it changes the optimization problem: for a continuous target like ours, we minimize the mean squared error. To construct this metric in TF, we can use:
tf.reduce_sum(tf.square(outputs - y))
The remaining code is the same as before; we use an Adam optimizer to reduce the loss.
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
We can pack everything together, and our model is ready to train.
tf.reset_default_graph()
r_neuron = 120

## 1. Construct the tensors
X = tf.placeholder(tf.float32, [None, n_windows, n_input])
y = tf.placeholder(tf.float32, [None, n_windows, n_output])

## 2. Create the model
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=r_neuron, activation=tf.nn.relu)
rnn_output, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)

stacked_rnn_output = tf.reshape(rnn_output, [-1, r_neuron])
stacked_outputs = tf.layers.dense(stacked_rnn_output, n_output)
outputs = tf.reshape(stacked_outputs, [-1, n_windows, n_output])

## 3. Loss and optimization
learning_rate = 0.001

loss = tf.reduce_sum(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
We will train the model for 1500 iterations and print the loss every 150 iterations. Once the model is trained, we evaluate it on the test set and create an object containing the predictions.
iteration = 1500

with tf.Session() as sess:
    init.run()
    for iters in range(iteration):
        sess.run(training_op, feed_dict={X: X_batches, y: y_batches})
        if iters % 150 == 0:
            mse = loss.eval(feed_dict={X: X_batches, y: y_batches})
            print(iters, "\tMSE:", mse)
    y_pred = sess.run(outputs, feed_dict={X: X_test})
Output:
0       MSE: 502893.34
150     MSE: 13839.129
300     MSE: 3964.835
450     MSE: 2619.885
600     MSE: 2418.772
750     MSE: 2110.5923
900     MSE: 1887.9644
1050    MSE: 1747.1377
1200    MSE: 1556.3398
1350    MSE: 1384.6113
At last, we can plot the actual values of the series against the predicted values. If our model is correct, the predicted values should lie on top of the actual values.
As we can see, the model has room for improvement. It is up to us to change hyperparameters such as the window size, the batch size, or the number of recurrent neurons.
plt.title("Forecast vs Actual", fontsize=14)
plt.plot(pd.Series(np.ravel(y_test)), "go", markersize=8, label="actual")
plt.plot(pd.Series(np.ravel(y_pred)), "r.", markersize=8, label="forecast")
plt.legend(loc="lower left")
plt.xlabel("Time")
plt.show()
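As a small addition (not in the original post), we can quantify the fit with the mean squared error on the test set, using the y_test and y_pred arrays from above:

test_mse = np.mean((np.ravel(y_test) - np.ravel(y_pred)) ** 2)
print("Test MSE:", test_mse)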

A recurrent neural network is an architecture designed to work with time series and text analysis. The output of the previous state is fed back into the network to preserve the memory of the system over time or over a sequence of words.
In TensorFlow, we can use the below-given code to train a recurrent neural network for time series:
Parameters of the model

n_windows = 20
n_input = 1
n_output = 1
size_train = 201
r_neuron = 120
Define the model
X = tf.placeholder(tf.float32, [None, n_windows, n_input])
y = tf.placeholder(tf.float32, [None, n_windows, n_output])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=r_neuron, activation=tf.nn.relu)
rnn_output, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
stacked_rnn_output = tf.reshape(rnn_output, [-1, r_neuron])
stacked_outputs = tf.layers.dense(stacked_rnn_output, n_output)
outputs = tf.reshape(stacked_outputs, [-1, n_windows, n_output])
Constructing the optimization function
learning_rate = 0.001
loss = tf.reduce_sum(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
Training the model
init = tf.global_variables_initializer()
iteration = 1500

with tf.Session() as sess:
    init.run()
    for iters in range(iteration):
        sess.run(training_op, feed_dict={X: X_batches, y: y_batches})
        if iters % 150 == 0:
            mse = loss.eval(feed_dict={X: X_batches, y: y_batches})
            print(iters, "\tMSE:", mse)
    y_pred = sess.run(outputs, feed_dict={X: X_test})

So, this brings us to the end of this blog. This Tecklearn 'Time Series in RNN' blog helps you with commonly asked questions if you are looking for a job in Artificial Intelligence. If you wish to learn Artificial Intelligence and build a career in the AI or Machine Learning domain, then check out our interactive Artificial Intelligence and Deep Learning with TensorFlow Training, which comes with 24*7 support to guide you throughout your learning period. Please find the link for course details:

https://www.tecklearn.com/course/artificial-intelligence-and-deep-learning-with-tensorflow/

Artificial Intelligence and Deep Learning with TensorFlow Training

About the Course

Tecklearn's Artificial Intelligence and Deep Learning with TensorFlow course is curated by industry professionals as per industry requirements and demands, and is aligned with the latest best practices. You'll master convolutional neural networks (CNN), TensorFlow, TensorFlow code, transfer learning, graph visualization, recurrent neural networks (RNN), Deep Learning libraries, GPUs in Deep Learning, the Keras and TFLearn APIs, backpropagation, and hyperparameters via hands-on projects. The trainee will learn AI by mastering natural language processing, deep neural networks, predictive analytics, reinforcement learning, and the programming skills needed to shine in this field.

Why Should You Take Artificial Intelligence and Deep Learning with TensorFlow Training?

• According to Paysa.com, an Artificial Intelligence Engineer earns an average of $171,715, ranging from $124,542 at the 25th percentile to $201,853 at the 75th percentile, with top earners earning more than $257,530.
• Worldwide spending on Artificial Intelligence systems will be nearly $98 billion in 2023, growing at a CAGR of 28.5%, according to the new IDC Spending Guide.
• IBM, Amazon, Apple, Google, Facebook, Microsoft, Oracle and almost all the leading companies are working on Artificial Intelligence to innovate future technologies.

What you will Learn in this Course?

Introduction to Deep Learning and AI
• What is Deep Learning?
• Advantage of Deep Learning over Machine learning
• Real-Life use cases of Deep Learning
• Review of Machine Learning: Regression, Classification, Clustering, Reinforcement Learning, Underfitting and Overfitting, Optimization
• Pre-requisites for AI & DL
• Python Programming Language
• Installation & IDE
Environment Set Up and Essentials
• Installation
• Python – NumPy
• Python for Data Science and AI
• Python Language Essentials
• Python Libraries – Numpy and Pandas
• Numpy for Mathematical Computing
More Prerequisites for Deep Learning and AI
• Pandas for Data Analysis
• Machine Learning Basic Concepts
• Normalization
• Data Set
• Machine Learning Concepts
• Regression
• Logistic Regression
• SVM – Support Vector Machines
• Decision Trees
• Python Libraries for Data Science and AI
Introduction to Neural Networks
• Creating Module
• Neural Network Equation
• Sigmoid Function
• Multi-layer Perceptron
• Weights, Biases
• Activation Functions
• Gradient Descent and Error Functions
• Epochs, Forward & Backward Propagation
• What is TensorFlow?
• TensorFlow code-basics
• Graph Visualization
• Constants, Placeholders, Variables
Multi-layered Neural Networks
• Error Backpropagation Issues
• Dropouts
Regularization techniques in Deep Learning
Deep Learning Libraries
• Tensorflow
• Keras
• OpenCV
• SkImage
• PIL
Building of Simple Neural Network from Scratch from Simple Equation
• Training the model
Dual Equation Neural Network
• TensorFlow
• Predicting Algorithm
Introduction to Keras API
• Define Keras
• How to compose Models in Keras
• Sequential Composition
• Functional Composition
• Predefined Neural Network Layers
• What is Batch Normalization
• Saving and Loading a model with Keras
• Customizing the Training Process
• Using TensorBoard with Keras
• Use-Case Implementation with Keras
GPU in Deep Learning
• Introduction to GPUs and how they differ from CPUs
• Importance of GPUs in training Deep Learning Networks
• The GPU constituent with simpler core and concurrent hardware
• Keras Model Saving and Reusing
• Deploying Keras with TensorBoard
Keras Cat Vs Dog Modelling
• Activation Functions in Neural Network
Optimization Techniques
• Some Examples for Neural Network
Convolutional Neural Networks (CNN)
• Introduction to CNNs
• CNNs Application
• Architecture of a CNN
• Convolution and Pooling layers in a CNN
• Understanding and Visualizing a CNN
RNN: Recurrent Neural Networks
• Introduction to RNN Model
• Application use cases of RNN
• Modelling sequences
• Training RNNs with Backpropagation
• Long Short-Term memory (LSTM)
• Recursive Neural Tensor Network Theory
• Recurrent Neural Network Model
Application of Deep Learning in image recognition, NLP and more
Real world projects in recommender systems and others
Got a question for us? Please mention it in the comments section and we will get back to you.

 
