CIFAR-10 and CIFAR-100 Dataset in TensorFlow

Last updated on Oct 06 2022
Kalpana Kapoor


The CIFAR-10 (Canadian Institute For Advanced Research) and CIFAR-100 datasets are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset is divided into five training batches and one test batch, each with 10,000 images.

The test batch contains exactly 1,000 randomly selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5,000 images from each class.
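The use-case later in this blog loads the data through a helper module, but if you just want to explore the dataset first, the Keras loader bundled with TensorFlow can fetch it directly. Here is a minimal sketch (an illustration, separate from the tutorial code below):

# Sketch: load CIFAR-10 via the bundled Keras loader and check the class balance.
import numpy as np
import tensorflow as tf

# The ten CIFAR-10 classes, in label order.
class_names = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]

(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()

for label, name in enumerate(class_names):
    print("{:>10}: {} train / {} test".format(
        name, int(np.sum(train_y == label)), int(np.sum(test_y == label))))
# Every class prints 5000 train / 1000 test images.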


The classes are entirely mutually exclusive: there is no overlap between automobiles and trucks. The automobile class includes sedans, SUVs, and things of that sort, while the truck class includes only big trucks; neither class includes pickup trucks. If we look through the CIFAR dataset, we realize that it contains not just one type of bird or cat. The bird and cat classes each contain many different kinds of birds and cats, varying in size, color, magnification, angle, and pose.

With the MNIST dataset, there are many ways in which a digit such as a one or a two can be written, but it just wasn't very diverse, and on top of that, MNIST is grayscale. The CIFAR dataset, by contrast, consists of 32 by 32 color images, each with three color channels. Now our most important question is: will the LeNet model that performed so well on the MNIST dataset be enough to classify the CIFAR dataset?
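To see that difference concretely, you can compare the tensor shapes of the two datasets. A small sketch using the bundled Keras loaders (again, separate from the use-case code):

# Sketch: comparing MNIST and CIFAR-10 input shapes.
import tensorflow as tf

(mnist_x, _), _ = tf.keras.datasets.mnist.load_data()
(cifar_x, _), _ = tf.keras.datasets.cifar10.load_data()

print(mnist_x.shape)  # (60000, 28, 28)    - 28x28 pixels, one grayscale channel
print(cifar_x.shape)  # (50000, 32, 32, 3) - 32x32 pixels, three color channels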

CIFAR-100 Dataset

It is just like the CIFAR-10 dataset. The only difference is that it has 100 classes containing 600 images each: 500 training images and 100 testing images per class. These 100 classes are grouped into 20 superclasses, and each image comes with a "coarse" label (the superclass to which it belongs) and a "fine" label (the class to which it belongs).
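The bundled Keras loader exposes both label sets through its label_mode argument; the following minimal sketch (illustrative only) verifies the 20/100 split:

# Sketch: fetching CIFAR-100 with coarse (superclass) and fine (class) labels.
import tensorflow as tf

(_, coarse_y), _ = tf.keras.datasets.cifar100.load_data(label_mode="coarse")
(_, fine_y), _ = tf.keras.datasets.cifar100.load_data(label_mode="fine")

print(coarse_y.max() + 1)  # 20 superclasses
print(fine_y.max() + 1)    # 100 fine classes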

Below are the superclasses and their classes in the CIFAR-100 dataset:

S. No  Superclass                       Classes
1.     Flowers                          Orchids, poppies, roses, sunflowers, tulips
2.     Fish                             Aquarium fish, flatfish, ray, shark, trout
3.     Aquatic mammals                  Beaver, dolphin, otter, seal, whale
4.     Food containers                  Bottles, bowls, cans, cups, plates
5.     Household electrical devices     Clock, lamp, telephone, television, computer keyboard
6.     Fruit and vegetables             Apples, mushrooms, oranges, pears, sweet peppers
7.     Household furniture              Table, chair, couch, wardrobe, bed
8.     Insects                          Bee, beetle, butterfly, caterpillar, cockroach
9.     Large natural outdoor scenes     Cloud, forest, mountain, plain, sea
10.    Large human-made outdoor things  Bridge, castle, house, road, skyscraper
11.    Large carnivores                 Bear, leopard, lion, tiger, wolf
12.    Medium-sized mammals             Fox, porcupine, possum, raccoon, skunk
13.    Large omnivores and herbivores   Camel, cattle, chimpanzee, elephant, kangaroo
14.    Non-insect invertebrates         Crab, lobster, snail, spider, worm
15.    Reptiles                         Crocodile, dinosaur, lizard, snake, turtle
16.    Trees                            Maple, oak, palm, pine, willow
17.    People                           Baby, boy, girl, man, woman
18.    Small mammals                    Hamster, rabbit, mouse, shrew, squirrel
19.    Vehicles 1                       Bicycle, bus, motorcycle, pickup truck, train
20.    Vehicles 2                       Lawn-mower, rocket, streetcar, tank, tractor

Use-Case: Implementation of CIFAR-10 with Convolutional Neural Networks Using TensorFlow

Now, let us train a network to classify images from the CIFAR-10 dataset using a Convolutional Neural Network built in TensorFlow.


Consider the following flowchart to understand the working of the use-case:

(Flowchart image)

Install the necessary packages (pickle, used to read the data batches, ships with Python itself and does not need to be installed separately):
pip3 install numpy tensorflow
Train the Network:
import numpy as np
import tensorflow as tf
from time import time
import math

from include.data import get_data_set
from include.model import model, lr

# Load the training and test sets through the tutorial's helper module.
train_x, train_y = get_data_set("train")
test_x, test_y = get_data_set("test")
tf.set_random_seed(21)

x, y, output, y_pred_cls, global_step, learning_rate = model()
global_accuracy = 0
epoch_start = 0

# PARAMS
_BATCH_SIZE = 128
_EPOCH = 60
_SAVE_PATH = "./tensorboard/cifar-10-v1.0.0/"

# LOSS AND OPTIMIZER
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                   beta1=0.9,
                                   beta2=0.999,
                                   epsilon=1e-08).minimize(loss, global_step=global_step)

# PREDICTION AND ACCURACY CALCULATION
correct_prediction = tf.equal(y_pred_cls, tf.argmax(y, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# SAVER
merged = tf.summary.merge_all()
saver = tf.train.Saver()
sess = tf.Session()
train_writer = tf.summary.FileWriter(_SAVE_PATH, sess.graph)

try:
    print("Trying to restore last checkpoint ...")
    last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=_SAVE_PATH)
    saver.restore(sess, save_path=last_chk_path)
    print("Restored checkpoint from:", last_chk_path)
except ValueError:
    print("Failed to restore checkpoint. Initializing variables instead.")
    sess.run(tf.global_variables_initializer())
def train(epoch):
    global epoch_start
    epoch_start = time()

    # Number of batches per epoch.
    num_batches = int(math.ceil(len(train_x) / _BATCH_SIZE))
    i_global = 0

    for s in range(num_batches):
        batch_xs = train_x[s * _BATCH_SIZE: (s + 1) * _BATCH_SIZE]
        batch_ys = train_y[s * _BATCH_SIZE: (s + 1) * _BATCH_SIZE]

        start_time = time()
        i_global, _, batch_loss, batch_acc = sess.run(
            [global_step, optimizer, loss, accuracy],
            feed_dict={x: batch_xs, y: batch_ys, learning_rate: lr(epoch)})
        duration = time() - start_time

        if s % 10 == 0:
            percentage = int(round((s / num_batches) * 100))

            # Render a simple text progress bar.
            bar_len = 29
            filled_len = int((bar_len * percentage) / 100)
            bar = '=' * filled_len + '>' + '-' * (bar_len - filled_len)

            msg = "Global step: {:>5} - [{}] {:>3}% - acc: {:.4f} - loss: {:.4f} - {:.1f} sample/sec"
            print(msg.format(i_global, bar, percentage, batch_acc, batch_loss,
                             _BATCH_SIZE / duration))

    test_and_save(i_global, epoch)
def test_and_save(_global_step, epoch):
    global global_accuracy
    global epoch_start

    # Run the whole test set through the network in batches.
    i = 0
    predicted_class = np.zeros(shape=len(test_x), dtype=int)
    while i < len(test_x):
        j = min(i + _BATCH_SIZE, len(test_x))
        batch_xs = test_x[i:j, :]
        batch_ys = test_y[i:j, :]
        predicted_class[i:j] = sess.run(
            y_pred_cls,
            feed_dict={x: batch_xs, y: batch_ys, learning_rate: lr(epoch)})
        i = j

    correct = (np.argmax(test_y, axis=1) == predicted_class)
    acc = correct.mean() * 100
    correct_numbers = correct.sum()

    hours, rem = divmod(time() - epoch_start, 3600)
    minutes, seconds = divmod(rem, 60)
    mes = "\nEpoch {} - accuracy: {:.2f}% ({}/{}) - time: {:0>2}:{:0>2}:{:05.2f}"
    print(mes.format((epoch + 1), acc, correct_numbers, len(test_x),
                     int(hours), int(minutes), seconds))

    # Save a checkpoint whenever the test accuracy improves.
    if global_accuracy != 0 and global_accuracy < acc:
        summary = tf.Summary(value=[
            tf.Summary.Value(tag="Accuracy/test", simple_value=acc),
        ])
        train_writer.add_summary(summary, _global_step)
        saver.save(sess, save_path=_SAVE_PATH, global_step=_global_step)
        mes = "This epoch receive better accuracy: {:.2f} > {:.2f}. Saving session..."
        print(mes.format(acc, global_accuracy))
        global_accuracy = acc
    elif global_accuracy == 0:
        global_accuracy = acc

    print("###########################################################################################################")
def main():
    train_start = time()

    for i in range(_EPOCH):
        print("\nEpoch: {}/{}\n".format((i + 1), _EPOCH))
        train(i)

    hours, rem = divmod(time() - train_start, 3600)
    minutes, seconds = divmod(rem, 60)
    mes = "Best accuracy per session: {:.2f}, time: {:0>2}:{:0>2}:{:05.2f}"
    print(mes.format(global_accuracy, int(hours), int(minutes), seconds))


if __name__ == "__main__":
    main()

sess.close()
Output:
Epoch: 60/60
Global step: 23070 - [>-----------------------------]   0% - acc: 0.9531 - loss: 1.5081 - 7045.4 sample/sec
Global step: 23080 - [>-----------------------------]   3% - acc: 0.9453 - loss: 1.5159 - 7147.6 sample/sec
Global step: 23090 - [=>----------------------------]   5% - acc: 0.9844 - loss: 1.4764 - 7154.6 sample/sec
Global step: 23100 - [==>---------------------------]   8% - acc: 0.9297 - loss: 1.5307 - 7104.4 sample/sec
Global step: 23110 - [==>---------------------------]  10% - acc: 0.9141 - loss: 1.5462 - 7091.4 sample/sec
Global step: 23120 - [===>--------------------------]  13% - acc: 0.9297 - loss: 1.5314 - 7162.9 sample/sec
Global step: 23130 - [====>-------------------------]  15% - acc: 0.9297 - loss: 1.5307 - 7174.8 sample/sec
Global step: 23140 - [=====>------------------------]  18% - acc: 0.9375 - loss: 1.5231 - 7140.0 sample/sec
Global step: 23150 - [=====>------------------------]  20% - acc: 0.9297 - loss: 1.5301 - 7152.8 sample/sec
Global step: 23160 - [======>-----------------------]  23% - acc: 0.9531 - loss: 1.5080 - 7112.3 sample/sec
Global step: 23170 - [=======>----------------------]  26% - acc: 0.9609 - loss: 1.5000 - 7154.0 sample/sec
Global step: 23180 - [========>---------------------]  28% - acc: 0.9531 - loss: 1.5074 - 6862.2 sample/sec
Global step: 23190 - [========>---------------------]  31% - acc: 0.9609 - loss: 1.4993 - 7134.5 sample/sec
Global step: 23200 - [=========>--------------------]  33% - acc: 0.9609 - loss: 1.4995 - 7166.0 sample/sec
Global step: 23210 - [==========>-------------------]  36% - acc: 0.9375 - loss: 1.5231 - 7116.7 sample/sec
Global step: 23220 - [===========>------------------]  38% - acc: 0.9453 - loss: 1.5153 - 7134.1 sample/sec
Global step: 23230 - [===========>------------------]  41% - acc: 0.9375 - loss: 1.5233 - 7074.5 sample/sec
Global step: 23240 - [============>-----------------]  43% - acc: 0.9219 - loss: 1.5387 - 7176.9 sample/sec
Global step: 23250 - [=============>----------------]  46% - acc: 0.8828 - loss: 1.5769 - 7144.1 sample/sec
Global step: 23260 - [==============>---------------]  49% - acc: 0.9219 - loss: 1.5383 - 7059.7 sample/sec
Global step: 23270 - [==============>---------------]  51% - acc: 0.8984 - loss: 1.5618 - 6638.6 sample/sec
Global step: 23280 - [===============>--------------]  54% - acc: 0.9453 - loss: 1.5151 - 7035.7 sample/sec
Global step: 23290 - [================>-------------]  56% - acc: 0.9609 - loss: 1.4996 - 7129.0 sample/sec
Global step: 23300 - [=================>------------]  59% - acc: 0.9609 - loss: 1.4997 - 7075.4 sample/sec
Global step: 23310 - [=================>------------]  61% - acc: 0.8750 - loss: 1.5842 - 7117.8 sample/sec
Global step: 23320 - [==================>-----------]  64% - acc: 0.9141 - loss: 1.5463 - 7157.2 sample/sec
Global step: 23330 - [===================>----------]  66% - acc: 0.9062 - loss: 1.5549 - 7169.3 sample/sec
Global step: 23340 - [====================>---------]  69% - acc: 0.9219 - loss: 1.5389 - 7164.4 sample/sec
Global step: 23350 - [====================>---------]  72% - acc: 0.9609 - loss: 1.5002 - 7135.4 sample/sec
Global step: 23360 - [=====================>--------]  74% - acc: 0.9766 - loss: 1.4842 - 7124.2 sample/sec
Global step: 23370 - [======================>-------]  77% - acc: 0.9375 - loss: 1.5231 - 7168.5 sample/sec
Global step: 23380 - [======================>-------]  79% - acc: 0.8906 - loss: 1.5695 - 7175.2 sample/sec
Global step: 23390 - [=======================>------]  82% - acc: 0.9375 - loss: 1.5225 - 7132.1 sample/sec
Global step: 23400 - [========================>-----]  84% - acc: 0.9844 - loss: 1.4768 - 7100.1 sample/sec
Global step: 23410 - [=========================>----]  87% - acc: 0.9766 - loss: 1.4840 - 7172.0 sample/sec
Global step: 23420 - [==========================>---]  90% - acc: 0.9062 - loss: 1.5542 - 7122.1 sample/sec
Global step: 23430 - [==========================>---]  92% - acc: 0.9297 - loss: 1.5313 - 7145.3 sample/sec
Global step: 23440 - [===========================>--]  95% - acc: 0.9297 - loss: 1.5301 - 7133.3 sample/sec
Global step: 23450 - [============================>-]  97% - acc: 0.9375 - loss: 1.5231 - 7135.7 sample/sec
Global step: 23460 - [=============================>] 100% - acc: 0.9250 - loss: 1.5362 - 10297.5 sample/sec
Epoch 60 - accuracy: 78.81% (7881/10000)
This epoch receive better accuracy: 78.81 > 78.78. Saving session...
##################################################################################################
Run the Network on the Test Dataset:
import numpy as np
import tensorflow as tf

from include.data import get_data_set
from include.model import model

test_x, test_y = get_data_set("test")
x, y, output, y_pred_cls, global_step, learning_rate = model()

_BATCH_SIZE = 128
_CLASS_SIZE = 10
_SAVE_PATH = "./tensorboard/cifar-10-v1.0.0/"

saver = tf.train.Saver()
sess = tf.Session()

try:
    print("Trying to restore last checkpoint ...")
    last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=_SAVE_PATH)
    saver.restore(sess, save_path=last_chk_path)
    print("Restored checkpoint from:", last_chk_path)
except ValueError:
    print("Failed to restore checkpoint. Initializing variables instead.")
    sess.run(tf.global_variables_initializer())


def main():
    # Predict the whole test set in batches.
    i = 0
    predicted_class = np.zeros(shape=len(test_x), dtype=int)
    while i < len(test_x):
        j = min(i + _BATCH_SIZE, len(test_x))
        batch_xs = test_x[i:j, :]
        batch_ys = test_y[i:j, :]
        predicted_class[i:j] = sess.run(y_pred_cls,
                                        feed_dict={x: batch_xs, y: batch_ys})
        i = j

    correct = (np.argmax(test_y, axis=1) == predicted_class)
    acc = correct.mean() * 100
    correct_numbers = correct.sum()
    print()
    print("Accuracy on Test-Set: {0:.2f}% ({1} / {2})".format(acc, correct_numbers, len(test_x)))


if __name__ == "__main__":
    main()

sess.close()
Sample Output:
Trying to restore last checkpoint ...
Restored checkpoint from: ./tensorboard/cifar-10-v1.0.0/-23460
Accuracy on Test-Set: 78.81% (7881 / 10000)
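A compatibility note: both scripts above use the TensorFlow 1.x API (tf.Session, tf.train.Saver, placeholders), which TensorFlow 2 no longer exposes at the top level. If you only have TensorFlow 2 installed, one stopgap (a sketch, not a full migration) is the compatibility module:

# Sketch: running the TF1-style scripts above under TensorFlow 2.
# The cleaner long-term fix is to port the model to tf.keras.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores graph mode, sessions, and placeholders

# From here on, tf.Session(), tf.train.Saver(), tf.placeholder(), etc.
# behave as they did in TensorFlow 1.x.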

Training Time

Here, we see how long 60 epochs take on different hardware:

Device           Batch size  Time    Accuracy [%]
NVidia GPU       128         8m 4s   79.12
Intel i7 7700HQ  128         3h 30m  78.91

So, this brings us to the end of this blog. This Tecklearn 'CIFAR-10 and CIFAR-100 Dataset in TensorFlow' blog helps you with commonly asked questions if you are looking out for a job in Artificial Intelligence. If you wish to learn Artificial Intelligence and build a career in the AI or Machine Learning domain, then check out our interactive Artificial Intelligence and Deep Learning with TensorFlow Training, which comes with 24*7 support to guide you throughout your learning period. Please find the link for course details:

https://www.tecklearn.com/course/artificial-intelligence-and-deep-learning-with-tensorflow/

Artificial Intelligence and Deep Learning with TensorFlow Training

About the Course

Tecklearn’s Artificial Intelligence and Deep Learning with TensorFlow course is curated by industry professionals as per industry requirements and demands, and aligned with the latest best practices. You’ll master convolutional neural networks (CNN), TensorFlow, TensorFlow code, transfer learning, graph visualization, recurrent neural networks (RNN), Deep Learning libraries, GPUs in Deep Learning, the Keras and TFLearn APIs, backpropagation, and hyperparameters via hands-on projects. The trainee will learn AI by mastering natural language processing, deep neural networks, predictive analytics, reinforcement learning, and more, to shine in this field.

Why Should you take Artificial Intelligence and Deep Learning with TensorFlow Training?

  • According to Paysa.com, an Artificial Intelligence Engineer earns an average of $171,715, ranging from $124,542 at the 25th percentile to $201,853 at the 75th percentile, with top earners earning more than $257,530.
  • Worldwide spending on Artificial Intelligence systems will be nearly $98 billion in 2023, growing at a CAGR of 28.5%, according to a new IDC Spending Guide.
  • IBM, Amazon, Apple, Google, Facebook, Microsoft, Oracle and almost all the leading companies are working on Artificial Intelligence to innovate future technologies.

What you will Learn in this Course?

Introduction to Deep Learning and AI

  • What is Deep Learning?
  • Advantage of Deep Learning over Machine learning
  • Real-Life use cases of Deep Learning
  • Review of Machine Learning: Regression, Classification, Clustering, Reinforcement Learning, Underfitting and Overfitting, Optimization
  • Pre-requisites for AI & DL
  • Python Programming Language
  • Installation & IDE

Environment Set Up and Essentials

  • Installation
  • Python – NumPy
  • Python for Data Science and AI
  • Python Language Essentials
  • Python Libraries – Numpy and Pandas
  • Numpy for Mathematical Computing

More Prerequisites for Deep Learning and AI

  • Pandas for Data Analysis
  • Machine Learning Basic Concepts
  • Normalization
  • Data Set
  • Machine Learning Concepts
  • Regression
  • Logistic Regression
  • SVM – Support Vector Machines
  • Decision Trees
  • Python Libraries for Data Science and AI

Introduction to Neural Networks

  • Creating Module
  • Neural Network Equation
  • Sigmoid Function
  • Multi-layered perceptron
  • Weights, Biases
  • Activation Functions
  • Gradient Descent or Error function
  • Epoch, Forward & backward propagation
  • What is TensorFlow?
  • TensorFlow code-basics
  • Graph Visualization
  • Constants, Placeholders, Variables

Multi-layered Neural Networks

  • Error Back propagation issues
  • Dropouts

Regularization techniques in Deep Learning

Deep Learning Libraries

  • Tensorflow
  • Keras
  • OpenCV
  • SkImage
  • PIL

Building a Simple Neural Network from Scratch from a Simple Equation

  • Training the model

Dual Equation Neural Network

  • TensorFlow
  • Predicting Algorithm

Introduction to Keras API

  • Define Keras
  • How to compose Models in Keras
  • Sequential Composition
  • Functional Composition
  • Predefined Neural Network Layers
  • What is Batch Normalization
  • Saving and Loading a model with Keras
  • Customizing the Training Process
  • Using TensorBoard with Keras
  • Use-Case Implementation with Keras

GPU in Deep Learning

  • Introduction to GPUs and how they differ from CPUs
  • Importance of GPUs in training Deep Learning Networks
  • The GPU constituents: simpler cores and concurrent hardware
  • Keras Model Saving and Reusing
  • Deploying Keras with TensorBoard

Keras Cat Vs Dog Modelling

  • Activation Functions in Neural Network

Optimization Techniques

  • Some Examples for Neural Network

Convolutional Neural Networks (CNN)

  • Introduction to CNNs
  • CNNs Application
  • Architecture of a CNN
  • Convolution and Pooling layers in a CNN
  • Understanding and Visualizing a CNN

RNN: Recurrent Neural Networks

  • Introduction to RNN Model
  • Application use cases of RNN
  • Modelling sequences
  • Training RNNs with Backpropagation
  • Long Short-Term memory (LSTM)
  • Recursive Neural Tensor Network Theory
  • Recurrent Neural Network Model

Application of Deep Learning in image recognition, NLP and more

Real world projects in recommender systems and others

Got a question for us? Please mention it in the comments section and we will get back to you.

 

 
