
Report on Deep Learning Tutorial

In this lab session we learned how to create single-layer and multi-layer neural network models.

For this we need to install some packages on our systems. The installation must be done in the
following order.

1. Install Git
2. Install Python 3
3. Install python3-matplotlib
4. Install python3-pip
5. Install TensorFlow (pip3 install --upgrade tensorflow)

After installing all of these, make sure that your backend is set to TensorFlow. Our first job is
to create a single-layer neural network that uses the MNIST dataset as training data. It is a
10-class classifier: it takes 28*28 MNIST images as input and classifies them into the digits
0-9 (10 classes).
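
As a point of reference, here is a minimal sketch of loading this dataset, assuming TensorFlow 1.x and its bundled MNIST helper (the directory name "MNIST_data" is only an illustrative choice):

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# one_hot=True gives each label as a 10-dimensional one-hot vector
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)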

The creation of the above neural network includes the following steps.

Step-1

Every image of size 28*28 can be flattened into a single row vector of dimension (1*784). We then
multiply this input matrix X by the weight matrix W and add the bias b. The whole computation can
be written in a single step as

Y = softmax(X*W + b)

The code for this can be written in TensorFlow (Python) as

Y = tf.nn.softmax(tf.matmul(X, W) + b)
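
For completeness, here is a sketch of how the tensors used in this line could be declared; the zero initialization of W and b and the name Y_ for the true labels are illustrative assumptions, not fixed by the lab:

X = tf.placeholder(tf.float32, [None, 784])   # a batch of flattened 28*28 images
Y_ = tf.placeholder(tf.float32, [None, 10])   # the true labels, one-hot encoded
W = tf.Variable(tf.zeros([784, 10]))          # one weight column per output class
b = tf.Variable(tf.zeros([10]))               # one bias per class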

Step-2

For correct learning we use the cross-entropy function as the loss function. Writing Y_ for the
one-hot true labels (declared above) and Y for the predicted probabilities, the cross-entropy can
be written as

cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))

In this case we fix our learning rate at 0.003.

Step-3

After all the above steps we write the code for the training step. The optimizer must be defined
first; here, for example, a plain gradient-descent optimizer with the learning rate fixed above:

optimizer = tf.train.GradientDescentOptimizer(0.003)
train_step = optimizer.minimize(cross_entropy)

Once all of the above is in place, we run the whole program.
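
As an illustration, a minimal training loop under the assumptions above might look like this; the batch size of 100 and the 1000 iterations are arbitrary illustrative values:

correct = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))      # per-image correctness
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))    # fraction classified correctly

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        batch_X, batch_Y = mnist.train.next_batch(100)     # next mini-batch of images and labels
        sess.run(train_step, feed_dict={X: batch_X, Y_: batch_Y})
    # evaluate on the held-out test set
    print(sess.run(accuracy, feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))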

When we analyze the results, we find that the accuracy is around 92%, but there is a large gap
between performance on the training data and the test data.

In order to reduce this gap we can use a 5-layer neural network model.

We can also add some more features to this neural network. For example, we can use the ReLU
activation function in the hidden layers (the output layer still uses softmax).

The code for a ReLU layer can be written as

Y = tf.nn.relu(tf.matmul(X, W) + b)

Using ReLU can increase the accuracy.
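
Putting the deeper architecture and ReLU together, one possible sketch of the 5-layer model is the following; the hidden-layer sizes (200, 100, 60, 30) and the truncated-normal initialization are illustrative assumptions, not values given in the lab:

K, L, M, N = 200, 100, 60, 30                                # assumed hidden-layer sizes

W1 = tf.Variable(tf.truncated_normal([784, K], stddev=0.1)); b1 = tf.Variable(tf.zeros([K]))
W2 = tf.Variable(tf.truncated_normal([K, L], stddev=0.1));   b2 = tf.Variable(tf.zeros([L]))
W3 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1));   b3 = tf.Variable(tf.zeros([M]))
W4 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1));   b4 = tf.Variable(tf.zeros([N]))
W5 = tf.Variable(tf.truncated_normal([N, 10], stddev=0.1));  b5 = tf.Variable(tf.zeros([10]))

Y1 = tf.nn.relu(tf.matmul(X, W1) + b1)
Y2 = tf.nn.relu(tf.matmul(Y1, W2) + b2)
Y3 = tf.nn.relu(tf.matmul(Y2, W3) + b3)
Y4 = tf.nn.relu(tf.matmul(Y3, W4) + b4)
Y = tf.nn.softmax(tf.matmul(Y4, W5) + b5)                    # the output layer keeps softmax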

In any neural network there is also a risk of overfitting. For this problem we use dropout as the
regularization technique. Dropout randomly removes a fraction of the units at training time, which
constrains how closely the network can adapt to the training data and thus helps to avoid
overfitting.

The code can be written as

Y = tf.nn.dropout(Yf, pkeep)
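
To make this concrete, pkeep can be a placeholder so that the keep-probability differs between training and testing; here is a sketch applying dropout to one hidden layer (the value 0.75 is an illustrative choice):

pkeep = tf.placeholder(tf.float32)            # probability of keeping a unit
Yf = tf.nn.relu(tf.matmul(X, W1) + b1)        # a hidden-layer output, as in the sketch above
Y1d = tf.nn.dropout(Yf, pkeep)                # the same layer with dropout applied

# at training time feed e.g. {pkeep: 0.75}; at test time feed {pkeep: 1.0}
# so that no units are dropped while evaluating accuracy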

Hence, by using all of the above techniques we can increase the accuracy of our neural network
to around 98%.
