In this tutorial, we will learn how to build a simple neural network with Keras and TensorFlow, using the famous MNIST dataset of handwritten digits. Before we begin, note that this guide is geared toward beginners who are interested in *applied* deep learning. Our goal is to introduce you to one of the most popular and powerful libraries for building neural networks in Python.

## Why Keras?

*Keras* is our recommended library for deep learning in Python, especially for beginners.

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. The goal of Keras is to facilitate fast experimentation.

- Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
- Supports both convolutional networks and recurrent networks, as well as combinations of the two.
- Runs seamlessly on CPU and GPU.

Documentation on Keras: https://keras.io/

## What is TensorFlow?

TensorFlow is an open source software library for high-performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.

For more details: https://www.tensorflow.org/

## What is Deep Learning?

Deep learning refers to neural networks with multiple hidden layers that can learn increasingly abstract representations of the input data. It is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms.

**Step 1—> Setting up the environment**

Make sure you have the following installed on your computer:

- *Python 3.6* (Python 2.7 is fine)
- *NumPy*
- *Matplotlib*
- *TensorFlow* (see the TensorFlow installation instructions)
- *Keras*

We strongly recommend installing *Python, NumPy, SciPy, and Matplotlib* through the Anaconda Distribution, which bundles all of those packages and is a very handy tool.

**Step 2—> Importing the libraries**

We will first import the required libraries.
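These are the libraries used throughout the tutorial (the exact import style below is our choice; the text only names the packages):

```python
import numpy as np                # numerical arrays
import matplotlib.pyplot as plt   # plotting images
import tensorflow as tf           # deep learning backend
from tensorflow import keras      # high-level neural networks API

print(tf.__version__)
```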

**Loading the dataset**

The Keras deep learning library provides a convenience method for loading the **MNIST dataset**. The dataset is downloaded automatically the first time this function is called and is cached in the home directory under ~/.keras/datasets/ as a roughly 15 MB file.
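A sketch of the loading step using the tf.keras convenience loader:

```python
from tensorflow.keras.datasets import mnist

# The first call downloads the data and caches it under ~/.keras/datasets/
(x_train, y_train), (x_test, y_test) = mnist.load_data()

print(x_train.shape)  # (60000, 28, 28) -> 60,000 training images, 28x28 pixels each
print(y_train.shape)  # (60000,)        -> one integer label (0-9) per image
print(x_test.shape)   # (10000, 28, 28) -> 10,000 test images
```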

Let’s also plot an image using the plt.imshow function and see the digit. Feel free to check indices other than 0 and see which numbers appear.
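A minimal sketch of plotting one digit (the index to display is an arbitrary choice):

```python
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Show the first training image; change the index to inspect other digits
plt.imshow(x_train[0])
plt.title(f"Label: {y_train[0]}")
plt.show()
```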

**Step 3—> Normalization**

Let us now normalize the data so that the pixel values lie between 0 and 1.
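A minimal sketch, assuming 8-bit pixel values (0 to 255), so dividing by 255 rescales them to [0, 1]. (Keras also offers tf.keras.utils.normalize, which instead scales each row to unit norm; either way the values end up between 0 and 1.)

```python
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Pixel values are 8-bit integers in 0-255; dividing gives floats in 0.0-1.0
x_train = x_train / 255.0
x_test = x_test / 255.0

print(x_train.min(), x_train.max())  # 0.0 1.0
```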

As we can see, the values are now scaled between 0 and 1.

**Step 4—> Plotting the image**

Let’s plot the image again after normalizing, this time passing cmap=plt.cm.binary. The image appears somewhat faded now.
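A sketch of this step, repeating the loading and normalization so the snippet runs on its own:

```python
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

(x_train, y_train), _ = mnist.load_data()
x_train = x_train / 255.0

# plt.cm.binary renders the digit in greyscale (dark digit on a white background)
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()
```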

**Step 5—> Building and compiling the model**

We will be using **adam** as our optimizer and **categorical_crossentropy** as our loss function. There are several other optimizers, such as stochastic gradient descent, but adam serves our purpose here. We will also set metrics to accuracy inside our compile function. For training we will use *epochs=4*, as this gives reasonable accuracy; feel free to adjust the number of epochs and check how the accuracy and loss vary.

Let’s start by declaring a sequential model format:

For Dense layers, the first parameter is the output size of the layer. Keras automatically handles the connections between layers.

Note that the final layer has an output size of 10, corresponding to the 10 digit classes. Also note that we must flatten the input images (make them 1-dimensional) before passing them to the fully connected Dense layers.
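Putting these pieces together, the model might look like the following sketch. The hidden-layer sizes (128 units) and ReLU activations are illustrative choices, not fixed by the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28)),             # one 28x28 greyscale image
    layers.Flatten(),                        # flatten to a 784-element vector
    layers.Dense(128, activation="relu"),    # hidden layer (size is our choice)
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one probability per digit class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

model.summary()
```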

**Step 6—> Fitting the model**

To fit the model, all we have to do is declare the batch size and number of epochs to train for, then pass in our training data.
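A self-contained sketch of the fitting step. Note that categorical_crossentropy expects one-hot labels, so we convert them with to_categorical (alternatively, sparse_categorical_crossentropy accepts the raw integer labels):

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# One-hot encode the labels, e.g. 4 -> [0,0,0,0,1,0,0,0,0,0]
y_train_cat = to_categorical(y_train, 10)

model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Train for 4 epochs, as in the tutorial
history = model.fit(x_train, y_train_cat, epochs=4, batch_size=32)
```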

**Step 7—> Calculating validation loss and validation accuracy**
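A self-contained sketch of evaluation with model.evaluate; to keep this snippet quick we train only one epoch here, whereas in the tutorial flow you would evaluate the model fitted above:

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train_cat = to_categorical(y_train, 10)
y_test_cat = to_categorical(y_test, 10)

model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train_cat, epochs=1, batch_size=32, verbose=0)

# evaluate() returns the loss plus any metrics passed to compile()
val_loss, val_acc = model.evaluate(x_test, y_test_cat, verbose=0)
print("validation loss:", val_loss)
print("validation accuracy:", val_acc)
```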

**Step 8—> Checking how well the model predicts**

The model outputs a probability for each of the 10 digit classes; *np.argmax()* returns the index of the largest probability, which is the predicted digit.
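A sketch of checking a prediction with np.argmax(); again we train only briefly here so the snippet runs on its own with a fitted model:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train_cat = to_categorical(y_train, 10)

model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train_cat, epochs=1, batch_size=32, verbose=0)

# Each row of `predictions` holds 10 probabilities, one per digit class;
# np.argmax picks the index (= digit) with the highest probability.
predictions = model.predict(x_test, verbose=0)
predicted_digit = int(np.argmax(predictions[0]))
print("predicted:", predicted_digit, "actual:", y_test[0])
```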