**Keras Deep Learning Library :**

**Keras** is a **high-level deep learning Python library** widely used by data scientists to architect neural networks for complex problems. Being a higher-level API means Keras can act as the front end while TensorFlow or Theano does the heavy lifting as the back end.

**Special things about Keras :**

Keras eases a data scientist's work in implementing complex neural networks. It is highly popular for its clean, interpretable API and easy-to-follow documentation, which makes it simple for anyone to get started. The other important point is that it serves as a higher-level API: it acts as an interface to TensorFlow, Theano, etc., which work on the back end with Keras at the front end.

Here, we shall look at what Keras is, how it ranks among similar frameworks, and its background.

**History of Deep learning Libraries :**

Back then, implementing even a **two-layered convolutional neural network** could take some hundreds of lines of Python code: the optimization algorithm, back-propagation, the number of units in each layer, the type of activation, the kernel (filter) size, and so on. There were a lot of parameters to code and keep in sync with the training process. Evolving from that stage to today's plug-and-play stage, a lot has developed. Let's try to understand why Keras is better than other deep learning libraries. Here are some of the deep learning libraries in industry use apart from Keras.

**Deep learning libraries:**

**Caffe**: It started as an academic project by a student at the **University of California, Berkeley**, and this led to great community usage in deep learning early on. Interfacing it with Python for implementing neural networks worked pretty well in terms of speed. It was later superseded by **Caffe2**, an updated version of Caffe that is really good at speed, matrix multiplication, and ease of use.

**Torch**: Torch is another deep learning library, written in Lua and C. The ability to work with Torch is a sought-after skill in data science. It is lightning fast at matrix multiplications on its tensor data structures. **PyTorch** is the Python version of Torch (developed by **Facebook**).

**TensorFlow**: TensorFlow is the most popular deep learning library across the industry to date, and it is developed by Google. It uses tensors for its basic operations (e.g. matrix multiplication). A static computation graph is its specialty: you define a computation graph once, then run that same graph again and again. There is no need to recreate the graph for the next computation, unlike in the libraries mentioned above, where rebuilding the network for every run may not be optimal.
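The build-once, run-many graph idea can be sketched in a few lines; a minimal example using TensorFlow's 1.x-style session API (reached through `tf.compat.v1` on newer installs):

```python
import tensorflow as tf

tf1 = tf.compat.v1          # 1.x-style graph API on modern TensorFlow installs
tf1.disable_eager_execution()

# Build the graph once: two placeholder inputs and one multiply op
a = tf1.placeholder(tf.float32, name="a")
b = tf1.placeholder(tf.float32, name="b")
product = a * b

# Run the same graph again and again with different inputs
with tf1.Session() as sess:
    print(sess.run(product, feed_dict={a: 3.0, b: 4.0}))   # 12.0
    print(sess.run(product, feed_dict={a: 5.0, b: 6.0}))   # 30.0
```

Nothing is rebuilt between the two `sess.run` calls; only the fed-in values change.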

**Implementing a Sequential neural network model using Keras :**

As mentioned earlier, Keras has a nicer and more interpretable way of calling functions to create your custom neural network. Customization can mean your choice of loss function, activation function, number of neurons in each layer, and various other technical details.

**Simple Neural Network**

For example, let's create a simple **neural network** with **three convolutional layers**. Just take a look at the number of lines it takes to create, and note the methods used in the code.

```python
# Import all the necessary functions to build the neural network
from keras.layers import Conv1D, Flatten
from keras.optimizers import Adam
from keras.models import Sequential

# Let's start building a 3-layered convolutional network
def create_model(input_shape):
    model = Sequential()
    # First layer
    model.add(Conv1D(filters=10, kernel_size=10,
                     input_shape=input_shape, activation='relu'))
    # Second layer
    model.add(Conv1D(filters=10, kernel_size=10, activation='relu'))
    # Third layer
    model.add(Conv1D(filters=10, kernel_size=10, activation='relu'))
    # Flatten the feature maps
    model.add(Flatten())
    # Compile the model
    model.compile(loss='binary_crossentropy',
                  optimizer=Adam(1e-4),
                  metrics=['accuracy'])
    return model
```

So, the above code took only about a dozen lines to build a whole model, and only 3 of those lines actually create the 3-layered stack of convolutions. It is one of the easiest ways to implement a model with different loss functions, activation functions, etc., and it is efficient. You can also easily scale the model up to "N" layers and "N" filters, with your choice of layer types, be it **Dense**, **Max-pooling**, **Average-pooling**, **CNN**, **RNN**, etc.
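The scaling-up idea can be sketched with a loop; a minimal example, where the helper name `create_deep_model` and the `(100, 1)` input shape are illustrative assumptions, not part of the Keras API:

```python
from keras.layers import Conv1D, Dense, Flatten
from keras.models import Sequential

def create_deep_model(n_layers, input_shape=(100, 1)):
    """Hypothetical helper: stack n_layers Conv1D layers with a loop."""
    model = Sequential()
    model.add(Conv1D(filters=10, kernel_size=3, activation='relu',
                     input_shape=input_shape))
    for _ in range(n_layers - 1):
        model.add(Conv1D(filters=10, kernel_size=3, activation='relu'))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))  # binary output head
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
```

Calling `create_deep_model(5)` gives a 5-convolution network with no extra code, which is the "N layers" scaling the text describes.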

A recent KDnuggets survey of the most popular deep learning libraries finds Google's TensorFlow at the top, followed by Keras. Keras is a higher-level API which, when run, calls on TensorFlow to carry out the math and other basic operations on the back end. It can also be configured to use Theano as the back end.

*Popular Deep Learning Libraries (image source: KDnuggets)*

**Keras Functional API**

There is one more thing in Keras called the functional API, which we use for more customized implementations of neural networks, such as models with multiple inputs or shared layers. You can read the Keras documentation to get familiar with the functional API.
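A minimal sketch of the functional API, assuming a toy binary classifier with an 8-feature input (the layer sizes here are illustrative only):

```python
from keras.layers import Dense, Input
from keras.models import Model

# Define the graph by calling layers on tensors
inputs = Input(shape=(8,))
hidden = Dense(16, activation='relu')(inputs)
outputs = Dense(1, activation='sigmoid')(hidden)

# Wrap the input/output tensors into a trainable model
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```

Because layers are called on tensors rather than appended to a stack, the same style extends naturally to branching and multi-input models, which `Sequential` cannot express.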

**Conclusion :**

Finally, Keras offers really good methods by acting as a layer on top of frameworks like TensorFlow and Theano. That makes it a great choice for data scientists at all levels, whether amateur or pro.