**"A breakthrough in machine learning would be worth ten Microsofts,"** Bill Gates once remarked, emphasizing how machine learning and its algorithms can change today's world, driving the boom in technologies all around us, be it ongoing self-driving car projects, Google's project to act as a customer executive as demonstrated at Google I/O 2018, or many more such things going on around us.

With machine learning fused into the vast number of technologies coming in the future, we are all moving towards a new standard of living. But with the soaring demand for **machine learning engineers** to make these things happen in reality, the role requires knowledge across multiple domains: **mathematics** and **statistics**, along with the **algorithms** used in the **models** built for machine learning solutions.

Now, let's dive into some major algorithms that the machine learning field usually requires, such as the famous **Linear Regression** and **Decision Trees**. Broadly, there are three major types of algorithms that anyone considering a career in machine learning must know about: **Supervised Learning**, **Unsupervised Learning**, and **Reinforcement Learning**.

### Broad Classification of Machine Learning Algorithms

**Supervised Learning algorithms** involve a target/outcome variable (or dependent variable) that has to be predicted from a given set of predictors (independent variables). Using this set of variables, we generate a function that maps inputs to desired outputs. Major examples of supervised learning are Regression, Decision Tree, and Random Forest.

In contrast, **Unsupervised Learning algorithms** have no target or outcome variable to predict or estimate. They are used for clustering a population into different groups. An example of unsupervised learning is **K-means Clustering**.

**Reinforcement Learning algorithms**, meanwhile, train the machine to make specific decisions: the machine trains itself continually using trial and error, learns from experience, and tries to capture the best possible knowledge to make accurate business decisions. An example of a reinforcement learning framework is the **Markov Decision Process**.
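As a toy illustration of a Markov Decision Process, here is a minimal value-iteration sketch over a hypothetical two-state world; the states, actions, rewards, and discount factor below are all invented for this example:

```python
# Hypothetical 2-state MDP: in each state we can "stay" (collect a reward)
# or "move" to the other state (no reward). All numbers are made up.
rewards = {("s0", "stay"): 1.0, ("s0", "move"): 0.0,
           ("s1", "stay"): 2.0, ("s1", "move"): 0.0}
next_state = {("s0", "stay"): "s0", ("s0", "move"): "s1",
              ("s1", "stay"): "s1", ("s1", "move"): "s0"}
gamma = 0.9  # discount factor: how much future reward is worth today

# Value iteration: repeatedly back up the best achievable long-run value
V = {"s0": 0.0, "s1": 0.0}
for _ in range(100):
    V = {s: max(rewards[(s, a)] + gamma * V[next_state[(s, a)]]
                for a in ("stay", "move"))
         for s in ("s0", "s1")}

# Staying in s1 forever earns 2 + 2*0.9 + 2*0.81 + ... = 2 / (1 - 0.9) = 20;
# from s0 the best plan is to move to s1 first, worth 0.9 * 20 = 18.
print(V)
```

The values converge because each sweep shrinks the remaining error by the discount factor; this trial-of-every-action backup is the "learning from experience" idea in its simplest form.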

Having covered the major ML algorithm categories, let's now discuss the algorithms within them that are most in demand when building machine learning models.

Starting with the **Linear Regression algorithm**: here the output is a continuous value produced from a linear combination of the input features. We draw a relationship between the independent and dependent variables by fitting a best-fit line of the form Y = m*x + c, and it is mainly used to predict real values (cost of houses, number of calls, total sales, etc.).
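As a minimal sketch, the best-fit line can be recovered with NumPy's least-squares polynomial fit; the toy data below is generated exactly from y = 2x + 1, so the fit should return that slope and intercept:

```python
import numpy as np

# Toy data generated exactly from y = 2x + 1 (invented for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0

# Degree-1 least-squares fit: returns slope m and intercept c of Y = m*x + c
m, c = np.polyfit(x, y, deg=1)
print(m, c)
```

On real data the points won't sit exactly on a line, and the same call returns the line minimizing the squared vertical distances to them.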

In the **K-Nearest Neighbours algorithm**, we store all available cases and classify new cases by a majority vote of their k nearest neighbours, measured by a distance function; the new case is assigned the class most common among those neighbours. The distance function can be Euclidean, Manhattan, Minkowski, or Hamming distance. Note that this algorithm requires a lot of computation, since every prediction compares the query against all stored cases.
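A bare-bones majority-vote version of the idea, using Euclidean distance (the training points and labels below are made up):

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote of its k nearest stored cases."""
    # Rank all stored cases by Euclidean distance to the query
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    # Majority vote among the k nearest neighbours
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

train = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["A", "A", "A", "B", "B", "B"]
print(knn_predict(train, labels, (2, 2)))  # nearest cluster is "A"
```

The full scan of `train` inside `sorted` is exactly where the computational cost mentioned above comes from.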

Similarly, the **Logistic Regression algorithm** is used to estimate discrete values (such as 0/1, yes/no, true/false) from a given set of independent variables; it predicts the probability of an event occurring by fitting the data to a logistic function.
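Assuming scikit-learn is available, a sketch on invented hours-studied vs. pass/fail data:

```python
from sklearn.linear_model import LogisticRegression

# Invented data: hours studied -> exam passed (1) or failed (0)
X = [[0.5], [1.0], [1.5], [4.0], [4.5], [5.0]]
y = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.0], [4.8]]))       # discrete 0/1 predictions
print(clf.predict_proba([[4.8]])[0, 1])  # probability of passing
```

`predict_proba` exposes the underlying logistic-function output; `predict` simply thresholds it to give the discrete label.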

In the **SVM (Support Vector Machine) algorithm**, by contrast, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate, and then find the hyperplane that best separates the classes.
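A minimal scikit-learn sketch with two features, so each sample is a point in 2-dimensional space; the points are invented and linearly separable:

```python
from sklearn.svm import SVC

# Each sample is a point in 2-D space; the two groups are well separated
X = [[1, 1], [1, 2], [2, 1], [7, 7], [8, 7], [7, 8]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)  # fit a separating hyperplane
print(clf.predict([[2, 2], [8, 8]]))
```

Swapping `kernel="linear"` for `"rbf"` lets the same estimator separate classes that no straight hyperplane could split in the original space.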

In the **Naive Bayes classification algorithm**, we rely on Bayes' theorem with an assumption of independence between predictors, i.e. we assume that the presence of a particular feature in a class is unrelated to the presence of any other feature.
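A sketch using scikit-learn's Gaussian Naive Bayes on invented points; each feature is modeled independently within its class, which is the "naive" assumption in action:

```python
from sklearn.naive_bayes import GaussianNB

# Invented 2-feature data in two well-separated groups
X = [[1, 1], [1, 2], [2, 1], [7, 7], [8, 7], [7, 8]]
y = [0, 0, 0, 1, 1, 1]

clf = GaussianNB().fit(X, y)  # fits a per-class, per-feature Gaussian
print(clf.predict([[2, 2], [8, 8]]))
```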

In the **Decision Tree algorithm**, we split the population into two or more homogeneous sets. This is done based on the most significant attributes/independent variables, so as to make the groups as distinct as possible.
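A minimal scikit-learn sketch on invented data; the tree learns threshold splits on whichever feature best separates the groups:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented 2-feature data; one or two splits are enough to separate it
X = [[1, 1], [1, 2], [2, 1], [7, 7], [8, 7], [7, 8]]
y = [0, 0, 0, 1, 1, 1]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[2, 2], [8, 8]]))
```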

The **K-Means Clustering algorithm**, meanwhile, partitions a given data set into a certain number of clusters (say, k clusters), such that data points inside a cluster are homogeneous while being heterogeneous with respect to peer groups.
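A sketch with scikit-learn on invented points forming two obvious blobs; note there is no `y` here, since clustering is unsupervised:

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented, unlabeled points forming two clear blobs
pts = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.0],
                [8.0, 8.0], [8.5, 9.0], [9.0, 8.0]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
print(km.labels_)  # cluster ids are arbitrary, but each blob gets one id
```

Because the cluster numbering is arbitrary, code that consumes the labels should compare group membership rather than the raw ids.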

In the **Random Forest algorithm**, we use a collection of decision trees (a "forest") to classify a new object based on its attributes: each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes over all the trees in the forest.
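The voting ensemble in one short scikit-learn sketch (the data and the tree count are invented):

```python
from sklearn.ensemble import RandomForestClassifier

# Invented data: 25 trees each vote, and the majority class wins
X = [[1, 1], [1, 2], [2, 1], [2, 2],
     [8, 8], [8, 9], [9, 8], [9, 9]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
print(forest.predict([[1.5, 1.5], [8.5, 8.5]]))
```

Each tree is trained on a random resample of the data, which is why the ensemble is typically more robust than any single decision tree.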

In addition to these, we have the **Gradient Boosting algorithm**, which deals with plenty of data to make predictions with high predictive power. There is also **CatBoost**, which handles categorical variables without raising type-conversion errors, which helps us focus on tuning the model rather than sorting out trivial errors. Other boosting algorithms include **XGBoost**.
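scikit-learn ships a baseline gradient-boosting implementation that illustrates the family; CatBoost and XGBoost are separate libraries with similar fit/predict interfaces. A sketch on invented data:

```python
from sklearn.ensemble import GradientBoostingClassifier

# Invented data; each new tree corrects the errors of the ensemble so far
X = [[1, 1], [1, 2], [2, 1], [2, 2],
     [8, 8], [8, 9], [9, 8], [9, 9]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
print(gbm.predict([[1.5, 1.5], [8.5, 8.5]]))
```

Unlike a random forest, where independently grown trees vote, boosting grows trees sequentially so that each one focuses on what the previous trees got wrong.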

The algorithms mentioned above are just a way to begin exploring machine learning techniques in this vast field of innovation and creativity. Read about a few of them in detail to take the next step.