Machine Learning Ideas: The Trajectory of Intelligence

Machine Learning is one of the fastest growing fields in computer science. It has been in development ever since Alan Turing's work on the Turing test framed the question of what an artificially intelligent system must be able to do. There have been tremendous developments since then, and with the improvement in processing power, machine learning ideas have been brought to life. It is one of the most in-demand emerging technologies today and holds great potential for research work.

Here, we are going to look back through the archives at how machine learning developed and, more importantly, where it is headed and why you should take it up.

Development

People have been working on machine learning ever since Alan Turing identified learning as a part of artificial intelligence. It quickly became clear that the process of acquiring knowledge is fundamental to building artificial intelligence. Since then, researchers have largely pursued artificial intelligence through machine learning, while fields like computer vision and robotics took a back seat in mainstream research. The most primitive forms of machine learning are the decision tree and the nearest neighbours algorithm; we shall get to these shortly.

Basics of Machine Learning

[Figure: Machine learning ideas. Source: KDNuggets]

Machine learning comes in two broad types: supervised and unsupervised. In supervised learning, a machine learns from previous experience, and that experience is data. The knowledge gained is then applied to new data. We therefore work with two datasets in supervised learning: training data and testing data.

  • Training data is the data on which a machine learning model (popularly called a learner) is trained. It is the experience from which knowledge is acquired.
  • Testing data is the data on which the model applies that knowledge and shows its accuracy. It is the application of acquired knowledge.

That pretty much sums up the basic idea of machine learning: you train an algorithm on some data and then make it work on more data. Unsupervised learning, by contrast, finds structure in data without any labelled training examples. It can be very powerful, although its results are much harder to control and evaluate.

Problems to be solved and algorithms

Machine learning algorithms are elegant, and the math behind them is gripping, to say the least. They can outperform your expectations and amaze you, and you can tune them for specific purposes to create customized, more effective models. There are two main kinds of problems to solve: regression and classification. Classification is the act of determining the class of a data point (each example can be thought of as a point on a graph); for example, given a person's photo, you have to classify whether it shows a male or a female. Regression is the act of spotting a trend and predicting a value for an unseen input; for example, fitting a line through two known points and reading off the y value of another point from its known x value.

There are many algorithms worth knowing for these problems. Here are some models that you must know:

Decision trees: In a decision tree, we build a tree structure by asking a series of questions of the training data. We then use this structure to evaluate the test data. It is the simplest machine learning model; the figure below illustrates the idea.

[Figure: Decision tree]

Clustering: An unsupervised technique in which we group the closest data points together to form clusters.

K nearest neighbours: The nearest neighbours algorithm classifies a point based on the classes of its graphically closest neighbours in the training data. The algorithm takes a value k and examines the k training points nearest to the point being classified; the class held by the majority of those neighbours is assigned to the point.

Neural networks: Here we build units that mimic the neurons of a human brain. Each unit takes inputs, assigns a weight to each input, and then outputs a value. These units (called perceptrons) are linked together to form a network, which then produces a concrete output.

You can look these up in detail in my post on machine learning algorithms.

The great milestones

Some people believe that AI is the next big domain of research, calling it the "new electricity". The truth of that claim is widely debated, but there have been major advancements in the field so far.

  • The first learner: In the 1950s, the legendary Arthur Samuel designs a program that learns to play checkers.
  • Birth of neural networks: The perceptron is formulated by Frank Rosenblatt in 1957. Great potential is unlocked but the implementation is limited due to low processing power.
  • Students take the mantle: As active research by students gains momentum, techies at Stanford University create the Stanford Cart, a cart that detects and dodges obstacles. To this day, this is regarded as where self-driving car research started.
  • Deep Blue: In 1997, IBM’s chess computer Deep Blue beat Garry freaking Kasparov at chess. The world was taken aback, and many people turned cynical about the development of AI and the threats it could pose.

Motivation

What motivated me is a line from the book “Artificial Intelligence: A Modern Approach” by Russell and Norvig. It says that the Einsteins and Edisons of AI are yet to come. The research potential of AI is there for everyone to see. Today, neural networks rule AI, yet many of their problems still need solving. Doing incisive research work in machine learning beats being a cog in the corporate wheel.

Machine Learning is the way to break the barriers. Start here!

