# Application of Clustering in Market Segmentation

Clustering is an unsupervised machine learning technique in which the target labels are unknown. The goal is to group homogeneous observations into the same cluster while segregating those that are entirely disparate. Clustering partitions n observations into k clusters. In marketing analysis, for instance, an analyst has access to measurements such as age, income, sex, and geographic location with which to statistically segment customers. Based on these attributes, one can perform market segmentation by recognizing distinct, perceptible subgroups of people who might be more receptive to a particular form of advertising or more likely to buy a certain product. A segment is typically a cluster of customer preferences, which helps in making strategic decisions about how to up-sell and cross-sell products based on user needs and wants.

Now, why do we need customer segmentation or clustering?

The intent of clustering is fundamental to segmentation: grouping similar customers and products in a marketing activity. A company cannot target each customer individually; instead it apportions customers into clusters based on their preferences and positions itself within each segment. For example, a firm might want to segment customers based on price sensitivity, product quality, and brand loyalty. These variables can be measured on a Likert scale, where a higher value signifies greater price sensitivity, quality orientation, or brand loyalty, and a lower value signifies lower intensity.
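As an illustrative sketch, k-means (here via scikit-learn) can recover such attitude-based segments from Likert responses. The segment profiles, sample sizes, and library choice below are assumptions for demonstration, not from the source:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical 1-7 Likert responses on [price sensitivity, quality, loyalty].
# Segment A: price-sensitive bargain hunters; Segment B: loyal quality seekers.
seg_a = np.clip(rng.normal([6, 2, 2], 0.7, size=(60, 3)), 1, 7).round()
seg_b = np.clip(rng.normal([2, 6, 6], 0.7, size=(60, 3)), 1, 7).round()
X = np.vstack([seg_a, seg_b])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_.round(1))  # one attitude profile per segment
```

Each cluster centroid is directly interpretable as an attitude profile, which is what makes the output actionable for targeting.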

# Clustering Algorithms

## Single Linkage

Also known as nearest-neighbor clustering, this is one of the oldest and most famous of the hierarchical techniques. It defines the distance between two groups as the distance between their two closest members. Individual observations are added sequentially to existing clusters, which tends to produce long, chained groups.

## Complete Linkage

Also known as the furthest-neighbor or maximum method, this method defines the distance between two groups as the distance between their two farthest-apart members. It usually yields clusters that are compact and well separated.

## Simple Average

Also called the weighted pair-group method, this algorithm defines the distance between groups as the average distance between the members, weighted so that the two groups have an equal influence on the result.

## Centroid

Also called the unweighted pair-group centroid method, this method defines the distance between two groups as the distance between their centroids (center of gravity or vector average). The method should only be used with Euclidean distances.
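The four linkage rules above can be compared directly with SciPy's hierarchical clustering routines; the toy data here is an assumption for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two tight groups of 2-D points, far apart from each other.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 5.1], [5.1, 4.9]])

# "single" = nearest neighbor, "complete" = furthest neighbor,
# "weighted" = weighted pair-group average, "centroid" = centroid method.
# Each defines between-group distance differently, but on well-separated
# data they all recover the same two clusters.
for method in ("single", "complete", "weighted", "centroid"):
    Z = linkage(X, method=method)  # centroid assumes Euclidean distances
    labels = fcluster(Z, t=2, criterion="maxclust")
    print(method, labels)
```

On messier, overlapping data the methods diverge: single linkage chains, complete linkage compacts.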

There are several approaches to partitioning observations into groups: hierarchical methods, partitioning methods (most notably k-means), and two-step clustering, which is largely a combination of the first two. An important quandary in applying cluster analysis is deciding how many clusters to derive from the data. There is always a trade-off between choosing many clusters, which reveals numerous subtle differences between segments, and choosing as few clusters as possible, which keeps the segments easy to understand and act on.

# Hierarchical Methods

Hierarchical clustering methods follow a tree-based approach built on measures of similarity and dissimilarity. These are evaluated by calculating distances between pairs of objects: objects with shorter distances are clustered into the same group, while objects farther apart are considered dissimilar. There are two basic approaches to hierarchical clustering: divisive and agglomerative.

Divisive clustering is a top-down approach: it starts with all observations in a single cluster and recursively splits that cluster into smaller groups based on the calculated distance measures, until each observation stands alone or a stopping criterion is met. Agglomerative clustering, on the other hand, follows a bottom-up approach: each observation begins in its own cluster, the similarity between clusters is computed, and the closest clusters are merged step by step.
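Scikit-learn (assumed here for illustration) implements the agglomerative, bottom-up variant; divisive clustering is rarer in mainstream libraries. A minimal sketch on one-dimensional spend data:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical annual spend (in $k) for six customers:
# three light spenders and three heavy spenders.
X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])

# Each point starts in its own cluster; the two closest clusters are
# merged repeatedly until only n_clusters remain.
agg = AgglomerativeClustering(n_clusters=2, linkage="average").fit(X)
print(agg.labels_)
```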

## Distance Metrics

Euclidean distance – The most commonly used distance metric, Euclidean distance computes the square root of the sum of squared differences between the coordinates of a pair of objects.

City block or Manhattan distance – Manhattan distance computes the sum of absolute differences between the coordinates of a pair of objects.

Chebyshev distance – Also known as the maximum-value distance, Chebyshev distance is computed as the largest absolute difference between the coordinates of a pair of objects. This metric is often applied when observations are ordinal.

Minkowski distance – A generalization of the above metrics with an order parameter p (p = 1 gives Manhattan, p = 2 gives Euclidean), this distance measure can be calculated for both ordinal and quantitative variables.
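All four metrics can be written in a few lines of NumPy; the sample vectors are arbitrary:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])

euclidean = np.sqrt(np.sum((a - b) ** 2))   # sqrt(9 + 16) = 5.0
manhattan = np.sum(np.abs(a - b))           # 3 + 4 = 7.0
chebyshev = np.max(np.abs(a - b))           # max(3, 4, 0) = 4.0

# Minkowski generalizes the above via the order parameter p.
def minkowski(x, y, p):
    return np.sum(np.abs(x - y) ** p) ** (1 / p)

assert np.isclose(minkowski(a, b, 1), manhattan)  # p=1 recovers Manhattan
assert np.isclose(minkowski(a, b, 2), euclidean)  # p=2 recovers Euclidean
print(euclidean, manhattan, chebyshev)
```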

# K-means Clustering

Another important clustering procedure is the k-means partitioning method, one of the most powerful techniques for market research and entirely different from the hierarchical algorithms discussed earlier. The algorithm requires the computation of k centroids; every item is then assigned to the nearest centroid, and the process repeats iteratively until every observation is stably clustered into a group.

The initial step is to choose the number k of partitioning centroids. Each observation is then assigned to its nearest centroid, and each centroid is recomputed as the mean of its assigned observations; these two steps repeat until the assignments stop changing. Within the resulting groups, elements are homogeneous in their characteristics, while the differences between groups are maximized.
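The assign-and-recompute loop can be sketched from scratch in NumPy. This is a minimal illustration, not production code; real implementations use smarter seeding such as k-means++ rather than the naive initialization assumed here:

```python
import numpy as np

def kmeans(X, k, iters=100):
    # Naive init: take the first k points as starting centroids.
    centroids = X[:k].copy()
    for _ in range(iters):
        # Assignment step: every point goes to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # converged: assignments are stable
            break
        centroids = new
    return labels, centroids

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
labels, centroids = kmeans(X, k=2)
print(labels)  # → [0 0 0 1 1 1]
```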

# Two-Step Clustering

This method resolves the issue of analyzing mixed variables measured on different scale levels. The algorithm is based on a two-stage approach: in the first stage, it runs a procedure similar to the k-means algorithm. Based on that output, the second stage conducts a modified hierarchical agglomerative clustering procedure that combines the objects sequentially to form homogeneous clusters, building a so-called cluster feature tree whose “leaves” represent distinct objects in the dataset. The procedure can handle categorical and continuous variables simultaneously and chooses the number of clusters using measures of fit such as Akaike’s Information Criterion (AIC) or the Bayesian Information Criterion (BIC).
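The SPSS TwoStep algorithm itself is not available in scikit-learn, but the information-criterion idea can be sketched with a Gaussian mixture model, letting BIC choose the cluster count; the synthetic data and library choice are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Three well-separated synthetic customer segments in 2-D.
X = np.vstack([rng.normal((0, 0), 0.5, (50, 2)),
               rng.normal((4, 0), 0.5, (50, 2)),
               rng.normal((0, 4), 0.5, (50, 2))])

# Fit candidate models and record the BIC for each cluster count;
# the lowest BIC balances goodness of fit against model complexity.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print(best_k)
```

Here BIC does the job the two-step procedure automates: choosing a cluster count without the analyst fixing k in advance.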

Summing up, a good marketing strategy entails not only segmenting customer groups but also targeting and positioning those groups based on customer profiling. By bucketing distinct segments, businesses can make informed decisions about where to spend sales and marketing dollars to increase ROI. Ultimately, this helps businesses deliver enhanced customer service and boost customer satisfaction.
