GAUSSIAN MIXTURE MODEL CLUSTERING ALGORITHM

One of the issues with k-means is that it expects clusters to be roughly circular. Because k-means assigns each point to its nearest centroid using plain Euclidean distance, it implicitly carves the space into circular (in higher dimensions, spherical) regions, so non-circular clusters are grouped incorrectly. Gaussian mixture models solve this problem: they do not require circular clusters to work properly. To fit arbitrarily shaped data, a Gaussian mixture model combines several Gaussian distributions, and the component that generated each point is treated as a hidden (latent) variable. The model then evaluates the likelihood that a data point belongs to each Gaussian component and assigns the point to the cluster with the highest probability.
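To make this concrete, here is a minimal sketch (using synthetic one-dimensional data, not the article's dataset) of how scikit-learn's GaussianMixture exposes these per-component probabilities: predict_proba returns the probability of each point under each Gaussian, and predict assigns each point to its most probable component.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# Two synthetic groups of one-dimensional points, purely for illustration
X = np.concatenate([rng.normal(0, 1, 100), rng.normal(6, 1, 100)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Probability of the first three points under each of the two Gaussians
print(gmm.predict_proba(X[:3]))
# Hard cluster assignment: the component with the highest probability
print(gmm.predict(X[:3]))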
Gaussian mixture models, like k-means, can be used to cluster unlabeled data, but they offer a few advantages over k-means. K-means does not take variance into account (the width of the bell-shaped curve), whereas in two or more dimensions it is the variance and covariance that determine the shape of the distribution. In effect, k-means centers each cluster on a circle (or, in higher dimensions, a hypersphere) whose radius is set by the cluster's most distant point.
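In scikit-learn this flexibility is controlled by the covariance_type argument of GaussianMixture; the snippet below is only a sketch of the configuration, with the number of components chosen arbitrarily.

from sklearn.mixture import GaussianMixture

# "spherical": one variance per component -> circular clusters, closest in spirit to k-means
spherical_gmm = GaussianMixture(n_components=3, covariance_type="spherical")
# "full" (the default): a full covariance matrix per component -> elongated, rotated clusters
full_gmm = GaussianMixture(n_components=3, covariance_type="full")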

K-means works well when the clusters are circular. When the data takes on an elongated or irregular shape, however, k-means splits and merges the clusters incorrectly.

In contrast, Gaussian mixture models can handle even very oblong clusters.
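As an illustration (on synthetic data, not the article's dataset), the sketch below stretches three Gaussian blobs with a linear transformation and compares k-means against a Gaussian mixture with the default full covariance; the adjusted Rand index against the true labels usually favors the mixture model on such elongated clusters.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

X, y_true = make_blobs(n_samples=600, centers=3, random_state=42)
# Apply a linear transformation so the blobs become elongated, tilted ellipses
X = X @ np.array([[0.6, -0.6], [-0.4, 0.8]])

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
gmm_labels = GaussianMixture(n_components=3, random_state=42).fit_predict(X)

# The Gaussian mixture often recovers the stretched clusters more faithfully
print("k-means ARI:", adjusted_rand_score(y_true, kmeans_labels))
print("GMM ARI:    ", adjusted_rand_score(y_true, gmm_labels))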

THE MAIN CODE OF THE GAUSSIAN MIXTURE CLUSTERING ALGORITHM
Full Code for Implementing the Gaussian Mixture Clustering Algorithm
Practicing the Gaussian mixture clustering algorithm (download the dataset here):
from sklearn.mixture import GaussianMixture
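# NOTE: the article's downloadable dataset is not reproduced here, so the rest of
# this listing is a minimal sketch; the file name and column names are assumptions.
import pandas as pd

# Load the features to cluster (hypothetical file and column names)
data = pd.read_csv("dataset.csv")
X = data[["feature_1", "feature_2"]].values

# Fit a Gaussian mixture model; the number of components is an assumption
gmm = GaussianMixture(n_components=3, random_state=0)
labels = gmm.fit_predict(X)

# Attach the cluster labels back to the dataframe for inspection
data["cluster"] = labels
print(data.head())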