In the previous article, we explored distance-based clustering with k-Means.
Additionally, to improve how the distance is measured, we added variance, which gives us the Mahalanobis distance.
So, if k-Means is the unsupervised version of the Nearest Centroid classifier, then the natural question is:
What is the unsupervised version of QDA?
This means that, like QDA, each cluster now needs to be described not only by its mean, but also by its variance (and we also have to add covariance terms as soon as there is more than one feature). But here, everything is learned without labels.
So you see the idea, right?
And well, the name of this model is the Gaussian Mixture Model (GMM)…
GMM and the names of these models…
As is often the case, the names of models come from historical reasons. They are not always designed to highlight the connections between models, especially when those models were not discovered together.
Different researchers, different periods, different use cases… and we end up with names that sometimes hide the true structure behind the ideas.
Here, the name "Gaussian Mixture Model" simply means that the data is represented as a mixture of several Gaussian distributions.
If we followed the same naming logic as k-Means, it would have been clearer to call it something like k-Gaussian Mixture.
Because, in practice, instead of only using the means, we add the variances. And we could just use the Mahalanobis distance, or another weighted distance using both means and variances. But the Gaussian distribution gives us probabilities, which are easier to interpret.
So we choose a number k of Gaussian components.
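Concretely, "mixture" just means a weighted sum of densities. For two components in one dimension, the model is (the notation here is mine, not from the screenshots):

$$p(x) = \pi_1 \, \mathcal{N}(x \mid \mu_1, \sigma_1^2) + \pi_2 \, \mathcal{N}(x \mid \mu_2, \sigma_2^2), \qquad \pi_1 + \pi_2 = 1$$

Each weight π says how much of the data each Gaussian is responsible for, and EM will learn all five numbers: two means, two variances, and one weight (the other follows, since the weights sum to 1).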
And by the way, GMM is not the only one.
In fact, the entire machine learning framework is actually much more recent than most of the models it contains. Most of these methods were originally developed in statistics, signal processing, econometrics, or pattern recognition.
Then, much later, the field we now call "machine learning" emerged and regrouped all these models under one umbrella. But the names did not change.
So today we use a mix of vocabularies coming from different eras, different communities, and different intentions.
This is why the relationships between models are not always obvious when you look only at the names.
If we had to rename everything in a modern, unified machine-learning style, the landscape would actually be much clearer:
- GMM would become k-Gaussian Clustering
- QDA would become Nearest Gaussian Classifier
- LDA, well, Nearest Gaussian Classifier with the same variance across classes.
And suddenly, all the links appear:
- k-Means ↔ Nearest Centroid
- GMM ↔ Nearest Gaussian (QDA)
This is why GMM is so natural after k-Means. If k-Means groups points by their closest centroid, then GMM groups them by their closest Gaussian shape.
Why this whole section to discuss the names?
Well, the truth is that, since we already covered the k-Means algorithm, and we already made the transition from the Nearest Centroid Classifier to QDA, we already know all about this model, and the training algorithm will not change…
And what is the NAME of this training algorithm?
Oh, Lloyd's algorithm.
Actually, before k-Means was called that, it was simply known as Lloyd's algorithm, proposed by Stuart Lloyd in 1957. Only later did the machine learning community rename it "k-means".
And that algorithm manipulated only the means, so we need another name, right?
You see where this is going: the Expectation-Maximization (EM) algorithm!
EM is simply the general form of Lloyd's idea. Lloyd updates the means; EM updates everything: means, variances, weights, and probabilities.
So, you already know everything about GMM!
But since my article is called "GMM in Excel", I cannot end it here…
GMM in 1 Dimension
Let us start with this simple dataset, the same one we used for k-Means: 1, 2, 3, 11, 12, 13
Hmm, the two Gaussians will end up with the same variances. So think about playing with other numbers in Excel!
And we naturally want 2 clusters.
Here are the different steps.
Initialization
We start with guesses for the means, variances, and weights.
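For instance, with the six points in A2:A7, one possible layout is a small parameter table (the cell positions are my own convention for this sketch, not the only way to do it):

```
     E           F       G          H
1    Gaussian    mean    variance   weight
2    1           4       9          0.5     <- arbitrary starting guesses
3    2           10      9          0.5     <- arbitrary starting guesses
```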

Expectation step (E-step)
For each point, we compute how likely it is to belong to each Gaussian.
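With the layout above, the responsibility of Gaussian 1 for the point in A2 is its weighted density divided by the total density. NORM.DIST expects a standard deviation, hence the SQRT around the variance. A sketch for cell B2, to be filled down to B7:

```
=H$2*NORM.DIST(A2,F$2,SQRT(G$2),FALSE) / (H$2*NORM.DIST(A2,F$2,SQRT(G$2),FALSE) + H$3*NORM.DIST(A2,F$3,SQRT(G$3),FALSE))
```

And since the two responsibilities sum to 1, the column for Gaussian 2 can simply be =1-B2.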

Maximization step (M-step)
Using these probabilities, we update the means, variances, and weights.
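Each update is just a responsibility-weighted average. Sticking with my layout (responsibilities of Gaussian 1 in B2:B7), the new parameters of Gaussian 1 could go in a second row of the table, say F5:H5:

```
F5 (new mean):      =SUMPRODUCT(B2:B7,A2:A7)/SUM(B2:B7)
G5 (new variance):  =SUMPRODUCT(B2:B7,(A2:A7-F5)^2)/SUM(B2:B7)
H5 (new weight):    =SUM(B2:B7)/COUNT(A2:A7)
```

The same three formulas, using the responsibilities of Gaussian 2, give the new row for the second Gaussian.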

Iteration
We repeat the E-step and M-step until the parameters stabilise.
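In Excel, one simple way to iterate is to copy the updated values back over the old parameters (Paste Special → Values) and let the sheet recalculate, with a small watch cell telling you when to stop. A possible stopping test for Gaussian 1 (the threshold and cells are my choice):

```
=IF(MAX(ABS(F5-F2),ABS(G5-G2),ABS(H5-H2))<0.0001,"converged","keep iterating")
```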

Each step is very simple once the formulas are written out.
You will see that EM is nothing more than updating averages, variances, and probabilities.
We can also do some visualization to see how the Gaussian curves move across the iterations.
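To draw the curves, you can build a grid of x values (say 0 to 14 in steps of 0.1, in column J in my layout) and compute each weighted density next to it, then plot everything as a scatter chart with lines:

```
=H$2*NORM.DIST(J2,F$2,SQRT(G$2),FALSE)    <- curve of Gaussian 1, fill down
=H$3*NORM.DIST(J2,F$3,SQRT(G$3),FALSE)    <- curve of Gaussian 2, fill down
```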
At the beginning, the two Gaussian curves overlap heavily because the initial means and variances are just guesses.
The curves slowly separate, adjust their widths, and finally settle exactly on the two groups of points.
By plotting the Gaussian curves at each iteration, you can literally watch the model learn:
- the means slide toward the centers of the data
- the variances shrink to match the spread of each group
- the overlap disappears
- the final shapes match the structure of the dataset
This visual evolution is extremely helpful for intuition. Once you see the curves move, EM is no longer an abstract algorithm. It becomes a dynamic process you can observe step by step.

GMM in 2 Dimensions
The logic is exactly the same as in 1D. Nothing new conceptually. We simply extend the formulas…
Instead of having one feature per point, we now have two.
Each Gaussian must now learn:
- a mean for x1
- a mean for x2
- a variance for x1
- a variance for x2
- AND a covariance term between the two features (see the sketch right after this list).
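This covariance term is the only real newcomer, and its M-step update is the same kind of weighted average as the variances. Assuming x1 in A2:A7, x2 in B2:B7, the responsibilities of Gaussian 1 in C2:C7, and its new means in F5 and G5 (cells chosen just for this sketch):

```
=SUMPRODUCT(C2:C7,(A2:A7-F5)*(B2:B7-G5))/SUM(C2:C7)
```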
Once you write the formulas in Excel, you will see that the process stays exactly the same.
Well, the truth is that if you look at the screenshot, you might think: "Wow, the formula is so long!" And that is not even all of it.

But don't be fooled. The formula is long only because we write out the 2-dimensional Gaussian density explicitly:
- one part for the distance in x1
- one part for the distance in x2
- the covariance term
- the normalization constant
Nothing more.
It is simply the density formula expanded cell by cell.
Long to type, but perfectly understandable once you see the structure: a weighted distance, inside an exponential, divided by the determinant. The expanded formula is sketched below.
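To make this concrete, here is one way to expand the 2D density, assuming the point in A2 (x1) and B2 (x2), the means in F2 and G2, the variances in H2 and I2, the covariance in J2, and the determinant precomputed in K2 as =H2*I2-J2^2 (all cell choices are mine):

```
=EXP(-0.5*(I2*(A2-F2)^2 - 2*J2*(A2-F2)*(B2-G2) + H2*(B2-G2)^2)/K2) / (2*PI()*SQRT(K2))
```

You can recognize every piece from the list above: the x1 part, the x2 part, the covariance cross-term, and the normalization constant 2*PI()*SQRT(K2).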
So yes, the formula looks big… but the idea behind it is extremely simple.
Conclusion
k-Means gives hard boundaries.
GMM gives probabilities.
Once the EM formulas are written in Excel, the model becomes simple to follow: the means move, the variances adjust, and the Gaussians naturally settle around the data.
GMM is simply the next logical step after k-Means, offering a more flexible way to represent clusters and their shapes.