For an Old Problem
You might be training your model for spam detection. Your dataset has many more positives than negatives, so you invest many hours of work to rebalance it to a 50/50 ratio. Now you are happy because you were able to address the class imbalance. What if I told you that 60/40 might have been not only sufficient, but even better?
In most machine learning classification applications, the number of instances of one class outnumbers that of the other classes. This slows down learning [1] and can potentially induce biases in the trained models [2]. The most widely used techniques to address this rely on a simple prescription: finding a way to give all classes the same weight. Most often, this is done through simple methods such as giving more importance to minority-class examples (reweighting), removing majority-class examples from the dataset (undersampling), or including minority-class instances more than once (oversampling).
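In scikit-learn/NumPy terms, the three options can be sketched roughly as follows. This is a minimal illustration on synthetic data, not a recommendation of any one method; the feature matrix, labels, and class ratio are all made up for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: 5 features, binary labels, class 0 is the ~20% minority.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = (rng.random(1000) > 0.2).astype(int)
idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)

# 1) Reweighting: keep the data, weight classes inversely to their frequency.
model = LogisticRegression(class_weight="balanced").fit(X, y)

# 2) Undersampling: randomly drop majority-class examples down to the minority count.
keep = np.r_[idx0, rng.choice(idx1, size=len(idx0), replace=False)]
X_under, y_under = X[keep], y[keep]

# 3) Oversampling: repeat minority-class examples up to the majority count.
grow = np.r_[rng.choice(idx0, size=len(idx1), replace=True), idx1]
X_over, y_over = X[grow], y[grow]
```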
The validity of these techniques is often debated, with both theoretical and empirical work indicating that the best solution depends on your specific application [3]. However, there is a hidden hypothesis that is seldom discussed and too often taken for granted: Is rebalancing even a good idea? To some extent these techniques work, so the answer is yes. But should we fully rebalance our datasets? To keep things simple, let us consider a binary classification problem. Should we rebalance our training data to have 50% of each class? Intuition says yes, and intuition has guided practice until now. In this case, intuition is wrong. For intuitive reasons.
What Do We Mean by 'Training Imbalance'?
Before we delve into how and why 50% is not the optimal training imbalance in binary classification, let us define some relevant quantities. We call n₀ the number of instances of one class (usually the minority class), and n₁ those of the other class. The total number of data instances in the training set is then n = n₀ + n₁. The quantity we analyze here is the training imbalance,
ρ⁽ᵗʳᵃⁱⁿ⁾ = n₀/n.
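In code, this is simply the minority fraction of the labels. A trivial sketch, assuming the labels sit in a NumPy array with 0 marking the minority class:

```python
import numpy as np

def training_imbalance(y: np.ndarray) -> float:
    """rho_train = n0 / (n0 + n1), with class 0 the minority by convention."""
    return float(np.mean(y == 0))

y = np.array([0, 1, 1, 1, 0, 1, 1, 1, 1, 1])  # n0 = 2, n1 = 8
print(training_imbalance(y))                   # 0.2
```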
Evidence That 50% Is Suboptimal
Initial evidence comes from empirical work on random forests. Kamalov and collaborators measured the optimal training imbalance, ρ⁽ᵒᵖᵗ⁾, on 20 datasets [4]. They find that its value varies from problem to problem, but conclude that it is roughly ρ⁽ᵒᵖᵗ⁾ = 43%. This means that, according to their experiments, you want slightly more majority- than minority-class examples. This is still not the full story, though. If you aim at optimal models, do not stop here and straightaway set your ρ⁽ᵗʳᵃⁱⁿ⁾ to 43%.
In fact, theoretical work by Pezzicoli et al. [5] showed this year that the optimal training imbalance is not a universal value that holds for all applications. It is not 50%, and it is not 43%. It turns out the optimal imbalance varies: it can sometimes be smaller than 50% (as Kamalov and collaborators measured), and other times larger. The specific value of ρ⁽ᵒᵖᵗ⁾ depends on the details of each specific classification problem. One way to find ρ⁽ᵒᵖᵗ⁾ is to train the model for several values of ρ⁽ᵗʳᵃⁱⁿ⁾ and measure the resulting performance. This could, for example, look like this:

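A minimal sketch of such a sweep with scikit-learn is shown below; the synthetic dataset, the random-forest model, and the 20%–60% grid are illustrative assumptions, not the setup of [4] or [5].

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary problem where class 0 is a 10% minority.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.1, 0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

rng = np.random.default_rng(0)
idx0 = np.flatnonzero(y_train == 0)  # minority-class indices
idx1 = np.flatnonzero(y_train == 1)  # majority-class indices

for rho in (0.2, 0.3, 0.4, 0.5, 0.6):
    # Undersample the majority class so that n0 / (n0 + n1) equals rho.
    n1 = min(int(len(idx0) * (1 - rho) / rho), len(idx1))
    keep = np.concatenate([idx0, rng.choice(idx1, size=n1, replace=False)])
    model = RandomForestClassifier(random_state=0).fit(X_train[keep], y_train[keep])
    # Evaluate on the untouched, imbalanced test set.
    score = balanced_accuracy_score(y_test, model.predict(X_test))
    print(f"rho_train = {rho:.1f}  ->  balanced accuracy = {score:.3f}")
```

Whichever ρ⁽ᵗʳᵃⁱⁿ⁾ scores best on held-out data is your estimate of ρ⁽ᵒᵖᵗ⁾ for that problem.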
Although the exact patterns determining ρ⁽ᵒᵖᵗ⁾ are still unclear, it seems that when data is abundant compared to the model size, the optimal imbalance is smaller than 50%, as in Kamalov's experiments. However, many other factors, from how intrinsically rare minority instances are to how noisy the training dynamics is, come together to set the optimal value of the training imbalance, and to determine how much performance is lost when one trains away from ρ⁽ᵒᵖᵗ⁾.
Why Perfect Balance Isn't Always Best
As we said, the answer is actually intuitive: since different classes have different properties, there is no reason why both classes should carry the same information. In fact, Pezzicoli's team proved that they usually do not. Consequently, to infer the best decision boundary we may need more instances of one class than of the other. Pezzicoli's work, set in the context of anomaly detection, provides a simple and insightful example.
Let us assume that the data comes from a multivariate Gaussian distribution, and that we label all the points to the right of a decision boundary as anomalies. In 2D, it could look like this:

The dashed line is our decision boundary, and the points to the right of it are the n₀ anomalies. Let us now rebalance our dataset to ρ⁽ᵗʳᵃⁱⁿ⁾ = 0.5. To do so, we need to find more anomalies. Since anomalies are rare, the ones we are most likely to find lie close to the decision boundary. Already by eye, the situation is strikingly clear:

Anomalies, in yellow, are stacked along the decision boundary, and are therefore more informative about its position than the blue points. This might lead one to think that it is better to privilege minority-class points. On the other hand, anomalies only cover one side of the decision boundary, so once one has enough minority-class points, it may become convenient to invest in more majority-class points in order to better cover the other side of the boundary. As a consequence of these two competing effects, ρ⁽ᵒᵖᵗ⁾ is generally not 50%, and its exact value is problem dependent.
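This effect is easy to reproduce numerically. Below is a minimal sketch of the toy setting, where the 2D standard Gaussian and the threshold b = 2 are illustrative choices (the paper's exactly solvable model is more general): it labels the rare tail beyond a linear boundary as anomalies and compares each class's typical distance to that boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 2))  # 2D standard Gaussian cloud

# Label points to the right of the vertical line x0 = b as anomalies (~2.3% of the data).
b = 2.0
is_anomaly = X[:, 0] > b
n0 = int(is_anomaly.sum())
print(f"anomalies: {n0} of {len(X)} points (rho = {n0 / len(X):.3f})")

# Typical distance to the boundary, per class: the rare anomalies pile up right
# next to it, while majority-class points spread over the whole other half-plane.
print(f"median distance, anomalies: {np.median(X[is_anomaly, 0] - b):.2f}")
print(f"median distance, majority:  {np.median(b - X[~is_anomaly, 0]):.2f}")
```

The anomalies' median distance comes out several times smaller than the majority's, which is exactly the first of the two competing effects described above.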
The Root Cause Is Class Asymmetry
Pezzicoli's theory shows that the optimal imbalance is generally different from 50% because different classes have different properties. However, it only analyzes one source of diversity among classes, namely outlier behavior. Yet, as shown for example by Sarao-Mannelli and coauthors [6], there are many other effects, such as the presence of subgroups within classes, that can produce a similar outcome. It is the concurrence of a very large number of effects determining diversity among classes that sets the optimal imbalance for a specific problem. Until we have a theory that treats all sources of asymmetry in the data together (including those induced by how the model architecture processes them), we cannot know the optimal training imbalance of a dataset beforehand.
Key Takeaways & What You Can Do Differently
If until now you have rebalanced your binary dataset to 50%, you were doing well, but most likely not the best you could. Although we still lack a theory that can tell us what the optimal training imbalance should be, you now know that it is likely not 50%. The good news is that such a theory is on the way: machine learning theorists are actively working on this topic. In the meantime, you can treat ρ⁽ᵗʳᵃⁱⁿ⁾ as a hyperparameter that you can tune, just like any other, to rebalance your data in the most efficient way. So before your next model training run, ask yourself: is 50/50 really optimal? Try tuning your class imbalance; your model's performance might surprise you.
References
[1] E. Francazi, M. Baity-Jesi, and A. Lucchi, A theoretical analysis of the learning dynamics under class imbalance (2023), ICML 2023
[2] K. Ghosh, C. Bellinger, R. Corizzo, P. Branco, B. Krawczyk, and N. Japkowicz, The class imbalance problem in deep learning (2024), Machine Learning, 113(7), 4845–4901
[3] E. Loffredo, M. Pastore, S. Cocco, and R. Monasson, Restoring balance: principled under/oversampling of data for optimal classification (2024), ICML 2024
[4] F. Kamalov, A.F. Atiya, and D. Elreedy, Partial resampling of imbalanced data (2022), arXiv preprint arXiv:2207.04631
[5] F.S. Pezzicoli, V. Ros, F.P. Landes, and M. Baity-Jesi, Class imbalance in anomaly detection: Learning from an exactly solvable model (2025), AISTATS 2025
[6] S. Sarao-Mannelli, F. Gerace, N. Rostamzadeh, and L. Saglietti, Bias-inducing geometries: an exactly solvable data model with fairness implications (2022), arXiv preprint arXiv:2205.15935