
Image by Editor
# Introduction
A harsh truth to start with: textbook data science often turns into a lie in the real world. Concepts and techniques are taught on finely curated, beautifully bell-curved data variables, but as soon as we venture into the wild of real projects, we're hit with countless outliers, unduly skewed distributions, and indomitable variances.
A previous article on building an exploratory data analysis (EDA) pipeline with Pingouin showed how to detect, through tests, situations where the data violates a variety of assumptions like homoscedasticity and normality. But what if the tests fail? Throwing the data away is not the answer: turning robust is.
This article uncovers the craftsmanship of using robust statistics in data science processes. These are mathematical methods specifically built to yield reliable and valid results even when the data doesn't meet classical assumptions or is pervaded by outliers and noise. By adopting a "choose your own adventure" approach, we will walk through a trio of scenarios using Python's Pingouin to address the ugliest aspects of the data you may encounter in your daily work.
# Initial Setup
Let's start by installing (if needed) and importing Pingouin and Pandas, after which we will load the wine quality dataset available here.
!pip install pingouin pandas
import pandas as pd
import pingouin as pg
# Loading our messy, real-world-like dataset, containing red and white wine samples
url = "https://raw.githubusercontent.com/gakudo-ai/open-datasets/refs/heads/main/wine-quality-white-and-red.csv"
df = pd.read_csv(url)
# Take a quick peek at what we're about to deal with
df.head()
If you read the previous Pingouin article, you already know this is a notoriously messy dataset that failed to meet several common assumptions. Now we will embark on three different "adventures", each highlighting a scenario, a core problem, and a proposed robust fix to address it.
// Adventure 1: When the Normality Test Fails
Suppose we run normality tests on two groups: white wine samples and red wine samples.
white_wine_alcohol = df[df['type'] == 'white']['alcohol']
red_wine_alcohol = df[df['type'] == 'red']['alcohol']
print("Normality test for White Wine Alcohol content:")
print(pg.normality(white_wine_alcohol))
print("\nNormality test for Red Wine Alcohol content:")
print(pg.normality(red_wine_alcohol))
You will see that neither distribution is normal, with extremely low p-values. Although non-normality itself doesn't directly signal outliers or skewness, a strong deviation from normality often suggests such traits may be present in the data. Comparing means via a t-test in this scenario would be dangerous and likely to yield unreliable results.
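As a quick sanity check on that skewness suspicion, per-group skewness takes one line of pandas. Here is a minimal sketch on synthetic, deliberately right-skewed data (hypothetical values, not the actual wine dataset):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the wine data: a right-skewed alcohol-like
# column for each of two groups (gamma draws are skewed by construction)
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "type": ["white"] * 500 + ["red"] * 300,
    "alcohol": np.concatenate([
        9 + rng.gamma(shape=2.0, scale=1.0, size=500),
        9 + rng.gamma(shape=2.0, scale=1.2, size=300),
    ]),
})

# Skewness well above 0 corroborates what a failed normality test hints at
print(toy.groupby("type")["alcohol"].skew())
```

Running the same `groupby(...).skew()` on the real `df` with `type` and `alcohol` works identically.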
The robust fix for a scenario like this is the Mann-Whitney U test. Instead of comparing averages, this test compares the ranks in the data, sorting all wines in a group from lowest to highest alcohol content, for instance. This rank-based approach is the master trick that strips outliers of their often dangerous magnitude. Here's how:
# Separating our two groups
red_wine = df[df['type'] == 'red']['alcohol']
white_wine = df[df['type'] == 'white']['alcohol']
# Running the robust Mann-Whitney U test
mwu_results = pg.mwu(x=red_wine, y=white_wine)
print(mwu_results)
Output:
          U-val alternative     p-val       RBC      CLES
MWU   3829043.5   two-sided  0.181845 -0.022193  0.488903
Since the p-value is not below 0.05, there is no statistically significant difference in alcohol content between the two wine types, and this conclusion is robust to both outliers and skewness.
// Adventure 2: When the Paired T-Test Fails
Say you now want to compare two measurements taken from the same subject, e.g. a patient's sugar level before and after a drug prototype, or two properties measured in the same bottle of wine. The focus here is on how the differences between paired measurements are distributed. When those differences are not normally distributed, a standard paired t-test will yield unreliable confidence intervals.
The ideal fix in this scenario is the Wilcoxon Signed-Rank Test: the robust sibling of the paired t-test, which works by taking the differences between columns and ranking their absolute values. In Pingouin, this test is called using pg.wilcoxon(), passing in the two columns containing the paired measures for the same subject, e.g. two types of wine acidity.
# Run the robust Wilcoxon signed-rank test for paired data
wilcoxon_results = pg.wilcoxon(x=df['fixed acidity'], y=df['volatile acidity'])
print(wilcoxon_results)
Result:
          W-val alternative  p-val  RBC  CLES
Wilcoxon    0.0   two-sided    0.0  1.0   1.0
The result above reveals a statistically significant difference, or "perfect separation," between the two measurements. Not only are the two wine properties different, they also operate at entirely different magnitude tiers across the dataset.
// Adventure 3: When ANOVA Fails
In this third and final adventure, we want to check whether residual sugar levels in wine differ significantly across distinct quality ratings. Note that the latter range between 3 and 9, taking integer values, and can therefore be treated as discrete categories.
If Pingouin's Levene test of homoscedasticity fails dramatically, for instance because sugar variance in mediocre wines is huge but very small in top-quality wines, a classical one-way ANOVA may produce misleading results, as this test assumes equal variances among groups.
The fix is Welch's ANOVA, which down-weights groups with high variance, thereby balancing out scales and making comparisons fairer across multiple categories. Here is how to run this robust alternative to traditional ANOVA using Pingouin:
# Run Welch's ANOVA to compare sugar across quality ratings
welch_results = pg.welch_anova(data=df, dv='residual sugar', between='quality')
print(welch_results)
Result:
    Source  ddof1      ddof2          F         p-unc       np2
0  quality      6  54.507934  10.918282  5.937951e-08  0.008353
Even where a one-way ANOVA might have struggled due to unequal variances, Welch's ANOVA delivers a robust conclusion. The very small p-value is clear evidence that residual sugar levels differ significantly across wine quality ratings. Remember, however, that sugar is only a small piece of the puzzle influencing wine quality, a point underscored by the low eta-squared value of 0.008.
# Wrapping Up
Through three example scenarios, each pairing a messy-data problem with a robust statistical method, we have learned that being a skilled data scientist doesn't mean having perfect data or tuning it perfectly: it means knowing what to do when the data gets difficult for various reasons. Pingouin's functions implement a variety of robust tests that help escape the failed-assumptions trap and extract mathematically sound insights with little extra effort.
Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
