

Image by Editor
# Introduction
Keen to start your first, manageable machine learning project with Python's popular libraries Pandas and Scikit-learn, but unsure where to begin? Look no further.
In this article, I'll take you through a gentle, beginner-friendly machine learning project in which we'll build together a regression model that predicts employee income based on socio-economic attributes. Along the way, we'll learn some key machine learning concepts and essential tricks.
# From Raw Dataset to Clean DataFrame
First, just like with any Python-based project, it's good practice to start by importing the necessary libraries, modules, and components we'll use throughout the whole process:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
import joblib
The following instructions load a publicly available dataset from this repository into a Pandas DataFrame object: a neat data structure to load, analyze, and manage fully structured data, that is, data in tabular format. Once loaded, we take a look at its basic properties and the data types of its attributes.
url = "https://raw.githubusercontent.com/gakudo-ai/open-datasets/main/employees_dataset_with_missing.csv"
df = pd.read_csv(url)
print(df.head())
print(df.info())
You'll notice that the dataset contains 1000 entries or instances (that is, records describing 1000 employees), but for most attributes, like age, income, and so on, there are fewer than 1000 actual values. Why? Because this dataset has missing values, a common issue in real-world data, which must be dealt with.
In our project, we'll set the goal of predicting an employee's income based on the rest of the attributes. Therefore, we'll adopt the strategy of discarding rows (employees) whose value for this attribute is missing. While it's generally fine to estimate or impute missing values for predictor attributes, for the target variable we need fully known labels to train our machine learning model: the catch is that a machine learning model learns by being exposed to examples with known prediction outputs.
There is also a specific instruction to check for missing values only. In Pandas, chaining `isnull()` with `sum()` reports the number of missing values per column:
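# Count the missing values in each column (standard Pandas idiom)
print(df.isnull().sum())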
So, let's clean our DataFrame to be free of missing values for the target variable: income. This code removes entries with missing values, specifically for that attribute.
target = "income"
train_df = df.dropna(subset=[target])
X = train_df.drop(columns=[target])
y = train_df[target]
So, how about the missing values in the rest of the attributes? We'll take care of those shortly, but first, we need to separate our dataset into two major subsets: a training set for training the model, and a test set to evaluate the model's performance once trained, made up of examples different from those seen by the model during training. Scikit-learn provides a single instruction to do this splitting randomly:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
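Setting `random_state` makes the split reproducible across runs. If you want to verify the proportions, printing the shapes of the resulting subsets should show roughly an 80/20 division:
print(X_train.shape, X_test.shape)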
The next step goes further in turning the data into a suitable form for training a machine learning model: constructing a preprocessing pipeline. Typically, this preprocessing should distinguish between numeric and categorical features, so that each type of feature is subject to different preprocessing tasks along the pipeline. For instance, numeric features will typically be scaled, while categorical features may be mapped or encoded into numeric ones so that the machine learning model can digest them. For the sake of illustration, the code below demonstrates the full process of building a preprocessing pipeline. It includes the automatic identification of numeric vs. categorical features so that each type can be handled correctly.
numeric_features = X.select_dtypes(include=["int64", "float64"]).columns
categorical_features = X.select_dtypes(exclude=["int64", "float64"]).columns

# Numeric features: fill missing values with the column median
numeric_transformer = Pipeline([
    ("imputer", SimpleImputer(strategy="median"))
])

# Categorical features: fill missing values with the most frequent category,
# then one-hot encode them into numeric columns
categorical_transformer = Pipeline([
    ("imputer", SimpleImputer(strategy="most_frequent")),
    ("onehot", OneHotEncoder(handle_unknown="ignore"))
])

# Route each feature type to its own transformer
preprocessor = ColumnTransformer([
    ("num", numeric_transformer, numeric_features),
    ("cat", categorical_transformer, categorical_features)
])
You can learn more about data preprocessing pipelines in this article.
This pipeline, once applied to the DataFrame, will result in a clean, ready-to-use version for machine learning. But we'll apply it in the next step, where we'll encapsulate both data preprocessing and machine learning model training into one single overarching pipeline.
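If you are curious about what the preprocessor produces, you can fit and apply it to the training split on its own and inspect the shape of the transformed matrix. Expect more columns than in X_train, since one-hot encoding expands each categorical feature:
# Quick inspection, separate from the main workflow below
X_train_prepared = preprocessor.fit_transform(X_train)
print(X_train_prepared.shape)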
# From Clean DataFrame to Ready-to-Deploy Model
Now we'll define an overarching pipeline that:
- Applies the previously defined preprocessing process, stored in the `preprocessor` variable, to both numeric and categorical attributes.
- Trains a regression model, specifically a random forest regressor, to predict income using the preprocessed training data.
model = Pipeline([
    ("preprocessor", preprocessor),
    ("regressor", RandomForestRegressor(random_state=42))
])
model.fit(X_train, y_train)
Importantly, the training stage only receives the training subset we created earlier upon splitting, not the whole dataset.
Now, we take the other subset of the data, the test set, and use it to evaluate the model's performance on these example employees. We'll use the mean absolute error (MAE) as our evaluation metric:
preds = model.predict(X_test)
mae = mean_absolute_error(y_test, preds)
print(f"\nModel MAE: {mae:.2f}")
You may get an MAE value of around 13000, which is acceptable but not great, considering that most incomes are in the range of 60-90K. Anyway, not bad for a first machine learning model!
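To put that number in context, you could compare the model against a naive baseline that always predicts the mean income of the training set. Here is a minimal sketch using Scikit-learn's DummyRegressor (the exact figures will depend on your split):
from sklearn.dummy import DummyRegressor

# A baseline that ignores the features and always predicts the mean training income
baseline = DummyRegressor(strategy="mean")
baseline.fit(X_train, y_train)
baseline_mae = mean_absolute_error(y_test, baseline.predict(X_test))
print(f"Baseline MAE: {baseline_mae:.2f}")
If the random forest's MAE sits clearly below the baseline's, the model is learning patterns beyond the average income.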
Let me show you, on a final note, how you can save your trained model to a file for future deployment.
joblib.dump(model, "employee_income_model.joblib")
print("Model saved as employee_income_model.joblib")
Having your trained model saved in a .joblib file is useful for future deployment, allowing you to reload and reuse it directly without having to train it again from scratch. Think of it as "freezing" your whole preprocessing pipeline and the trained model into a portable object. Quick options for future use and deployment include plugging it into a simple Python script or notebook, or building a lightweight web app with tools like Streamlit, Gradio, or Flask.
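For instance, reloading the saved file gives you back the full pipeline, ready to predict on any new data with the same columns as the original DataFrame; here we simply reuse a few test rows for illustration:
# Reload the frozen pipeline and predict on a handful of unseen rows
loaded_model = joblib.load("employee_income_model.joblib")
print(loaded_model.predict(X_test.head()))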
# Wrapping Up
In this article, we have built together an introductory machine learning model for regression, specifically to predict employee incomes, outlining the necessary steps from raw dataset to clean, preprocessed DataFrame, and from DataFrame to ready-to-deploy model.
Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.

