
Build Your Own Simple Data Pipeline with Python and Docker



Image by Author | Ideogram

 

Data is the asset that drives our work as data professionals. Without proper data, we cannot perform our tasks, and our business will fail to gain a competitive advantage. Securing the right data is therefore crucial for any data professional, and data pipelines are the systems designed for this purpose.

Data pipelines are systems designed to move and transform data from one source to another. These systems are part of the overall infrastructure of any business that relies on data, as they guarantee that our data is reliable and always ready to use.

Building a data pipeline may sound complex, but a few simple tools are enough to create reliable data pipelines with just a few lines of code. In this article, we will explore how to build a simple data pipeline using Python and Docker that you can apply in your everyday data work.

Let’s get into it.

 

Building the Data Pipeline

 
Before we build our data pipeline, let's review the concept of ETL, which stands for Extract, Transform, and Load. ETL is a process in which the data pipeline performs the following actions:

  • Extract data from various sources.
  • Transform data into a valid format.
  • Load data into an accessible storage location.

ETL is a standard pattern for data pipelines, so what we build will follow this structure.

With Python and Docker, we can build a data pipeline around the ETL process with a simple setup. Python is a valuable tool for orchestrating any data flow activity, while Docker is useful for managing the data pipeline application's environment using containers.

Let's set up our data pipeline with Python and Docker.

 

Step 1: Preparation

First, make sure that you have Python and Docker installed on your system (we will not cover the installation here).

For our example, we will use the heart attack dataset from Kaggle as the data source to develop our ETL process.

With everything in place, we will prepare the project structure. Overall, the simple data pipeline will have the following skeleton:

simple-data-pipeline/
├── app/
│   └── pipeline.py
├── data/
│   └── Medicaldataset.csv
├── Dockerfile
├── requirements.txt
└── docker-compose.yml

 

There’s a primary folder referred to as simple-data-pipeline, which incorporates:

  • An app folder containing the pipeline.py file.
  • A information folder containing the supply information (Medicaldataset.csv).
  • The necessities.txt file for surroundings dependencies.
  • The Dockerfile for the Docker configuration.
  • The docker-compose.yml file to outline and run our multi-container Docker software.

We’ll first fill out the necessities.txt file, which incorporates the libraries required for our mission.

On this case, we’ll solely use the next library:
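Judging from the imports in pipeline.py below, pandas is the only third-party dependency, so requirements.txt holds a single line:

pandas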

 

In the next section, we will set up the data pipeline using our sample data.

 

Step 2: Set Up the Pipeline

We will set up the Python pipeline.py file for the ETL process. In our case, we will use the following code.

import pandas as pd
import os

# Paths inside the container; docker-compose mounts the local data folder at /data
input_path = os.path.join("/data", "Medicaldataset.csv")
output_path = os.path.join("/data", "CleanedMedicalData.csv")

def extract_data(path):
    df = pd.read_csv(path)
    print("Data Extraction completed.")
    return df

def transform_data(df):
    # Drop rows with missing values and normalize column names to lower_snake_case
    df_cleaned = df.dropna()
    df_cleaned.columns = [col.strip().lower().replace(" ", "_") for col in df_cleaned.columns]
    print("Data Transformation completed.")
    return df_cleaned

def load_data(df, output_path):
    df.to_csv(output_path, index=False)
    print("Data Loading completed.")

def run_pipeline():
    df_raw = extract_data(input_path)
    df_cleaned = transform_data(df_raw)
    load_data(df_cleaned, output_path)
    print("Data pipeline completed successfully.")

if __name__ == "__main__":
    run_pipeline()

 

The pipeline follows the ETL process, where we load the CSV file, perform data transformations such as dropping missing data and cleaning the column names, and load the cleaned data into a new CSV file. We wrapped these steps into a single run_pipeline function that executes the entire process.
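To see what the column-cleaning step does, here is a quick illustration with made-up column names (the real names depend on the dataset):

cols = ["Age", " Heart Rate", "Blood Sugar "]
print([c.strip().lower().replace(" ", "_") for c in cols])
# ['age', 'heart_rate', 'blood_sugar']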

 

Step 3: Set Up the Dockerfile

With the Python pipeline file ready, we will fill in the Dockerfile to set up the configuration for the Docker container using the following code:

FROM python:3.10-slim

WORKDIR /app
COPY ./app /app
COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

CMD ["python", "pipeline.py"]

 

In the code above, we specify that the container will use Python version 3.10 as its environment. Next, we set the container's working directory to /app and copy everything from our local app folder into the container's app directory. We also copy the requirements.txt file and run pip install within the container. Finally, we specify the command that runs the Python script when the container starts.

With the Dockerfile ready, we will prepare the docker-compose.yml file to manage the overall execution:

version: '3.9'

services:
  data-pipeline:
    build: .
    container_name: simple_pipeline_container
    volumes:
      - ./data:/data

 

When executed, the YAML file above builds the Docker image from the current directory using the available Dockerfile. We also mount the local data folder to the /data folder inside the container, making the dataset accessible to our script; this is the same /data path that pipeline.py reads from and writes to.

 

Executing the Pipeline

 
With all the files ready, we will execute the data pipeline in Docker. Go to the project root folder and run the following command in your terminal to build the Docker image and execute the pipeline.

docker compose up --build

 

If you run this successfully, you will see an informational log like the following:

 ✔ data-pipeline                           Built                                                                                   0.0s 
 ✔ Network simple_docker_pipeline_default  Created                                                                                 0.4s 
 ✔ Container simple_pipeline_container     Created                                                                                 0.4s 
Attaching to simple_pipeline_container
simple_pipeline_container  | Data Extraction completed.
simple_pipeline_container  | Data Transformation completed.
simple_pipeline_container  | Data Loading completed.
simple_pipeline_container  | Data pipeline completed successfully.
simple_pipeline_container exited with code 0

 

If everything executes successfully, you will see a new CleanedMedicalData.csv file in your data folder.
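If you want to double-check the result, a quick look at the output with pandas (run locally, assuming pandas is installed on your machine) should show snake_case column names and no remaining missing values:

import pandas as pd

cleaned = pd.read_csv("data/CleanedMedicalData.csv")
print(cleaned.columns.tolist())    # lower_snake_case names from the transform step
print(cleaned.isna().sum().sum())  # 0, since dropna() removed missing rows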

Congratulations! You have just created a simple data pipeline with Python and Docker. Try using various data sources and ETL processes to see if you can handle a more complex pipeline.
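One easy way to start is by growing the transform step before touching the rest of the pipeline. Here is a rough sketch of what that might look like; the extra checks are purely illustrative, and the "age" column is an assumption about the dataset:

def transform_data(df):
    df_cleaned = df.dropna().drop_duplicates()
    df_cleaned.columns = [col.strip().lower().replace(" ", "_") for col in df_cleaned.columns]
    # Dataset-specific sanity check (hypothetical column name):
    if "age" in df_cleaned.columns:
        df_cleaned = df_cleaned[df_cleaned["age"].between(0, 120)]
    print("Data Transformation completed.")
    return df_cleaned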

 

Conclusion

 
Understanding data pipelines is crucial for every data professional, as they are essential for acquiring the right data for their work. In this article, we explored how to build a simple data pipeline using Python and Docker and learned how to execute it.

I hope this has helped!
 
 

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.

