Your First Containerized Machine Learning Deployment with Docker and FastAPI

By Josh | July 30, 2025


Image by Editor | ChatGPT

Introduction

Deploying machine learning models can seem complex, but modern tools can streamline the process. FastAPI is a high-performance web framework for building APIs, while Docker allows you to run applications in isolated, containerized environments. Combining these two technologies simplifies deployment across different systems, ensures scalability, and makes maintenance easier. This approach helps avoid dependency conflicts in production and creates a reliable pipeline for serving ML models.


In this article, you’ll learn how to deploy a machine learning model using FastAPI and Docker.

Preparation

Before you start, ensure that you have the following installed on your system:

  • Python 3.8+ – Required for training the model and running the FastAPI server
  • pip – The package installer for Python, used to manage dependencies
  • Docker – A container platform used to build and run the application consistently across environments

You should also be comfortable with basic Python programming, have an understanding of machine learning concepts, and be familiar with RESTful APIs.

Here’s the recommended structure for your project:

iris-fastapi-app/
├── app/
│   ├── __init__.py
│   └── iris_model.pkl      # Trained model
├── main.py                 # FastAPI app
├── train_model.py          # Script to train and save the model
├── requirements.txt        # Dependencies
└── Dockerfile              # Docker build file

Training the Machine Learning Model

We’ll begin by training a simple random forest classifier using Scikit-learn’s Iris dataset. The script below, which you should save as train_model.py, handles loading the data, training the classifier, and serializing the model to a file using joblib. This saved model will be placed in the app/ directory, as defined in our project structure.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import joblib
import os

def train_and_save_model():
    # Ensure the 'app' directory exists
    os.makedirs('app', exist_ok=True)

    iris = load_iris()
    X, y = iris.data, iris.target

    model = RandomForestClassifier()
    model.fit(X, y)

    joblib.dump(model, 'app/iris_model.pkl')
    print("Model trained and saved to app/iris_model.pkl")

if __name__ == "__main__":
    train_and_save_model()

To train and save your model, run the script from your terminal:

python train_model.py

Creating a FastAPI Application

The next step is to expose the model through an API so it can be accessed by other applications or users. FastAPI makes this easy with minimal boilerplate and excellent support for type checking, validation, and documentation.

We’ll build a simple FastAPI application that loads the trained model and offers a single endpoint, /predict, to return predictions based on user input.

from fastapi import FastAPI
from pydantic import BaseModel
import joblib
import numpy as np

app = FastAPI()
# Load the trained model once at startup
model = joblib.load("app/iris_model.pkl")

# Request schema: the four Iris flower measurements, validated by Pydantic
class IrisInput(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(data: IrisInput):
    # Arrange the measurements as a single-row feature array for the model
    input_data = np.array([[data.sepal_length, data.sepal_width, data.petal_length, data.petal_width]])
    prediction = model.predict(input_data)
    return {"prediction": int(prediction[0])}

This app exposes a single endpoint, /predict, that accepts flower measurements and returns the predicted class.
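
Before containerizing, you can sanity-check the endpoint in-process with FastAPI's TestClient. The snippet below is a minimal sketch rather than part of the original project: it assumes the app above is saved as main.py, that app/iris_model.pkl already exists, and that the httpx package (required by TestClient in recent FastAPI/Starlette versions) is installed.

from fastapi.testclient import TestClient

from main import app  # the FastAPI app defined above (assumes main.py is importable)

client = TestClient(app)

# A valid measurement: the classic Iris-setosa sample
sample = {
    "sepal_length": 5.1,
    "sepal_width": 3.5,
    "petal_length": 1.4,
    "petal_width": 0.2,
}
response = client.post("/predict", json=sample)
print(response.status_code)  # expect 200
print(response.json())       # e.g. {"prediction": 0}

# Pydantic validation rejects malformed input with a 422 response
bad = {"sepal_length": "not a number"}
print(client.post("/predict", json=bad).status_code)  # expect 422

Because TestClient drives the application in-process, no server or container needs to be running for this check.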

Writing the Dockerfile

To run this FastAPI application in a containerized environment, you need to create a Dockerfile. This file contains instructions for Docker to build an image that packages your application and its dependencies. Create a file in your project’s root directory with the following contents and name it Dockerfile, with no file extension.

# Use an official Python runtime as a base image
FROM python:3.10-slim

# Set the working directory
WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application's code
COPY . .

# Expose the port
EXPOSE 8000

# Run the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Creating the requirements.txt File

The requirements.txt file is used by pip to install all the necessary dependencies in your Docker container. It should include all the libraries used in your project:

fastapi
uvicorn
scikit-learn
joblib
numpy

You can write this file by hand, or generate it from your current environment with:

pip freeze > requirements.txt

Note that pip freeze pins every package installed in your environment, so it is best run from a clean virtual environment containing only this project's dependencies.

Building and Running the Docker Container

Once you have your FastAPI application, model, and Dockerfile ready, the next step is to containerize the application with Docker and run it. This process ensures that your app runs reliably in any environment.

First, build the Docker image:

docker build -t iris-fastapi-app .

Then, run the container:

docker run -d -p 8000:8000 iris-fastapi-app

This command runs the container in detached mode (-d) and maps the container's port 8000 to port 8000 on your machine (-p 8000:8000).

Testing the API Endpoint

Now that your FastAPI app is running in a Docker container, you can test the API locally.

You can verify it from your browser, with a tool like Postman, or with curl:

curl -X POST "http://localhost:8000/predict" \
     -H "Content-Type: application/json" \
     -d '{"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2}'

Expected output:

{"prediction": 0}

FastAPI also provides interactive documentation at http://localhost:8000/docs. You can use this Swagger UI to test and troubleshoot the /predict endpoint directly in your browser.
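
If you prefer testing from Python rather than curl, here is a minimal sketch using the third-party requests library (not required by the project itself; install it with pip install requests), assuming the container is still listening on localhost:8000:

import requests

# The same Iris-setosa sample used in the curl example
payload = {
    "sepal_length": 5.1,
    "sepal_width": 3.5,
    "petal_length": 1.4,
    "petal_width": 0.2,
}

# POST the measurements to the running container and print the JSON response
response = requests.post("http://localhost:8000/predict", json=payload, timeout=10)
print(response.json())  # e.g. {"prediction": 0}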

Improving Model Serving

While the basic setup works well for initial deployment, real-world scenarios often require enhancements to improve the development experience and manage environment-specific configurations. Here are a few pointers to keep in mind.

Enabling Live Reload During Development

When developing locally, it's helpful to enable automatic reloading so that the API restarts whenever you change the code. FastAPI runs on Uvicorn, which supports this out of the box. For development, modify the CMD line in your Dockerfile (or override the command via docker-compose) like this:

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

Note: The --reload flag is intended for development only. Avoid using it in production environments.

Using Environment Variables

Instead of hardcoding paths or configuration values, use environment variables. This makes your app more flexible and production-ready.

For example, you can refactor model loading in main.py:

import os

model_path = os.getenv("MODEL_PATH", "app/iris_model.pkl")
model = joblib.load(model_path)

You can also pass environment variables in your Docker run command:

docker run -d -p 8000:8000 -e MODEL_PATH=app/iris_model.pkl iris-fastapi-app

Conclusion

Deploying machine learning models with FastAPI and Docker is an efficient and scalable approach. FastAPI offers a high-performance way to expose models as APIs, while Docker ensures consistent behavior across environments. Together, they create a workflow that simplifies development, testing, and deployment, helping your machine learning service become more robust and production-ready.


