
Building Video Game Recommender Systems with FastAPI, PostgreSQL, and Render: Part 1

By Admin | September 29, 2025


Introduction

Recommender systems allow an application to generate intelligent recommendations for a user, effectively sorting relevant content out from the rest. In this article, we build and deploy a dynamic video game recommender system leveraging PostgreSQL, FastAPI, and Render to recommend new games for a user based on those they have interacted with. The intent is to offer a clear example of how a standalone recommender system can be built, which could then be tied into a front-end system or other application.

For this project, we use video game data available from Steam's API, but this could easily be replaced with whatever product data you're interested in; the key steps will be the same. We'll cover how to store this data in a database, vectorize the game tags, generate similarity scores based on the games a user has interacted with, and return a series of relevant recommendations. By the end of this article, we'll have this recommender system deployed as a web application with FastAPI such that whenever a user interacts with a new game, we can dynamically generate and store a new set of recommendations for that user.


The following tools will be used:

  • PostgreSQL
  • FastAPI
  • Docker
  • Render

Those interested in just the GitHub repository can find it here.

Table of Contents

Due to the length of this project, it's divided into two articles. The first portion covers the setup and theory behind the project (steps 1–5 shown below), and the second half covers deploying it. If you're looking for the second half, it's located here.

Part 1

  1. Dataset Overview
  2. Overall System Architecture
  3. Database Setup
  4. FastAPI Setup
    – Models
    – Routes
  5. Building the Similarity Pipeline

Part 2

  1. Deploying a PostgreSQL database on Render
  2. Deploying a FastAPI app as a Render Web Application
    – Dockerizing our application
    – Pushing the Docker image to DockerHub
    – Pulling from DockerHub to Render

Dataset Overview

The dataset for this project contains data for the top ~2000 games from the Steamworks API. This data is free and licensed for personal and commercial use, subject to the terms of service; there is a 200 requests/5 minutes rate limit, which resulted in us working with only a subset of the data. The terms of service can be found here.

An overview of the games dataset is shown below. Most of the fields are relatively self-descriptive; the key thing to note is that the unique product identifier is appid. In addition to this dataset, we also have several more tables that we'll detail below; the most important one for our recommender system is a game tags table, which contains the appid values mapped to each tag associated with the game (strategy, RPG, card game, etc.). These were drawn from the categories field shown in the Data Overview and then pivoted to create the game_tags table so that there is a unique row for each appid:category combination; a sketch of this pivot follows the figure below.

Figure 2: Dataset Overview
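
To make that pivot concrete, here is a minimal pandas sketch. The column names match the tables described in this article, but the sample rows and the assumption that categories is a comma-separated string are hypothetical.

import pandas as pd

# Hypothetical sample of the games data; assumes categories is a comma-separated string
games_df = pd.DataFrame({
    "appid": ["10", "20"],
    "categories": ["strategy,rpg", "card game"],
})

# One row per appid:category combination, matching the game_tags table layout
game_tags_df = (
    games_df.assign(category=games_df["categories"].str.split(","))
            .explode("category")[["appid", "category"]]
)
print(game_tags_df)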

For a more detailed overview of the structure of our project, see the diagram below.

Figure 3: Project File Structure

Now we’ll present a fast overview of the structure of this venture after which dive into tips on how to populate our database.

Architecture

For our recommender system, we'll use a PostgreSQL database with a FastAPI data access and processing layer that will allow us to add or remove games from a user's game list. Users making changes to their game library via a FastAPI POST request will also kick off a recommendation pipeline leveraging FastAPI's Background Tasks feature, which will query their liked games from the database, calculate a similarity score against non-liked games, and update a user_recommendation table with their new top-N recommended games. Finally, both the PostgreSQL database and the FastAPI service will be deployed on Render so they can be accessed beyond our local environment. For this deployment step, any cloud service could have been used, but we chose Render in this case for its simplicity.

To recap, our overall workflow from the user's perspective will look like this:

  1. The user adds a game to their library by making a POST request through FastAPI to our database (see the example request after the diagram).
    • If we wanted to attach our recommender system to a front-end application, we could simply tie this POST API into a user interface.
  2. This POST request kicks off a FastAPI background task that runs our recommender pipeline.
  3. The recommender pipeline queries our database for the user's game list and the global games list.
  4. A similarity score is then calculated between the user's games and all games using our game tags.
  5. Finally, our recommender pipeline makes a POST request to the database to update the recommended games table for that user.
Figure 4: Recommender System Diagram
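
As a sketch of what step 1 could look like from the client side, the snippet below sends the POST request against the FastAPI app defined later in this article, assumed to be running locally on port 8000; the username and appid values are hypothetical.

import requests

# Hypothetical user 'liking' a game; fields match the UserGameModel defined later
payload = {
    "username": "demo_user",
    "appid": "12345",
    "shelf": "Wish_List",
    "rating": 5.0,
    "review": "Loved it",
}

response = requests.post("http://127.0.0.1:8000/api/v1/user_game/", json=payload)
print(response.status_code, response.json())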

Setting Up the Database

Before we build our recommender system, the first step is to set up our database. Our basic database diagram is shown in Figure 5. We previously discussed our game table above; this is the base dataset that the rest of our data is generally derived from. A full list of our tables follows:

  • Game Table: Contains base game data for each unique game in our database.
  • User Table: A dummy user table containing example information, populated for illustration.
  • User_Game Table: Contains the mappings between all games that a user has 'liked'; this table is one of the base tables used to generate recommendations, capturing which games a user is interested in.
  • Game_Tags Table: Contains an appid:game_tag mapping, where a game tag could be something like 'strategy', 'rpg', or 'comedy', a descriptive tag that captures part of the essence of a game. There are multiple tags mapped to each appid.
  • User_Recommendation Table: This is our target table that will be updated by our pipeline. Whenever a user interacts with a new game, our recommendation pipeline will run and generate a new series of recommendations for that user that will be stored here.
Figure 5: Database Diagram

To set up these tables, we can simply run our src/load_database.py file. This file creates and populates our tables in a couple of steps, which are outlined below. Note that right now we're going to focus on understanding how to write this data to a generic database, so all you need to know is that the External_Database_Url below is the URL to whatever database you want to use. In the second half of this article, we'll walk through how to set up a database on Render and copy its URL into your .env file.

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.ext.declarative import declarative_base
import os
from dotenv import load_dotenv
from utils.db_handler import DatabaseHandler
import pandas as pd
import uuid
import sys
from sqlalchemy.exc import OperationalError
import psycopg2

# Loading environment variables
load_dotenv(override=True)

# Construct PostgreSQL connection URL for Render
URL_database = os.environ.get("External_Database_Url")

# Initialize DatabaseHandler with our URL
engine = DatabaseHandler(URL_database)

# Loading initial data
users_df = pd.read_csv("Data/users.csv")
games_df = pd.read_csv("Data/games.csv")
user_games_df = pd.read_csv("Data/user_games.csv")
user_recommendations_df = pd.read_csv("Data/user_recommendations.csv")
game_tags_df = pd.read_csv("Data/game_tags.csv")

First, we load five CSV files into dataframes from our Data folder; we have one file for each of the tables shown in our database diagram. We also establish a connection to our database by declaring an engine variable; this engine variable uses a custom DatabaseHandler class along the lines of the sketch below. This class takes a connection string to our database on Render (or your preferred cloud service), passed in from our .env file, and contains all of our database connect, update, delete, and test functionality.
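
The full DatabaseHandler implementation lives in the project repository; the following is a minimal sketch reconstructed from the methods this article calls (create_table, delete_table, populate_table_dynamic, and test_table), not the author's exact code.

from sqlalchemy import create_engine, text, inspect
import pandas as pd

class DatabaseHandler:
    def __init__(self, database_url: str):
        # One engine per handler; each method opens a short-lived connection from it
        self.engine = create_engine(database_url)

    def create_table(self, creation_query: str):
        # Execute a CREATE TABLE IF NOT EXISTS statement
        with self.engine.begin() as conn:
            conn.execute(text(creation_query))

    def delete_table(self, table_name: str):
        # Drop a table so the load script can rebuild from scratch
        with self.engine.begin() as conn:
            conn.execute(text(f"DROP TABLE IF EXISTS {table_name}"))

    def populate_table_dynamic(self, df: pd.DataFrame, table_name: str):
        # Append the dataframe's rows to the existing table
        df.to_sql(table_name, self.engine, if_exists="append", index=False)

    def test_table(self, table_name: str) -> dict:
        # Report whether the table exists and holds at least one row
        exists = inspect(self.engine).has_table(table_name)
        has_data = False
        if exists:
            with self.engine.connect() as conn:
                count = conn.execute(text(f"SELECT COUNT(*) FROM {table_name}")).scalar()
                has_data = count > 0
        return {"table_exists": exists, "table_has_data": has_data}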

After loading our data and instantiating our DatabaseHandler class, we must define a query to create each of the five tables and execute those queries using the DatabaseHandler.create_table function. This is a very simple function that connects to our database, executes the query, and closes the connection, leaving us with the five tables shown in our database diagram; however, they are currently empty.

# Defining queries to create tables
user_table_creation_query = """CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY,
    username VARCHAR(255) UNIQUE NOT NULL,
    password VARCHAR(255) NOT NULL,
    email VARCHAR(255) NOT NULL,
    role VARCHAR(50) NOT NULL
    )
    """
game_table_creation_query = """CREATE TABLE IF NOT EXISTS games (
    id UUID PRIMARY KEY,
    appid VARCHAR(255) UNIQUE NOT NULL,
    title VARCHAR(255) NOT NULL,
    type VARCHAR(255),
    is_free BOOLEAN DEFAULT FALSE,
    short_description TEXT,
    detailed_description TEXT,
    developers VARCHAR(255),
    publishers VARCHAR(255),
    price VARCHAR(255),
    genres VARCHAR(255),
    categories VARCHAR(255),
    release_date VARCHAR(255),
    platforms TEXT,
    metacritic_score FLOAT,
    recommendations INTEGER
    )
    """

user_games_query = """CREATE TABLE IF NOT EXISTS user_games (
    id UUID PRIMARY KEY,
    username VARCHAR(255) NOT NULL,
    appid VARCHAR(255) NOT NULL,
    shelf VARCHAR(50) DEFAULT 'Wish_List',
    rating FLOAT DEFAULT 0.0,
    review TEXT
    )
    """
recommendation_table_creation_query = """CREATE TABLE IF NOT EXISTS user_recommendations (
    id UUID PRIMARY KEY,
    username VARCHAR(255),
    appid VARCHAR(255),
    similarity FLOAT
    )
    """

game_tags_creation_query = """CREATE TABLE IF NOT EXISTS game_tags (
    id UUID PRIMARY KEY,
    appid VARCHAR(255) NOT NULL,
    category VARCHAR(255) NOT NULL
    )
    """



# Dropping any existing tables so we start from a clean slate
engine.delete_table('user_recommendations')
engine.delete_table('user_games')
engine.delete_table('game_tags')
engine.delete_table('games')
engine.delete_table('users')

# Create tables
engine.create_table(user_table_creation_query)
engine.create_table(game_table_creation_query)
engine.create_table(user_games_query)
engine.create_table(recommendation_table_creation_query)
engine.create_table(game_tags_creation_query)

Following the initial table setup, we run a quality check to ensure each of our datasets has the required ID column, populate the data from the dataframes into the appropriate table, and then test to make sure that the tables were populated correctly. The test_table function returns a dictionary of the form {'table_exists': True, 'table_has_data': True} if the setup worked correctly.

# Ensuring each row of each dataframe has a unique ID
if 'id' not in users_df.columns:
    users_df['id'] = [str(uuid.uuid4()) for _ in range(len(users_df))]
if 'id' not in games_df.columns:
    games_df['id'] = [str(uuid.uuid4()) for _ in range(len(games_df))]
if 'id' not in user_games_df.columns:
    user_games_df['id'] = [str(uuid.uuid4()) for _ in range(len(user_games_df))]
if 'id' not in user_recommendations_df.columns:
    user_recommendations_df['id'] = [str(uuid.uuid4()) for _ in range(len(user_recommendations_df))]
if 'id' not in game_tags_df.columns:
    game_tags_df['id'] = [str(uuid.uuid4()) for _ in range(len(game_tags_df))]

# Populate the five tables with data from the dataframes
engine.populate_table_dynamic(users_df, 'users')
engine.populate_table_dynamic(games_df, 'games')
engine.populate_table_dynamic(user_games_df, 'user_games')
engine.populate_table_dynamic(user_recommendations_df, 'user_recommendations')
engine.populate_table_dynamic(game_tags_df, 'game_tags')

# Testing that the tables were created and populated correctly
print(engine.test_table('users'))
print(engine.test_table('games'))
print(engine.test_table('user_games'))
print(engine.test_table('user_recommendations'))
print(engine.test_table('game_tags'))

Getting Started with FastAPI

Now that we have our database set up and populated, we need to build the methods to access, update, and delete data using FastAPI. FastAPI allows us to easily build standardized (and fast) APIs to enable interaction with our database. The FastAPI docs offer a great step-by-step tutorial that can be found here. As a high-level summary, there are several great features that make FastAPI ideal for serving as the interaction layer between a database and a front-end application.

  1. Standardization: FastAPI allows us to define routes to interact with our tables in a standardized manner using GET, POST, PUT, DELETE, etc. methods. This standardization allows us to build a data access layer in pure Python that can then be interacted with from a wide variety of front-end applications. We simply call the API methods we want in the front end, regardless of what language it's built in.
  2. Data Validation: As we'll show below, we need to define a Pydantic data model for each object we interact with (think our games and users tables). The main advantage of this is that it ensures all our variables have defined data types; for example, if we define our Game object such that the rating field is of type float and a user tries to make a POST request to add a new entry with a rating of "great", it won't work (see the sketch after this list). This built-in data validation will help us prevent all kinds of data quality issues as our system scales.
  3. Asynchronous: FastAPI functions can run asynchronously, meaning one of them isn't dependent on another finishing. This can significantly improve performance because we won't have a fast task waiting on a slow one to complete.
  4. Swagger Docs Built In: FastAPI has a built-in UI that we can navigate to on localhost, enabling us to easily test and interact with our routes.
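
Here is a minimal sketch of point 2 in isolation, using plain Pydantic outside of FastAPI; the UserGameExample model and its fields are hypothetical stand-ins for the models defined later.

from pydantic import BaseModel, ValidationError

class UserGameExample(BaseModel):
    appid: str
    rating: float

try:
    # "great" cannot be coerced to a float, so validation fails
    UserGameExample(appid="12345", rating="great")
except ValidationError as e:
    print(e)  # in FastAPI, the same failure becomes a 422 response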

FastAPI Models

The FastAPI portion of our project relies on two main files: models.py, which defines the data models that we'll be interacting with (games, users, etc.), and main.py, which defines our actual FastAPI app and contains our routes. In the context of FastAPI, routes define the different paths for processing requests. For example, we might have a /games path to request games from our database.

First, let's discuss our models.py file. In this file, we define all of our models. While we have different models for different objects, the general approach is the same, so we'll only discuss the games model, shown below, in detail. The first thing you'll notice below is that we have two separate classes defined for our Game object: a GameModel class that inherits from the Pydantic base model, and a Game class that inherits from the SQLAlchemy declarative_base. The natural question then is: why do we have two classes for one data structure (our games data structure)?

If we weren't using an SQL database for this project and instead read each of our CSV files into a dataframe every time main.py was run, then we wouldn't need the Game class, only the GameModel class. In this scenario, we'd read in our games.csv dataframe, and FastAPI would use the GameModel class to ensure datatypes were correctly adhered to.

However, because we're using an SQL database, it makes more sense to have separate classes for our API and our database, as the two classes have slightly different jobs. Our API class handles data validation, serialization, and optional fields, and our database class handles database-specific concerns like defining primary/foreign keys, defining which table the object maps to, and protecting secure data. To expand on the last point, we might have sensitive fields in our database that are for internal consumption only, and we don't want to expose them to a user through an API (password, for example). We can address this concern by having a separate user-facing Pydantic class and an internal SQLAlchemy one.

Below is an example of how this can be implemented for our Games object; we have separate classes defined for our other tables, which can be found here; however, the general structure is the same.

from pydantic import BaseModel
from uuid import UUID, uuid4
from typing import Optional
from enum import Enum
from sqlalchemy import Column, String, Float, Integer
import sqlalchemy.dialects.postgresql as pg
from sqlalchemy.dialects.postgresql import UUID as SA_UUID
from sqlalchemy.ext.declarative import declarative_base
import uuid

# loading sqlmodel
from sqlmodel import Field, Session, SQLModel, create_engine, select

# Initialize the base class for SQLAlchemy models
Base = declarative_base()

# This is the Game model for the database
class Game(Base):
    __tablename__ = "optigame_products"  # Table name in the PostgreSQL database

    id = Column(pg.UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, unique=True, nullable=False)
    appid = Column(String, unique=True, nullable=False)
    title = Column(String, nullable=False)
    type = Column(String, nullable=True)
    is_free = Column(pg.BOOLEAN, nullable=True, default=False)
    short_description = Column(String, nullable=True)
    detailed_description = Column(String, nullable=True)
    developers = Column(String, nullable=True)
    publishers = Column(String, nullable=True)
    price = Column(String, nullable=True)
    genres = Column(String, nullable=True)
    categories = Column(String, nullable=True)
    release_date = Column(String, nullable=True)
    platforms = Column(String, nullable=True)
    metacritic_score = Column(Float, nullable=True)
    recommendations = Column(Integer, nullable=True)

# This is the user-facing Pydantic model for the same data
class GameModel(BaseModel):
    id: Optional[UUID] = None
    appid: str
    title: str
    type: Optional[str] = None
    is_free: Optional[bool] = False
    short_description: Optional[str] = None
    detailed_description: Optional[str] = None
    developers: Optional[str] = None
    publishers: Optional[str] = None
    price: Optional[str] = None
    genres: Optional[str] = None
    categories: Optional[str] = None
    release_date: Optional[str] = None
    platforms: Optional[str] = None
    metacritic_score: Optional[float] = None
    recommendations: Optional[int] = None

    class Config:
        orm_mode = True  # Enable ORM mode to work with SQLAlchemy objects
        from_attributes = True  # Enable attribute access for SQLAlchemy objects

Setting Up FastAPI Routes

Once we have our models defined, we can then create methods to interact with them and request data from the database (GET), add data to the database (POST), or remove data from the database (DELETE). Below is an example of how we can define a GET request for our games model. We have some initial setup at the beginning of our main.py file to fetch the database URL and connect to it. Then we initialize our app and add middleware to define which URLs we'll accept requests from. Because we'll be deploying the FastAPI project on Render and sending requests to it from our local machine, the only origin we're allowing is localhost port 8000. We then define our app.get method called fetch_products, which takes an appid input, queries our database for Game objects where appid is equal to our filter appid, and returns those products.

Note: the below snippet contains just the setup and the first GET method; the rest are fairly similar and available in the repo, so we won't give an in-depth explanation of each one here.

from fastapi import FastAPI, Depends, HTTPException, BackgroundTasks
from uuid import uuid4, UUID
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, Session
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()

# security imports
from fastapi.middleware.cors import CORSMiddleware
from fastapi.security import OAuth2PasswordBearer

# custom imports
from src.models import User, Game, GameModel, UserModel, UserGameModel, UserGame, GameSimilarity, GameSimilarityModel, UserRecommendation, UserRecommendationModel
from src.similarity_pipeline import UserRecommendationService

# Load the database connection string from an environment variable or .env file
DATABASE_URL = os.environ.get("Internal_Database_Url")

# Creating a connection to the database
engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

# Create the database tables (if they don't already exist)
Base.metadata.create_all(bind=engine)

# Dependency to get the database session
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

# Initialize the FastAPI app
app = FastAPI(title="Game Store API", version="1.0.0")

# Add CORS middleware to allow requests
origins = ["http://localhost:8000"]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


#-------------------------------------------------#
# ----------PART 1: GET METHODS-------------------#
#-------------------------------------------------#
@app.get("/")
async def root():
    return {"message": "Hello World"}


@app.get("/api/v1/games/")
async def fetch_products(appid: str = None, db: Session = Depends(get_db)):
    # Query the database using the SQLAlchemy Game model
    if appid:
        products = db.query(Game).filter(Game.appid == appid).all()
    else:
        products = db.query(Game).all()
    return [GameModel.from_orm(product) for product in products]

Once we have our main.py defined, we can finally run it from our base project directory using the command below.

uvicorn src.main:app --reload

Once this is done, we can navigate to http://127.0.0.1:8000/docs and see the interactive FastAPI environment shown below. From this page, we can test any of the methods defined in our main.py file. In the case of our fetch_products function, we can pass it an appid and return any matching games from our database.

Figure 6: FastAPI Swagger Docs

Building our Similarity Pipeline

We’ve got our database arrange and may entry and replace knowledge through FastAPI; it’s now time to show to the central function of this venture: a recommender pipeline. Recommender techniques are a well-researched discipline, and we’re not including any innovation right here; nonetheless, it will supply a transparent instance of tips on how to implement a fundamental recommender system utilizing FastAPI.

Getting Started: How Do We Recommend Products?

If we think about the question "How would I recommend new products that a user will like?", there are two approaches that make intuitive sense.

  1. Collaborative Recommendation Systems: If I have a series of users and a series of products, I could identify users with similar interests by their overall basket of products and then identify products 'missing' from a given user's basket. For example, if I have users 1–3 and products A–C, users 1–2 like all three products, but user 3 has so far only liked products A + B, then I might recommend them product C. This logically makes sense; all three users have a high degree of overlap in the products they've liked, but product C is missing from user 3's basket, so there is a high chance that they will like it as well. This method of generating recommendations by comparing like users is called collaborative filtering.
  2. Content-Based Recommendation Systems: If I have a series of products, I could identify products that are similar to products a user has liked and recommend those. For example, if I have a series of tags for each game, I could convert each game's series of tags into a vector of 1s and 0s and then use a similarity measure (in this case, cosine similarity) to measure the similarity between games based on their vectors (see the toy example after this list). Once I've done this, I can then return the top N games most similar to those liked by a user based on their similarity score.
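
As a toy example of the content-based approach, consider two games described by the (assumed) tag vocabulary [boardgame, deckbuilding, resource-management]:

from sklearn.metrics.pairwise import cosine_similarity

# Tag order: [boardgame, deckbuilding, resource-management]
game_a = [[1, 1, 0]]  # a deckbuilding boardgame
game_b = [[1, 0, 1]]  # a resource-management boardgame

# The games share one tag, so the score lands midway between 0 and 1
print(cosine_similarity(game_a, game_b))  # [[0.5]]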

More on recommender systems can be found here.

Because our initial dataset doesn't have a large number of users, we don't have the data required to suggest items based on user similarity; this is known as a cold start problem. Consequently, we will instead develop a content-based recommender system, as we have a significant amount of game data to work with.

To build our pipeline, we have to address two challenges: (1) how do we go about calculating similarity scores for a user, and (2) how do we automate this to run whenever a user makes an update to their games?

We'll go over how a similarity pipeline can be triggered every time a user makes a POST request by 'liking' a game, and then cover how to build the pipeline itself.

Tying the Recommender Pipeline to FastAPI

For now, imagine we have a recommendation service that will update our user_recommendation table. We want to make sure this service is called whenever a user updates their preferences. We can implement this in a couple of steps, as shown below. First, we define a generate_recommendations_background function; this function is responsible for connecting to our database, running the similarity pipeline, and then closing the connection. Next, we need to ensure it is called when a user makes a POST request (i.e., likes a new game); to do this, we simply add the function call at the end of our create_user_game POST request function.

The result of this workflow is that whenever a user makes a POST request to our user_game table, they call the create_user_game function, add a new user_game object to the database, and then run the similarity pipeline as a background task.

Note: The below POST method and helper function live in main.py with the rest of our FastAPI methods.

# importing similarity pipeline
from src.similarity_pipeline import UserRecommendationService

# Background task function
def generate_recommendations_background(username: str, database_url: str):
    """Background task to generate recommendations for a user"""
    # Create a new database session for the background task
    background_engine = create_engine(database_url)
    BackgroundSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=background_engine)

    db = BackgroundSessionLocal()
    try:
        recommendation_service = UserRecommendationService(db, database_url)
        recommendation_service.generate_recommendations_for_user(username)
    finally:
        db.close()

# POST method which calls the background task function
@app.post("/api/v1/user_game/")
async def create_user_game(user_game: UserGameModel, background_tasks: BackgroundTasks, db: Session = Depends(get_db)):
    # Check if the entry already exists
    existing = db.query(UserGame).filter_by(username=user_game.username, appid=user_game.appid).first()
    if existing:
        raise HTTPException(status_code=400, detail="User already has this game.")

    # Prepare data with defaults
    user_game_data = {
        "username": user_game.username,
        "appid": user_game.appid,
        "shelf": user_game.shelf if user_game.shelf is not None else "Wish_List",
        "rating": user_game.rating if user_game.rating is not None else 0.0,
        "review": user_game.review if user_game.review is not None else ""
    }
    if user_game.id is not None:
        user_game_data["id"] = UUID(str(user_game.id))

    # Save the user game to the database
    db_user_game = UserGame(**user_game_data)
    db.add(db_user_game)
    db.commit()
    db.refresh(db_user_game)

    # Trigger a background task to generate recommendations for this user
    background_tasks.add_task(generate_recommendations_background, user_game.username, DATABASE_URL)

    return db_user_game

Building the Recommender Pipeline

Now that we understand how our similarity pipeline is triggered when a user updates their liked games, it's time to dive into the mechanics of how the recommendation pipeline works. Our recommendation pipeline lives in similarity_pipeline.py and contains the UserRecommendationService class that we showed how to import and instantiate above. This class contains a series of helper functions that are ultimately all called in the generate_recommendations_for_user method. There are seven main steps, which we'll walk through one by one.

  1. Fetching a user's games: To generate relevant game recommendations, we need to retrieve the games that a user has already added to their game basket. This is done by calling our fetch_user_games helper function, which queries our user_games table using the username of the user making the POST request as an input and returns all games in their basket.
  2. Fetching game tags: To compare games, we need a dimension to compare them on, and that dimension is the tags associated with each game (strategy, board game, etc.). To retrieve the game:tag mapping, we call our fetch_all_game_tags function, which returns the tags for all the games in our database.
  3. Vectorizing game tags: To compare the similarity between games A and B, we first need to vectorize the game tags using our create_game_vectors function. This function takes a series of all tags in alphabetical order and checks whether each tag is associated with a given game. For example, if our total set of tags was [boardgame, deckbuilding, resource-management] and game 1 just had the boardgame tag associated with it, then its vector would be [1, 0, 0].
  4. Creating our user's vector: Once we have a vector representing each game, we then need an aggregate user vector to compare it to. To achieve this, we use our create_user_vector function, which generates an aggregate vector of the same length as our game vectors that we can then use to generate a similarity score between our user and every other game.
  5. Calculating similarity: We use the vectors created in steps 3 and 4 in our calculate_user_recommendations function, which calculates a cosine similarity score ranging from 0–1 and measures the similarity between each game and our user's aggregate vector.
  6. Deleting old recommendations: Before we populate our user_recommendations table with new recommendations for a user, we first must delete the old ones with delete_existing_recommendations. This deletes just the recommendations for the user who made the POST request; the others remain untouched.
  7. Populating new recommendations: After deleting the old recommendations, we then populate the new ones with save_recommendations.


from sqlalchemy.orm import Session
from sqlalchemy import create_engine, text
from src.models import UserGame, UserRecommendation
from sklearn.metrics.pairwise import cosine_similarity
import pandas as pd
import uuid
from typing import List
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class UserRecommendationService:
    def __init__(self, db_session: Session, database_url: str):
        self.db = db_session
        self.database_url = database_url
        self.engine = create_engine(database_url)

    def fetch_user_games(self, username: str) -> pd.DataFrame:
        """Fetch all games for a specific user"""
        query = text("SELECT username, appid FROM user_games WHERE username = :username")
        with self.engine.connect() as conn:
            result = conn.execute(query, {"username": username})
            data = result.fetchall()
            return pd.DataFrame(data, columns=['username', 'appid'])

    def fetch_all_game_tags(self) -> pd.DataFrame:
        """Fetch all game tags"""
        query = text("SELECT appid, category FROM game_tags")
        with self.engine.connect() as conn:
            result = conn.execute(query)
            data = result.fetchall()
            return pd.DataFrame(data, columns=['appid', 'category'])

    def create_game_vectors(self, tag_df: pd.DataFrame) -> tuple[pd.DataFrame, List[str], List[str]]:
        """Create game vectors from tags"""
        unique_tags = tag_df['category'].drop_duplicates().sort_values().tolist()
        unique_games = tag_df['appid'].drop_duplicates().sort_values().tolist()

        game_vectors = []
        for game in unique_games:
            tags = tag_df[tag_df['appid'] == game]['category'].tolist()
            vector = [1 if tag in tags else 0 for tag in unique_tags]
            game_vectors.append(vector)

        return pd.DataFrame(game_vectors, columns=unique_tags, index=unique_games), unique_tags, unique_games

    def create_user_vector(self, user_games_df: pd.DataFrame, game_vectors: pd.DataFrame, unique_tags: List[str]) -> pd.DataFrame:
        """Create a user vector from their played games"""
        if user_games_df.empty:
            return pd.DataFrame([[0] * len(unique_tags)], columns=unique_tags, index=['unknown_user'])

        username = user_games_df.iloc[0]['username']
        user_games = user_games_df['appid'].tolist()

        # Only keep games that exist in game_vectors
        user_games = [g for g in user_games if g in game_vectors.index]

        if not user_games:
            user_vector = [0] * len(unique_tags)
        else:
            played_game_vectors = game_vectors.loc[user_games]
            user_vector = played_game_vectors.mean(axis=0).tolist()

        return pd.DataFrame([user_vector], columns=unique_tags, index=[username])

    def calculate_user_recommendations(self, user_vector: pd.DataFrame, game_vectors: pd.DataFrame, top_n: int = 20) -> pd.DataFrame:
        """Calculate similarity between the user vector and all game vectors"""
        username = user_vector.index[0]
        user_vector_data = user_vector.iloc[0].values.reshape(1, -1)

        # Calculate similarities
        similarities = cosine_similarity(user_vector_data, game_vectors)
        similarity_df = pd.DataFrame(similarities.T, index=game_vectors.index, columns=[username])

        # Get top N recommendations
        top_games = similarity_df[username].nlargest(top_n)

        recommendations = []
        for appid, similarity in top_games.items():
            recommendations.append({
                "username": username,
                "appid": appid,
                "similarity": float(similarity)
            })

        return pd.DataFrame(recommendations)

    def delete_existing_recommendations(self, username: str):
        """Delete existing recommendations for a user"""
        self.db.query(UserRecommendation).filter(UserRecommendation.username == username).delete()
        self.db.commit()

    def save_recommendations(self, recommendations_df: pd.DataFrame):
        """Save new recommendations to the database"""
        for _, row in recommendations_df.iterrows():
            recommendation = UserRecommendation(
                id=uuid.uuid4(),
                username=row['username'],
                appid=row['appid'],
                similarity=row['similarity']
            )
            self.db.add(recommendation)
        self.db.commit()

    def generate_recommendations_for_user(self, username: str, top_n: int = 20):
        """Main method to generate recommendations for a specific user"""
        try:
            logger.info(f"Starting recommendation generation for user: {username}")

            # 1. Fetch the user's games
            user_games_df = self.fetch_user_games(username)
            if user_games_df.empty:
                logger.warning(f"No games found for user: {username}")
                return

            # 2. Fetch all game tags
            tag_df = self.fetch_all_game_tags()
            if tag_df.empty:
                logger.error("No game tags found in database")
                return

            # 3. Create game vectors
            game_vectors, unique_tags, unique_games = self.create_game_vectors(tag_df)

            # 4. Create the user vector
            user_vector = self.create_user_vector(user_games_df, game_vectors, unique_tags)

            # 5. Calculate recommendations
            recommendations_df = self.calculate_user_recommendations(user_vector, game_vectors, top_n)

            # 6. Delete existing recommendations
            self.delete_existing_recommendations(username)

            # 7. Save new recommendations
            self.save_recommendations(recommendations_df)

            logger.info(f"Successfully generated {len(recommendations_df)} recommendations for user: {username}")

        except Exception as e:
            logger.error(f"Error generating recommendations for user {username}: {str(e)}")
            self.db.rollback()
            raise
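
For local testing, the service can also be exercised outside of FastAPI. Below is a minimal sketch assuming the database from earlier in this article is populated and reachable, and that demo_user is a hypothetical username present in the user_games table.

import os
from dotenv import load_dotenv
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.similarity_pipeline import UserRecommendationService

# Reuse the same connection string the load script used
load_dotenv()
database_url = os.environ.get("External_Database_Url")

engine = create_engine(database_url)
SessionLocal = sessionmaker(bind=engine)
db = SessionLocal()
try:
    service = UserRecommendationService(db, database_url)
    service.generate_recommendations_for_user("demo_user")  # hypothetical username
finally:
    db.close()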

Wrapping Up

In this article, we covered how to set up a PostgreSQL database and a FastAPI application to run a game recommender system. However, we haven't yet gone over how to deploy this system to a cloud service to allow others to interact with it. For part two, covering exactly this, read on in Part 2.

Figures: All images, unless otherwise noted, are by the author.

Links

  1. GitHub Repository for Project: https://github.com/pinstripezebra/recommender_system
  2. FastAPI Docs: https://fastapi.tiangolo.com/tutorial/
  3. Recommender Systems: https://en.wikipedia.org/wiki/Recommender_system
  4. Cosine Similarity: https://en.wikipedia.org/wiki/Cosine_similarity