

# Introduction
AI coding tools are getting impressively good at writing Python code that works. They'll build entire features and implement complex algorithms in minutes. However, the code AI generates is often a pain to maintain.
If you're using tools like Claude Code, GitHub Copilot, or Cursor's agentic mode, you have probably experienced this. The AI helps you ship working code fast, but the cost shows up later. You've likely refactored a bloated function just to understand how it works weeks after it was generated.
The problem isn't that AI writes bad code (though it sometimes does); it's that AI optimizes for "working now" and completing the requirements in your prompt, while you need code that's readable and maintainable in the long run. This article shows you how to bridge this gap with a focus on Python-specific techniques.
# Avoiding the Blank Canvas Trap
The biggest mistake developers make is asking AI to start from scratch. AI agents work best with constraints and guidelines.
Before you write your first prompt, set up the basics of the project yourself. This means choosing your project structure, installing your core libraries, and implementing a few working examples to set the tone. This might sound counterproductive, but it helps get AI to write code that aligns better with what you need in your application.
Start by building a couple of features manually. If you're building an API, implement one complete endpoint yourself with all the patterns you want: dependency injection, proper error handling, database access, and validation. This becomes the reference implementation.
Say you write this first endpoint manually:
```python
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session

router = APIRouter()

# Assume get_db and the User model are defined elsewhere
@router.get("/users/{user_id}")
async def get_user(user_id: int, db: Session = Depends(get_db)):
    user = db.query(User).filter(User.id == user_id).first()
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    return user
```
When AI sees this pattern, it understands how we handle dependencies, how we query databases, and how we handle missing data.
The same applies to your project structure. Create your directories, set up your imports, and configure your testing framework. AI shouldn't be making these architectural decisions.
# Making Python's Type System Do the Heavy Lifting
Python's dynamic typing is flexible, but that flexibility becomes a liability when AI is writing your code. Make type hints essential guardrails instead of a nice-to-have in your application code.
Strict typing catches AI mistakes before they reach production. When you require type hints on every function signature and run mypy in strict mode, the AI can't take shortcuts. It can't return ambiguous types or accept parameters that might be strings or might be lists.
More importantly, strict types force better design. For example, an AI agent trying to write a function that accepts `data: dict` can make many assumptions about what's in that dictionary. However, an AI agent writing a function that accepts `data: UserCreateRequest`, where `UserCreateRequest` is a Pydantic model, has exactly one interpretation.
```python
# This constrains AI to write correct code
from pydantic import BaseModel, EmailStr

class UserCreateRequest(BaseModel):
    name: str
    email: EmailStr
    age: int

class UserResponse(BaseModel):
    id: int
    name: str
    email: EmailStr

def process_user(data: UserCreateRequest) -> UserResponse:
    pass

# Rather than this
def process_user(data: dict) -> dict:
    pass
```
Use libraries that enforce contracts: SQLAlchemy 2.0 with type-checked models and FastAPI with response models are excellent choices. These are not just good practices; they're constraints that keep AI on track.
Set mypy to strict mode and make passing type checks non-negotiable. When AI generates code that fails type checking, it will iterate until it passes. This automatic feedback loop produces better code than any amount of prompt engineering.
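As a minimal sketch of that feedback loop, here is a helper an agent might produce on its first attempt next to the version that survives `mypy --strict` (the function and field names are illustrative):

```python
# First attempt: this runs, but fails `mypy --strict` because `dict` is
# missing type parameters and `.get()` can return None where `str` is promised.
def get_name(data: dict) -> str:
    return data.get("name")

# After iterating on mypy's errors: parameterized dict, explicit None handling.
def get_name_strict(data: dict[str, str]) -> str:
    name = data.get("name")
    if name is None:
        raise KeyError("name is required")
    return name

print(get_name_strict({"name": "Ada"}))  # Ada
```

Note that both versions behave identically on well-formed input; the strict one simply cannot silently return `None` where a string is expected.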
# Creating Documentation to Guide AI
Most projects have documentation that developers ignore. For AI agents, you need documentation they actually use, like a README.md file with guidelines. This means a single file with clear, specific rules.
Create a CLAUDE.md or AGENTS.md file at your project root. Don't make it too long. Focus on what is unique about your project rather than general Python best practices.
Your AI guidelines should specify:
- Project structure and where different types of code belong
- Which libraries to use for common tasks
- Specific patterns to follow (point to example files)
- Explicit forbidden patterns
- Testing requirements
Here is an example AGENTS.md file:
```markdown
# Project Guidelines

## Structure
/src/api - FastAPI routers
/src/services - business logic
/src/models - SQLAlchemy models
/src/schemas - Pydantic models

## Patterns
- All services inherit from BaseService (see src/services/base.py)
- All database access goes through the repository pattern (see src/repositories/)
- Use dependency injection for all external dependencies

## Standards
- Type hints on all functions
- Docstrings using Google style
- Functions under 50 lines
- Run `mypy --strict` and `ruff check` before committing

## Never
- No bare except clauses
- No type: ignore comments
- No mutable default arguments
- No global state
```
The key is being specific. Don't simply say "follow best practices." Point to the exact file that demonstrates the pattern. Don't just say "handle errors properly"; show the error handling pattern you want.
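To make the pattern pointers concrete, a `src/services/base.py` like the one the guidelines reference might look like this. The class bodies here are a hypothetical sketch under the assumptions in the guidelines above, not code from a real project:

```python
from __future__ import annotations

import logging
from typing import Protocol

class UserRepository(Protocol):
    """Repository pattern: all database access hides behind this interface."""
    def find_by_id(self, user_id: int) -> dict[str, str] | None: ...

class BaseService:
    """All services inherit from this; dependencies are injected, never created."""
    def __init__(self, logger: logging.Logger | None = None) -> None:
        self.logger = logger or logging.getLogger(self.__class__.__name__)

class UserService(BaseService):
    def __init__(self, repo: UserRepository, logger: logging.Logger | None = None) -> None:
        super().__init__(logger)
        self.repo = repo

    def get_display_name(self, user_id: int) -> str:
        user = self.repo.find_by_id(user_id)
        if user is None:
            raise LookupError(f"user {user_id} not found")
        return user["name"]

# Because dependencies are injected, tests can swap in an in-memory fake:
class FakeRepo:
    def find_by_id(self, user_id: int) -> dict[str, str] | None:
        return {"name": "Ada"} if user_id == 1 else None

service = UserService(FakeRepo())
print(service.get_display_name(1))  # Ada
```

When an agent can open this file, "inherit from BaseService" stops being an abstract rule and becomes a pattern it can copy.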
# Writing Prompts That Point to Examples
Generic prompts produce generic code. Specific prompts that reference your existing codebase produce more maintainable code.
Instead of asking AI to "add authentication," walk it through the implementation with references to your patterns. Here is an example of such a prompt that points to examples:
```
Implement JWT authentication in src/services/auth_service.py. Follow the same
structure as UserService in src/services/user_service.py. Use bcrypt for
password hashing (already in requirements.txt).

Add the authentication dependency in src/api/dependencies.py following the
pattern of get_db.

Create Pydantic schemas in src/schemas/auth.py similar to user.py.

Add pytest tests in tests/test_auth_service.py using fixtures from conftest.py.
```
Notice how every instruction points to an existing file or pattern. You aren't asking AI to build out an architecture; you're asking it to apply existing patterns to a new feature.
When the AI generates code, review it against your patterns. Does it use the same dependency injection approach? Does it follow the same error handling? Does it organize imports the same way? If not, point out the discrepancy and ask it to align with the existing pattern.
# Planning Before Implementing
AI agents can move fast, which can sometimes make them less helpful if speed comes at the expense of structure. Use plan mode or ask for an implementation plan before any code gets written.
A planning step forces the AI to think through dependencies and structure. It also gives you a chance to catch architectural problems, such as circular dependencies or redundant services, before they're implemented.
Ask for a plan that specifies:
- Which files will be created or modified
- What dependencies exist between components
- Which existing patterns will be followed
- What tests are needed
Review this plan like you would review a design doc. Check that the AI understands your project structure. Verify it's using the right libraries and make sure it's not reinventing something that already exists.
If the plan looks good, let the AI execute it. If not, correct the plan before any code gets written. It's easier to fix a bad plan than to fix bad code.
# Asking AI to Write Tests That Actually Test
AI is great and super fast at writing tests. However, AI is not good at writing useful tests unless you're specific about what "useful" means.
Default AI test behavior is to test the happy path and nothing else. You get tests that verify the code works when everything goes right, which is exactly when you don't need tests.
Specify your testing requirements explicitly. For every feature, require:
- Happy path tests
- Validation error tests to check what happens with invalid input
- Edge case tests for empty values, None, boundary conditions, and more
- Error handling tests for database failures, external service failures, and the like
Point AI to your existing test files as examples. If you have good test patterns already, AI will write useful tests, too. If you don't have good tests yet, write a few yourself first.
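As a sketch, here is what that checklist looks like for a hypothetical `parse_limit` helper. The function and the assertions are illustrative; a real suite would live in pytest test functions and use shared fixtures:

```python
from __future__ import annotations

# Hypothetical helper under test: parses a "limit" query parameter.
def parse_limit(raw: str | None, default: int = 10, max_limit: int = 100) -> int:
    if raw is None or raw == "":          # edge case: missing or empty value
        return default
    value = int(raw)                      # raises ValueError on non-numeric input
    if not 1 <= value <= max_limit:
        raise ValueError(f"limit must be between 1 and {max_limit}")
    return value

# Happy path
assert parse_limit("25") == 25
# Edge cases: None, empty string, both boundaries
assert parse_limit(None) == 10
assert parse_limit("") == 10
assert parse_limit("1") == 1 and parse_limit("100") == 100
# Validation errors: invalid and out-of-range input must raise, not pass through
for bad in ("abc", "0", "101"):
    try:
        parse_limit(bad)
        raise AssertionError(f"{bad!r} should have raised")
    except ValueError:
        pass
```

The happy-path assertion is the one AI writes by default; the other three categories are the ones you have to demand.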
# Validating Output Systematically
After AI generates code, don't just check if it runs. Run it through a checklist.
Your validation checklist should include questions like the following:
- Does it pass mypy strict mode?
- Does it follow patterns from existing code?
- Are all functions under 50 lines?
- Do tests cover edge cases and errors?
- Are there type hints on all functions?
- Does it use the required libraries correctly?
Automate what you can. Set up pre-commit hooks that run mypy, Ruff, and pytest. If AI-generated code fails these checks, it doesn't get committed.
For what you can't automate, you'll spot common anti-patterns after reviewing enough AI code, such as functions that do too much, error handling that swallows exceptions, or validation logic mixed with business logic.
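The exception-swallowing anti-pattern is worth knowing on sight. The snippet below contrasts it with the reviewed version (the config-loading helper and its names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

# Anti-pattern AI code often contains: a bare except that hides every failure,
# including real bugs, and silently returns a default.
def load_config_bad(path: str) -> dict[str, str]:
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except:  # noqa: E722 - exactly what the guidelines' "Never" list forbids
        return {}

# Reviewed version: catch only the failure that can legitimately happen,
# log it, and let everything else propagate to the caller.
def load_config(path: str) -> dict[str, str]:
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except FileNotFoundError:
        logger.warning("config file %s missing, using defaults", path)
        return {}
```

Both functions return the same thing for a missing file, which is why the first one slips through review when you only check that the code runs.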
# Implementing a Practical Workflow
Let us now put together everything we have discussed so far.
You start a new project. You spend time setting up the structure, choosing and installing libraries, and writing a couple of example features. You create CLAUDE.md with your guidelines and write specific Pydantic models.
Now you ask AI to implement a new feature. You write a detailed prompt pointing to your examples. AI generates a plan. You review and approve it. AI writes the code. You run type checking and tests. Everything passes. You review the code against your patterns. It matches. You commit.
Total time from prompt to commit may only be around 15 minutes for a feature that would have taken you an hour to write manually. More importantly, the code you get is easier to maintain; it follows the patterns you established.
The next feature goes faster because AI has more examples to learn from. The code becomes more consistent over time because every new feature reinforces the existing patterns.
# Wrapping Up
With AI coding tools proving super useful, your job as a developer or a data professional is changing. You are now spending less time writing code and more time on:
- Designing systems and choosing architectures
- Creating reference implementations of patterns
- Writing constraints and guidelines
- Reviewing AI output and maintaining the quality bar
The skill that matters most is not writing code faster. Rather, it's designing systems that constrain AI to write maintainable code. It's knowing which practices scale and which create technical debt. I hope you found this article helpful even if you don't use Python as your programming language of choice. Let us know what else you think we can do to keep AI-generated Python code maintainable. Keep exploring!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.
















