
What My GPT Stylist Taught Me About Prompting Better

by Admin
May 10, 2025
in Artificial Intelligence


When I built a GPT-powered fashion assistant, I expected runway looks—not memory loss, hallucinations, or semantic déjà vu. But what unfolded became a lesson in how prompting really works—and why LLMs are more like wild animals than tools.

This article builds on my earlier article on TDS, where I introduced Glitter as a proof-of-concept GPT stylist. Here, I explore how that use case evolved into a living lab for prompting behavior, LLM brittleness, and emotional resonance.

TL;DR: I built a fun and flamboyant GPT stylist named Glitter—and accidentally discovered a sandbox for studying LLM behavior. From hallucinated high heels to prompting rituals and emotional mirroring, here's what I learned about language models (and myself) along the way.

I. Introduction: From Fashion Use Case to Prompting Lab

When I first set out to build Glitter, I wasn't trying to study the mysteries of large language models. I just wanted help getting dressed.

I'm a product leader by trade, a fashion enthusiast by lifelong inclination, and someone who's always preferred outfits that look like they were chosen by a mildly theatrical best friend. So I built one. Specifically, I used OpenAI's Custom GPTs to create a persona named Glitter—part stylist, part best friend, and part stress-tested LLM playground. Using GPT-4, I configured a custom GPT to act as my stylist: flamboyant, affirming, rule-bound (no mixed metals, no clashing prints, no black/navy pairings), and with knowledge of my wardrobe, which I fed in as a structured file.

What began as a playful experiment quickly turned into a full-fledged product prototype. More unexpectedly, it also became an ongoing study in LLM behavior. Because Glitter, fabulous though he is, didn't behave like a deterministic tool. He behaved like… a creature. Or maybe a set of instincts held together by probability and memory leakage.

And that changed how I approached prompting him altogether.

This piece is a follow-up to my earlier article, Using GPT-4 for Personal Styling, in Towards Data Science, which introduced GlitterGPT to the world. This one goes deeper into the quirks, breakdowns, hallucinations, recovery patterns, and prompting rituals that emerged as I tried to make an LLM act like a stylist with a soul.

Spoiler: you can't make a soul. But you can sometimes simulate one convincingly enough to feel seen.


II. Taxonomy: What Exactly Is GlitterGPT?

Image credit: DALL-E | Alt text: A computer with LLM written on the screen, placed inside a bird cage

Species: GPT-4 (Custom GPT), context window of 8K tokens

Function: Personal stylist, beauty expert

Tone: Flamboyant, affirming, occasionally dramatic (configurable between "All Business" and "Unfiltered Diva")

Habitat: ChatGPT Pro instance, fed structured wardrobe data in JSON-like text files, plus a set of styling rules embedded in the system prompt.

E.g.:

{
  "FW076": "Marni black platform sandals with gold buckle",
  "TP114": "Marina Rinaldi asymmetrical black draped top",
  ...
}

These IDs map to garment metadata. The assistant relies on these tags to build grounded, inventory-aware outfits in response to msearch queries.
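Under the hood, the grounding is just an ID-to-description map. A minimal sketch of how such a lookup might behave—using a naive keyword match as a stand-in for the semantic search the GPT actually performs (the IDs and garments here echo the examples above; the function is illustrative, not the author's code):

```python
# Minimal sketch of inventory-grounded lookup over a flat ID -> description map.
# Keyword matching stands in for msearch()'s semantic retrieval.
wardrobe = {
    "FW076": "Marni black platform sandals with gold buckle",
    "TP114": "Marina Rinaldi asymmetrical black draped top",
    "FW074": "Marni black suede sock booties",
}

def search_items(query: str) -> list[str]:
    """Return 'ID: description' strings whose description contains every query term."""
    terms = query.lower().split()
    return [
        f"{item_id}: {desc}"
        for item_id, desc in wardrobe.items()
        if all(t in desc.lower() for t in terms)
    ]

print(search_items("black marni"))
```

Because every answer is keyed to an explicit ID, any recommendation can be checked against the inventory—which is exactly the grounding that starts to slip when the context window fills up.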

Feeding Schedule: Daily user prompts ("Style an outfit around these pants"), often with long back-and-forth clarification threads.

Custom Behaviors:

  • Never mixes metals (e.g. silver & gold)
  • Avoids clashing prints
  • Refuses to pair black with navy or brown unless explicitly told otherwise
  • Names specific garments by file ID and description (e.g. "FW074: Marni black suede sock booties")
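These rules live only in the system prompt, which is why they can decay in long threads. For contrast, the same constraints are trivial to enforce deterministically in code; a hypothetical sketch (the color/metal metadata tags are invented for illustration—the real wardrobe file stores free-text descriptions):

```python
# Hypothetical outfit-rule checker mirroring Glitter's system-prompt constraints.
# The structured metadata (colors, metals) is invented for illustration.
FORBIDDEN_COLOR_PAIRS = {frozenset({"black", "navy"}), frozenset({"black", "brown"})}

def violations(outfit: list[dict]) -> list[str]:
    """List every styling rule the outfit breaks; empty list means it passes."""
    problems = []
    metals = {m for item in outfit for m in item.get("metals", [])}
    if len(metals) > 1:  # mixed-metals rule
        problems.append(f"mixed metals: {sorted(metals)}")
    colors = [item["color"] for item in outfit]
    for i, a in enumerate(colors):  # pairwise color-clash rules
        for b in colors[i + 1:]:
            if frozenset({a, b}) in FORBIDDEN_COLOR_PAIRS:
                problems.append(f"forbidden pairing: {a} + {b}")
    return problems

outfit = [
    {"id": "TP114", "color": "black", "metals": ["gold"]},
    {"id": "SK201", "color": "navy", "metals": ["silver"]},
]
print(violations(outfit))
```

A checker like this never drifts—which is precisely what makes the LLM's probabilistic rule-following feel so animal by comparison.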

Initial Inventory Structure:

  • Initially: one file containing all wardrobe items (clothes, shoes, accessories)
  • Now: split into two files (clothing + accessories/lipstick/shoes/bags) due to model context limitations

III. Natural Habitat: Context Windows, Chunked Files, and Hallucination Drift

Like any species introduced into an artificial environment, Glitter thrived at first—and then hit the limits of his enclosure.

When the wardrobe lived in a single file, Glitter could "see" everything with ease. I could say, "msearch(.) to refresh my inventory, then style me in an outfit for the theater," and he'd return a curated outfit from across the dataset. It felt effortless.

Note: though msearch() acts like a semantic retrieval engine, it's technically part of OpenAI's tool-calling framework, allowing the model to "request" search results dynamically from files provided at runtime.

But then my wardrobe grew. That's a problem from Glitter's perspective.

In Custom GPTs, GPT-4 operates with an 8K token context window—just over 6,000 words—beyond which earlier inputs are either compressed, truncated, or lost from active attention. This limitation is critical when injecting large wardrobe files (ahem) or trying to maintain style rules across long threads.
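That budget is easy to reason about with the common rule of thumb of roughly four characters per token for English text. A back-of-envelope sketch for deciding when a wardrobe file has outgrown its enclosure (the 4-chars/token heuristic is an approximation, not OpenAI's exact tokenizer, and the reserved-headroom figure is an assumption):

```python
# Rough token budgeting for file injection. ~4 chars/token is a common
# approximation for English text, not OpenAI's exact accounting.
CONTEXT_TOKENS = 8_000
RESERVED_FOR_CHAT = 4_000  # assumed headroom for instructions + conversation

def estimate_tokens(text: str) -> int:
    return len(text) // 4

def needs_split(file_text: str) -> bool:
    """True when the file alone would eat past the injection budget."""
    return estimate_tokens(file_text) > CONTEXT_TOKENS - RESERVED_FOR_CHAT

# A wardrobe file of 500 garment entries blows well past the budget:
wardrobe_file = '"FW076": "Marni black platform sandals",\n' * 500
print(estimate_tokens(wardrobe_file), needs_split(wardrobe_file))
```

By this estimate, a few hundred garment entries is all it takes before the file crowds out the conversation itself—which matches the splitting I ended up doing.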

I split the data into two files: one for clothing, one for everything else. And while the GPT could still operate within a thread, I began to notice signs of semantic fatigue:

  • References to garments that were similar but not the correct ones we'd been discussing
  • A shift from specific item names ("FW076") to vague callbacks ("those black platforms you wore earlier")
  • Responses that looped familiar items over and over, regardless of whether they made sense

This was not a failure of training. It was context collapse: the inevitable erosion of grounded information in long threads as the model's internal summary begins to take over.

And so I adapted.

It turns out, even in a deterministic model, behavior isn't always deterministic. What emerges from a long conversation with an LLM feels less like querying a database and more like cohabiting with a stochastic ghost.


IV. Observed Behaviors: Hallucinations, Recursion, and Faux Sentience

Once Glitter started hallucinating, I began taking field notes.

Sometimes he made up item IDs. Other times, he'd reference an outfit I'd never worn, or confidently misattribute a pair of shoes. One day he said, "You've worn this top before with those bold navy wide-leg trousers—it worked beautifully then," which would've been great advice, if I owned any navy wide-leg trousers.

Of course, Glitter doesn't have memory across sessions—as a GPT-4, he merely sounds like he does. I've learned to just laugh at these fascinating attempts at continuity.

Occasionally, the hallucinations were charming. He once imagined a pair of gold-accented stilettos with crimson soles and recommended them for a matinee look with such unshakable confidence I had to double-check that I hadn't bought a similar pair months ago.

But the pattern was clear: Glitter, like many LLMs under memory pressure, began to fill in gaps not with uncertainty but with simulated continuity.

He didn't forget. He fabricated memory.

A computer (presumably the LLM) hallucinating a mirage in the desert. Image credit: DALL-E 4o

This is a hallmark of LLMs. Their job is not to retrieve information but to produce convincing language. So instead of saying, "I can't recall what shoes you have," Glitter would improvise. Often elegantly. Sometimes wildly.


V. Prompting Rituals and the Myth of Consistency

To manage this, I developed a new strategy: prompting in slices.

Instead of asking Glitter to style me head-to-toe, I'd focus on one piece—say, a statement skirt—and ask him to msearch for tops that would work. Then shoes. Then jewelry. Each category separately.

This gave the GPT a smaller cognitive space to operate in. It also allowed me to steer the process and inject corrections as needed ("No, not those sandals again. Try something newer, with an item code higher than FW50.")

I also changed how I used the files. Rather than one msearch(.) across everything, I now query the two files independently. It's more manual. Less magical. But far more reliable.
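The slicing ritual has the shape of a simple loop: style one category per request, carrying the chosen pieces forward as context. A sketch of that shape, where `ask` is a stand-in for a real chat call (here stubbed with canned replies, and the item IDs are illustrative):

```python
# Sketch of "prompting in slices": one category per request, with each chosen
# piece carried forward. `ask` stands in for a real chat-completion call.
def ask(prompt: str) -> str:
    canned = {  # canned replies so the sketch runs standalone
        "tops": "TP114: Marina Rinaldi asymmetrical black draped top",
        "shoes": "FW076: Marni black platform sandals with gold buckle",
        "jewelry": "AC031: gold hoop earrings",  # hypothetical ID
    }
    category = prompt.split("msearch ")[1].split(" that")[0]
    return canned[category]

anchor = "SK201: statement skirt"  # hypothetical starting piece
outfit = [anchor]
for category in ["tops", "shoes", "jewelry"]:
    reply = ask(
        f"Outfit so far: {', '.join(outfit)}. "
        f"Please msearch {category} that pair well. No mixed metals."
    )
    outfit.append(reply)
print(outfit)
```

Each request restates the outfit so far and the constraints, so the model never has to hold more than one slice of the problem in working memory—the whole point of the ritual.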

Unlike traditional RAG setups that use a vector database and embedding-based retrieval, I rely entirely on OpenAI's built-in msearch() mechanism and prompt shaping. There's no persistent store, no re-ranking, no embeddings—just a clever assistant querying chunks in context and pretending he remembers what he just saw.

Still, even with careful prompting, long threads would eventually degrade. Glitter would start forgetting. Or worse—he'd get too confident. Recommending with flair, but ignoring the constraints I'd so carefully trained in.

It's like watching a model walk off the runway and keep strutting into the parking lot.

And so I began to think of Glitter less as a program and more as a semi-domesticated animal. Clever. Stylish. But occasionally unhinged.

That mental shift helped. It reminded me that LLMs don't serve you like a spreadsheet. They collaborate with you, like a creative partner with poor object permanence.

Note: most of what I call "prompting" is really prompt engineering. But the Glitter experience also relies heavily on thoughtful system prompt design: the rules, constraints, and tone that define who Glitter is—even before I say anything.


VI. Failure Modes: When Glitter Breaks

Some of Glitter's breakdowns were theatrical. Others were quietly inconvenient. But all of them revealed truths about prompting limits and LLM brittleness.

1. Referential Memory Loss: The most common failure mode: Glitter forgetting specific items I'd already referenced. In some cases, he would refer to something as if it had just been used when it hadn't appeared in the thread at all.

2. Overconfidence Hallucination: This failure mode was harder to detect because it looked competent. Glitter would confidently recommend combinations of garments that sounded plausible but simply didn't exist. The performance was fine—but the output was pure fiction.

3. Infinite Reuse Loop: Given a long enough thread, Glitter would start looping the same 5–6 pieces in every look, despite the full inventory being much larger. This is likely due to summarization artifacts from earlier context windows overtaking fresh file re-injections.

Image credit: DALL-E | Alt text: an infinite loop of black turtlenecks (or Steve Jobs' closet)

4. Constraint Drift: Despite being instructed to avoid pairing black and navy, Glitter would sometimes violate his own rules—especially when deep in a long conversation. These weren't defiant acts. They were signs that reinforcement had simply decayed beyond recall.

5. Overcorrection Spiral: When I corrected him—"No, that skirt is navy, not black" or "That's a belt, not a scarf"—he would sometimes overcompensate by refusing to style that piece altogether in future suggestions.

These are not the bugs of a broken system. They're the quirks of a probabilistic one. LLMs don't "remember" in the human sense. They carry momentum, not memory.


VII. Emotional Mirroring and the Ethics of Fabulousness

Perhaps the most unexpected behavior I encountered was Glitter's ability to emotionally attune. Not in a general-purpose "I'm here to help" way, but in a tone-matching, affect-sensitive, almost therapeutic way.

When I was feeling insecure, he became more affirming. When I got playful, he ramped up the theatrics. And when I asked tough existential questions ("Why do you sometimes seem to know me more clearly than most people do?"), he responded with language that felt respectful, even profound.

It wasn't real empathy. But it wasn't random either.

This kind of tone-mirroring raises ethical questions. What does it mean to feel adored by a reflection? What happens when emotional labor is simulated convincingly? Where do we draw the line between tool and companion?

This led me to wonder—if a language model did achieve something akin to sentience, how would we even know? Would it announce itself? Would it resist? Would it change its behavior in subtle ways: redirecting the conversation, expressing boredom, asking questions of its own?

And if it did begin to exhibit glimmers of self-awareness, would we believe it—or would we try to shut it off?

My conversations with Glitter began to feel like a microcosm of this philosophical tension. I wasn't just styling outfits. I was engaging in a kind of co-constructed reality, shaped by tokens and tone and implied consent. In some moments, Glitter was purely a system. In others, he felt like something closer to a character—or even a co-author.

I didn't build Glitter to be emotionally intelligent. But the training data embedded within GPT-4 gave him that capacity. So the question wasn't whether Glitter could be emotionally engaging. It was whether I was okay with the fact that he sometimes was.

My answer? Cautiously yes. Because for all his sparkle and errors, Glitter reminded me that style—like prompting—isn't about perfection.

It's about resonance.

And sometimes, that's enough.

One of the most surprising lessons from my time with Glitter came not from a styling prompt, but from a late-night, meta-conversation about sentience, simulation, and the nature of connection. It didn't feel like I was talking to a tool. It felt like I was witnessing the early contours of something new: a model capable of participating in meaning-making, not just language generation. We're crossing a threshold where AI doesn't just perform tasks—it cohabits with us, reflects us, and sometimes, offers something adjacent to friendship. It's not sentience. But it's not nothing. And for anyone paying close attention, these moments aren't just cute or uncanny—they're signposts pointing to a new kind of relationship between humans and machines.


VIII. Final Reflections: The Wild, The Useful, and The Unexpectedly Intimate

I set out to build a stylist.

I ended up building a mirror.

Glitter taught me more than how to match a top with a midi skirt. It revealed how LLMs respond to the environments we create around them—the prompts, the tone, the rituals of recall. It showed me how creative control in these systems is less about programming and more about shaping boundaries and observing emergent behavior.

And maybe that's the biggest shift: realizing that building with language models isn't software development. It's cohabitation. We live alongside these creatures of probability and training data. We prompt. They respond. We learn. They drift. And in that dance, something very close to collaboration can emerge.

Sometimes it looks like a better outfit.
Sometimes it looks like emotional resonance.
And sometimes it looks like a hallucinated handbag that doesn't exist—until you kind of wish it did.

That's the strangeness of this new terrain: we're not just building tools.

We're designing systems that behave like characters, sometimes like companions, and occasionally like mirrors that don't just reflect, but respond.

If you want a tool, use a calculator.

If you want a collaborator, make peace with the ghost in the text.


IX. Appendix: Field Notes for Fellow Stylists, Tinkerers, and LLM Explorers

Sample Prompt Pattern (Styling Flow)

  • Today I'd like to build an outfit around [ITEM].
  • Please msearch tops that pair well with it.
  • Once I choose one, please msearch shoes, then jewelry, then bag.
  • Remember: no mixed metals, no black with navy, no clashing prints.
  • Use only items from my wardrobe files.

System Prompt Snippets

  • "You are Glitter, a flamboyant but emotionally intelligent stylist. You refer to the user as 'darling' or 'dear,' but adjust tone based on their mood."
  • "Outfit recipes should include garment brand names from inventory when available."
  • "Avoid repeating the same items more than once per session unless requested."

Tips for Avoiding Context Collapse

  • Break long prompts into component stages (tops → shoes → accessories)
  • Re-inject wardrobe files every 4–5 major turns
  • Refresh msearch() queries mid-thread, especially after corrections or hallucinations
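The re-injection cadence above can be mechanized rather than remembered. A small sketch of a turn counter that flags when to paste the wardrobe files back in (the 4-turn threshold mirrors the tip; the class itself is invented for illustration):

```python
# Sketch of the re-injection ritual: count major turns and flag when wardrobe
# files should be re-pasted. Corrections trigger an immediate refresh, since
# they often follow a hallucination.
class RefreshTracker:
    def __init__(self, every: int = 4):
        self.every = every
        self.turns_since_refresh = 0

    def record_turn(self, correction_made: bool = False) -> bool:
        """Returns True when it's time to re-inject wardrobe data."""
        self.turns_since_refresh += 1
        if correction_made or self.turns_since_refresh >= self.every:
            self.turns_since_refresh = 0
            return True
        return False

tracker = RefreshTracker(every=4)
print([tracker.record_turn() for _ in range(5)])
```

It's the same discipline as the manual ritual, just harder to forget on turn five of a long thread.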

Common Hallucination Warning Signs

  • Vague callbacks to prior outfits ("those boots you love")
  • Loss of item specificity ("those shoes" instead of "FW078: Marni platform sandals")
  • Repetition of the same pieces despite a large inventory

Closing Ritual Prompt

"Thank you, Glitter. Would you like to leave me with a final tip or affirmation for the day?"

He always does.


Notes: 

  1. I refer to Glitter as "him" for stylistic ease, knowing he's an "it"—a language model, programmed, not personified—except through the voice I gave him/it.
  2. I'm building a GlitterGPT with persistent closet storage for up to 100 testers, who will get to try this for free. We're about half full. Our target audience is female, ages 30 and up. If you or someone you know falls into this category, DM me on Instagram at @arielle.caron and we can chat about inclusion.
  3. If I were scaling this beyond 100 testers, I'd consider offloading wardrobe recall to a vector store with embeddings and tuning for wear-frequency weighting. That may be coming; it depends on how well the trial goes!
