In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we're thrilled to share our conversation with Stephanie Kirmer.
Stephanie is a Staff Machine Learning Engineer with almost 10 years of experience in data science and ML. Previously, she was a higher education administrator and taught sociology and health sciences to undergraduate students. She writes a monthly post on TDS about social themes and AI/ML, and gives talks around the country on ML-related topics. She'll be speaking on strategies for customizing LLM evaluation at ODSC East in Boston in April 2026.
You studied sociology and the social and cultural foundations of education. How has your background shaped your perspective on the social impacts of AI?
I think my academic background has shaped my perspective on everything, including AI. I learned to think sociologically through my academic career, and that means I look at events and phenomena and ask myself things like "what are the social inequalities at play here?", "how do different kinds of people experience this thing differently?", and "how do institutions and groups of people influence how this thing is happening?". These are the kinds of things a sociologist wants to know, and we use the answers to develop an understanding of what's happening around us. I'm building a hypothesis about what's happening and why, and then earnestly looking for evidence to prove or disprove my hypothesis, and that's the sociological method, essentially.
You have been working as an ML Engineer at DataGrail for more than two years. How has your day-to-day work changed with the rise of LLMs?
I'm actually in the process of writing a new piece about this. I think the progress of LLM-based code assistants is really fascinating and is changing how a lot of people work in ML and in software engineering. I use these tools to bounce ideas off, to get critiques of my approaches to problems or to get alternative ideas to my approach, and for scut work (writing unit tests or boilerplate code, for example). I think there's still a lot for people in ML to do, though, especially applying the skills we've acquired from experience to unusual or unique problems. And all this isn't to minimize the downsides and dangers of LLMs in our society, of which there are many.
You've asked whether we can "save the AI economy." Do you believe AI hype has created a bubble similar to the dot-com era, or is the underlying utility of the tech strong enough to sustain it?
I think it's a bubble, but that the underlying tech really isn't to blame. People have created the bubble, and as I described in that article, an incredible amount of money has been invested under the assumption that LLM technology is going to produce some kind of results that can command commensurate revenue. I think this is silly, not because LLM technology isn't useful in some key ways, but because it isn't $200 billion+ useful. If Silicon Valley and the VC world were willing to accept good returns on a moderate investment, instead of demanding immense returns on a massive investment, I think this could be a sustainable space. But that's not how it has turned out, and I just don't see a way out of this that doesn't involve a bubble bursting eventually.
A year ago, you wrote about the "Cultural Backlash Against Generative AI." What can AI companies do to rebuild trust with a skeptical public?
This is tough, because I think the hype has set the tone for the blowback. AI companies are making outlandish promises because the next quarter's numbers always need to show something spectacular to keep the wheel turning. People who look at that and sense they're being lied to naturally have a sour taste about the whole endeavor. It won't happen, but if AI companies backed off the unrealistic promises and instead focused hard on finding reasonable, effective ways to apply their technology to people's actual problems, that would help a lot. It would also help if we had a broad campaign of public education about what LLMs and "AI" really are, demystifying the technology as much as we can. But the more people learn about the tech, the more realistic they will be about what it can and can't do, so I expect the big players in the space won't be inclined to do that either.
You've covered many different topics in the past few years. How do you decide what to write about next?
I tend to spend the month in between articles thinking about how LLMs and AI are showing up in my life, the lives of people around me, and the news, and I talk to people about what they're seeing and experiencing with it. Sometimes I have a specific angle drawn from sociology (power, race, class, gender, institutions, etc.) that I want to use as framing to look at the space, or sometimes a specific event or phenomenon gives me an idea to work with. I jot down notes throughout the month, and when I land on something I feel really interested in and want to research or think through, I'll pick that for the next month and do a deep dive.
Are there any topics you haven't written about yet that you're excited to tackle in 2026?
I really don't plan that far ahead! When I started writing a few years ago, I wrote down a big list of ideas and topics, and I've completely exhausted it, so these days I'm at most one or two months ahead of the page. I'd love to get ideas from readers about social issues or themes that collide with AI that they'd like me to dig into further.
To learn more about Stephanie's work and stay up to date with her latest articles, you can follow her on TDS or LinkedIn.














