In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we're thrilled to share our conversation with Mariya Mansurova.
Mariya's story is one of perpetual learning. Starting with a strong foundation in software engineering, mathematics, and physics, she's spent more than 12 years building expertise in product analytics across industries, from search engines and analytics platforms to fintech. Her unique path, including hands-on experience as a product manager, has given her a 360-degree view of how analytical teams can help businesses make the right decisions.
Now serving as a Product Analytics Manager, she draws energy from discovering fresh insights and innovative approaches. Each of her articles on Towards Data Science reflects her latest "aha!" moment: a testament to her belief that curiosity drives real progress.
You've written extensively about agentic AI and frameworks like smolagents and LangGraph. What excites you most about this emerging space?
I first started exploring generative AI largely out of curiosity and, admittedly, a bit of FOMO. Everyone around me seemed to be using LLMs or at least talking about them. So I carved out time to get hands-on, starting with the very basics like prompting techniques and LLM APIs. And the deeper I went, the more excited I became.
What fascinates me most is how agentic systems are shaping the way we live and work. I believe this influence will only continue to grow over time. That's why I use every chance to work with agentic tools like Copilot or Claude Desktop, or to build my own agents using technologies like smolagents, LangGraph or CrewAI.
The most impactful use case of agentic AI for me has been coding. It's genuinely impressive how tools like GitHub Copilot can improve both the speed and the quality of your work. While recent research from METR has questioned whether the efficiency gains are truly that substantial, I definitely notice a difference in my day-to-day work. It's especially helpful with repetitive tasks (like pivoting tables in SQL) or when working with unfamiliar technologies (like building a web app in TypeScript). Overall, I'd estimate about a 20% increase in speed. But this boost isn't just about productivity; it's a paradigm shift that also expands what feels possible. I believe that as agentic tools continue to evolve, we will see a growing efficiency gap between the people and companies that have learned to leverage these technologies and those that haven't.
When it comes to analytics, I'm especially excited about automated reporting agents. Imagine an AI that can pull the right data, create visualisations, perform root cause analysis where needed, note open questions and even draft the first version of the presentation. That would be simply magical. I've built a prototype that generates such KPI narratives. And even though there's a big gap between the prototype and a production solution that works reliably, I believe we will get there.
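The core of such a KPI narrative can be surprisingly small. Here's a minimal sketch of the idea, turning a metric's period-over-period change into a draft sentence; the function name, thresholds and example numbers are all illustrative, not taken from Mariya's actual prototype.

```python
def kpi_narrative(metric: str, current: float, previous: float,
                  threshold: float = 0.05) -> str:
    """Return a one-line draft narrative for a single KPI.

    Changes below `threshold` are reported as flat; changes above twice
    the threshold are flagged for deeper root cause analysis.
    """
    change = (current - previous) / previous
    if abs(change) < threshold:
        return f"{metric} is flat at {current:,.0f} ({change:+.1%} vs. last period)."
    direction = "up" if change > 0 else "down"
    flag = "" if abs(change) < 2 * threshold else " (worth a root cause analysis)"
    return (f"{metric} is {direction} {abs(change):.1%} "
            f"({previous:,.0f} -> {current:,.0f}){flag}.")

print(kpi_narrative("Weekly active users", 10_500, 10_400))
print(kpi_narrative("Signups", 850, 1_000))
```

A production version would replace the template strings with an LLM call and add chart generation, but the shape — metrics in, annotated narrative out — stays the same.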
You've written three articles in the "Practical Computer Simulations for Product Analysts" series. What inspired that series, and how do you think simulation can reshape product analytics?
Simulation is a hugely underutilised tool in product analytics. I wrote this series to show people how powerful and accessible simulations can be. In my day-to-day work, I keep encountering what-if questions like "How many operational agents will we need if we add this KYC control?" or "What's the likely impact of launching this feature in a new market?". You can simulate any system, no matter how complex, so simulations gave me a way to answer these questions quantitatively and fairly accurately, even when hard data wasn't yet available. I'm hoping more analysts will start using this approach.
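The staffing question above can be sketched as a small Monte Carlo simulation. All parameters here (case volumes, handling times, shift length) are invented for illustration; Poisson daily volume is approximated with a Gaussian so the example only needs the standard library.

```python
import math
import random

random.seed(42)

CASES_PER_DAY = 200       # expected manual KYC reviews per day (assumed)
MINUTES_PER_CASE = 12.0   # mean handling time per case, minutes (assumed)
MINUTES_SD = 4.0          # spread of handling times (assumed)
SHIFT_MINUTES = 8 * 60    # productive minutes per agent per day
N_DAYS = 5_000            # number of simulated days

def simulate_day() -> int:
    """Simulate one day's workload and return the agents it would need."""
    # Poisson(200) is well approximated by Gaussian(200, sqrt(200)).
    n_cases = max(0, round(random.gauss(CASES_PER_DAY, math.sqrt(CASES_PER_DAY))))
    total_minutes = sum(max(1.0, random.gauss(MINUTES_PER_CASE, MINUTES_SD))
                        for _ in range(n_cases))
    return math.ceil(total_minutes / SHIFT_MINUTES)

staffing = sorted(simulate_day() for _ in range(N_DAYS))
p95 = staffing[int(0.95 * N_DAYS)]
print(f"Agents needed to cover 95% of days: {p95}")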
Simulations also shine when working with uncertainty and distributions. Personally, I prefer bootstrap methods to memorising a long list of statistical formulas and significance criteria. Simulating the process often feels more intuitive, and it's less error-prone in practice.
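For instance, a percentile-bootstrap confidence interval for a sample mean takes only a few lines and no distributional formulas. The data below are made up for the example.

```python
import random

random.seed(0)

sample = [12, 15, 9, 22, 17, 14, 30, 11, 16, 13]

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the sample mean."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)  # resample with replacement
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(sample)
print(f"Mean: {sum(sample) / len(sample):.1f}, 95% CI: ({lo:.1f}, {hi:.1f})")
```

The same resampling loop works unchanged for medians, ratios, or any other statistic where the closed-form interval is hard to remember or doesn't exist.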
Finally, I find it fascinating how technology has changed the way we do things. With today's computing power, where any laptop can run thousands of simulations in minutes or even seconds, we can easily solve problems that would have been challenging just thirty years ago. That's a game-changer for analysts.
Several of your posts focus on transitioning LLM applications from prototype to production. What common pitfalls do you see teams make during that phase?
Through practice, I've discovered there's a significant gap between LLM prototypes and production solutions that many teams underestimate. The most common pitfall is treating prototypes as if they're already production-ready.
The prototype phase can be deceptively smooth. You can build something functional in an hour or two, test it on a handful of examples, and feel like you've cracked the problem. Prototypes are great tools to prove feasibility and get your team excited about the opportunities. But here's where teams often stumble: these early versions provide no guarantees around consistency, quality, or safety when facing diverse, real-world scenarios.
What I've learned is that successful production deployment starts with rigorous evaluation. Before scaling anything, you need clear definitions of what "good performance" looks like in terms of accuracy, tone of voice, speed and any other criteria specific to your use case. Then you need to track these metrics continuously as you iterate, ensuring you're actually improving rather than just changing things.
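In its simplest form, that evaluation loop is a small labeled "golden set" plus a hard accuracy bar checked before every release. The sketch below is a toy: the `classify` stub stands in for the real LLM call, and the examples and threshold are invented.

```python
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    expected_label: str

# Hypothetical golden set of NPS-style comments with expected sentiment.
GOLDEN_SET = [
    Example("Love the new dashboard!", "positive"),
    Example("The app keeps crashing.", "negative"),
    Example("It's fine, nothing special.", "neutral"),
]

def classify(text: str) -> str:
    """Stand-in for the real model call; replace with your LLM client."""
    lowered = text.lower()
    if "love" in lowered or "great" in lowered:
        return "positive"
    if "crash" in lowered or "bug" in lowered:
        return "negative"
    return "neutral"

def evaluate(golden_set) -> float:
    """Fraction of golden-set examples the classifier labels correctly."""
    hits = sum(classify(ex.text) == ex.expected_label for ex in golden_set)
    return hits / len(golden_set)

accuracy = evaluate(GOLDEN_SET)
print(f"Accuracy on golden set: {accuracy:.0%}")
assert accuracy >= 0.9, "Regression: accuracy dropped below the release bar"
```

Real setups add more dimensions (tone, latency, safety) and far larger golden sets, but the discipline is the same: no prompt or model change ships without re-running the scorecard.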
Think of it like software testing: you wouldn't ship code without proper testing, and LLM applications require the same systematic approach. This becomes especially critical in regulated environments like fintech or healthcare, where you need to demonstrate reliability not just to your internal team but to compliance stakeholders as well.
In these regulated areas, you'll need comprehensive monitoring, human-in-the-loop review processes, and audit trails that can withstand scrutiny. The infrastructure required to support all of this often takes far more development time than building the original MVP. That's something that consistently surprises teams who focus primarily on the core functionality.
Your articles often blend engineering ideas with data science/analytics best practices, such as your "Top 10 engineering lessons every data analyst should know." Do you think the line between data and engineering is blurring?
The role of a data analyst or a data scientist today often requires a blend of skills from multiple disciplines.
- We write code, so we share common ground with software engineers.
- We help product teams think through strategy and make decisions, so product management skills are useful.
- We draw on statistics and data science to build rigorous and comprehensive analyses.
- And to make our narratives compelling and actually influence decisions, we need to master the art of communication and visualisation.
Personally, I was lucky to gain a lot of programming experience early on, back at school and university. This background helped me tremendously in analytics: it increased my efficiency, helped me collaborate better with engineers and taught me to build scalable and reliable solutions.
I strongly encourage analysts to adopt software engineering best practices. Things like version control, testing and code review help analytical teams develop more reliable processes and deliver higher-quality results. I don't think the line between data and engineering is disappearing completely, but I do believe that analysts who embrace an engineering mindset will be far more effective in modern data teams.
You've explored both causal inference and cutting-edge LLM tuning techniques. Do you see these as part of a shared toolkit or separate mindsets?
That’s truly an excellent query. I’m a robust believer that each one these instruments (from statistical strategies to fashionable ML strategies) belong in a single toolkit. As Robert Heinlein famously stated, “Specialisation is for bugs.”
I consider analysts as knowledge wizards who assist their product groups resolve their issues utilizing no matter instruments match the most effective: whether or not it’s constructing an LLM-powered classifier for NPS feedback, utilizing causal inference to make strategic selections, or constructing an internet app to automate workflows.
Slightly than specialising in particular expertise, I choose to give attention to the issue we’re fixing and maintain the toolset as broad as potential. This mindset not solely results in higher outcomes but in addition fosters a steady studying tradition, which is crucial in at this time’s fast-moving knowledge business.
You've covered a broad range of topics, from text embeddings and visualizations to simulation and multi-agent AI systems. What writing habit or guideline helps you keep your work so cohesive and approachable?
I usually write about topics that excite me at the moment, either because I've just learned something new or had an interesting discussion with colleagues. My inspiration often comes from online courses, books or my day-to-day tasks.
When I write, I always think about my audience and how this piece can be genuinely useful both for others and for my future self. I try to explain all the concepts clearly and leave breadcrumbs for anyone who wants to dig deeper. Over time, my blog has become a personal knowledge base. I often return to old posts: sometimes just to copy a code snippet, sometimes to share a resource with a colleague who's working on something similar.
As we all know, everything in data is interconnected. Solving a real-world problem often requires a combination of tools and approaches. For example, if you're estimating the impact of launching in a new market, you might use simulation for scenario analysis, LLMs to explore customer expectations, and visualisation to present the final recommendation.
I try to reflect these connections in my writing. Technologies evolve by building on previous breakthroughs, and understanding the foundations helps you go deeper. That's why many of my posts reference each other, letting readers follow their curiosity and discover how different pieces fit together.
Your articles are impressively structured, often walking readers from foundational concepts to advanced implementations. What's your process for outlining a complex piece before you start writing?
I believe I developed this way of presenting information at school, as these habits have deep roots. As the book The Culture Map explains, different cultures vary in how they structure communication. Some are concept-first (starting from fundamentals and iteratively moving toward conclusions), while others are application-first (starting with results and diving deeper as needed). I've definitely internalised the concept-first approach.
In practice, many of my articles are inspired by online courses. While watching a course, I outline the rough structure in parallel so I don't forget any important nuances. I also note down anything that's unclear and mark it for future reading or experimentation.
After the course, I start thinking about how to apply this knowledge to a practical example. I firmly believe you don't truly understand something until you try it yourself. Even though most courses include practical examples, they're often too polished. Only when you apply the same ideas to your own use case will you run into edge cases and friction points. For example, the course might use OpenAI models while I want to try a local model, or the framework's default system prompt doesn't work for my particular case and needs tweaking.
Once I have a working example, I move on to writing. I prefer to separate drafting from editing. First, I focus on getting all my ideas and code down without worrying about grammar or tone. Then I shift into editing mode: refining the structure, choosing the right visuals, putting together the introduction, and highlighting the key takeaways.
Finally, I read the whole piece end-to-end to catch anything I've missed. Then I ask my partner to review it. They often bring a fresh perspective and point out things I didn't consider, which helps make the article more comprehensive and accessible.
To learn more about Mariya's work and stay up-to-date with her latest articles, follow her here on TDS and on LinkedIn.