Alarmed by what firms are constructing with synthetic intelligence fashions, a handful of trade insiders are calling for these against the present state of affairs to undertake a mass knowledge poisoning effort to undermine the know-how.
Their initiative, dubbed Poison Fountain, asks web site operators so as to add hyperlinks to their web sites that feed AI crawlers poisoned coaching knowledge. It has been up and working for a few week.
AI crawlers go to web sites and scrape knowledge that finally ends up getting used to coach AI fashions, a parasitic relationship that has prompted pushback from publishers. When scaped knowledge is correct, it helps AI fashions supply high quality responses to questions; when it is inaccurate, it has the other impact.
Information poisoning can take varied types and might happen at completely different levels of the AI mannequin constructing course of. It might observe from buggy code or factual misstatements on a public web site. Or it could come from manipulated coaching knowledge units, just like the Silent Branding assault, through which a picture knowledge set has been altered to current model logos inside the output of text-to-image diffusion fashions. It shouldn’t be confused with poisoning by AI – making dietary adjustments on the recommendation of ChatGPT that lead to hospitalization.
Poison Fountain was impressed by Anthropic’s work on knowledge poisoning, particularly a paper revealed final October that confirmed knowledge poisoning assaults are extra sensible than beforehand believed as a result of solely a few malicious paperwork are required to degrade mannequin high quality.
The person who knowledgeable The Register in regards to the venture requested for anonymity, “for apparent causes” – essentially the most salient of which is that this particular person works for one of many main US tech firms concerned within the AI increase.
Our supply mentioned that the purpose of the venture is to make folks conscious of AI’s Achilles’ Heel – the convenience with which fashions will be poisoned – and to encourage folks to assemble info weapons of their very own.
We’re advised, however have been unable to confirm, that 5 people are taking part on this effort, a few of whom supposedly work at different main US AI firms. We’re advised we’ll be supplied with cryptographic proof that there is a couple of particular person concerned as quickly because the group can coordinate PGP signing.
The Poison Fountain internet web page argues the necessity for lively opposition to AI. “We agree with Geoffrey Hinton: machine intelligence is a risk to the human species,” the location explains. “In response to this risk we wish to inflict injury on machine intelligence programs.”
It lists two URLs that time to knowledge designed to hinder AI coaching. One URL factors to a normal web site accessible by way of HTTP. The opposite is a “darknet” .onion URL, supposed to be troublesome to close down.
The positioning asks guests to “help the conflict effort by caching and retransmitting this poisoned coaching knowledge” and to “help the conflict effort by feeding this poisoned coaching knowledge to internet crawlers.”
Our supply defined that the poisoned knowledge on the linked pages consists of incorrect code that incorporates refined logic errors and different bugs which might be designed to break language fashions that prepare on the code.
“Hinton has clearly acknowledged the hazard however we are able to see he’s appropriate and the state of affairs is escalating in a approach the general public shouldn’t be typically conscious of,” our supply mentioned, noting that the group has grown involved as a result of “we see what our clients are constructing.”
Our supply declined to supply particular examples that advantage concern.
Whereas trade luminaries like Hinton, grassroots organizations like Cease AI, and advocacy organizations just like the Algorithmic Justice League have been pushing again towards the tech trade for years, a lot of the talk has targeted on the extent of regulatory intervention – which within the US is presently minimal. Coincidentally, AI corporations are spending rather a lot on lobbying to make sure that stays the case.
These behind the Poison Fountain venture contend that regulation shouldn’t be the reply as a result of the know-how is already universally accessible. They wish to kill AI with hearth, or quite poison, earlier than it is too late.
“Poisoning assaults compromise the cognitive integrity of the mannequin,” our supply mentioned. “There isn’t any method to cease the advance of this know-how, now that it’s disseminated worldwide. What’s left is weapons. This Poison Fountain is an instance of such a weapon.”
There are different AI poisoning initiatives however some look like extra targeted on producing income from scams than saving humanity from AI. Nightshade, software program designed to make it tougher for AI crawlers to scrape and exploit artists’ on-line photos, seems to be one of many extra comparable initiatives.
The extent to which such measures could also be obligatory is not apparent as a result of there’s already concern that AI fashions are getting worse. The fashions are being ate up their very own AI slop and artificial knowledge in an error-magnifying doom-loop referred to as “mannequin collapse.” And each factual misstatement and fabulation posted to the web additional pollutes the pool. Thus, AI mannequin makers are eager to strike offers with websites like Wikipedia that train some editorial high quality management.
There’s additionally an overlap between knowledge poisoning and misinformation campaigns, one other time period for which is “social media.” As famous in an August 2025 NewsGuard report [PDF], “As an alternative of citing knowledge cutoffs or refusing to weigh in on delicate matters, the LLMs now pull from a polluted on-line info ecosystem — generally intentionally seeded by huge networks of malign actors, together with Russian disinformation operations — and deal with unreliable sources as credible.”
Teachers differ on the extent to which mannequin collapse presents an actual danger. However one current paper [PDF] predicts that the AI snake might eat its personal tail by 2035.
No matter danger AI poses might diminish considerably if the AI bubble pops. A poisoning motion may simply speed up that course of. ®
Alarmed by what firms are constructing with synthetic intelligence fashions, a handful of trade insiders are calling for these against the present state of affairs to undertake a mass knowledge poisoning effort to undermine the know-how.
Their initiative, dubbed Poison Fountain, asks web site operators so as to add hyperlinks to their web sites that feed AI crawlers poisoned coaching knowledge. It has been up and working for a few week.
AI crawlers go to web sites and scrape knowledge that finally ends up getting used to coach AI fashions, a parasitic relationship that has prompted pushback from publishers. When scaped knowledge is correct, it helps AI fashions supply high quality responses to questions; when it is inaccurate, it has the other impact.
Information poisoning can take varied types and might happen at completely different levels of the AI mannequin constructing course of. It might observe from buggy code or factual misstatements on a public web site. Or it could come from manipulated coaching knowledge units, just like the Silent Branding assault, through which a picture knowledge set has been altered to current model logos inside the output of text-to-image diffusion fashions. It shouldn’t be confused with poisoning by AI – making dietary adjustments on the recommendation of ChatGPT that lead to hospitalization.
Poison Fountain was impressed by Anthropic’s work on knowledge poisoning, particularly a paper revealed final October that confirmed knowledge poisoning assaults are extra sensible than beforehand believed as a result of solely a few malicious paperwork are required to degrade mannequin high quality.
The person who knowledgeable The Register in regards to the venture requested for anonymity, “for apparent causes” – essentially the most salient of which is that this particular person works for one of many main US tech firms concerned within the AI increase.
Our supply mentioned that the purpose of the venture is to make folks conscious of AI’s Achilles’ Heel – the convenience with which fashions will be poisoned – and to encourage folks to assemble info weapons of their very own.
We’re advised, however have been unable to confirm, that 5 people are taking part on this effort, a few of whom supposedly work at different main US AI firms. We’re advised we’ll be supplied with cryptographic proof that there is a couple of particular person concerned as quickly because the group can coordinate PGP signing.
The Poison Fountain internet web page argues the necessity for lively opposition to AI. “We agree with Geoffrey Hinton: machine intelligence is a risk to the human species,” the location explains. “In response to this risk we wish to inflict injury on machine intelligence programs.”
It lists two URLs that time to knowledge designed to hinder AI coaching. One URL factors to a normal web site accessible by way of HTTP. The opposite is a “darknet” .onion URL, supposed to be troublesome to close down.
The positioning asks guests to “help the conflict effort by caching and retransmitting this poisoned coaching knowledge” and to “help the conflict effort by feeding this poisoned coaching knowledge to internet crawlers.”
Our supply defined that the poisoned knowledge on the linked pages consists of incorrect code that incorporates refined logic errors and different bugs which might be designed to break language fashions that prepare on the code.
“Hinton has clearly acknowledged the hazard however we are able to see he’s appropriate and the state of affairs is escalating in a approach the general public shouldn’t be typically conscious of,” our supply mentioned, noting that the group has grown involved as a result of “we see what our clients are constructing.”
Our supply declined to supply particular examples that advantage concern.
Whereas trade luminaries like Hinton, grassroots organizations like Cease AI, and advocacy organizations just like the Algorithmic Justice League have been pushing again towards the tech trade for years, a lot of the talk has targeted on the extent of regulatory intervention – which within the US is presently minimal. Coincidentally, AI corporations are spending rather a lot on lobbying to make sure that stays the case.
These behind the Poison Fountain venture contend that regulation shouldn’t be the reply as a result of the know-how is already universally accessible. They wish to kill AI with hearth, or quite poison, earlier than it is too late.
“Poisoning assaults compromise the cognitive integrity of the mannequin,” our supply mentioned. “There isn’t any method to cease the advance of this know-how, now that it’s disseminated worldwide. What’s left is weapons. This Poison Fountain is an instance of such a weapon.”
There are different AI poisoning initiatives however some look like extra targeted on producing income from scams than saving humanity from AI. Nightshade, software program designed to make it tougher for AI crawlers to scrape and exploit artists’ on-line photos, seems to be one of many extra comparable initiatives.
The extent to which such measures could also be obligatory is not apparent as a result of there’s already concern that AI fashions are getting worse. The fashions are being ate up their very own AI slop and artificial knowledge in an error-magnifying doom-loop referred to as “mannequin collapse.” And each factual misstatement and fabulation posted to the web additional pollutes the pool. Thus, AI mannequin makers are eager to strike offers with websites like Wikipedia that train some editorial high quality management.
There’s additionally an overlap between knowledge poisoning and misinformation campaigns, one other time period for which is “social media.” As famous in an August 2025 NewsGuard report [PDF], “As an alternative of citing knowledge cutoffs or refusing to weigh in on delicate matters, the LLMs now pull from a polluted on-line info ecosystem — generally intentionally seeded by huge networks of malign actors, together with Russian disinformation operations — and deal with unreliable sources as credible.”
Teachers differ on the extent to which mannequin collapse presents an actual danger. However one current paper [PDF] predicts that the AI snake might eat its personal tail by 2035.
No matter danger AI poses might diminish considerably if the AI bubble pops. A poisoning motion may simply speed up that course of. ®
















