The phenomenon of bixonimania does not exist, yet blog articles and publications about this invented disease managed to deceive several conversational AI systems and found their way into an official scientific article. This raises troubling questions about data verification in an era of democratized AI.
Bixonimania, a fictitious disease invented by a team of Swedish researchers to fool large language models, was featured in two fake studies available on the Preprints.org website until April 10, 2026. Chatbots quickly recycled it as if it appeared in medical textbooks, and it was even incorporated into real scientific publications.
Almira Osmanovic Thunström, a researcher at the University of Gothenburg in Sweden, first introduced bixonimania online on March 15, 2024, through blog articles and publications signed by a fake, AI-generated scientist.
To avoid any ambiguity, she planted clear clues in the published texts, including fake institutions, pop culture references, and explicit statements acknowledging that the article was entirely fabricated.
Major chatbots, including Copilot, Gemini, Perplexity, and ChatGPT, quickly adopted bixonimania as a real condition, attributing it to blue light exposure and offering clinical recommendations. Some continued to treat the fake disease as real even when prompted with descriptions of related symptoms.
The fabricated disease entered the official medical literature when a study cited one of the fake publications, describing bixonimania as an emerging form of orbital pranose linked to blue light; the journal retracted the article on March 30, 2026. Almira Osmanovic Thunström withdrew her own publications on April 10, 2026, after grappling with ethical dilemmas and the risk of spreading misinformation.
The experiment serves as a cautionary tale about how false data can seep into scientific and AI ecosystems, underscoring the need for critical thinking in the face of AI-generated content.