This Is Where Real Doctors Actually Stand After an AI Chatbot Fooled a Neuroscientist Into Ignoring Medical Advice
There isn't a corner of the world that AI hasn't touched. At a breakneck pace, AI has weaseled its way into our jobs, homes, natural resources, and schools. While it can be an incredibly powerful tool, many people are placing their full trust in it without considering the potential repercussions.
And if you think only a certain type of person succumbs to this way of thinking, you'd be dead wrong. The New York Times recently reported that AI convinced a 75-year-old neuroscientist, Joe Riley, that his doctors were wrong about his chronic lymphocytic leukemia (CLL) diagnosis and that his treatment was designed only to prolong his illness. He reached these conclusions by regularly querying Perplexity and other chatbots. Having lived with a chronic illness, he was wary of the medical system and felt that conversing with AI could help solve his problems.
Ultimately, Joe diagnosed himself with Richter's Transformation, which the outlet describes as "a rare complication that occurs when a relatively docile cancer abruptly evolves into a more aggressive, punishing one." He even believed that the treatment his doctors recommended would not only exacerbate his supposed Richter's Syndrome but shorten his life expectancy. His oncologist, Dr. Marzbani, was perplexed by this self-diagnosis, as Joe showed none of the symptoms Richter's produces.
The two got stuck in a cycle of arguing over it: the doctor would lay out all the reasons he didn't believe Joe had the condition and urge him to undergo the treatment that would buy him more time with his friends and family, time Joe desperately wanted. Still, Joe could not be convinced.
Much to the dismay of his family, Joe waited a year after his diagnosis to finally accept treatment, and by then the damage was already done: Joe died due to complications from CLL.
"I don't think AI killed my father," clarified Joe's son Ben Riley in a moving piece he wrote on the experience of losing his dad in Cognitive Resonance. "I think it's possible, perhaps even likely, that in a world without AI, he would still have latched on to some other piece of research to support his disposition against medical treatment, as he had deep misgivings-fear, really-about spending time in hospitals. Nonetheless, the fact remains that AI does exist in our world, and just as it can serve as fuel to those suffering manic psychosis, so too may it affirm or amplify our mistaken understanding of what's happening to us physically and medically."
What Doctors Want Patients to Know About AI
Doctors agree: AI should be used as a tool, not the final say in a patient's treatment.
"There are so many AI sources and search engines out there that all get information from different sources and are not vetted by physicians who specialize in those specific health topics," Dr. Jeffrey Velotta, a thoracic surgeon at Kaiser Permanente Northern California and expert contributor for The Mesothelioma Center at Asbestos.com, tells Parade. "They are often too generalized and over-simplified or flat-out inaccurate. So I would be very hesitant to rely on any AI. Patients may feel reassured by a confident-sounding answer, but that confidence isn't backed by the clinical training or specialized expertise a physician brings to the table. AI can help you think about things to discuss with your doctor, but NOT replace your doctor at all!"
Doctors are also seeing that how patients use AI ahead of their examinations can be both useful and harmful.
"The greatest issue we have with AI in the use of outpatient medicine is not knowing how to prompt correctly," Dr. Nasser Mohammed, a Board Certified Family Physician with Osra Medical, warns Parade. "The answer you get depends heavily on what you ask and how you ask it. For example, I had a patient come in immediately asking for hemorrhoid cream and wanting to bypass the rest of the visit. They reached this conclusion after using AI to self-diagnose. Instead of starting with rectal pain as the main problem to solve for, they prompted AI with 'is it a hemorrhoid,' resulting in biased and pre-selected questions. It turns out, they actually had a rectal abscess that needed to be drained. This issue is not only a limitation with the average patient but also with the average clinician. The way you prompt AI is a different way of thinking, and we need to be careful about how we ask our questions."
It isn't all doom and gloom for doctors, though. Many see the benefits AI can offer in facilitating a patient-physician relationship that is more thorough and interactive.
"The positives are that the patient is already coming in with a very thorough history-- something I really appreciate," Dr. Carmen Fong, colorectal surgeon and Chief Medical Officer at Bummed, tells Parade. "Rather than sitting on the exam room table answering, I don't know, or I'm not sure, or Let me look it up on my phone, they've already thought through the history and the timeline and put it into ChatGPT. This helps make for a concise, problem-oriented visit and not 45 minutes of fact-finding. Of course, our conversation may diverge into variables or symptoms that I find more pertinent, but that 1 page summary is a good start. It helps the patient-and physician spend more quality time on the problem."
The key to peacefully coexisting with AI is to treat it with a healthy dose of skepticism and not follow it blindly wherever it leads.