Excerpts:
Ben Riley discovered by accident that his dad hadn’t been telling the truth about his cancer.
He was sitting at the kitchen counter in his Austin home last summer, a bright new build with white walls and concrete floors, when he decided to peek at his dad’s MyChart portal. He idly scrolled through pages of lab results and doctor’s notes on his laptop until a sentence grabbed his attention.
“I was clear the window of treatment may close the longer he postpones,” the doctor wrote. “The natural history of his disease is death and debilitation.”
The note didn’t make sense. Ben knew that his 75-year-old father had chronic lymphocytic leukemia, a type of white blood cell cancer that is often slow-moving. But his dad, Joe Riley, had reassured his family that starting treatment was not urgent. He certainly hadn’t conveyed his doctor’s warning that he was headed toward a dangerous deadline.
…
Ben knew better than to confront his dad, a retired neuroscientist who bristled at anyone questioning his intellectual judgment. He needed more information, a plan, to persuade Joe, who was — apparently — dying of cancer thousands of miles away in Seattle.
He was anxiously monitoring his dad’s patient portal, trying to decide what to do, when a new message popped up. Joe had sent his oncologist research he had done with A.I., the apparent evidence for his decision to refuse the treatment.
…
He seemed to be in a “constant conversation” with A.I., said James Riley, Ben’s younger brother. He was particularly fond of Perplexity, a search engine powered by A.I. that prides itself on citing reputable sources and producing answers you can “actually trust,” according to the company’s C.E.O. (The New York Times sued Perplexity in December, accusing it of copyright infringement of news content related to A.I. systems. The company has denied the claims.)
…
“Why do you believe this?” he remembered asking Joe during one appointment. “Where’s this coming from?”
Joe sent him a research report he generated with Perplexity.
In the weeks after he saw that report in his father’s medical record, Ben’s concern morphed into anger. He said he felt like he and his father were living in separate realities with no “shared sense of what is true and false.”
…
He attached the report to the email, which Dr. David Bond opened a few hours later from his office in Ohio. At first glance, it looked like a polished scientific report. But the closer Dr. Bond read, the more illogical it became. The report made authoritative claims and, as evidence, cited studies that he thought were “only peripherally related to the topic.” It referenced percentages that appeared to be completely made up. The summary of Dr. Bond’s research was unrecognizable to him.
…
In the three months since Ben published that post, four large tech companies have released new consumer health tools, encouraging users to upload their records and pepper A.I. with their medical questions. Perplexity was among them.


No, they tell on themselves all the time.
First, it is common knowledge that you shouldn’t treat AI as a definitive source of truth right now, especially for critical matters. Almost every major AI tool says as much explicitly in its usage agreement and displays persistent UI disclaimers warning users not to trust the output and to consult actual professionals.
AI is not inherently good or bad, though. It is just a tool. Good people use it to build things and solve problems, and stubborn people use it to validate their own terrible decisions. There is a deeply ingrained Luddite nostalgia here that equates new technology with moral decay, confusing a valid hatred of unchecked tech-bro capitalism with a hatred of the actual math and code.
Blaming AI because a stubborn, arrogant guy went hunting for something to validate his refusal of medical treatment is misdirection. There is good justification for transparency & safeguards around climate impact, resource consumption & the establishment class that does bad things with it. Like when Israel used it to carry out a genocide that the Democrats funded & voted for. Likewise in Germany, the liberal establishment has criminalized saying things like “from the river to the sea” to crush dissent from people who want Palestinians to be free from apartheid.
The idea that people shouldn’t have autonomy & that everything needs to be locked down to protect them relies on the deeply infantilizing premise that adults are so intellectually fragile that simply reading an AI hallucination, or a viewpoint they disagree with, is too dangerous for the mind to process.