Excerpts:

Ben Riley discovered by accident that his dad hadn’t been telling the truth about his cancer.

He was sitting at the kitchen counter in his Austin home last summer, a bright new build with white walls and concrete floors, when he decided to peek at his dad’s MyChart portal. He idly scrolled through pages of lab results and doctor’s notes on his laptop until a sentence grabbed his attention.

“I was clear the window of treatment may close the longer he postpones,” the doctor wrote. “The natural history of his disease is death and debilitation.”

The note didn’t make sense. Ben knew that his 75-year-old father had chronic lymphocytic leukemia, a type of white blood cell cancer that is often slow-moving. But his dad, Joe Riley, had reassured his family that starting treatment was not urgent. He certainly hadn’t conveyed his doctor’s warning that he was headed toward a dangerous deadline.

Ben knew better than to confront his dad, a retired neuroscientist who bristled at anyone questioning his intellectual judgment. He needed more information, a plan, to persuade Joe, who was — apparently — dying of cancer thousands of miles away in Seattle.

He was anxiously monitoring his dad’s patient portal, trying to decide what to do, when a new message popped up. Joe had sent his oncologist research he had done with A.I., the apparent evidence for his decision to refuse the treatment.

He seemed to be in a “constant conversation” with A.I., said James Riley, Ben’s younger brother. He was particularly fond of Perplexity, a search engine powered by A.I. that prides itself on citing reputable sources and producing answers you can “actually trust,” according to the company’s C.E.O. (The New York Times sued Perplexity in December, accusing it of copyright infringement of news content related to A.I. systems. The company has denied the claims.)

“Why do you believe this?” he remembered asking Joe during one appointment. “Where’s this coming from?”

Joe sent him a research report he generated with Perplexity.

In the weeks after he saw that report in his father’s medical record, Ben’s concern morphed into anger. He said he felt like he and his father were living in separate realities with no “shared sense of what is true and false.”

He attached the report to the email, which Dr. David Bond opened a few hours later from his office in Ohio. At first glance, it looked like a polished scientific report. But the closer Dr. Bond read, the more illogical it became. The report made authoritative claims and, as evidence, cited studies that he thought were “only peripherally related to the topic.” It referenced percentages that appeared to be completely made up. And its summary of Dr. Bond’s own research was unrecognizable to him.

In the three months since Ben published that post, four large tech companies have released new consumer health tools, encouraging users to upload their records and pepper A.I. with their medical questions. Perplexity was among them.

  • wizardbeard@lemmy.dbzer0.com · 2 days ago

    Like many of us have been saying for years, the point of putting rules/laws/guardrails in place is to protect the most vulnerable among us.

    There seems to be an attitude trying to normalize use of these tools “as long as you double-check them”, but that requires using them only for things within or very near your areas of expertise. Society’s general approach to things like this can’t just be “git gud, scrub”.

  • John Richard@lemmy.world · 3 days ago

    Sounds like time for Grandpa to go into a nursing home. No shortage of bad information out there. Just waiting for the day liberals in the US finally admit they want to burn books, ban free speech & go back to the 1930s.

    • OwOarchist@pawb.social · 3 days ago

      AI doesn’t need to be banned.

      It just needs to be civilly liable for medical malpractice and practicing medicine without a license.

      • John Richard@lemmy.world · 2 days ago

        Ah yes, everyone else is stupid & they need a corporate-owned centralized government controlling any information they may speak or receive.

          • John Richard@lemmy.world · 2 days ago

            No one controls “the” AI. There are many types of AI, with competing interests domestic & foreign, so no single corporation owns or controls “the” AI. I don’t disagree that some have significantly more influence over what is accessible to the general public, especially through hosted providers, but many people run their own locally as well. I just believe a centralized government significantly controlling what people can and can’t do is much worse, just as if the government were to control what cryptography algorithms people can use.

        • OwOarchist@pawb.social · 2 days ago

          How the fuck do you get from ‘it needs to be civilly liable for what it does’ to “they need a corporate-owned centralized government controlling any information they may speak or receive”?

          You’re hearing the extremist BS you want to hear, and you’re arguing with that, not with what anyone is actually saying.

          • John Richard@lemmy.world · 2 days ago

            You just made me stub my toe with your words. I demand 5 million for pain & suffering because you’re civilly liable. You can sue them already, but it is very unlikely you’ll win because they have user agreements and disclaimers, and people are supposed to have some common sense. What does being civilly liable mean to you?

            • OwOarchist@pawb.social · 2 days ago

              Their user agreements and disclaimers shouldn’t be able to so easily shield them from any and all liability. (And they probably already can’t, if existing law were enforced fairly.)

      • John Richard@lemmy.world · 2 days ago

        No, they tell on themselves all the time.

        First, it is common knowledge that you shouldn’t inherently trust AI as a definitive source of truth right now, especially for critical matters. Almost every major AI tool explicitly puts this in its usage agreement and displays persistent UI disclaimers warning users not to trust the output and to consult actual professionals.

        AI is not inherently good or bad, though. It is just a tool. Good people use it to build things and solve problems, and stubborn people use it to validate their own terrible decisions. There is a deeply ingrained Luddite nostalgia here that equates new technology with moral decay, confusing a valid hatred of unchecked tech-bro capitalism with a hatred of the actual math and code.

        Blaming AI because a stubborn, arrogant guy went hunting for something to validate his refusal of medical treatment is misdirection. There is good justification for transparency & safeguards around climate impact, resource consumption & the bad things the establishment class does with it. Like when Israel used it to carry out genocide that the Democrats funded & voted for. Likewise in Germany, the liberal establishment has criminalized saying things like “from the river to the sea” to crush dissent by people who want Palestinians to be free from apartheid.

        The idea that people shouldn’t have autonomy & everything needs to be locked down to protect them relies on the deeply infantilizing premise that adults are so intellectually fragile that simply reading an AI hallucination, or a viewpoint you disagree with, is too dangerous for the mind to process.

      • John Richard@lemmy.world · 2 days ago

        What is this, X now? You can’t actually comment with anything of substance, or do you know doing so would expose you for what you project?