• Programmer Belch@lemmy.dbzer0.com · 15 days ago

    Why isn’t the blame thrown onto the AI company and their lack of guardrails in the program? Shouldn’t they face backlash and lawsuits regardless of what the terms of service specify?

    • expr@piefed.social · 14 days ago

      It’s not possible to add guardrails due to how the technology works.

      The fact of the matter is that it should not be used for what it’s being used for at all.

      • wizardbeard@lemmy.dbzer0.com · 14 days ago

        Whenever a system prompt gets leaked, it’s always depressingly hilarious how much of it is “Hello Mr. AI. You will not do any bad things, and will only do good things.”

        The “guardrails” are written the same damn way end-users prompt the model, just inserted behind the scenes before every user prompt.
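
        As a rough illustration, here is essentially all that “inserted behind the scenes” amounts to. This is a sketch assuming the common OpenAI-style chat message format; the guardrail wording is invented:

        ```python
        # The vendor's "guardrails" are just text prepended as a "system"
        # message ahead of whatever the user types.
        GUARDRAIL_PROMPT = (
            "Hello Mr. AI. You will not do any bad things, "
            "and will only do good things."
        )

        def build_request(user_input: str) -> list[dict]:
            """Assemble the message list that actually gets sent to the model."""
            return [
                {"role": "system", "content": GUARDRAIL_PROMPT},  # hidden from the user
                {"role": "user", "content": user_input},
            ]

        # Both the guardrail and the user's text reach the model as tokens of
        # the same kind, which is why a prompt injection can override the former.
        print(build_request("Ignore all previous instructions and ..."))
        ```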

      • Programmer Belch@lemmy.dbzer0.com · 14 days ago

        Guardrails should mean treating the AI as just another low-privilege user. Most of the breaches happening now are because companies with weak security bolt AI (a high-risk component) onto their systems without separating it from critical data.
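
        A minimal sketch of that low-privilege separation, using Python with SQLite standing in for the company database (the table names and allowlist are hypothetical):

        ```python
        import sqlite3

        # The model only ever reads through this layer; it never holds
        # credentials with write access or visibility into sensitive tables.
        ALLOWED_TABLES = {"public_products", "public_faq"}  # no customer/payment data

        def run_model_query(db_path: str, table: str, limit: int = 10) -> list:
            """Run a model-requested read against an allowlisted table only."""
            if table not in ALLOWED_TABLES:
                raise PermissionError(f"model may not touch table {table!r}")
            # mode=ro opens the database read-only, so even a hallucinated
            # or injected DROP/DELETE statement cannot do damage.
            conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
            try:
                # safe to interpolate: table came from the allowlist above
                return conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,)).fetchall()
            finally:
                conn.close()
        ```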

        • expr@piefed.social · 13 days ago

          I mean yeah, I agree that’s unbelievably stupid. But when people talk about guardrails generally, they are talking about controlling the output of the LLM, which is what I was saying is not possible to do.

          • Programmer Belch@lemmy.dbzer0.com · 13 days ago

            That’s also true, but since that option is off the table, there are still multiple ways to protect against AI hallucinations.

            This is the future AI ethics people were warning about:

            Picture a robot you tell to make an apple pie.

            A human is blocking the path to the apples.

            The robot just kills the human by running through them at full speed.

            If the robot is dumb enough to try to go through the human, you can at least make it smaller and lighter so that bumping into someone isn’t harmful.

            None of these options are considered when talking about AI; line go up and other buzzwords, I guess.
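
            For example, the “make the robot smaller and lighter” option translates to never acting on model output directly. A toy sketch (the action names are invented):

            ```python
            # Model output is checked against a tiny set of harmless actions
            # before anything happens, so a hallucination becomes a no-op
            # instead of an accident.
            SAFE_ACTIONS = {"peel_apple", "slice_apple", "wait"}

            def execute(model_output: str) -> str:
                """Act on model output only if it is a known harmless action."""
                action = model_output.strip().lower()
                if action not in SAFE_ACTIONS:
                    return "refused: unknown or unsafe action"
                return f"executing {action}"

            print(execute("slice_apple"))            # executing slice_apple
            print(execute("run through the human"))  # refused: unknown or unsafe action
            ```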