Lawsuits: OpenAI didn’t report ChatGPT user to cops to protect Altman, IPO.

  • Bane_Killgrind@lemmy.dbzer0.com · +30/−3 · 3 days ago

    Absolutely not.

    Leaders rejected the safety team’s urgings and declined to report the user to law enforcement.

    OpenAI will “find ways to prevent tragedies like this in the future” and to continue “working with all levels of government to help ensure something like this never happens again,” Altman said.

    They already have a fucking way to prevent this, and they opted not to use it, for PR reasons. They are complicit: they provided a service that aided the planning, and then they chose to keep providing it, allowing further planning.

    If you post a message to a website, that message is not private from the website, regardless of the method they use to receive it. They have a moral responsibility to respond to threats to life, regardless of the legal responsibility they are arguing they don’t have.

    If I put a cork board up in front of my house and someone pins threats to it, then once I notice them it’s my responsibility to act on that.

      • new_world_odor@lemmy.world · +8 · 3 days ago

        It’s really not. It’s more like gathering a crowd of a few billion people, asking them a question, hearing the loudest answer, and assuming it’s correct.

            • Skullgrid@lemmy.world · +1/−2 · 3 days ago

              There is a huge difference between hosting an archive of conversations that took place, and providing a place where you can participate in conversations.

              This is the equivalent of looking at the archives of debates transcribed in newspapers. When you do that, you are not participating in a debate; you are reading the transcript of one.

              • Bane_Killgrind@lemmy.dbzer0.com · +3 · 3 days ago

                The model responds based on the conversations it was trained on? It’s a bespoke response. It isn’t simply showing a browsable list of responses; it’s generating particular ones.

                It’s literally feeding these mentally ill people responses that a human, with the same context, would be legally culpable for.

      • horse@feddit.org · +1 · 3 days ago

        If you go to the library, tell the librarian you are planning to shoot up your school, and ask where you can find books to help with that, I bloody well hope they would report you. Because that is basically the equivalent of what happened here. This isn’t someone using a library to privately access information and then using it in a harmful way. OpenAI apparently knew exactly what this person was doing on their platform and (allegedly) decided it was better for their bottom line to look the other way. At that point they have a clear moral obligation to act, imo.