• ricecake@sh.itjust.works
      7 days ago

      I’d go with weapons systems designed to remove humans from the decision. Tools that people use to approve medications or treatment without actually understanding what they’re approving. Cars that remove human judgement from uncertain circumstances. AI systems that make employment decisions to shield people from responsibility or legal culpability.

      Basically any situation with real consequences where you’re taking a person out of the responsibility or decision making loop.

      Also certain non-LLM AI technologies for extracting information and patterns from interconnected data sets. Basically automated mass surveillance systems.

    • JackFrostNCola@aussie.zone
      7 days ago

      Not sure if serious or not, but just one tangent could be AI that is connected to the internet whilst also being ‘uncontained/unrestrained’. I imagine there’s a fair bit of damage that could be done by software which can distribute itself and alter its own code or create new code under the radar.

      • ricecake@sh.itjust.works
        7 days ago

        Nah, you’re falling for the hype. AI systems currently use a fairly chunky amount of compute resources and storage space. It doesn’t matter if one could distribute itself in principle when it can’t really move itself because it’s too big.
        Then there’s the part where it’s not volitional like we are. Current techniques are basically pattern recognition and pattern extrapolation. They need an input to feed off. They don’t need to be contained because they don’t want to escape. They don’t want at all.
        The part of their code that can be edited isn’t the part that matters. That part is the part that shuffles requests into the system and provides tools for interoperation with other stuff. The actual LLM is a big, inscrutable blob of numeric descriptors that map to other numeric descriptors to establish a set of weights for pattern handling. Editing it is called “training” and requires immense resources.
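
        Back-of-envelope numbers make the size point concrete (illustrative figures, not from any specific model):

        ```python
        # Rough storage cost of just the weights of a mid-sized open model.
        # A "7B" model has ~7 billion parameters; at 16-bit precision each
        # parameter takes 2 bytes. These are ballpark figures for illustration.
        params = 7_000_000_000
        bytes_per_param = 2  # fp16 / bf16

        size_gb = params * bytes_per_param / 1e9
        print(f"~{size_gb:.0f} GB just to copy the weights anywhere")  # ~14 GB
        ```

        And that says nothing about the GPU memory and compute needed to actually run it, which is why a self-distributing model is a very different proposition from a few-kilobyte worm.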

        You can grab some pretty good models freely on the Internet and try to build your own AI-powered worm. It’s not nearly as useful as just creating a worm.

        • JackFrostNCola@aussie.zone
          6 days ago

          Thanks for the explanation, I suppose I never considered the size of the model, and that even if it were allowed to be self-replicating and self-training, it’s not just going to upload itself to any old location and relaunch itself.