• ricecake@sh.itjust.works
    7 days ago

    Nah, you’re falling for the hype. Current AI systems use a fairly chunky amount of compute and storage. It doesn’t matter whether one is “able” to escape if it can’t realistically move itself anywhere because it’s too big.
    Then there’s the part where it’s not volitional like we are. Current techniques are basically pattern recognition and pattern extrapolation. They need an input to feed off. They don’t need to be contained because they don’t want to escape. They don’t want at all.
    The part of their code that can be edited isn’t the part that matters. That part is the part that shuffles requests into the system and provides tools for interoperating with other stuff. The actual LLM is a big, inscrutable blob of numeric descriptors that map to other numeric descriptors, establishing a set of weights for pattern handling. Editing it is called “training” and requires immense resources.
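    To put the “too big to move” point in rough numbers: here’s a back-of-the-envelope sketch, assuming a 7-billion-parameter model (small by current standards) stored in 16-bit floats. The parameter count and precision are illustrative assumptions, not figures from any specific model.

    ```python
    # Rough storage footprint of the weight blob alone,
    # ignoring the serving code, KV caches, etc.
    params = 7_000_000_000   # assumed parameter count (a "small" modern LLM)
    bytes_per_weight = 2     # fp16 / bf16 precision
    size_gb = params * bytes_per_weight / 1e9

    print(f"~{size_gb:.0f} GB just for the weights")  # ~14 GB
    ```

    That’s tens of gigabytes before you even have hardware capable of running it, which is why a model can’t quietly copy itself onto any old compromised box.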

    You can grab some pretty good models freely on the Internet and try to build your own AI-powered worm. It’s not nearly as useful as just writing a regular worm.

    • JackFrostNCola@aussie.zone
      6 days ago

      Thanks for the explanation. I suppose I never considered the size of the model, and that even if it were allowed to self-replicate and self-train, it’s not just going to upload itself to any old location and relaunch itself.