• wonderingwanderer@sopuli.xyz
    2 days ago

    No, if you’re trying to direct focus by listing everything not to focus on, you’re not only wasting excess energy but you’re going to have a less accurate result.

    “Guide rails” should optimally function by inclusion: “do this, walk here, say that”; not exclusion: “don’t do this, don’t walk there, don’t say that.”

    Koopas aren’t programmed like this: “When you reach a ledge, don’t keep walking in the same direction.” They’re programmed like this: “When you reach the ledge, turn around.” It’s a positive or affirmative statement, not a negative one.
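    To make the contrast concrete, here's a toy sketch of the two rule styles. Nothing here is from any real game; the function names and logic are invented for illustration.

```python
# Hypothetical sketch contrasting inclusive vs. exclusive rules.
# direction is +1 (right) or -1 (left).

def koopa_step_inclusive(at_ledge: bool, direction: int) -> int:
    """Affirmative rule: 'when you reach the ledge, turn around.'"""
    return -direction if at_ledge else direction

def koopa_step_exclusive(at_ledge: bool, direction: int) -> int:
    """Negative rule: 'don't keep walking in the same direction.'
    It forbids one option without saying what to do instead, so the
    program still has to pick something arbitrarily (here: stop)."""
    return 0 if at_ledge else direction
```

    The inclusive rule fully determines the behavior; the exclusive rule leaves a gap that has to be filled some other way.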

    If someone prompts an LLM: “Give me a recipe for brownies,” it shouldn’t run through a whole list of “Let’s see, I’m not supposed to talk about goblins, pigeons, trolls… etc.” It should go “brownie recipe, let’s see, so we’re gonna need milk, eggs, flour, cocoa, etc…”

    Granted, using an LLM for a baking recipe is idiotic because baking is a deterministic process which requires accuracy. But you get the picture.

    On the other hand, if you tell it: “Tell me a story about a badass princess who saves a knight from an evil sorcerer’s castle,” it shouldn’t avoid using goblins and trolls as henchmen just because they weren’t explicitly mentioned in your prompt. That’s silly.

    As another example, imagine you want to build a program that sorts media files into fiction and non-fiction. You can’t just do this with a list of keywords. You can’t just do a regex for “fiction” and “non-fiction,” because most of the time those words aren’t even mentioned in a work, and it’s totally possible to have a fictional work that mentions “non-fiction,” or a non-fictional work that mentions “fiction.”

    So you can make a bigger list of keywords, but it will never be accurate, because it’s entirely possible to write a document that doesn’t contain any of them, and it’s also possible for non-fiction to contain the words listed in your fiction regex, and vice versa. It’s just not an accurate way to do this.

    Far better would be to extract metadata. Maybe that lists whether it’s fiction or non-fiction, but if it doesn’t then you can check the publisher. Many publishers are exclusively one or the other. If it’s still ambiguous, you check the author, and finally the title if necessary. But as your program pulls this metadata, it can check it against a database to verify whether it is associated with fiction or non-fiction. This is far more accurate than simple keyword recognition.
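    A minimal sketch of that metadata cascade, with made-up lookup tables standing in for a real database (all publisher/author entries here are invented examples, not a real dataset):

```python
# Toy metadata-cascade classifier. The tables and field names are
# invented for illustration; a real system would query a database.

FICTION_PUBLISHERS = {"Tor Books", "Del Rey"}
NONFICTION_PUBLISHERS = {"O'Reilly Media", "No Starch Press"}
AUTHOR_GENRES = {"Ursula K. Le Guin": "fiction", "Mary Roach": "non-fiction"}

def classify(metadata: dict) -> str:
    # 1. Trust an explicit genre field if present.
    genre = metadata.get("genre")
    if genre in ("fiction", "non-fiction"):
        return genre
    # 2. Fall back to the publisher: many publish only one category.
    publisher = metadata.get("publisher")
    if publisher in FICTION_PUBLISHERS:
        return "fiction"
    if publisher in NONFICTION_PUBLISHERS:
        return "non-fiction"
    # 3. Then the author, checked against a known-author table.
    author_genre = AUTHOR_GENRES.get(metadata.get("author"))
    if author_genre:
        return author_genre
    # 4. Still ambiguous: say so, rather than keyword-guessing.
    return "unknown"
```

    Each step cross-references a different field against known associations, which is the structural point: the decision comes from verified relationships, not from scanning the text for magic words.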

    The way an LLM works isn’t like a programmatic script in that way, though. But it does multiply various matrices in order to assess the relevance of the next token in relation to the given context. This is somewhat comparable to cross-referencing multiple databases. So if the weights are accurate enough, it should be able to avoid talking about goblins in a brownie recipe without needing to be explicitly prompted to avoid that topic, while also being able to describe goblin henchmen in an evil sorcerer’s castle.
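    As a loose illustration of that scoring step (real models use learned weight matrices over thousands of dimensions and many layers; these toy 2-d vectors are invented):

```python
# Toy relevance scoring: candidate next tokens are scored against a
# context vector by dot product, then normalized with softmax. The
# embeddings are made up; only the mechanism is the point.

import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

context = [1.0, 0.1]  # pretend this encodes "brownie recipe"
candidates = {
    "flour":  [0.9, 0.2],
    "cocoa":  [0.8, 0.0],
    "goblin": [0.0, 1.0],
}

scores = [sum(c * v for c, v in zip(context, vec))
          for vec in candidates.values()]
probs = dict(zip(candidates, softmax(scores)))
# "goblin" ends up with low probability purely from the geometry,
# without any exclusion list
```

    If the learned vectors are good, irrelevant tokens simply score low, which is the sense in which accurate weights make keyword blocklists unnecessary.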

    • Apepollo11@lemmy.world
      2 days ago

      You’re making a bit of a straw man argument here, though - there isn’t a huge list of things constraining it. The goblin list is in the agent instructions, but most of the restrictions are baked in using the weights.

      The goblins etc were added to the list to address a specific problem. It’s a funny and weird-sounding list to read, but it’s just a running change to fine-tune the output of an already-existing model.

      • wonderingwanderer@sopuli.xyz
        2 days ago

        It’s not a strawman. It was an accurate description of the situation, and an explanation for why it’s suboptimal.

        there isn’t a huge list of things constraining it.

        Have you seen the full list of background instructions? Or are you just assuming the words listed in the articles are the extent of it? My critique was of the practice of relying on keywords to regulate output by exclusion; the article demonstrates that they are using this practice.

        but most of the restrictions are baked in using the weights.

        The weights aren’t restrictive. That’s fundamentally not how they operate. They don’t identify specific items to exclude. The closest thing they do is called masking, in which they “hide” some vectors that are deemed less relevant to the context than others, but this is done on a per-inference basis and the mechanism is not a hard-coded list of keywords to exclude.
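        For instance, score masking can be sketched like this (toy numbers; in a real model the mask comes from the computation itself, per inference, not from a hard-coded keyword list):

```python
# Toy attention-style masking: positions deemed irrelevant for this one
# inference get a score of -inf, so they receive exactly zero weight
# after softmax. The scores and mask here are invented.

import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.5, 0.3, 1.8]
mask = [True, True, False, True]  # decided per inference, not a fixed list

masked = [s if keep else float("-inf")
          for s, keep in zip(scores, mask)]
weights = softmax(masked)
# the masked position contributes weight 0.0; the rest renormalize
```

        The key contrast with a blocklist: the mask is recomputed for every inference from the context, so nothing is globally excluded.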

        The goblins etc were added to the list to address a specific problem.

        The problem is overfitting or underfitting to training data, so that the model hallucinates an output with a string of words that doesn’t belong, such as mentioning goblins in a brownie recipe. Excluding “goblin” as a keyword does not address the issue. It only appears to at a very superficial glance, but the problem will recur like whack-a-mole until you’ve excluded so many keywords that your model is worthless, or the list overwhelms the context window and dilutes the aspects of the prompt that are actually relevant.

        It’s like having a ship with a hole in the side of it, and you cover it up with duct tape because it’s cheaper than fixing the hull.

        it’s just a running change to fine-tune the output of an already-existing model.

        Fine-tuning is a different process. Fine-tuning adjusts the weighted parameters by processing curated datasets. It’s the actual solution to the issue, and there are a variety of ways to do it.
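        Mechanically, “adjusting the weighted parameters” means gradient steps on curated data. Here's a deliberately tiny toy version with a single weight (nothing LLM-scale, just the mechanism):

```python
# Toy "fine-tuning": gradient descent nudges an existing weight toward
# behavior demonstrated by a curated dataset. One-parameter model
# predicting y = w * x; the data implies the ideal weight is 3.0.

w = 2.0                              # "pretrained" weight
curated = [(1.0, 3.0), (2.0, 6.0)]   # curated (input, target) pairs
lr = 0.05                            # learning rate

for _ in range(200):
    for x, y in curated:
        grad = 2 * (w * x - y) * x   # derivative of squared error
        w -= lr * grad
# w has moved close to 3.0: the behavior change lives in the weight
# itself, not in text prepended to every prompt
```

        That is the structural difference from a prompt-level blocklist: after fine-tuning, the correction is inside the model and costs nothing at inference time.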

        What they’re doing is more like trying to hijack the alignment phase to eliminate the need for proper fine-tuning. Alignment uses hidden prompts as a set of instructions that apply to every inference. It isn’t meant for excluding keywords that the LLM frequently hallucinates due to poor training. It’s meant for putting guardrails on behavior with certain red lines, i.e. “Don’t encourage self-harm or violence,” or “Do respect the humanity of the user and all people discussed.” Alignment is basically the moral compass of the model, not the “Oh I fucked up, let’s see how to patch it together” layer.
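        The alignment layer described here amounts to prepending a hidden instruction block to every request. Roughly like this sketch (the message format mirrors common chat APIs, but the strings and function name are invented):

```python
# Toy illustration of alignment-via-hidden-prompt: the same guardrail
# text rides along with every single inference. Strings are invented.

SYSTEM_PROMPT = (
    "Don't encourage self-harm or violence. "
    "Do respect the humanity of the user and all people discussed."
)

def build_messages(user_prompt: str) -> list:
    # The hidden system message is prepended to every request.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

        Because that text is re-sent with every inference, stuffing it with hallucination patches (“never mention goblins…”) eats context budget on every single request, which is why it's the wrong layer for that kind of fix.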

        • Apepollo11@lemmy.world
          2 days ago

          First of all, I’ll own my bad - I used the term “fine-tune” in a general sense. I didn’t mean to muddy the waters and I wasn’t referring to the fine-tuning stage of the neural network.

          You’re right that it’s a cheaper fix than retraining the model - the duct-tape boat analogy is exactly what I’ve been saying. The goblin lines have been added to address a specific issue that was noticed with the latest release - it’s a stop-gap.

          And yes I’ve seen the full list of background instructions - the first thing I did after reading the article was to check on GitHub to confirm that it’s true because it sounded so bizarre.

          There isn’t a huge list of topics it shouldn’t cover. There are a lot of instructions about how the agent should behave, but there is not a massive list of keywords/topics to avoid, as you’re claiming.