First of all, I’ll own my bad - I used the term “fine-tune” in a general sense. I didn’t mean to muddy the waters and I wasn’t referring to the fine-tuning stage of the neural network.
You’re right that it’s a cheaper fix than retraining the model - the duct tape boat analogy is exactly what I’ve been saying. The goblin lines were added to address a specific issue noticed with the latest release - it’s a stop-gap.
And yes, I’ve seen the full list of background instructions - the first thing I did after reading the article was check on GitHub to confirm it was true, because it sounded so bizarre.
There isn’t a huge list of topics it shouldn’t cover. There are a lot of instructions about how the agent should behave, but there is no massive list of keywords / topics to avoid, as you’re claiming.