You took those quotes wildly out of context. Of course there is a hard limit on how much information can be extracted from data, and clever processing won’t break that limit. But only in simple settings do we have proofs that a given statistical inference method makes optimal use of the data. For complicated systems like neural nets it is basically impossible to prove such optimality, and in fact the models are almost certainly not using the data optimally. Processing can help. A lot.
They aren’t out of context, and you have just said the same thing. Data processing can help with removing noise, but it can’t create information, or extract information that wasn’t there in the first place. In fact – again, as you said – it can end up destroying part of the original information.
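What you’re describing has a standard formal statement, the data processing inequality: if a processed signal Z depends on a source X only through the data Y (a Markov chain X → Y → Z), then

    I(X; Z) ≤ I(X; Y)

so no choice of processing step can increase the information about X beyond what Y already carries. (Just the textbook statement, for reference.)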
LLMs extract word correlations from textual data. Already in this step they lose information: they can’t capture correlations beyond a certain (albeit large) length, and even at shorter lengths they don’t capture all of them. And in generating output they insert spurious correlations that replace (destroy) some of the original ones. That output therefore contains less information than the original training data, so a new LLM trained on such output will give back even less.
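You can see the mechanism in a toy model (my own sketch, nothing LLM-specific: a categorical distribution stands in for the model, and refitting on its own samples stands in for retraining on generated output):

    # Toy sketch: repeatedly refit a categorical "model" on finite
    # samples drawn from the previous model. Sampling noise compounds
    # across generations and entropy tends to drift downward.
    import numpy as np

    rng = np.random.default_rng(0)

    def entropy_bits(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    k, n_samples, generations = 50, 200, 30
    p = np.full(k, 1.0 / k)                     # "real data": uniform over k symbols

    for g in range(generations):
        counts = rng.multinomial(n_samples, p)  # sample from the current model
        p = counts / n_samples                  # refit on its own output
        print(f"gen {g:2d}  entropy = {entropy_bits(p):.3f} bits")
    # Once a symbol's estimated probability hits zero it can never
    # reappear, so information about the original distribution is
    # irreversibly lost.

The downward drift is exactly the "spurious correlations replace the original ones" effect: finite-sample noise gets baked in at every generation.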
No one feeds random LLM output straight back, though. The whole idea of reinforcement learning is that you take some model output, check whether it is good, and if it is, push the model in that direction.
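In sketch form (the names here are hypothetical: sample_candidates, verifier, and finetune_on are stand-ins for a real sampler, checker, and update step, not any particular library):

    # Verifier-filtered loop: generate, check, keep only what passes,
    # train on the survivors. Nothing unverified is ever fed back.
    def train_with_verifier(model, prompts, sample_candidates, verifier,
                            finetune_on, rounds=3):
        for _ in range(rounds):
            good = []
            for prompt in prompts:
                for cand in sample_candidates(model, prompt, n=8):
                    # verifier = run the unit tests, check the proof,
                    # compare against a known answer, etc.
                    if verifier(prompt, cand):
                        good.append((prompt, cand))
            model = finetune_on(model, good)    # push toward verified outputs
        return model

As long as the verifier is cheaper and more reliable than generation, the kept data is better than raw model output.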
As long as you believe that, e.g., it’s easier to verify a mathematical result than to come up with one, RL should work.
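The classic illustration of that asymmetry is factoring: checking a claimed factorization is one multiplication, while finding it takes a search. A toy version:

    # Verification vs. generation: checking p * q == n is one
    # multiplication; finding a factor by trial division takes up to
    # sqrt(n) steps.
    def verify_factors(n, p, q):
        return p > 1 and q > 1 and p * q == n   # cheap

    def find_factor(n):                         # naive O(sqrt(n)) search
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return None                             # n is prime

    n = 999_983 * 1_000_003                     # product of two primes
    assert verify_factors(n, 999_983, 1_000_003)  # instant
    print(find_factor(n))                       # about a million iterations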
It will still, over time, give fewer and fewer good results to be fed back into it.
Reinforcement learning makes the model better over time, so why should there be fewer and fewer good results?
If you’re talking about the rate of improvement going down, then yes, of course. That’s bound to happen (unless you have an actual intelligence explosion, but in that case you won’t know what “good results” even mean anyway).