

Model collapse comes from training only on data generated by previous generations of the model itself.
All that’s needed to avoid it is to include training data that isn’t directly from the previous “generation” of the LLM in question. What drives model collapse is the loss of information from generation to generation, so you just need to keep the training data “fresh” with material that wasn’t generated by the earlier version of your model.
You could do that with archived material from previous training runs. For more recent events, you could draw on social media feeds. The Fediverse, for example, would probably be a perfectly fine source of new material. Sure, there’s some AI-generated stuff mixed in, but that’s not “poison.”
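To make that concrete, here’s a rough sketch of the idea in Python. This isn’t anyone’s actual pipeline; the function names and the 10% cap are made up purely for illustration:

```python
import random

def build_corpus(archived, fresh, self_generated, self_cap=0.1):
    """Mix archived human-written data with fresh outside data, and cap
    how much text from the model's own previous generation gets in.
    The 10% cap is an arbitrary illustrative number, not a tuned value."""
    human = list(archived) + list(fresh)
    # Cap self-generated text so it can never dominate the mix.
    k = min(int(len(human) * self_cap), len(self_generated))
    corpus = human + random.sample(list(self_generated), k)
    random.shuffle(corpus)
    return corpus
```

The point is just that each generation keeps seeing plenty of data that didn’t come from its own predecessor.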
As I mentioned, the article that demonstrated model collapse did it under a very artificial set of circumstances. That’s not how real AI training is done.

Alright, so instead of simply saying “include external data in your training run,” extend that to “and also filter the data to exclude erroneous stuff.” That’s a routine part of curating training data in real-world AI training as well; I was already writing a lot, so I didn’t feel that adding more detail there would have enhanced it.
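As a sketch of what that kind of filtering might look like (the thresholds and heuristics here are placeholders, nothing like the full set of signals a real lab would use):

```python
def quality_filter(docs, min_len=200, max_repeat_ratio=0.7):
    """Drop documents that fail cheap quality heuristics. Real pipelines
    use many more signals (dedup, classifiers, perplexity scores);
    these two checks are just illustrative."""
    kept = []
    for doc in docs:
        if len(doc) < min_len:
            continue  # too short to be useful training text
        words = doc.split()
        if not words:
            continue  # whitespace-only document
        # Crude repetition check: degenerate model output tends to loop,
        # so a very low share of unique words is a red flag.
        repeat_ratio = 1 - len(set(words)) / len(words)
        if repeat_ratio > max_repeat_ratio:
            continue
        kept.append(doc)
    return kept
```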
The basic point remains the same: real-world training accounts for the things that were necessary to force model collapse in that old paper I linked. It’s a solved problem. We can see that it’s solved by the fact that AI models continue to get better despite an increasing amount of AI-generated data in the pool that training data is drawn from. Indeed, most models these days are trained partly on synthetic data that is intentionally AI-generated.
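For what deliberate synthetic data looks like, here’s the rough shape of it; `generate_with_teacher` is a stand-in for whatever model actually produces the examples, not a real API:

```python
def make_synthetic_set(prompts, generate_with_teacher, filter_fn):
    """Deliberate synthetic data: generated on purpose by a chosen model,
    filtered before use, and added to the corpus at a controlled ratio.
    That's the opposite of blindly recycling your own raw outputs."""
    candidates = [generate_with_teacher(p) for p in prompts]
    return filter_fn(candidates)

# Hypothetical usage tying the sketches together:
# corpus = build_corpus(archived, fresh,
#                       make_synthetic_set(prompts, teacher_model, quality_filter))
```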
A lot of people really want to believe that AI is going to just “go away” somehow, and this notion of model collapse is a convenient way to support that belief. So it’s very persistent and makes for great clickbait. But it’s just not so. If nothing else, the exact same training data that was used to create those earlier models is still around. AI models are never going to get worse than they are now, because if a new model did turn out worse we’d just throw it out and go back to the earlier one that worked better, perhaps re-training on the same data with better training techniques or model architectures.