It just kinda makes no sense to me. How can you improve the framerate by predicting what the next frame should look like, without the overhead of that prediction costing more than just rendering the scene normally? Even the simplified version of the concept sounds like pure magic. And yet… it's real.


They never do more than 4 predicted frames per 1 fully rendered frame, and usually it's just 1:1.
That, and the game can flag frames that are too different (camera cuts) to mitigate this problem.
What the game supplies is the current frame plus motion vectors, but the frame-gen bits take over how the frames are displayed onscreen. This is where the extra latency comes from: at worst you are seeing one true frame behind what the game is rendering, while the presentation layer generates the intermediate frame(s).
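A toy sketch of that presentation-layer idea in Python, to make the latency point concrete. All the names here (`Frame`, `present`, `scene_cut`, `generated_per_real`) are made up for illustration, and the "generated" frames are just labels; a real implementation blends pixels guided by the game-supplied motion vectors on the GPU. The key point it shows: the presenter can't display a real frame until the *next* real frame has arrived, which is the one-frame delay described above.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    t: float               # game time this frame represents
    image: str             # stand-in for actual pixel data
    scene_cut: bool = False  # hypothetical flag: too different, skip generation

def present(frames, generated_per_real=1):
    """Toy interpolating presenter.

    To generate frames *between* real frame N and N+1, it must hold
    back frame N until N+1 exists -- so the display always runs one
    true frame behind the game.
    """
    out = []
    for prev, cur in zip(frames, frames[1:]):
        out.append(prev.image)  # real frame, shown one frame late
        if not cur.scene_cut:   # flagged frames (camera cuts) get no in-betweens
            for i in range(1, generated_per_real + 1):
                alpha = i / (generated_per_real + 1)
                # Real frame gen warps/blends using motion vectors here;
                # this just records which pair was blended and how far along.
                out.append(f"gen({prev.image}->{cur.image},{alpha:.2f})")
    out.append(frames[-1].image)  # newest real frame finally displayed
    return out
```

Running `present` on three frames where the third is a camera cut shows real frames interleaved with generated ones, and no generated frame across the cut; raising `generated_per_real` to 3 gives the 4:1-style display ratio mentioned above.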