I looked at the code some more. For the Qt engine, playback is paced with a QTimer on a fixed interval. I added milliseconds-since-epoch logging to the timer callback, and the invocations really are spaced incorrectly, which matches the real-FPS readout.
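To be clear about what I mean by "real FPS": it's just the average rate over the logged callback timestamps. A minimal helper along these lines (the function name is mine, not from the Krita code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical helper: average FPS over a series of logged
// milliseconds-since-epoch callback timestamps.
double measuredFps(const std::vector<long long>& epochMs) {
    if (epochMs.size() < 2) return 0.0;
    const long long spanMs = epochMs.back() - epochMs.front();
    // (n - 1) frame intervals over the whole logged span.
    return (epochMs.size() - 1) * 1000.0 / spanMs;
}
```

So four callbacks logged at 0, 33, 66, 99 ms come out as roughly 30.3 FPS.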
Comparing it with the 5.1.5 implementation: that one also used a QTimer, but in single-shot mode, rearming it each time with the interval to the next expected frame. I did the same in Krita master, but the result was essentially the same as with the fixed-interval timer.
Especially with a frame rate around 30, the results are just wild:
- frame rate 30 → 21.5 real FPS
- frame rate 31 → 24 real FPS
- frame rate 32 → 32 real FPS (finally!)
I’m starting to wonder whether this might be a regression in Qt and QTimer itself? I think Krita 5.1.5 used an older Qt version. It’s a far-fetched theory, though, and I couldn’t just rebuild the old version against the current dependencies to check it.
I don’t want to bash the existing implementation, but this pacing scheme seems a bit too simplistic to me. QTimer may just be too inaccurate for this, since it works with integer milliseconds. For something like this I would expect microsecond precision, and the interval shouldn’t always be exactly the same, because we can drift slightly out of sync with the ideal frame rate and need to compensate. On the other hand, QTimer seemed to work just fine in 5.1.5…