Interesting. I had no idea that as I changed my fps_max, I also changed my physics stepsize. But they aren't really constant, are they, Marco? I can change fps_max and lock it through a script, after all.
Let me run some numbers:

fps_max=20 ---------> 50 ms stepsize
fps_max=40 ---------> 25 ms stepsize

Both reasonable values... but what about fps_max=240 as the default and fps_max=400 as the maximum:

fps_max=240 ---------> 4.2 ms stepsize
fps_max=400 ---------> 2.5 ms stepsize

These *seem* like awfully small stepsizes...
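Assuming the stepsize is simply the inverse of fps_max (which is what the numbers above imply), a quick sketch to reproduce them:

```python
def stepsize_ms(fps_max):
    """Physics stepsize in milliseconds, assuming stepsize = 1 / fps_max."""
    return 1000.0 / fps_max

for fps in (20, 40, 240, 400):
    print(f"fps_max={fps:3d} -> {stepsize_ms(fps):.1f} ms stepsize")
# fps_max= 20 -> 50.0 ms stepsize
# fps_max= 40 -> 25.0 ms stepsize
# fps_max=240 -> 4.2 ms stepsize
# fps_max=400 -> 2.5 ms stepsize
```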

Three conclusions I'm drawing:

a) This *could* explain a lot of the divergent behaviour we are seeing as a result of using different fps_max rates, or of using a step size inadequate for our game/sim. For example, people who leave their fps_max at the default will have 240 physics steps calculated each second... hence the heavy physics load a lot of people have observed.

b) I think Bholtz is onto something: your formula seems to suggest an attempt to lock the physics simulation to the framerate, thus hopefully achieving one PE update per frame update. But wouldn't this lead to a de-synchronization if the system can't pump out frames that fast? E.g., at fps_max=240, the PE is updating every ~4 ms. But on a slow system, wouldn't trying to keep up that framerate mean that a particular frame could last MORE than the 4 ms the PE is updating for?
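To make the desync in (b) concrete, here is a toy accumulator model (my own sketch, not the engine's actual loop): if a frame takes longer than the fixed step, the simulation falls behind and has to run several physics steps inside that one frame to catch up.

```python
def physics_steps_for_frame(frame_time_ms, step_ms):
    """Toy fixed-timestep accumulator: how many physics steps of size
    step_ms must run after a frame that took frame_time_ms, if the
    simulation is to stay in sync with wall-clock time."""
    accumulator = frame_time_ms
    steps = 0
    while accumulator >= step_ms:
        accumulator -= step_ms
        steps += 1
    return steps

step = 1000.0 / 240   # ~4.2 ms step at fps_max=240
print(physics_steps_for_frame(20.0, step))  # a 20 ms frame owes 4 catch-up steps
print(physics_steps_for_frame(4.0, step))   # a 4 ms frame owes none yet
```

So instead of one PE update per frame, a slow frame forces a burst of updates, which makes the slow frame even slower.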

c) I always thought the stepsize would somehow adjust to the CURRENT or AVERAGE framerate rather than lock to a preset MAXIMUM value. If fps_max=100 but your system can only pump out fps=20, then the PE is IMO making 5 times as many calculations as necessary, and these superfluous calculations also carry with them 5 times as many errors.
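Putting hypothetical numbers on (c) (these values are just for illustration, not measurements):

```python
fps_max = 100     # stepsize is locked to this preset maximum
actual_fps = 20   # what the slow system actually renders

pe_steps_per_second = fps_max   # the PE steps at the preset rate regardless
frames_per_second = actual_fps  # one PE update per rendered frame would suffice

overwork = pe_steps_per_second / frames_per_second
print(overwork)  # 5.0 -> five PE updates for every frame actually shown
```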


I'm off to test some of the hypotheses I posited above; specifically, an fps_max vs. PE stability analysis. Another interesting nugget, Marco, thanks!