That's not quite the case, Gentoo.
VR is a game of correction for angular motion. I think we can all agree on that.
Sensor-based VR ONLY needs to know the focal length of the lens; with that and an accelerometer it can calculate the sensor motion needed to compensate. Since all modern lenses communicate that information already, nothing more is needed to "tweak" the system on a lens-by-lens basis.
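To make the "focal length is all you need" point concrete, here is a rough sketch of that calculation. The geometry is just small-angle trig: an angular shake of theta moves the image across the sensor by roughly f * tan(theta), so that is the distance the sensor has to shift to cancel it. The 0.1-degree shake figure is a made-up illustration, not a real measurement.

```python
import math

def sensor_shift_mm(focal_length_mm, shake_deg):
    """Sensor displacement needed to cancel an angular shake of
    shake_deg degrees for a lens of the given focal length.
    The image displacement on the sensor is f * tan(theta)."""
    return focal_length_mm * math.tan(math.radians(shake_deg))

# The same 0.1-degree shake needs far more sensor travel at 300mm than at 35mm:
print(round(sensor_shift_mm(35, 0.1), 3))   # ~0.061 mm
print(round(sensor_shift_mm(300, 0.1), 3))  # ~0.524 mm
```

Note the required travel scales (almost exactly, for small angles) linearly with focal length, which is the whole crux of the argument below.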
The beauty of PatMann's theory, and why I like it so much, is that it points to where sensor-motion based VR systems would break down: the limited range of motion available to the sensor due to physical constraints within the body. (Longer focal length = much more sensor motion needed per arc-minute of shake.)
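You can run the same trig in reverse to see that breakdown point: given some fixed travel budget for the sensor, the largest angular shake the body can cancel shrinks as focal length grows. The 0.5mm travel figure here is a hypothetical number purely for illustration; I have no idea what any real body's actual limit is.

```python
import math

def max_correctable_shake_deg(focal_length_mm, sensor_travel_mm=0.5):
    """Largest angular shake (in degrees) a sensor-shift system could
    cancel, given a hypothetical travel budget of sensor_travel_mm
    in one direction. Inverse of shift = f * tan(theta)."""
    return math.degrees(math.atan(sensor_travel_mm / focal_length_mm))

# The same travel budget buys far less angular correction at 300mm:
print(round(max_correctable_shake_deg(35), 3))   # ~0.818 deg
print(round(max_correctable_shake_deg(300), 3))  # ~0.095 deg
```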
With a lens-based (optical) VR system the math of the implementation is beyond me, but it is apparent that it would need to vary depending on which elements move during zoom and/or focus.
EDIT: If my talk of this being an angular issue at heart is confusing, I can try to draw up some diagrams of the situation and the resulting math tonight. It would be helpful to know whether you grok trig, though.
EDIT 2: Though if you dig trig, I think you can see where I am going with this: as focal length increases, the velocity of the image's motion across the sensor increases (another possible point of failure in sensor-based VR), and therefore the range of motion needed (velocity times exposure time) grows in proportion to focal length. This is why wider lenses can normally be hand-held at slower shutter speeds: the same angular movement at 85mm and at 35mm produces vastly different distances of blur on the sensor, all other things being equal. Since it is that distance of movement across the sensor/film over the exposure time that we perceive as blur, a shorter focal length lens is less prone to blur.
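The velocity-times-exposure-time point above can be sketched the same way: the angle swept during the exposure is angular velocity times shutter time, and the blur distance on the sensor is f * tan(angle). The 1 deg/s shake rate and 1/30s shutter are invented numbers just to show the 35mm-vs-85mm comparison.

```python
import math

def blur_mm(focal_length_mm, shake_deg_per_sec, exposure_s):
    """Blur distance swept across the sensor during the exposure:
    angular velocity * exposure time gives the angle swept, and
    f * tan(angle) gives the distance swept on the sensor."""
    angle_deg = shake_deg_per_sec * exposure_s
    return focal_length_mm * math.tan(math.radians(angle_deg))

# Same 1 deg/s shake, same 1/30s shutter, two focal lengths:
print(round(blur_mm(35, 1.0, 1/30), 4))  # ~0.0204 mm
print(round(blur_mm(85, 1.0, 1/30), 4))  # ~0.0495 mm
```

Same hands, same shutter speed, and the 85mm frame smears the image across the sensor almost 2.5x as far, which is exactly the old handholding rule of thumb falling out of the trig.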