Temporal reprojection

I’m planning to experiment with temporal reprojection as a way to increase the framerate of vvvv projects (using estimation to fill in frames)

Could this be feasible in vvvv? One problem I see is that you want the main rendering to happen at one framerate (e.g. 30 fps) and the reprojection and output to screen at a different framerate (e.g. 60 fps)

Is there a way to allow the main rendering to take place over 2 frames?

Maybe one solution involves two instances of vvvv, though I imagine sync between the two would be an issue and might negate any benefit from the reprojection

There are usually numerous other ways to increase performance in a patch; proper debugging is the best start. As for inconsistent framerates, they usually result in data loss between frames, e.g. a bang just gets eaten, plus some other random issues.
A better start would probably be to describe the source of your problem and sort that out, instead of creating more complex problems on top.

there’s an example of doing that shown in this wave simulation patch.

it just toggles the inner renderer framewise iirc

@antokhio - This isn’t to fix a particular problem patch. It’s really as a general approach for dealing with the super high framerates needed for VR devices. If you think you can optimise your projects to 90fps then fair enough ;)

@sebl - That did occur to me, but I didn’t investigate further as I presumed that disabling the renderer every other frame (which I think is the approach used) would stop the rendering process for that frame entirely, so the render work would still have to fit inside a single output frame rather than being spread over two. Whaddaya think?

Sharing the texture between the instances is super fast - in fact, the two instances only share the GPU memory pointer to a texture that is already there.
If you want to aim this improvement specifically at VR solutions, it might be interesting to use the depth map and RGB frame to react to head movement very fast.

My concern is more the sync between the two instances. They will be running independently of each other rather than at a locked multiple. Vux has told me before that multiple instances can cause GPU slowdowns anyway for similar reasons (I think)

Worth a go though!

And I like the depth map idea. I wonder what the best way to reproject would be? You could build a mesh from the depth + rgb and then adjust the view, but you might notice gaps due to missing depth information, and it might not be performant.
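
Something like this grid-displacement vertex shader is what I mean by the mesh route (a rough, untested sketch; all the names and the matrix setup are just my assumptions):

```hlsl
// Rough, untested sketch of the mesh route: draw a dense screen-aligned
// grid, displace each vertex by the stored depth, and rasterize it under
// the new view. All names here are invented for illustration.
Texture2D DepthTex : register(t0);   // depth of the last rendered frame
Texture2D ColorTex : register(t1);   // its colour buffer
SamplerState PointSampler : register(s0);

cbuffer cbReproject : register(b0)
{
    float4x4 tOldViewProjInv; // unprojects grid vertices to world space
    float4x4 tNewViewProj;    // re-projects them under the new head pose
};

struct VsOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

// the input grid covers uv 0..1, ideally one vertex per few depth texels
VsOut VS(float2 uv : TEXCOORD0)
{
    float z = DepthTex.SampleLevel(PointSampler, uv, 0).r;

    // rebuild the world position this texel was rendered from
    float4 ndc = float4(uv.x * 2 - 1, 1 - uv.y * 2, z, 1);
    float4 world = mul(ndc, tOldViewProjInv);
    world /= world.w;

    VsOut o;
    o.pos = mul(world, tNewViewProj); // rasterize with the new camera
    o.uv  = uv;
    return o;
}

float4 PS(VsOut i) : SV_Target
{
    return ColorTex.SampleLevel(PointSampler, i.uv, 0);
}
```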

I wonder if there is a pixel-only approach?
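
Maybe a gather pass like this, which estimates each output pixel's motion from the old depth and fetches from the opposite offset? A single-step approximation, so it would smear at depth edges (again untested, names invented):

```hlsl
// Pixel-only reprojection sketch (untested, names invented). Instead of
// scattering a mesh, each output pixel estimates its own motion from the
// previous frame's depth and gathers the colour from the opposite offset.
Texture2D ColorTex : register(t0);  // last fully rendered colour frame
Texture2D DepthTex : register(t1);  // its depth buffer
SamplerState PointSampler : register(s0);

cbuffer cbReproject : register(b0)
{
    // maps old clip space to new clip space, assuming row vectors:
    // tOldToNew = inverse(oldViewProj) * newViewProj
    float4x4 tOldToNew;
};

float2 ClipToUv(float4 clip)
{
    clip.xy /= clip.w;
    return float2(clip.x * 0.5 + 0.5, 0.5 - clip.y * 0.5);
}

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // approximate the motion at this pixel using the old depth at the same uv
    float z = DepthTex.SampleLevel(PointSampler, uv, 0).r;
    float4 oldClip = float4(uv.x * 2 - 1, 1 - uv.y * 2, z, 1);

    // where that old pixel would land under the new head pose
    float2 newUv = ClipToUv(mul(oldClip, tOldToNew));

    // one fixed-point step: fetch the old colour from the inverse offset
    float2 fetchUv = uv - (newUv - uv);
    return ColorTex.SampleLevel(PointSampler, fetchUv, 0);
}
```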

Looking at your problem (keeping the output renderer running at 90+ while a bunch of CPU computation runs slower than that), the only solution I can think of is one instance that creates and manipulates all the data and CPU stuff, then passes those values via shared memory to the render instance, which does as little as possible!

VR devices usually tackle this problem with so-called timewarping.
I did quite some research on that topic, but it seems it is not possible with native vvvv.

One solution might be to implement it yourself, which would result in a dedicated renderer for VR that internally does timewarping.

I started such a node some time ago but did not finish it. If you want to have a look, head over there.
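
The core idea is a rotation-only warp, roughly like this (a hypothetical sketch, not the actual code of that node; all names are invented):

```hlsl
// Rotation-only timewarp sketch (hypothetical, not the node's actual code).
// It re-samples the last rendered eye buffer under the newest head
// orientation; without depth it can only correct rotation, not translation.
Texture2D EyeTex : register(t0);       // last rendered eye buffer
SamplerState LinearSampler : register(s0);

cbuffer cbTimewarp : register(b0)
{
    // rotates a view ray from the new orientation back into the old one
    // (assumed to be built on the CPU from the two head poses)
    float4x4 tDeltaRotation;
    float4x4 tProj;     // the eye projection the frame was rendered with
    float4x4 tProjInv;
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // view ray of this output pixel under the *new* pose
    float4 ndc = float4(uv.x * 2 - 1, 1 - uv.y * 2, 1, 1);
    float4 ray = mul(ndc, tProjInv);
    ray /= ray.w;

    // rotate the ray back into the pose the frame was rendered with
    float3 oldRay = mul(float4(ray.xyz, 0), tDeltaRotation).xyz;

    // project it through the original projection to find the old uv
    float4 oldClip = mul(float4(oldRay, 1), tProj);
    float2 oldUv = float2(oldClip.x / oldClip.w * 0.5 + 0.5,
                          0.5 - oldClip.y / oldClip.w * 0.5);

    return EyeTex.SampleLevel(LinearSampler, oldUv, 0);
}
```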

GPU architecture is a little bit more complex than that; there is much more happening under the hood than pointer sharing (access synchronization is just a small part of it), so multiple instances can actually cost you more…

@mrboni:

Temporal re-projection will be artefact hell on fast head movements, so before you think about that, I would personally look at optimizing your scene first. Achieving 90 fps at twice full HD is not a big deal with a reasonably good graphics card nowadays, so time to optimize patches and post processors ;)

In no particular order (and non-exhaustive):

  • Cull as much as you can (high poly geometry)
  • Instance as much as you can (low to medium poly geometry)
  • Cull instances (medium poly geometry)
  • Maximize early depth rejection (e.g. your draw order is important).
  • Take care of texture formats (fp32 is generally expensive, especially the four-channel flavour).
  • Take care of samplers in post processing (most of the time you can use point sampling, or no sampler at all and just use Load; see the sketch after this list).
  • Use stencil maps to optimize post processors (most basic optimization in the world).
  • Optimize constant/structured buffers in forward shaders to match frequency of updates.
  • Profile your code; if you are badly CPU-bound, temporal re-projection will not help you ;)
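
To illustrate the Load point above, a minimal post-processor sketch (names made up):

```hlsl
// Minimal sketch of the Load tip (names made up). In a fullscreen post
// pass you usually address texels 1:1, so you can skip sampler states and
// filtering entirely and fetch by integer pixel coordinate.
Texture2D InputTex : register(t0);

float4 PS(float4 pos : SV_Position) : SV_Target
{
    // SV_Position.xy is already in pixel coordinates for a fullscreen quad
    return InputTex.Load(int3(pos.xy, 0));
}
```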

Thanks for the input guys

Afaik timewarp is a related technique, but for reducing latency rather than upsampling the framerate. Was your attempt above to make a ‘direct mode’ renderer?

Thanks for the optimisation advice, Vux; some very useful bits there, particularly the draw order and the use of the depth stencil for post stuff.

90 fps still seems like a crazy high threshold to reach though. Do you not think there is an approach that could predict intermediate frames to get from 45 to 90 fps with tolerable artefacts?

This looks interesting, if heavy… http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.230.344&rep=rep1&type=pdf

You could add latency and blend, or see if you could do optical flow, but the best results would come from just making it run fast!
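
The “add latency and blend” option is basically just this (a sketch; plain crossfading, not motion-compensated, so moving edges will ghost):

```hlsl
// "Add latency and blend" sketch: hold the last two rendered frames and
// output a crossfade on the in-between display frames. Plain blending,
// not motion-compensated interpolation, so expect ghosting on movement.
Texture2D PrevFrame : register(t0);
Texture2D CurrFrame : register(t1);
SamplerState LinearSampler : register(s0);

cbuffer cbBlend : register(b0)
{
    float BlendFactor; // 0 = previous frame, 1 = current, 0.5 = halfway
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 a = PrevFrame.Sample(LinearSampler, uv);
    float4 b = CurrFrame.Sample(LinearSampler, uv);
    return lerp(a, b, BlendFactor);
}
```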