Freeframe versus pixelshader performance (stacked)

Hello,

It is possible to make a renderer render into a texture, which makes it possible to ‘stack’ multiple pixel shaders on top of each other.

The same thing is possible with freeframe plugins.

I was wondering if anyone knows something about the performance of these two options. Would it be faster to use pixel shaders, or is rendering to a texture an expensive operation? If it is, would freeframe plugins be the better option?

Does anyone have experience with this?

As freeframes run on the CPU, they are by design much slower, especially at high resolutions. Also, freeframes run at the video's fps, while shaders have no problem running at 60 fps…

Rendering into a texture and using it on another geometry with a shader is super fast, because the texture never leaves the GPU. Texture rendering only gets slow when you have to get the data back into CPU memory, e.g. when you save it to disk or use the Pipet node.
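
To make that concrete, here is a minimal sketch of one such ‘stacked’ pass in vvvv-style EX9/DX9 HLSL (the tWVP semantic, the uiname annotation, and the names Tex/Samp follow the common vvvv effect template; treat it as an illustrative sketch, not the exact stock file):

```hlsl
// One "stacked" pass: samples the result of an upstream Renderer
// (fed in as a texture) and applies a simple tweak to it.

float4x4 tWVP : WORLDVIEWPROJECTION;     // transform supplied by vvvv

texture Tex <string uiname="Texture";>;  // becomes a texture input pin
sampler Samp = sampler_state
{
    Texture   = (Tex);
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = LINEAR;
};

struct vs2ps
{
    float4 Pos   : POSITION;
    float2 TexCd : TEXCOORD0;
};

vs2ps VS(float4 Pos : POSITION, float2 TexCd : TEXCOORD0)
{
    vs2ps Out;
    Out.Pos   = mul(Pos, tWVP);   // plain pass-through transform
    Out.TexCd = TexCd;
    return Out;
}

float4 PS(vs2ps In) : COLOR
{
    // sample the upstream result; this read happens entirely on the GPU
    float4 c = tex2D(Samp, In.TexCd);
    return float4(1 - c.rgb, c.a);   // example effect: invert the colors
}

technique TInvert
{
    pass P0
    {
        VertexShader = compile vs_1_1 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}
```

You can chain as many passes like this as you want: as long as each stage reads its input as a texture and renders to another render target, nothing ever crosses back into CPU memory.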

Btw, not all freeframes run on the CPU, as you surely know. What about support for FFGL? Is that even possible in vvvv?
(Another wild idea: embedding vvvv patches or shaders via FFGL into other video software, e.g. Resolume Avenue, just as VDMX supports Quartz Composer patches on OS X.)
:D

It is possible to make a renderer render into a texture, which makes it possible to ‘stack’ multiple pixel shaders on top of each other.

Yes. Just come out of the Renderer node's far-right pin, ‘EX9 Out’, into a DXTexture node.
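
On the shader side, the DXTexture output then connects to a texture input pin of the next effect; vvvv creates that pin from a texture parameter declaration like the one in the sketch above, roughly:

```hlsl
texture Tex <string uiname="Texture";>;  // exposed as an input pin by vvvv
sampler Samp = sampler_state { Texture = (Tex); MinFilter = LINEAR; MagFilter = LINEAR; };
```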

The same thing is possible with freeframe plugins.

Yes. Come out of the Video Output pin into a VideoTexture node.

Thanks for the info!