How do I frame-interlace video for NVIDIA 3D Vision?

I want to display stereoscopic 3D in vvvv using NVIDIA 3D Vision.
I have two video streams from two webcams (30 fps each), and to display them
I have to create a single video stream that alternates frames between those
two streams at 120 fps. I tried using a Switch node controlled by a FrameCounter, but my system locks up when trying to switch the streams that fast.
Is there another way to interlace the video streams that works in realtime?
Thank you for your help.

yeah, you can write a shader for that ;]
guess it’s also possible to do it with a Quad and Cons (EX9.Texture), but that sounds like it needs some special plugin…
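Whichever route you take (shader or nodes), the per-frame selection is just parity on the frame counter: even display frames show the left stream, odd frames the right, so at 120 Hz output each eye refreshes at 60 Hz and each 30 fps camera frame is simply shown twice for its eye. A minimal sketch of that logic (plain Python, purely illustrative; `pick_eye_frame` is not a vvvv node):

```python
def pick_eye_frame(frame_index, left_frame, right_frame):
    # Even display frames -> left-eye image, odd -> right-eye image.
    # At 120 Hz output this gives each eye 60 Hz, and each 30 fps
    # camera frame is shown on two consecutive slots for its eye.
    return left_frame if frame_index % 2 == 0 else right_frame

# The resulting 120 Hz display sequence alternates strictly L,R,L,R...
sequence = [pick_eye_frame(i, "L", "R") for i in range(6)]
print(sequence)  # ['L', 'R', 'L', 'R', 'L', 'R']
```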

yeah, that can be done by using two quads and switching them on and off, or by controlling their alpha.
but i doubt that the VideoIn will stay in sync with the graphics card. you have to find a way to always display the right camera image when the left or right shutter of the glasses is closed. if you don’t sync that, and vvvv misses one frame, you will have the wrong camera image displayed for the left/right eye…
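To see why a single missed frame matters: the glasses shutter in a fixed L,R,L,R rhythm driven by the 120 Hz sync signal, so dropping one image shifts every later image onto the wrong eye. A small simulation (plain Python, purely illustrative):

```python
glasses   = ["L", "R"] * 4                 # fixed shutter rhythm from the 120 Hz sync
intended  = ["L", "R"] * 4                 # images vvvv intends to present
displayed = intended[:3] + intended[4:]    # simulate vvvv missing one frame

# After the drop, every remaining image lands on the wrong eye:
wrong_eye = [g != d for g, d in zip(glasses, displayed)]
print(wrong_eye)  # [False, False, False, True, True, True, True]
```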

Tnx a lot. I’ll try the quad and Cons approach. I hope no frames get lost
and I don’t have to sync any VideoIn.

Hey Mach,
Have you had any luck yet? I just received my 3D Vision kit and I really want to have a go :)

hmmm, not sure, but some cameras have a trigger function, like the IDS uEye,
so in theory it should be possible to offset the sync of one of the cameras!
http://www.ids-imaging.de/frontend/files/uEyeManuals/Manual_eng/uEye_Manual/hw_triggermodus.html
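A sketch of that trigger idea (illustrative timing only, not uEye API code): trigger both cameras at 30 fps but delay the second one by half a frame period, so the two capture streams interleave into an evenly spaced alternating sequence that matches the left/right alternation of the display.

```python
fps = 30
period = 1.0 / fps        # ~33.3 ms between frames of one camera
offset = period / 2       # trigger camera B half a period after camera A

cam_a = [i * period for i in range(4)]
cam_b = [i * period + offset for i in range(4)]

# Merged capture timeline: A, B, A, B, ... spaced ~1/60 s apart,
# matching the left/right alternation of the 120 Hz display.
merged = sorted([(t, "A") for t in cam_a] + [(t, "B") for t in cam_b])
labels = [label for _, label in merged]
print(labels)  # ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B']
```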

switching textures won’t be as fast as you need.
i don’t have a kit, but try this:

i think that the 2nd solution will be faster

nividia3d60.v4p (10.7 kB)

also try this one. i don’t remember who did it… but i took it from a post a long time ago

ShutterGlasses.v4p (6.8 kB)