

What if video plays jerky or stops playing in fullscreen?

This seems to happen when your CPU is running near 100% load.
DirectShow video playback under Windows has a very low priority by default, so if something consumes too much CPU, video playback gets jerky. Using all available CPU is obviously the way to go when rendering realtime graphics. Setting the 'WaitForFrame' pin tells vvvv to wait up to the given time until Windows has delivered the next frame.
The value on the pin is measured in milliseconds. If your video runs at 25fps, each frame is shown for 40ms, so it makes sense to wait up to 40ms for the next frame to arrive.
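The wait time follows directly from the video's framerate; a quick sketch in plain Python (the function name is made up for illustration, not a vvvv node):

```python
def frame_time_ms(fps: float) -> float:
    """Duration each frame is shown, in milliseconds."""
    return 1000.0 / fps

# A 25fps video shows each frame for 40ms, so 40 is a
# sensible value for the 'WaitForFrame' pin.
print(frame_time_ms(25))   # 40.0
print(frame_time_ms(30))   # ~33.3
```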

Alternatively you could create a MainLoop node and decrease the value of 'Foreground FPS'. Note that it usually makes sense to have the rendering framerate and the video framerate match, but if your patch consumes too much CPU there is not much you can do.

How can I loop a video?

To make it loop you have to set the StartTime and EndTime pins to reasonable values; both default to 0. You can determine the end time (duration) of a video using the FileStream output pin 'Duration'. By connecting the Duration output pin through a FrameDelay node you can use this value to "automatically" set the endpoint.
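The loop logic amounts to wrapping the playhead between the two pin values; a minimal sketch in plain Python (pin names taken from the text, the function itself is hypothetical):

```python
def wrap_position(position: float, start_time: float, end_time: float) -> float:
    """Jump the playhead back to StartTime once it passes EndTime."""
    if end_time <= start_time:      # both pins default to 0: nothing to loop
        return position
    if position >= end_time:
        return start_time
    return position

print(wrap_position(40.1, 0.0, 40.0))   # 0.0 -> restart at StartTime
```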

How to avoid skipping while looping

Every time a video loops, it stops for a short moment before it starts to replay. How can you avoid this?
You can optimize the looping by using a codec which is optimized for random access - we had pretty good results with the PICVideo Motion JPEG codec. For other codecs it might help to set the number of keyframes to a higher level.
The FileStream node always streams the data from disk - this is a time-consuming process. Also, the Windows multimedia architecture is not really built for creative use of video. So the best strategy for getting high-quality looping is to avoid the AVI file format altogether.

Use many still frames

For short clips playback works best with a spread of textures: create a directory full of still frames and then use PictureStack (EX9.Texture) or PictureStack (EX9.Texture Position) to play them back.

If you know that all your frames fit into video memory you can even preload all textures.

Use tiled images

If you have only 64 very small frames you can use an old video game programmer's trick to get even more speed: arrange all frames in an 8x8 grid on one large texture. Use GridSplit and a Transform (2d) to calculate a texture transformation based on the coordinates of the current frame. GridSplit allows you to set different grid sizes, so you can use any grid you like. Surprisingly enough, the current beta has a help patch for GridSplit - press F1 over the GridSplit node.
Both methods also allow you to play videos on many objects at different frame positions - this will typically not be satisfying when using multiple FileStreams.
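The per-frame texture transform for such a grid is just a scale plus an offset; a sketch in plain Python (the function is hypothetical, standing in for what GridSplit and Transform (2d) compute in the patch):

```python
def tile_transform(frame: int, cols: int = 8, rows: int = 8):
    """UV scale/offset selecting one tile of a cols x rows texture atlas.
    Returns (scale_u, scale_v, offset_u, offset_v) in 0..1 texture space."""
    col = frame % cols
    row = frame // cols
    return (1.0 / cols, 1.0 / rows, col / cols, row / rows)

# frame 9 of an 8x8 atlas sits in column 1, row 1
print(tile_transform(9))   # (0.125, 0.125, 0.125, 0.125)
```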

Preloading the Movie

You could use the Queue (Texture) node to preload an AVI:

 FileStream -> VideoTexture -> Queue -> GetSlice

Now play the file and set DoInsert on the Queue in every frame. When the film has finished, stop DoInsert on the Queue. Now the Queue is filled with textures and you can use GetSlice as described above to loop through the movie.
Of course, the same video memory boundaries apply to this solution, since the queued textures are placed in video memory.
I'm afraid, though, that to my knowledge Queue (Texture) only works on ATI cards, due to a driver bug in the recent NVIDIA drivers.
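The preload-then-loop pattern can be sketched in plain Python (an ordinary list stands in for Queue (Texture); the function names are hypothetical, not vvvv nodes):

```python
frames = []                 # stands in for Queue (Texture)

def record(texture, playing: bool):
    """DoInsert while the film is playing; stop once it has finished."""
    if playing:
        frames.append(texture)

def get_slice(index: int):
    """Loop through the preloaded textures, like GetSlice with wrap-around."""
    return frames[index % len(frames)]

for tex in ["f0", "f1", "f2"]:      # the movie plays once, filling the queue
    record(tex, playing=True)
print(get_slice(4))   # f1 -> the index wraps around the queue
```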

What is the best camera for vvvv?

There is no simple best. Depending on your needs there are many options:


The problem with USB webcams seems to be that they are slow. Even slower are DV cams connected via FireWire. Analog video capture cards are quite fast, but the fastest option you have is uncompressed FireWire. Have a look in the video-hardware-links section for a listing of capture cards and cameras.

Image quality

If speed is not important, image quality could be a reason to choose the camera. Are you sure the 640x480 of the ToUcam are native pixels? Isn't it that they are actually only 320x240 scaled up? That would be of no use then. Analog video usually has noise in dark image areas, so take DV cams for quality.


USB webcams are very cheap.


Doing analysis in a dark space is really difficult. If you're not using some form of night-vision camera you'll probably need to adjust the image to get good results -- and while you can do that in software it's much better if you have a capture card or camera that lets you change brightness/contrast/gamma levels in the driver.
A straight DVCam Firewire connection won't let you do that, but more specialized webcams and analog grabber cards will. – kms

Driver Quality

I made very good experiences with the Osprey100 or Osprey2xx from ViewCast.
Many consumer cards like the Hauppauge WinTV or Terratec ... need the applications that come with the cards to change some of their settings.

When using cheap consumer cards with Brooktree/Conexant chips, I made very good experiences using the universal Bt848 / Bt849 / Bt878 & Bt879 driver available at http://btwincap.sourceforge.net/ instead of the original driver supplied with the card.
This driver is very stable and fast, and does not need the manufacturer's extra applications.


Infrared reception

To be more or less independent of the ambient lighting (or darkness), it might be an idea to use an IR-sensitive camera, perhaps with an IR filter (which blocks all light except IR light) and perhaps lighting the scene with some IR lights. Good sources are http://www.videortechnical.de and http://www.theimagingsource.com

What is the best video codec?

see Codecs FAQ

How do I get Freeframe DLLs into vvvv?

Just put the .dlls in the /freeframe directory under your vvvv.exe
or drop the dll right onto your patch.

I'd like to switch between different video files during a performance

Switching between videos can be done without stuttering. I have achieved smooth switching of video clips by attaching a Switch (String) to the Filename inlet of FileStream. This works with the MJPEG and Xvid codecs.

I've tried setting up an IOBox (String) with different file paths and connecting it to a Filename, then to the FileStream (which goes into the VideoTexture), but this doesn't seem to work (the video doesn't play).
Beware that the FileStream is not spreadable; it only accepts one filename at a time. Use a GetSlice after the IOBox to route only one of the filepaths to the FileStream at a time. Switching between videos will very likely cause the output to stutter. If that's annoying, the only thing you could do is make one large movie and loop between different start/endpoints in the file.
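Routing one filepath out of a spread is just an index lookup; a sketch in plain Python (the filenames are hypothetical examples):

```python
def get_slice(spread, index: int):
    """Pick one slice from a spread, like GetSlice; the index wraps."""
    return spread[index % len(spread)]

clips = ["clip_a.avi", "clip_b.avi", "clip_c.avi"]  # hypothetical filepaths
print(get_slice(clips, 1))   # clip_b.avi -> the one filename FileStream gets
```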

Another method you could try is to use AviSynth to join the movies via script. There's a package that automates the whole process; download here:
AVS AVI Joiner.zip (7.64 Kb)

I want to connect one video or audio output to different inputs. What can I do?

This is not possible right now. The DirectShow-based connections for audio and video can connect only one node to one other. We are planning on changing that, but it will take some time. You can either use two FileStreams to play back the file twice (they actually stay in sync quite nicely), or you can accomplish what you want by routing over the audio output: play the file to the audio out, and connect the audio in to your analysis nodes. Use AudioRecordSelector and AudioMixer to set the recording source to the wave mix output of your PC.

On my PC it tells me that there are no sound devices. I have a professional sound card which completely bypasses the standard Windows sound handling.

The sound input is handled via Microsoft's DirectShow, so you need a suitable driver for your card. vvvv will use the audio capabilities available in the standard Windows sound mixer. The option of using an ASIO driver used to work at some point in vvvv's version history, but currently it is not available. So you should either look for another driver or get a cheap PCI card.

How can I implement a video conferencing application with vvvv?

This is not easy. We did some installations using video conferencing but usually just used simple good old cables to transmit video in a very oldschool analogue way.

See VideoStreaming for a solution via FreeFrame plugins.

some older thoughts on the topic:
There are three problems to solve: recording, transmission and playback:

  • The transmission part is easy: the UDP and TCP nodes will allow you to transfer binary data directly between computers. This is the key for video conferencing. The boygrouping mechanism can transmit data between computers, but it is basically a broadcast: one server transmits to many clients - this might not be what you want.
  • The recording part is very easy if you have a camera which can directly record to an open standard: a webcam with a built-in webserver (www.mobotix.de or www.axis.se). Use HTTP to access the JPEG images as a binary string. If you have a USB camera or want to transmit realtime graphics, vvvv is a little less equipped: Pipet and Dump might help to digitize your image into colors or values, and with some string nodes you can convert these values into strings suitable for transferring over the network -- with this technique you will very likely transmit uncompressed video, so don't expect high resolution. Using the Writer would also be an option, but you would need to re-read the file into memory with a Reader.
  • The playback part is quite flexible: use the 3 flavours of DynamicTexture to recreate your textures.
  • Oschatz
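Serializing sampled pixels into a string for the UDP node could look roughly like this in plain Python (standing in for the Pipet-plus-string-nodes part of a patch; the host, port and pixel values are hypothetical):

```python
import socket

def pack_pixels(pixels):
    """Join sampled RGB values into one comma-separated string,
    roughly what Pipet plus some string nodes would produce."""
    return ",".join(f"{r},{g},{b}" for r, g, b in pixels)

# hypothetical 2-pixel sample; a real patch would send far more,
# which is why this approach only suits low resolutions
payload = pack_pixels([(255, 0, 0), (0, 128, 255)]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("127.0.0.1", 4444))   # hypothetical host/port
sock.close()
```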

Puzzled and inspired by Oschatz's simple but plausible solution of using a Pipet to sample the video and then transmit it:
How about writing images to a ring buffer of files and then reloading them via network from the other machine? It might need only two more additions that were necessary anyway:

  • loading textures in a separate thread so as not to disturb rendering performance too much
  • saving images to file. We've had that problem before, but isn't there a solution yet? I reckon that posting the link to the file format is a suggestion to write one's own patch for it - but I suspect that it would yield the same performance as using hardcoded nodes?
  • Max

Another thing I'd wish somebody would try sometime:
Use two computers with a scalable composite output and a composite in

  • display the input in a window on machine 1
  • then clone that window on m1's composite-out
  • feed that into m2's video-in
  • ad infinitum
  • Max

Some time ago I captured a renderer's live output with Camtasia Studio, which offers a stream output. This stream output could be used with iVisit.

Did somebody work out something like the bonk~ object from Pd or Max, which detects fast amplitude attacks from beats? Or any other beat recognition patches?

One of the best publicly available beat matching algorithms is documented here:
Beat estimation on the beat and Real-time beat estimation using feature extraction by Kristoffer Jensen and Tue Haste Andersen. See waspaa03-beat.pdf and cmmr03-beat.pdf.

They write:
"This paper presents a novel method for the estimation of beat intervals. As a first step, a feature extracted from the waveform is used to identify note onsets. The estimated note onsets are used as input to a beat induction algorithm, where the most probable beat interval is found. Several enhancements over existing beat estimation systems are proposed in this work, including methods for identifying the optimum audio feature and a novel weighting system in the beat induction algorithm. The resulting system works in real-time, and is shown to work well for a wide variety of contemporary and popular rhythmic music."
This beat estimation algorithm has been implemented in C++ in the open source software Mixxx.

It seems to be quite some work to make this into a vvvv node, but it is feasible. If someone is willing to wrap the C++ code into a DirectShow filter, we'd write a wrapper for vvvv.
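For the "fast amplitude attacks" part of the question, a crude energy-based onset detector can be sketched in a few lines of plain Python (window size and threshold are arbitrary assumptions; this is nowhere near the paper's algorithm, just the basic idea behind bonk~-style attack detection):

```python
def detect_onsets(samples, window=4, threshold=2.0):
    """Flag windows whose energy jumps past `threshold` times the
    previous window's energy - a crude amplitude-attack detector."""
    energies = [sum(s * s for s in samples[i:i + window])
                for i in range(0, len(samples) - window + 1, window)]
    return [i for i in range(1, len(energies))
            if energies[i] > threshold * max(energies[i - 1], 1e-9)]

quiet, loud = [0.1] * 8, [1.0] * 4
print(detect_onsets(quiet + loud + quiet))   # [2] -> attack in window 2
```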

One thing worth trying for now would be using VLIGHT.CTRL from the boys at http://www.vlight.to . They developed a well-done workaround to do real-time visuals in Flash (39 euros, free 30-day trial). I am not a Flash expert, but as I understand it they use a separate program which does all the audio analysis and which can be accessed via a TCP/IP socket connection from within Flash. Therefore it should be possible to access it using the TCP nodes within vvvv.
