This seems to happen when your CPU is running near 100% load.
DirectShow video playback under Windows has a very low priority by default, so if something consumes too much CPU, video playback gets jerky. Using all available CPU is obviously the way to go when rendering realtime graphics. Setting the 'WaitForFrame' pin tells vvvv to wait until Windows has delivered the next video frame.
The value on the pin is measured in milliseconds. So if your video runs at 25 fps, each frame is shown for 40 ms, and it makes sense to wait up to 40 ms for the next frame to arrive.
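As a quick sanity check, here is a minimal sketch of that arithmetic (the fps value and variable names are just for illustration):

```cpp
// Rough sketch: how a sensible WaitForFrame value relates to the video framerate.
// The pin expects milliseconds; for a constant-framerate clip the frame
// duration is simply 1000 / fps.
#include <iostream>

int main()
{
    const double fps = 25.0;                        // framerate of the video file
    const double frameDurationMs = 1000.0 / fps;    // 40 ms for a 25 fps clip
    std::cout << "sensible WaitForFrame value: " << frameDurationMs << " ms\n";
    return 0;
}
```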
Alternatively you could create a MainLoop node and decrease the value of 'Foreground FPS'. Note that it usually makes sense to have the rendering framerate match the video framerate, but if your patch consumes too much CPU there is not much you can do.
To make a video loop you have to set the StartTime and EndTime pins to reasonable values; both default to 0. You can determine the end time (duration) of a video using the FileStream output pin 'Duration'. By connecting the Duration output pin through a FrameDelay node you can use this value to set the endpoint "automatically".
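A minimal sketch of what that FrameDelay feedback does, assuming the file reports its duration once it is opened (the variable names only mimic the pins involved, they are not vvvv specifics):

```cpp
// Conceptual sketch of the FrameDelay trick: the Duration reported by the
// FileStream in one frame is fed back as EndTime in the next frame, which
// breaks the feedback cycle a direct connection would create.
#include <iostream>

int main()
{
    double delayedDuration = 0.0;          // output of the FrameDelay node
    double endTime = 0.0;                  // EndTime input of FileStream

    for (int frame = 0; frame < 3; ++frame)
    {
        double duration = 120.0;           // Duration reported by FileStream this frame (seconds)
        endTime = delayedDuration;         // EndTime gets last frame's duration
        delayedDuration = duration;        // FrameDelay stores this frame's value for the next frame
        std::cout << "frame " << frame << ": EndTime = " << endTime << " s\n";
    }
    return 0;                              // prints 0, then 120 from frame 1 onwards
}
```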
Every time a video loops it stops for a short moment before it starts to replay. How can this be avoided?
You can optimize the looping by using a codec which is optimized for random access; we had pretty good results with the PICVideo Motion JPEG codec. For other codecs it might help to increase the number of keyframes.
The FileStream node always streams the data from disk, which is a time-consuming process. Also, the Windows multimedia architecture is not really built for creative use of video. So the best strategy for getting high-quality looping is avoiding the AVI file format altogether.
For short clips the playback works best with a spread of textures: Create a directory full of still frames and then use the PictureStack (EX9.Texture) or PictureStack (EX9.Texture Position) to play them back.
If you know that all your frames fit into video memory you can even preload all textures.
If you have only 64 very small frames you can use an old video game programmer's trick to get even more speed: arrange all frames in an 8x8 grid on one large texture. Use GridSplit and a Transform (2d) to calculate a texture transformation based on the coordinates of the current frame. GridSplit allows you to set different grid sizes, so you can use any grid you like. Surprisingly enough, the current beta has a help patch for GridSplit; press F1 over the GridSplit node.
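For illustration, here is a hedged sketch of the texture transform this amounts to, assuming the frames are laid out row by row in the atlas (the function name and the V orientation are assumptions, not vvvv specifics):

```cpp
// Compute a texture transform for one cell of a frame atlas: scale the UVs
// down to a single cell and offset them to the cell of the current frame.
#include <iostream>

struct TexTransform { float scaleU, scaleV, offsetU, offsetV; };

TexTransform cellTransform(int frame, int cols, int rows)
{
    TexTransform t;
    t.scaleU  = 1.0f / cols;
    t.scaleV  = 1.0f / rows;
    t.offsetU = (frame % cols) * t.scaleU;   // column of the frame
    t.offsetV = (frame / cols) * t.scaleV;   // row of the frame
    return t;
}

int main()
{
    TexTransform t = cellTransform(13, 8, 8);   // frame 13 of a 64-frame loop
    std::cout << "scale (" << t.scaleU << ", " << t.scaleV << ")  "
              << "offset (" << t.offsetU << ", " << t.offsetV << ")\n";
    return 0;
}
```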
Both methods also allow you to play a video on many objects at different frame positions; this will typically not be satisfying when using multiple FileStreams.
You could use Queue (Texture) to preload an AVI:
FileStream -> VideoTexture -> Queue -> GetSlice
Now play the file and set DoInsert on the Queue in every frame. When the film is finished, stop DoInsert on the Queue. Now the queue is filled with textures and you can use GetSlice as described above to loop through the movie.
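Conceptually, the patch implements something like the following rough sketch (the types and function here are purely illustrative, not a vvvv API):

```cpp
// Sketch of the Queue (Texture) preload pattern: while the clip plays once,
// every rendered frame's texture is inserted into the queue; once the clip
// has finished, insertion stops and playback just indexes into the stored
// textures (what GetSlice does in the patch).
#include <cstddef>
#include <vector>

struct Texture {};                       // stands in for an EX9 texture handle

std::vector<Texture> queue;              // the Queue node's contents
bool doInsert = true;                    // the DoInsert pin

void onFrame(const Texture& current, bool movieFinished, std::size_t playhead)
{
    if (doInsert)
        queue.push_back(current);        // fill the queue while the movie plays
    if (movieFinished)
        doInsert = false;                // stop inserting once every frame is stored

    if (!doInsert && !queue.empty())
    {
        const Texture& looped = queue[playhead % queue.size()];  // GetSlice with wrap-around
        (void)looped;                    // ...render this texture
    }
}

int main() { return 0; }
```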
Of course the same video memory limits apply to this solution, since the queued textures are placed in video memory.
I'm afraid though that, to my knowledge, Queue (Texture) currently only works on ATI cards due to a driver bug in recent NVIDIA drivers.
There is no single best choice. Depending on your needs there are many options:
The problem with USB webcams seems to be that they are slow. Even slower are DV cams connected via FireWire. Analog video capture cards are quite fast, but the fastest option you have is uncompressed FireWire. Have a look in the video-hardware-links section for a listing of capture cards and cameras.
If speed is not important, image quality could be the reason for choosing a camera. Are you sure the 640x480 of the ToUcam are native pixels? Aren't they actually only 320x240 scaled up? That would be of no use then. Analog video usually has noise in dark image areas, so take DV cams for quality.
USB webcams are very cheap.
Doing analysis in a dark space is really difficult. If you're not using some form of night-vision camera you'll probably need to adjust the image to get good results -- and while you can do that in software it's much better if you have a capture card or camera that lets you change brightness/contrast/gamma levels in the driver.
A straight DVCam Firewire connection won't let you do that, but more specialized webcams and analog grabber cards will. – kms
I had very good experiences with the Osprey 100 or Osprey 2xx from ViewCast.
Many "consumer cards" like Hauppauge WinTV or Terratec ... need the applications that come with the cards to change some of their settings.
When using cheap consumer cards with Brooktree/Conexant chips I had very good results using the universal Bt848 / Bt849 / Bt878 & Bt879 driver available at http://btwincap.sourceforge.net/ instead of the original driver supplied with the card.
This driver is very stable and fast, and does not need the manufacturer's extra applications.
To be more or less independent from the ambient lighting (or darkness), it might be an idea to use an IR-sensitive camera, perhaps with an IR filter (which blocks all light except IR light) and perhaps lighting the scene with some IR lights. Good sources are http://www.videortechnical.de and http://www.theimagingsource.com
see Codecs FAQ
Just put the .dlls in the /freeframe directory under your vvvv.exe
Or drop the .dll right onto your patch.
Switching between videos can be done without stuttering. I have achieved smooth switching of video clips by attaching a Switch (String) to the filepath input of FileStream. This works with the MJPEG and XviD codecs.
I've tried setting up an IOBox (String) with different file paths, connecting it to a filename and then to the FileStream (which goes into the VideoTexture), but this doesn't seem to work (the video doesn't play).
Beware that the FileStream is not spreadable; it only accepts one filename at a time. Use a GetSlice after the IOBox to route only one of the file paths to the FileStream at a time. Switching between videos will very likely cause the output to stutter. If that's annoying, the only thing you can do is make one large movie and loop between different start/end points in the file.
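If you go the one-large-movie route, the start/end points for each clip are just cumulative offsets within the joined file; a small illustrative sketch (the helper function and the durations are made up):

```cpp
// Compute StartTime/EndTime for one clip inside a joined movie, given the
// durations of the individual clips in the order they were concatenated.
#include <iostream>
#include <vector>

struct Segment { double start, end; };          // seconds within the joined file

Segment segmentFor(const std::vector<double>& clipDurations, int index)
{
    double start = 0.0;
    for (int i = 0; i < index; ++i)
        start += clipDurations[i];              // sum up everything before the clip
    return { start, start + clipDurations[index] };
}

int main()
{
    std::vector<double> durations = { 12.0, 7.5, 20.0 };   // three joined clips
    Segment s = segmentFor(durations, 1);                   // loop the second clip
    std::cout << "StartTime = " << s.start << " s, EndTime = " << s.end << " s\n";
    return 0;
}
```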
Another method you could try is to use AviSynth to join the movies via a script. There's a package that automates the whole process; download it here:
AVS AVI Joiner.zip (7.64 Kb)
This is not possible right now. The DirectShow-based connections for audio and video can connect one node to only one other node. We are planning on changing that, but it will take some time. You can either use two FileStreams to play back the file twice (they actually stay in sync quite nicely), or you can accomplish what you want by routing over the audio output: play the file to the audio out, and connect the audio in to your analysis nodes. Use AudioRecordSelector and AudioMixer to set the recording source to the wave mix output of your PC.
The sound input is handled via Microsoft's DirectShow, so you need a suitable driver for your card. vvvv will use the audio capabilities that are available in the standard Windows sound mixer. The option of using an ASIO driver worked at some point in vvvv's version history, but currently it is not available. So you should either look for another driver or get a cheap PCI card.
This is not easy. We did some installations using video conferencing, but usually we just used good old cables to transmit the video in a very old-school analog way.
See VideoStreaming for a solution via FreeFrame plugins.
Some older thoughts on the topic:
There are three problems to solve: recording, transmission and playback.
One of the best publicly available beat matching algorithms is documented here:
Beat Estimation on the Beat and Real-time Beat Estimation Using Feature Extraction by Kristoffer Jensen and Tue Haste Andersen. See waspaa03-beat.pdf and cmmr03-beat.pdf.
They write:
"This paper presents a novel method for the estimation of beat intervals. As a first step, a feature extracted from the waveform is used to identify note onsets. The estimated note onsets are used as input to a beat induction algorithm, where the most probable beat interval is found. Several enhancements over existing beat estimation systems are proposed in this work, including methods for identifying the optimum audio feature and a novel weighting system in the beat induction algorithm. The resulting system works in real-time, and is shown to work well for a wide variety of contemporary and popular rhythmic music."
This beat estimation algorithm has been implemented in C++ in the open-source software Mixxx.
It seems to be quite some work to make this into a vvvv node, but it is feasible. If someone is willing to wrap the C++ code into a DirectShow filter, we'd write a wrapper for vvvv.
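For anyone curious what such a two-stage approach looks like in code, here is a very rough sketch of the general idea (onset detection followed by beat-interval induction). It is not the algorithm from the papers or the Mixxx implementation; all names, features and thresholds are assumptions for illustration only:

```cpp
// Rough two-stage sketch: detect note onsets from a simple per-frame energy
// feature, then pick the most frequent inter-onset interval as the beat period.
#include <cstddef>
#include <map>
#include <vector>

// Mark an onset wherever the energy rises sharply from one frame to the next.
std::vector<std::size_t> detectOnsets(const std::vector<float>& energyPerFrame, float threshold)
{
    std::vector<std::size_t> onsets;
    for (std::size_t i = 1; i < energyPerFrame.size(); ++i)
        if (energyPerFrame[i] - energyPerFrame[i - 1] > threshold)
            onsets.push_back(i);
    return onsets;
}

// Histogram the intervals between consecutive onsets and return the most
// frequent one, converted to seconds.
double estimateBeatInterval(const std::vector<std::size_t>& onsets, double frameDurationSec)
{
    std::map<std::size_t, int> histogram;            // interval (in frames) -> count
    for (std::size_t i = 1; i < onsets.size(); ++i)
        ++histogram[onsets[i] - onsets[i - 1]];

    std::size_t best = 0;
    int bestCount = 0;
    for (const auto& bin : histogram)
        if (bin.second > bestCount) { best = bin.first; bestCount = bin.second; }
    return best * frameDurationSec;                  // most probable beat interval in seconds
}

int main() { return 0; }
```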
One thing worth trying for now would be using the VLIGHT.CTRL from the boys at http://www.vlight.to . They developed a well-done workaround to do real-time visuals in Flash (39 euros, free 30-day trial). I am not a Flash expert, but as I understand it they use a separate program which does all the audio analysis and which can be accessed via a TCP/IP socket connection from within Flash. Therefore it should be possible to access it using the TCP nodes within vvvv.