Capture camera output to disk or ram?

hi,
this is my first forum post so: hello everybody!
i’m currently in the process of switching from max/jitter over to vvvv and so far everything seems very lovely! however, i have one question i couldn’t find an answer to, and it’s somewhat crucial for me:

most of my projects involve very long live video delays (up to 30 min). so what i do in max is save the output of a camera (often high-speed, at around 100fps) as single, uncompressed frames to ram or ssd, creating a circular buffer. i then read these images back at random later and only then load them onto the gpu for further processing (at 25 or 50fps).

i checked “Writer (DShow9)” but from what i understand, it only does avi (which is no use for a circular buffer) and is also limited to 25fps. how would i go about a setup like this? are there any example patches for circular disk or ram buffers?

thanks a lot for any advice, all the best!
karl

hi and vvvvelcome!

you can use Writer (EX9.Texture) and Player (EX9.Texture) to do this on a fast disk… for RAM you can use Queue (EX9.Texture) or Buffer (EX9.Texture) and GetSlice (Node).

wait, Queue and Buffer use the VRAM of the GPU. if you want to write into RAM you can try this nifty tool, it creates a virtual hard drive in your memory and is 10x faster than any SSD: https://www.softperfect.com/products/ramdisk/

hi tonfilm,
thank you very much for your help!
softperfect ramdisk is actually a tool i have been using for this in my max setup (since max became 64bit, jit.matrix is more efficient though)…

anyway, if i understand you correctly, you suggest uploading the image to a texture first and then saving it?
now, from my experience (and i have quite a bit with these setups), the one thing i cannot have for this to work at high framerates is gpu readbacks. so i need to save these images on the cpu side, before they ever get uploaded to the gpu.
any chance of doing this in vvvv?

best
k

ps. what i forgot to mention, another problem with the texture approach:
as soon as an image gets uploaded to the gpu it is (at least afaik) converted to rgba - which is at least double the size of the camera’s 4:1:1 yuv output (32 bits per pixel vs 12), thereby effectively doubling all data to be read and written.
(actually these industry cameras run best in y8 raw mode, which would be even more efficient since then i could save the undebayered raw data and only do the debayering when reading back the saved frames - but i’ve given up on that a long time ago…)
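just to put rough numbers on that (the 1000x1000 sensor and the 100fps below are purely made-up placeholders, only the ratios matter):

    // rough per-frame / per-second sizes at a made-up 1000x1000 resolution and 100fps
    int width = 1000, height = 1000, fps = 100;
    long y8     = (long)width * height;          //  8 bits/px -> ~1.0 MB/frame, ~100 MB/s
    long yuv411 = (long)width * height * 3 / 2;  // 12 bits/px -> ~1.5 MB/frame, ~150 MB/s
    long rgba8  = (long)width * height * 4;      // 32 bits/px -> ~4.0 MB/frame, ~400 MB/s
    System.Console.WriteLine($"{y8 * fps / 1e6} / {yuv411 * fps / 1e6} / {rgba8 * fps / 1e6} MB/s");

so staying in y8 (or at least yuv) on the cpu side makes a real difference in what the disk has to handle.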

another thought: i haven’t fully understood the concept of spreads yet, but they seem to be good for all sorts of data and pretty close to max’s jit.matrix object… could it be an idea to feed the camera image data into a spread and then save that (per frame) in some way or other? or is this a completely stupid idea?

just wondering
k

hei karl,

i’m afraid you’re hitting a bit of a weak spot of vvvv’s built-in capabilities regarding video. i don’t see a way to use off-the-shelf vvvv nodes to save your camera’s images on the CPU side.

you could try the vvvv.packs.image (32bit only!), which operates on the CPU and comes with a Writer (CV.Image). not sure if that is fast enough for your needs.

otherwise you can certainly extend vvvv to do exactly what you want by writing a specific plugin.
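to give you an idea of how little logic such a plugin actually needs, here is a rough plain-c# sketch of a circular frame buffer on disk (not actual vvvv SDK code - the folder, file naming and capacity are made up, and the raw pixel bytes would come straight from your capture callback before anything touches the gpu):

    using System.IO;

    // rough sketch of a circular frame buffer on disk (or on a ram-disk path)
    class CircularFrameWriter
    {
        readonly string folder;
        readonly int capacity;   // e.g. 30 min * 100 fps = 180000 frames
        long framesWritten;      // total frames written so far

        public CircularFrameWriter(string folder, int capacity)
        {
            this.folder = folder;
            this.capacity = capacity;
            Directory.CreateDirectory(folder);
        }

        // write one raw frame, overwriting the oldest slot once the ring is full
        public void Write(byte[] rawPixels)
        {
            long slot = framesWritten % capacity;
            File.WriteAllBytes(Path.Combine(folder, $"frame_{slot:D6}.raw"), rawPixels);
            framesWritten++;
        }

        // read back the frame written 'framesAgo' frames ago (random access into the ring)
        public byte[] Read(long framesAgo)
        {
            long slot = (framesWritten - 1 - framesAgo) % capacity;
            if (slot < 0) slot += capacity;
            return File.ReadAllBytes(Path.Combine(folder, $"frame_{slot:D6}.raw"));
        }
    }

in a real plugin you would probably do the actual file writing on a background thread so the vvvv mainloop never blocks, but the ring logic itself is really all there is to it.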

mhm, this is sad news.
unfortunately i do not have the skills to write plugins in c#…

so the idea of somehow using spreads for this is not feasible?

the other thing i was thinking about is maybe using the engine i already built in max and somehow interfacing it with vvvv. the only question is: how?

  • can vvvv read imagery from max’s .jxf binary files?
  • or maybe i also do the reading back in max and then use this? maxjitter-matrix-nodes
  • or i even use max up to the point where the read-back image is on the gpu and then use texture sharing (is there something like syphon for windows?)?

any suggestions which way might be the most promising one?
i guess the other data (like, which frame to read) i could easily exchange via osc…

thank you so much for your help!
k

ja, i think if you can use Max to save images to disk then you can easily do the rest over in vvvv using the player node as tonfilm suggested above.

also, the approach with the OpenCV image pack can be tested quickly, i would start there. save files with the Writer (CV.Image) to a ram disk and load them onto the GPU with FileTexture or Player… this can also be done with two instances of vvvv using the /allowmultiple commandline argument.

The dx11 writer node is pretty quick with bmp and tif files. depending on your frame size, an SSD should be able to handle it, and I think with 2 instances it should be pretty easy to test for performance.

hello again,
thank you all so much for all your input!
i’m currently pretty busy with some other project and will need some time to check this out but i’ll report as soon as i get to it!
all the best!
karl

dds gives the best performance with the dx11 writer and the best playback speed.
however, you don’t really need that: if you want decent FPS you have to stay on the GPU, so you can write all your images into a texture array straight in GPU memory and then access them directly at any time.
look at the help patch of that node: obsolete-setslice-(dx11.texturearray)-(dx11)

this method is GPU intensive, however the speed is brilliant.
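the slice indexing is the same ring arithmetic you would use for files, just on texture-array slices (rough sketch only - the slice count is a made-up placeholder, and roughly these are the numbers you would feed to the SetSlice index and to wherever you read a slice back):

    // ring indexing for a fixed-size texture array used as a delay buffer
    // (slice count is a placeholder; vram has to be big enough for all slices)
    int sliceCount = 1024;       // how many frames fit into the texture array
    long framesWritten = 0;      // incremented once per camera frame

    int WriteSlice() => (int)(framesWritten % sliceCount);

    int ReadSlice(int delayInFrames)
    {
        long slice = (framesWritten - 1 - delayInFrames) % sliceCount;
        if (slice < 0) slice += sliceCount;
        return (int)slice;
    }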

karl_krach: you may also consider running vvvv in two separate instances - one for writing the images and a second one for reading and displaying them. the second instance could run at video framerate, while the first one would respect the framerate of the camera. use the command line parameter /allowmultiple to do this.