GDI to DirectVideo?

Does anybody know a way to get an image out of the GDI renderer into DirectShow without the roundabout way of GDI.texture and then AsVideo?
Or is there another way of getting BW pixel values into a DirectShow stream to be connected to the contour plugin later, but without making a detour via the graphics card?

gdi2directVideo.v4p (17.5 kB)

Your bottleneck is the 7500-slice Pipet node ;)

I think there are some SHADERS you can use to transform your renderer output to black and white. The attachments with THIS POST contain some.

And (I've never tried it, but) the SOBEL shader also looks worth checking out when working with contours ;)
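
For reference, here is a rough sketch of what such a black/white conversion does, done CPU-side instead of in a shader (untested, and assuming a 24-bit BGR buffer the way FreeFrame hands frames over):

    // convert a 24-bit BGR frame to greyscale in place,
    // using the standard luminance weights
    void toGreyscale(unsigned char* pFrame, int width, int height)
    {
        for (int i = 0; i < width * height; i++) {
            unsigned char* px = pFrame + i * 3;
            unsigned char y = (unsigned char)(0.114f * px[0]    // B
                                            + 0.587f * px[1]    // G
                                            + 0.299f * px[2]);  // R
            px[0] = px[1] = px[2] = y;
        }
    }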

Thanks for answering, West.
I'm looking more for a way to get my streamed BW values (in the example it's done by Pipet) into DirectShow format, as I'm going to use the contour node afterwards.
At the moment I've only found the way shown in the patch: putting these BW values together as an image in DX/EX and then converting that to a DirectShow stream.
But this way I get data from outside into the CPU, push it to the GPU to build my image, and get it back to the CPU to analyse it. In the end this is a detour I would like to avoid. It would be cleverer and faster to just build my image on the CPU (GDI renderer) and hand it to DirectShow directly, so that everything stays on the CPU. But I don't know how.
Is there a Freeframe plugin doing this - building up an image out of grey values? Or this MEM stuff? Or can I somehow write and grab (maybe via Trautner) a BMP image continuously?
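
In case it helps to picture it: inside a FreeFrame plug, overwriting the incoming frame with grey values would only be a few lines. An untested sketch, where greyValues/numValues stand in for however the values reach the plugin, and a 24-bit BGR frame is assumed:

    // paint a full frame from a row of grey values (one per pixel)
    void greysToFrame(unsigned char* pFrame, int width, int height,
                      const unsigned char* greyValues, int numValues)
    {
        int numPixels = width * height;
        if (numValues < numPixels) return;   // not enough data for a frame

        for (int i = 0; i < numPixels; i++) {
            unsigned char g = greyValues[i];
            pFrame[i * 3 + 0] = g;   // B
            pFrame[i * 3 + 1] = g;   // G
            pFrame[i * 3 + 2] = g;   // R
        }
    }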

hmm…

is it actually that you have a video as source on one pc and want to analyze it with contour on the other pc? then bug the fugstream guy to fix the udp-mode.

or:
do you have xy-coordinates of quads on the server? as seen in your patch. you first draw those quads, then pipet them and send the pipeted pixels. why not send only the quad coords (saves bandwidth) and write a freeframe plug that draws quads into a videostream based on the given coordinates (see the sketch below).

or is it much more complicated?
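
a rough sketch of the core of such a quad-drawing plug (untested; quadX/quadY/quadSize are made-up parameter names, coordinates assumed normalized to 0..1, frame assumed 24-bit BGR):

    #include <cstring>

    // clear the frame, then stamp filled white quads at the given coords
    void drawQuads(unsigned char* pFrame, int width, int height,
                   const float* quadX, const float* quadY,
                   int numQuads, int quadSize)
    {
        std::memset(pFrame, 0, width * height * 3);    // black background
        for (int q = 0; q < numQuads; q++) {
            int cx = (int)(quadX[q] * width);
            int cy = (int)(quadY[q] * height);
            for (int y = cy - quadSize / 2; y <= cy + quadSize / 2; y++) {
                for (int x = cx - quadSize / 2; x <= cx + quadSize / 2; x++) {
                    if (x < 0 || x >= width || y < 0 || y >= height)
                        continue;                      // clip to the frame
                    unsigned char* p = pFrame + (y * width + x) * 3;
                    p[0] = p[1] = p[2] = 255;          // white pixel
                }
            }
        }
    }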

It's the first one!!! I already sent a mail to him but got no response. Maybe a second try would be worth it.

And for a second purpose as well: I did a contour analysis, some formulas, and out of this I need another blob tracking to get IDs. But right now I'm slowly getting into Freeframe (the workshop site is open now :D) and I hope to get it running - my first small C++ steps have been successful. And I've seen OpenCV has got a DrawPoint function.

I would like to add a kind of pixel-drawing function (like GDI's Point) in front of the contour freeframe to use its blob tracking. To me it looks not too difficult for a first Freeframe project (I hope so ;) So, more to come…

right, with the drawpoint() it could be trivial to achieve what you want.
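
strictly speaking the old C API does this via cvCircle or cvLine rather than a literal drawpoint(), but the idea is the same. an untested sketch (pointX/pointY are made-up names for normalized 0..1 coordinates):

    #include <cv.h>   // OpenCV 1.x C API, as used by the freeframe template

    // stamp small filled dots into an 8-bit single-channel image,
    // ready to be fed into contour's blob tracking
    void drawPoints(IplImage* img, const float* pointX,
                    const float* pointY, int numPoints)
    {
        cvZero(img);                                  // black background
        for (int i = 0; i < numPoints; i++) {
            CvPoint p = cvPoint((int)(pointX[i] * img->width),
                                (int)(pointY[i] * img->height));
            cvCircle(img, p, 2, cvScalar(255), CV_FILLED);
        }
    }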

i remember from the workshop that the ff-opencv-template didn’t compile or something. i haven’t had a look at it since. so if you face such problems, please bug me again.

Yepp, I saw a few issues yesterday evening: there are absolute paths in the project settings, a lonely loadmask() function, and no cvRelease of images as far as I could figure out. I'm thinking of making a new template for myself and will post it here later on. My steps are small but I'm moving ;D
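
For the record, the missing cleanup is just the usual pairing (every created image released once, typically in the plugin's destructor):

    // every cvCreateImage needs a matching cvReleaseImage,
    // otherwise the plugin leaks one image per instantiation
    IplImage* grey = cvCreateImage(cvSize(320, 240), IPL_DEPTH_8U, 1);
    // ... use it across processFrame calls ...
    cvReleaseImage(&grey);   // frees the data and nulls the pointer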

A few things are still unclear to me:
- A DirectShow video feed is needed to let Freeframe run. Is my output restricted to that input size? So, could I give it a 2x2-pixel video only (to save hard disk space) and send out 320x240 px?
- Would it be too much for Freeframe to take in a spread of 76800 luminance values (320x240 px) and just shift it into a CvMatrix or IplImage? That would be the easiest way to get a black/white image.
- What about 7500 values (150x50 px)?

  • Is my output restricted to that input size?
    yes. you can only modify the incoming stream but not change its size with freeframe.

  • Would it be to much for Freeframe to put in a spread with 76800 values of luminance
    i’d say yes, it’s too much. but you’d better try for yourself whether the rate suits your needs.
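
if you want to try: the copy itself is straightforward, the only trap being that IplImage rows may be padded (widthStep), so copy row by row. an untested sketch, spread values assumed 0..1:

    // copy a luminance spread into an 8-bit single-channel IplImage
    void spreadToImage(IplImage* img, const double* spread, int count)
    {
        if (count < img->width * img->height) return;
        for (int y = 0; y < img->height; y++) {
            unsigned char* row =
                (unsigned char*)(img->imageData + y * img->widthStep);
            for (int x = 0; x < img->width; x++)
                row[x] = (unsigned char)(spread[y * img->width + x] * 255.0);
        }
    }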

Thanks joreg, now it's time to let the keyboard dance.

What is the problem with the UDP mode of fugStream?

yiffable: It's working only up to a certain size, which is limited by the UDP packet size of 64 kB (Joreg - now I've managed to store this information brainwise ;D ). So, if you have bigger videos, they are supposed to be split into several packets for streaming, but unluckily this isn't working in UDP mode (HTTP and MEM are fine). I tried to build it myself in V4, but it's too hard to handle randomly arriving packets - I guess that's why it isn't working. (HTTP waits until a packet is complete, so no problem there, but you get a remarkable delay.)
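
For the curious, the fix boils down to a small reassembly protocol: tag every packet so the receiver can put frames back together even when packets arrive out of order. A sketch (the field names are made up here, this is not fugStream's actual wire format):

    // per-packet header preceding each chunk of frame data
    struct PacketHeader {
        unsigned int   frameId;     // which video frame the chunk belongs to
        unsigned short chunkIndex;  // position of the chunk within the frame
        unsigned short chunkCount;  // total number of chunks for the frame
    };
    // receiver: buffer chunks per frameId, pass a frame on once all
    // chunkCount pieces are there, drop stale incomplete frames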

See Videostreaming and my userpage