Blob tracking with cameras

I have been trying to do blob tracking with cameras for a little while now, and there is one major problem I have not been able to get around. Whenever I try blob detection, the readings from the camera fluctuate so much that I cannot get stable results. No matter how much I tune the camera input, the blob count keeps jumping between values even though nothing is moving in front of the camera. As far as I can tell, this is mainly down to the sensitivity of the camera and the lighting conditions not providing enough contrast.

Is there anything I can do on the software side to help with this, or can it only be solved with better lighting and camera setup?

hi levvvvky,
you may find Kalle's IIR module very useful for smoothing detected values that jump all over the place, or you could also use a Damper node to round off and delay this sensitivity.
for me it worked well with fiducials, which showed the same problem.
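in case it helps to see the idea: such an IIR smoother is basically a one-pole lowpass, where the output drifts toward the input instead of jumping with every noisy frame. a rough C++ sketch of the technique (the alpha value is just an example to tune, not necessarily what Kalle's module uses):

```cpp
#include <cstdio>

// one-pole IIR lowpass (exponential smoothing):
// the output drifts toward the input instead of jumping with every noisy frame
struct IIRSmoother {
    double state = 0.0;
    double alpha;                 // 0..1: smaller = smoother, but laggier
    explicit IIRSmoother(double a) : alpha(a) {}
    double update(double input) {
        state += alpha * (input - state);
        return state;
    }
};

int main() {
    IIRSmoother smooth(0.1);                    // alpha to be tuned per setup
    double noisy[] = {3, 4, 3, 3, 5, 3, 4, 3};  // jittery blob-count readings
    for (double v : noisy)
        printf("%.2f\n", smooth.update(v));     // settles instead of flickering
}
```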

which node are you using for your tracking?

you may also find this page vvvvery interesting, where users share their own useful patches.

I am using a contour FreeFrame plugin to do blob tracking right now. Is there another one that you think is better?

are you sure you are using the right outputs? use ‘x’ and ‘y’, not ‘contours x’ and ‘contours y’: those output all the points along the contour and will fluctuate a lot, because there is always noise in the video image. also set the ‘cleanse’ input to 1; it does a slight smoothing of the picture…
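to see why the centroid pins are steadier: averaging all contour points cancels most of the per-point pixel noise (roughly by the square root of the point count). a toy C++ sketch, using a plain average for illustration; the plugin itself may well compute it differently, e.g. via image moments:

```cpp
#include <vector>
#include <cstdio>

struct Pt { double x, y; };

// a blob's centre as a plain average of its contour points:
// independent per-point noise largely cancels out in the sum,
// so this is far steadier than any single contour point
Pt centroid(const std::vector<Pt>& contour) {
    Pt c{0, 0};
    for (const Pt& p : contour) { c.x += p.x; c.y += p.y; }
    c.x /= contour.size();
    c.y /= contour.size();
    return c;
}

int main() {
    std::vector<Pt> noisyContour = {{1, 1}, {1.1, 2}, {2, 2.1}, {2, 0.9}};
    Pt c = centroid(noisyContour);
    printf("%.3f %.3f\n", c.x, c.y);  // 1.525 1.500
}
```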

what camera are you using?

Also, have you tried using the Hysteresis (Animation) Node? You’ll find a help file for it here. This may help.
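I don't know the node's exact internals, but the general hysteresis technique works like this: the output only flips when the input crosses an upper or a lower threshold, so a value hovering around a single threshold cannot flicker. A small C++ sketch with made-up thresholds:

```cpp
#include <cstdio>
#include <initializer_list>

// hysteresis: the output only switches when the input crosses the upper
// or lower threshold, so a value hovering in between cannot flicker
struct Hysteresis {
    double low, high;
    bool on = false;
    Hysteresis(double lo, double hi) : low(lo), high(hi) {}
    bool update(double v) {
        if (v > high) on = true;
        else if (v < low) on = false;
        return on;               // unchanged while low <= v <= high
    }
};

int main() {
    Hysteresis h(0.4, 0.6);
    for (double v : {0.5, 0.61, 0.55, 0.45, 0.39, 0.5})
        printf("%d\n", h.update(v));  // 0 1 1 1 0 0: no flicker around 0.5
}
```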

i have just started playing around with vvvv. I have got the blob tracking part done, and now that I have managed to find out where the blobs are, I want to play around with different effects.
thanks to tonfilm for sharing the wave-generating effect. I wanted to use the blobs to generate the wave patterns, but I saw a significant drop in the performance of my patch after putting the two together.
the Perfmeter inside tonfilm's patch seems to indicate that I am using the CPU a lot. I thought that since I already use shaders for most things and the contour module is a FreeFrame plugin, almost everything in my patch should run on the GPU. I am not sure how else I can optimize my patch.
Can anyone shed some light on what I am doing wrong?

Thanks.

blob+wave.rar (7.2 kB)

the wave patch is from master gregsn!

and it's quite an intense patch, so my recommendation would be: get another machine to do the tracking and send the coordinates over ethernet or MIDI. well, if you have some old pc lying around.

or, on a multicore machine, let vvvv run twice, each instance on its own core, and exchange data via ethernet/localhost.
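vvvv's own UDP nodes do the actual work here; just to illustrate what goes over the wire, a minimal C++ sketch with POSIX sockets (the port and the two-float payload are assumptions, and on Windows you would add the usual Winsock setup):

```cpp
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

// hypothetical sender: ship one blob's x/y to the rendering instance on localhost
int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(4444);                 // assumed port
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    float blob[2] = {0.25f, -0.5f};             // x, y in vvvv's -1..1 space
    sendto(sock, blob, sizeof blob, 0, (sockaddr*)&dst, sizeof dst);
    close(sock);
}
```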

and btw: if you share patches, make sure to deliver them with all shaders and textures in a proper folder structure. your rar is kind of useless as it is.

I am sorry.

I haven't been transporting patches between machines much. Let me attach the files again; I think this rar has all the files.
thanks.

blob+wave.rar (14.5 kB)

Okay, after playing with your patch I found the same performance decrease.
And I found this: the bottleneck is in the contour patch. You take a VideoIn and put it onto the graphics card using VideoTexture. There you run a shader for background and color/threshold. Then you grab it back from the graphics card to the CPU with AsVideo to run the contour node.
You are shifting a huge amount of video data CPU-GPU-CPU just to do a threshold/color transformation!
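To put rough numbers on it (assuming, say, a 640×480 RGBA stream at 25 fps): 640 · 480 · 4 bytes · 25 ≈ 30 MB/s, and that crosses the bus twice, up to the card and back down again, before the contour node even starts working.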
Another way would be to wait half a year or so until I manage to understand FreeFrame/C++ and I'll make you a FreeFrame plugin for that. Currently I'm stuck in it. ;D
If you don't have that patience (me wouldn't ;D), one thing to try is to reduce the size of both VideoTexture and AsVideo to cut down the video data. See the attached patch. I think you have to adjust the size to get the best results.

blobtracking_size.v4p (25.1 kB)

also: in AsVideo, try setting the Reference Clock to None (if you haven't already; I didn't look in your patch).

Setting the Reference Clock to None worked great. Thanks for the tip, joreg.
What exactly does that do?

As for moving video data CPU-GPU-CPU: I was trying to do as much as possible in shaders. I remember reading somewhere that it is better to have several small shaders stacked together than one big shader that does everything. I tried that, but AsVideo was the only method I found for taking the video texture from one shader to another. Is there another way?

Thanks.

with AsVideo you convert a texture back into a DirectShow video. a DirectShow video tries to run at a defined framerate, and 25 fps is hardcoded into the AsVideo node (which is stupid, of course). anyway, with the Reference Clock set to None, the DirectShow graph does not try to run at a specified speed, but just runs as fast as possible.
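for the curious: under the hood this presumably maps to the DirectShow call IMediaFilter::SetSyncSource with a NULL clock, roughly like this:

```cpp
#include <dshow.h>  // DirectShow; link against strmiids.lib

// presumably what "Reference Clock = None" amounts to: with no sync source,
// filters stop waiting for sample presentation times and the graph
// simply runs as fast as it can
void disableReferenceClock(IGraphBuilder* graph) {
    IMediaFilter* mediaFilter = nullptr;
    if (SUCCEEDED(graph->QueryInterface(IID_IMediaFilter, (void**)&mediaFilter))) {
        mediaFilter->SetSyncSource(NULL);  // NULL clock = unthrottled
        mediaFilter->Release();
    }
}
```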

hi levvvvky, hi all…
have you considered using an IR illuminator and background subtraction before doing the tracking?
take a look at this thread, where you can find some interesting considerations about camera tracking.
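the core of static background subtraction is tiny: grab one reference frame while the scene is empty, then keep only the pixels that differ enough from it. a minimal C++ sketch (threshold to be tuned per setup; real patches often also update the background slowly):

```cpp
#include <cstdlib>

// static background subtraction: 'background' is a grayscale frame captured
// with nobody in front of the camera; anything in the current frame that
// differs enough from it is kept as foreground for the blob tracker
void subtractBackground(const unsigned char* frame, const unsigned char* background,
                        unsigned char* foreground, int pixelCount,
                        unsigned char threshold) {
    for (int i = 0; i < pixelCount; ++i)
        foreground[i] = (abs(frame[i] - background[i]) > threshold) ? 255 : 0;
}
```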

ciao
ales9000

joreg: interesting stuff about AsVideo, I added it to the node reference.

joreg: thanks for the explanation. That was very helpful.
ales9000: yes, I am already using IR illuminators and background subtraction. I am getting very decent blob tracking right now; I have just been running into some other problems, specifically the performance of my patch.
To add to the whole IR tracking experience: I am using some 850nm IR LEDs with 3 layers of film negatives as a filter on top of a unibrain FireWire camera. I have to say it works very well.