Trautner

Hi everybody,

I have a small question about trautner.

I have two Trautner nodes running in the one patch, with the video coming from the VideoIn node (video + preview pins).

There are two problems.

I’m using the two Trautners to track an x+y position for the mouse.

When I fullscreen my actual render window and come out of it, the Trautner stops working! Does anyone know why?

Another thing: when I start it again, by changing to a different driver and then back again, the two mask images that are loaded into the Trautners are the same, i.e. they don’t retain the original images.
This happens when I open the Trautner patch on its own as well.

Anyone have any ideas how I can fix these two problems? Thanks!

the biggest problem really is the same mask loading in both trautners on load.

Hi, are you using 2 cameras? If not, why use 2 Trautners? I don’t really see what your problem is. Any patch example of the problem?

I had the same problems some time ago, but I thought joreg already fixed that bug!?

“When I fullscreen my actual render window, and come out of it, the trautner stops working! does anyone know why?”

That happens to me when my computer is too close to the action area and hold background is on 1, so when I go fullscreen the background changes. I solved this by using the keyboard to toggle the hold background. That might not be your case.

Attached is my patch. I want to get high-resolution hand position tracking on x+y, but as you probably know, Trautner only takes 256 “shades” of grey, leaving me a resolution of only 16x16 (16 × 16 = 256 zones).
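For context, the 16x16 trick presumably works like this (my assumption about how the mask is laid out, sketched as plain C rather than an actual patch): each of the 256 grey levels in the mask image marks one cell of a 16x16 grid, so the index of the zone Trautner triggers decodes back to an (x, y) cell.

```c
/* Hypothetical encoding of a 16x16 grid into the 256 grey levels
 * of a Trautner mask image: grey value = row * 16 + column.
 * When Trautner reports which zone was hit, decode back to (x, y). */
unsigned char cell_to_grey(int x, int y)
{
    return (unsigned char)(y * 16 + x);
}

void grey_to_cell(unsigned char grey, int *x, int *y)
{
    *x = grey % 16;   /* column */
    *y = grey / 16;   /* row */
}
```

This also makes the resolution limit obvious: one grey level per cell means 256 cells is the hard ceiling.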

As you can see in my patch, on load, both Trautners load the same masks… strange.

I’m open to other suggestions on how I can do hand tracking.

I’ve looked into using Contour tracking, but I can’t seem to get good results because my interaction area is 6 foot × 6 foot, resulting in the user moving around a lot.

track.zip (7.4 kB)

Oh, and I’m only using one camera here.

Probably the best way to go for high-resolution hand tracking would be to modify the Trautner freeframe source code and recompile it with new values (like more detection areas).

Hi 24bit,

I wouldn’t use the preview pin of the VideoIn node, as it’s for preview only. It drops a lot of frames, and under high computing load it can stop completely.
Why don’t you use Contour tracking instead? You get the centre of the hand and better resolution.

But when I move further away from the camera, my contour disappears.

I also find it difficult to differentiate between my hand and arm.

Are there any examples of hand tracking using Contour?

I’m definitely moving away from the idea of using Trautner for hand tracking. I’ve tried everything with Trautner and can’t get smooth results at all.

If anyone has any suggestions, please let me know.

My project pretty much relies on accurate hand tracking.

I’d like tracking as good as Mesmerize with the PlayStation Eye on the PS3. I don’t know if anyone is familiar with it.

Finding a good algorithm and coding it as a freeframe plugin in C or C++ would be the way to go here if you want to have PS3-class tracking.

If you don’t want to go for a freeframe plugin, a good approach to improving the tracking is usually using a shader to prepare (undistort, threshold) the camera image. Particularly useful might be preparing a Queue of textures with video frames and then using a shader with, say, 6 texture inputs to colorize only the pixels with many changes over the last 6 video frames (e.g. outputting something like the maximum minus the minimum colour in these 6 frames).
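The per-pixel idea can be sketched like this (plain C for illustration; the function name and loop are mine — in vvvv this would live in a pixel shader fed by a Queue of textures):

```c
/* For one pixel, take the maximum minus the minimum grey value over
 * the last N frames. Pixels that changed a lot in that window light
 * up; static background pixels come out near zero. */
unsigned char temporal_range(const unsigned char *frames[], int n_frames,
                             int pixel_index)
{
    unsigned char lo = 255, hi = 0;
    for (int i = 0; i < n_frames; i++) {
        unsigned char v = frames[i][pixel_index];
        if (v < lo) lo = v;
        if (v > hi) hi = v;
    }
    return (unsigned char)(hi - lo);   /* large = movement, 0 = static */
}
```

Running this over the whole image gives the “colorize only the changed pixels” result described above, which Pipet can then sample.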

With a preprocessed image like this, it should be possible to use Pipet instead of misusing Trautner to work like a 16x16 Pipet.

Thanks oschatz, that makes a lot of sense. The way it was working before was a very big HACK and didn’t work very well! :)

I’ll take a look at preparing a Queue of textures and colorizing only the pixels with many changes.

Just one thing: how do I go about getting the result into Pipet?

hi,

Here is a patch I did for myself to check tracking with shaders and Pipet. It’s not too far along yet and has some inaccuracies at the borders, but it could give you an idea of how to proceed.
There are many ways to get the foreground out of your image, and it strongly depends on your surroundings and purposes.
Oschatz’s suggestion is good for changing light conditions, but you will get problems if the hand is not moving — all frame-difference approaches have this problem.
For a static background with not too much light change, you get good results with the BackgroundSubstraction shader.
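The core of any background subtraction is a per-pixel threshold against a stored background frame. A minimal sketch (names and threshold handling are my assumptions, not the actual BackgroundSubstraction shader):

```c
/* Classify one pixel as foreground (255) or background (0) by
 * comparing it against the stored background frame. */
unsigned char subtract_background(unsigned char pixel,
                                  unsigned char background,
                                  unsigned char threshold)
{
    int diff = pixel - background;
    if (diff < 0)
        diff = -diff;                       /* absolute difference */
    return diff > threshold ? 255 : 0;      /* foreground mask */
}
```

Unlike frame differencing, this keeps a non-moving hand in the mask — but it breaks down when the lighting drifts away from the captured background, which is exactly the case the adaptive version below me in the thread is aimed at.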
At the moment, I’m working on an adaptive background subtraction for changing lighting conditions, but it’s not finished yet. It needs some time, but I will post and upload it when it’s ready.

Edit: actually, this adaptive background works like oschatz said. To the minimum and maximum, add the maximum difference over the last frames, and let it restart whenever there is no movement. I just read “last 6 frames” and thought it was frame-difference stuff. I’m writing faster than my mind…
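One common way to make the background adaptive (an illustration of the general idea, not Frank’s actual shader): blend the stored background toward the current frame only where no motion was detected, so slow lighting changes get absorbed while a moving hand does not corrupt the model.

```c
/* Update one background pixel. Where motion is detected the
 * background is frozen; elsewhere it slowly follows the camera
 * image at the given rate (0..1), absorbing lighting drift. */
unsigned char adapt_background(unsigned char background,
                               unsigned char pixel,
                               int is_moving, float rate)
{
    if (is_moving)
        return background;                       /* freeze under motion */
    float b = background + rate * (pixel - background);
    return (unsigned char)(b + 0.5f);            /* round to nearest */
}
```

A small rate makes the background track minutes-scale light changes without eating a hand that pauses for a few seconds.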

colorjedoens2.v4p (30.8 kB)

Ähm, I forgot the shader itself. First save it in the effects folder, then start the patch.

TrackingGrid.fx (5.3 kB)

hi frank.

Thanks for the reply. I don’t really get what your patch is showing me. I see that you can move the point around and the green quad follows, and I see everything going into the Pipet.

But how does this allow you to track a hand?