i'm interested in skipping the skeleton/hand detection algorithms, i need the interaction to be more casual
i guess i could fill the screen with hand detection starting points but it doesn't seem right
i experimented with something similar a while ago. i came up with the idea of taking a snapshot of the depth image of the empty room. then i did a little shader which outputs the difference between the snapshot and the live depth image within a certain tolerance. this way you get blobs as soon as you move your fingers/hand near the snapshotted surface (walls, whatever). the nice thing is that it works with any kind of surface, it doesn't have to be flat, and it's quite quick to set up.
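a minimal numpy sketch of that snapshot-difference idea (the original is a shader; the function name, depth units and tolerance value here are illustrative assumptions, not from the post):

```python
import numpy as np

def touch_blobs(snapshot, live, tolerance=0.03):
    """Mark pixels where the live depth image is closer to the camera
    than the empty-room snapshot by more than `tolerance` (meters).

    snapshot, live: 2D float arrays of depth values in meters.
    Returns a uint8 mask: 255 where something (finger/hand) hovers
    in front of the background surface, 0 elsewhere.
    """
    diff = snapshot - live  # positive where something moved in front
    return (diff > tolerance).astype(np.uint8) * 255

# toy example: a flat wall at 1.0 m, a fingertip 5 cm in front of it
snapshot = np.full((4, 4), 1.0)
live = snapshot.copy()
live[1, 2] = 0.95
mask = touch_blobs(snapshot, live)  # single white pixel at (1, 2)
```

because the comparison is per pixel against the stored snapshot, the background can be any shape, which matches the "doesn't have to be flat" observation.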
i got it working reliably with a distance of 2-4cm between background and fingers, so you get a blob a bit before you actually touch the surface. i think the kinect's depth output is just a bit too inaccurate. i tried some temporal smoothing between frames to get a tighter touch range but it didn't help much.
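for reference, one simple form of temporal smoothing between frames is a per-pixel exponential moving average, something like this (the alpha value is a guessed parameter, and the post reports this kind of smoothing didn't tighten the touch range much):

```python
import numpy as np

def smooth_depth(frames, alpha=0.5):
    """Exponential moving average over successive depth frames.
    Higher alpha weights the newest frame more (less lag, less smoothing)."""
    avg = frames[0].astype(float)
    for f in frames[1:]:
        avg = alpha * f + (1 - alpha) * avg
    return avg

frames = [np.zeros((2, 2)), np.ones((2, 2))]
result = smooth_depth(frames)  # halfway between the two frames
```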
next step would be a little editor where you can define regions in the depth image, so you can differentiate which object has been touched.
if you need something more like touch-in-air interaction you could just skip the reference image and write a shader which outputs a certain depth range as pure white…
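the depth-range keying could be sketched in numpy like this (near/far values and the function name are placeholder assumptions; the original would be a shader):

```python
import numpy as np

def depth_key(depth, near=0.5, far=0.8):
    """Output pure white for pixels whose depth (meters) falls inside
    [near, far], black everywhere else -- a depth slice acting as an
    in-air touch zone, no reference snapshot needed."""
    inside = (depth >= near) & (depth <= far)
    return inside.astype(np.uint8) * 255

depth = np.array([[0.4, 0.6],
                  [0.7, 1.2]])
key = depth_key(depth)
# [[  0 255]
#  [255 255]] is wrong -- 1.2 is outside the slice, so:
```

(only 0.6 and 0.7 fall inside the 0.5-0.8 slice, so those two pixels come out white.)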
ah, and use a depth mode which doesn't scale the output between the nearest and farthest pixel…
hi elektromeier
what i was after was indeed the air touch. i went with colorkeying the depth texture and used your dilate shader to smooth the blobs. i changed the texture lookup to clamp, so blobs don't wrap across the screen edges.
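the clamp-vs-wrap point can be illustrated in numpy: edge-replicated padding plays the role of clamp texture addressing, so a dilated blob at one border never bleeds in from the opposite side of the image (the radius and function name here are illustrative, not the actual dilate shader):

```python
import numpy as np

def dilate(mask, radius=1):
    """Morphological dilation with clamped (edge-replicated) borders.
    With wrap addressing, a blob touching the left edge would also grow
    pixels on the right edge; clamping the lookup prevents that."""
    padded = np.pad(mask, radius, mode="edge")  # clamp-style addressing
    out = np.zeros_like(mask)
    h, w = mask.shape
    # take the per-pixel maximum over the (2r+1)x(2r+1) neighborhood
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

m = np.zeros((5, 5), dtype=np.uint8)
m[0, 0] = 255            # blob in the top-left corner
d = dilate(m)            # grows locally, stays off the far edges
```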
thanks for the help >>