Kinect: precision of the depth information

Hi!
Last weekend we tried to track small objects on a plane surface with the Kinect (about 3-4 meters away). We used the OpenNISkeletonDepth node with KinectDepthGrey.fx to get the depth information. Unfortunately the depth information coming out of the fx shows some kind of "distortion": the edges of the surface aren't at the same depth as the surface center. So it was impossible (for us ;) to track only the objects.
After a little research we found out that the problem lies with the Kinect itself and not with the shader or the skeleton node.
To cut a long story short: is there a way to get rid of this nasty distortion?
Maybe an adjustable radial color multiplication inside the shader? (Adjustable because the error depends on range; the problem is also described here: http://groups.google.com/group/openni-dev/browse_thread/thread/e2cb3165d6eec1e2#)
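Something along these lines is what I have in mind - a rough, untested sketch (Strength and all the other names are made up, and the usual fx boilerplate like the vertex shader is left out):

texture DepthTex;
sampler DepthSamp = sampler_state { Texture = (DepthTex); MinFilter = POINT; MagFilter = POINT; };

float Strength = 0.0;  // tweak from the patch; 0 = no correction

float4 psRadial(float2 uv : TEXCOORD0) : COLOR
{
    float d  = tex2D(DepthSamp, uv).r;  // depth as stored in the grey texture
    float2 c = uv - 0.5;                // offset from the image center (optical center assumed there)
    float r2 = dot(c, c);               // squared radial distance
    float corrected = d * (1.0 + Strength * r2);  // first-order radial lift/flatten
    return corrected.xxxx;
}

technique TRadialCorrection
{
    pass P0 { PixelShader = compile ps_2_0 psRadial(); }
}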

philip <- has to learn how to write fx-files

Perhaps you can record a (depth) frame of the plane surface alone and use it to correct the distortion.
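Untested, but in a shader that could be as simple as comparing the live depth against the recorded frame (ReferenceTex and Threshold are made-up names, boilerplate again omitted):

texture DepthTex;
texture ReferenceTex;  // the pre-recorded depth frame of the empty plane
sampler DepthSamp = sampler_state { Texture = (DepthTex); MinFilter = POINT; MagFilter = POINT; };
sampler RefSamp   = sampler_state { Texture = (ReferenceTex); MinFilter = POINT; MagFilter = POINT; };

float Threshold = 0.02;  // how much nearer than the plane a pixel must be to count as an object

float4 psSubtract(float2 uv : TEXCOORD0) : COLOR
{
    float live = tex2D(DepthSamp, uv).r;
    float ref  = tex2D(RefSamp, uv).r;
    // objects sit in front of the plane, so live < ref where something is present
    float hit = step(Threshold, ref - live);
    return hit.xxxx;  // white where an object is, black elsewhere
}

technique TBackgroundSubtract
{
    pass P0 { PixelShader = compile ps_2_0 psSubtract(); }
}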

I don't think that's possible.
The way the Kinect depth sensor works is completely different from a CMOS or CCD camera: there is no "grid" of pixels, it looks more like a random spread of dots.

Therefore I think a very good filter needs to be programmed to get a "smooth" surface, but that's a software post effect, not a hardware issue.
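As a starting point, something like a small box filter that skips the "no reading" (zero) pixels might do - just a sketch, untested, all names made up:

texture DepthTex;
sampler DepthSamp = sampler_state { Texture = (DepthTex); MinFilter = POINT; MagFilter = POINT; };

float2 TexelSize;  // set to (1/width, 1/height) from the patch

float4 psSmooth(float2 uv : TEXCOORD0) : COLOR
{
    float sum = 0;
    float n = 0;
    // 3x3 neighborhood average, ignoring invalid (zero) depth samples
    for (int y = -1; y <= 1; y++)
        for (int x = -1; x <= 1; x++)
        {
            float d = tex2D(DepthSamp, uv + float2(x, y) * TexelSize).r;
            if (d > 0) { sum += d; n += 1; }
        }
    float avg = n > 0 ? sum / n : 0;
    return avg.xxxx;
}

technique TSmooth
{
    pass P0 { PixelShader = compile ps_3_0 psSmooth(); }
}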

Are you converting the depth data from the Kinect to XYZ data? The data as it comes from the Kinect is in terms of the Kinect's "projection", and will show strong distortion if you try to interpret it in world terms.
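For reference, the underlying math is just the pinhole camera model - here is a rough pixel-shader sketch of it (the 525 px focal length and the millimeter scaling of the 16-bit map are assumptions, not the exact values from my shader; vertex shader and transforms omitted):

texture DepthTex;
sampler DepthSamp = sampler_state { Texture = (DepthTex); MinFilter = POINT; MagFilter = POINT; };

float2 Resolution  = float2(640, 480);
float  FocalLength = 525.0;  // in pixels; an assumed typical value for the Kinect

float4 psToWorld(float2 uv : TEXCOORD0) : COLOR
{
    // assumes the 16-bit grey map stores millimeters, read back normalized to 0..1
    float z = tex2D(DepthSamp, uv).r * 65.535;  // -> meters

    float2 pix = uv * Resolution;
    float x = (pix.x - Resolution.x * 0.5) * z / FocalLength;
    float y = (Resolution.y * 0.5 - pix.y) * z / FocalLength;

    return float4(x, y, z, 1);  // needs a floating point render target
}

technique TDepthToWorld
{
    pass P0 { PixelShader = compile ps_2_0 psToWorld(); }
}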

The block of nodes on the left of the help patch for OpenNISkeletonDepth does the correct conversion. While it works, that was really just a proof-of-concept (hack) plugin and set of modules, and it is very inefficient for many points.

You should be able to get better performance by using the latest Kinect plugin in git, feeding the 16-bit greyscale depth map from it into my conversion shader (forum:kinect-depth-to-world-conversion-in-pixel-shader-but…) and then using Pipet to read the world XYZ values directly - but I haven't tried it. I'll get a new version of my mod of the dynamic plugin up in a day or so that does that, and also allows you to specify bounding boxes for areas of interest for greater efficiency/speed.

Now if you WERE using my routine for XYZ conversion and you're still seeing distortion, tell me more, as it works pretty well for me - but I'm not tracking anything smaller than hands, and I'm only concerned with roughly centimeter accuracy at 0.5-2 meters.