Calibrating an HD camera to a Kinect depth sensor

Hello y’all. Anyone got any tips, techniques, patches or plugins for doing this please?

Also any tips on what HD camera will work well with vvvv would be appreciated.

thanks

didn't try this, but maybe Perspective (Transform Kinect) could somehow be adapted to include the relative position of the second camera and its x/y FOVs

just a quick note: if your HD cam has a different angle than the kinect, it can only work at one plane…

it can work at any plane if you calibrate properly

this is what you need OpenCV for! :)

StereoCalibrate from OpenCV can be used to get the right transform to reproject the texture from the HD camera onto the mesh from the kinect.
you need to

  • use opencv
  • do some chessboard captures
  • check out this project because it’s awesome and works in the same field http://rgbdtoolkit.com/
  • you need a ‘StereoCalibrate’ node, which hasn't been made yet but which i need to make anyway; since it's just wrapping OpenCV it won't take too long (but i want to make sure the output comes out in a useful way for vvvv) - see the sketch after this list
  • come to my workshop at Node 13 that i wanna run!
  • ask Node guys to invite me to run a workshop on OpenCV at Node 13 :)
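to give an idea of what that node would wrap, here's a rough C++ sketch against the OpenCV calib3d API (not the actual node - the file names, board size and image sizes here are made up):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Size boardSize(9, 6);        // inner corner count of the printed chessboard
    float squareSize = 0.025f;   // square edge length in metres

    // known 3d layout of the board corners (all on the z = 0 plane)
    vector<Point3f> board;
    for (int y = 0; y < boardSize.height; y++)
        for (int x = 0; x < boardSize.width; x++)
            board.push_back(Point3f(x * squareSize, y * squareSize, 0));

    vector<vector<Point3f> > objectPoints;
    vector<vector<Point2f> > kinectCorners, hdCorners;

    // load simultaneous capture pairs from both cameras
    for (int i = 0; ; i++) {
        Mat imgKinect = imread(format("kinect_%02d.png", i), 0);
        Mat imgHd     = imread(format("hd_%02d.png", i), 0);
        if (imgKinect.empty() || imgHd.empty())
            break;
        vector<Point2f> cKinect, cHd;
        if (findChessboardCorners(imgKinect, boardSize, cKinect) &&
            findChessboardCorners(imgHd, boardSize, cHd)) {
            kinectCorners.push_back(cKinect);
            hdCorners.push_back(cHd);
            objectPoints.push_back(board);
        }
    }

    // calibrate each camera's intrinsics + distortion individually first
    Mat K1, D1, K2, D2;
    vector<Mat> rvecs, tvecs;
    calibrateCamera(objectPoints, kinectCorners, Size(640, 480), K1, D1, rvecs, tvecs);
    calibrateCamera(objectPoints, hdCorners, Size(1280, 720), K2, D2, rvecs, tvecs);

    // then solve for the rigid transform between the two cameras
    Mat R, T, E, F;
    stereoCalibrate(objectPoints, kinectCorners, hdCorners,
                    K1, D1, K2, D2, Size(640, 480), R, T, E, F);
    // R and T are exactly the transform you need to reproject
    // the hd texture onto the kinect mesh
    return 0;
}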

Technically:
Given the intrinsics (projection) and extrinsics (view) of each camera (kinect + video), you can (see the sketch after this list):

  • unproject kinect depth into world 3d (the SDKs will also do this for you automatically with proper calibration, by reading the calibration data off the device)
  • project rgb onto the unprojected depth mesh
  • know the field of view, lens offset, rotation, translation of both cameras
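per depth pixel, the first two steps boil down to something like this (a sketch in plain OpenCV types, not any particular SDK's API; Kdepth, Krgb, R and t come from the stereo calibration):

#include <opencv2/core/core.hpp>

// map one kinect depth pixel to a pixel coordinate in the rgb image
cv::Point2f depthPixelToRgb(int u, int v, float depthMetres,
                            const cv::Matx33f& Kdepth,
                            const cv::Matx33f& Krgb,
                            const cv::Matx33f& R,
                            const cv::Vec3f& t)
{
    // 1. unproject: pixel + depth -> 3d point in depth-camera space
    float x = (u - Kdepth(0, 2)) * depthMetres / Kdepth(0, 0);
    float y = (v - Kdepth(1, 2)) * depthMetres / Kdepth(1, 1);
    cv::Vec3f pDepth(x, y, depthMetres);

    // 2. rigid transform into the rgb camera's space
    cv::Vec3f pRgb = R * pDepth + t;

    // 3. project with the rgb intrinsics
    float uRgb = Krgb(0, 0) * pRgb[0] / pRgb[2] + Krgb(0, 2);
    float vRgb = Krgb(1, 1) * pRgb[1] / pRgb[2] + Krgb(1, 2);
    return cv::Point2f(uRgb, vRgb); // divide by image size to get uvs
}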

Elliot, wouldn't it be possible to run your existing patch (kinect-projector calibration) and replace the projector with the camera we'd like to map to the kinect? (the projector is nothing but a camera in this case, right?):

would it work to:
make a chessboard,
adapt the resolution of the camera,
launch the OpenCV calibration from your patch,
get the transform matrix,
and distort the texture like you do in your beautiful patch, Elliot?
(and by the way, thank you so much for sharing this!!!)

thx

g

but at the same time: you can calibrate to any (depth) plane, right. but you cannot animate this plane, or can you? i.e. when the actor moves closer to the camera or further away than the plane you calibrated for, the matching will drift off, no?

also a quick’n’dirty way is to just use Homography (Transform 2d) to get basic matching of the images. ah, and also just for the record see the helppatch of Undistort (DShow9 OpenCV).
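in opencv terms that quick fix is roughly this (a sketch; the 4+ point pairs would be picked by hand or taken from chessboard corners, and the 640x480 output size is just an example):

#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// warp the hd frame into the kinect's image plane; only valid for
// content at (roughly) the depth plane the picked points lie on
cv::Mat warpHdToKinect(const cv::Mat& hdFrame,
                       const std::vector<cv::Point2f>& ptsHd,
                       const std::vector<cv::Point2f>& ptsKinect)
{
    cv::Mat H = cv::findHomography(ptsHd, ptsKinect);
    cv::Mat warped;
    cv::warpPerspective(hdFrame, warped, H, cv::Size(640, 480));
    return warped;
}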

hey joreg!

it will work consistently at any depth as long as the calibration is good
of course you will still have problems with occlusions and unmatched FOVs

it’s essentially a projected texture from the DSLR camera’s frustum onto a 3d mesh from the kinect sensor. for this you need to know the 3d points on the mesh in world space, the intrinsics (projection transform and undistortion) of the DSLR and the extrinsics (world transform between the 2 spaces)

if you only have the 2d calibration (i.e. a homography) then you’re constrained to one plane which of course is useful for lots of situations anyway
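condensed down, that whole projection is one matrix applied per vertex - roughly the same math a vertex shader would run on the GPU. a sketch in OpenCV types (K is the DSLR's 3x3 camera matrix, R|t the extrinsics; undistortion left out):

#include <opencv2/core/core.hpp>

// build the 3x4 projection P = K [R|t] once...
cv::Matx34f projectionMatrix(const cv::Matx33f& K,
                             const cv::Matx33f& R,
                             const cv::Vec3f& t)
{
    cv::Matx34f Rt(R(0,0), R(0,1), R(0,2), t[0],
                   R(1,0), R(1,1), R(1,2), t[1],
                   R(2,0), R(2,1), R(2,2), t[2]);
    return K * Rt;
}

// ...then every mesh point lands in the DSLR image like this
cv::Point2f projectToDslr(const cv::Vec3f& worldPoint, const cv::Matx34f& P)
{
    cv::Vec3f h = P * cv::Vec4f(worldPoint[0], worldPoint[1], worldPoint[2], 1.0f);
    return cv::Point2f(h[0] / h[2], h[1] / h[2]); // divide by resolution for uv
}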

thanks y’all. I suspected homography might be a crude technique. I’d love a bit of chessboard OpenCV calibration though.

+1 vote for openCV workshop at node.

Elliot, do you think my technique would work?

print a real chessboard, run for example the rgbdtoolkit to get the intrinsics and extrinsics between the camera and the kinect, and feed the transform matrix of the camera texture like in your patch (not really sure how to do this… :)
I have to do a calibration between a thermal cam and a kinect, in three days… ouchy mama.
Do you think it will work?
thx
g

hey gundorf!

this is actually something that we looked into at art&&code (calibrating thermal camera to kinect and then to a projector)

the method we were looking at was:

  1. Laser cut a dot hole pattern http://opencv.itseez.com/_images/asymetricalPattern2.jpg
  2. Put something hot behind this pattern
  3. Calibrate with opencv-2.3 (which supports these circle grids as well as chessboards)
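step 3 with opencv-2.3 looks roughly like this (a sketch - the pattern size matches the standard asymmetric grid, and the inversion step is a guess for thermal images where the hot spots read as bright blobs):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat thermal = cv::imread("thermal_00.png", 0); // greyscale
    cv::Size patternSize(4, 11); // the standard asymmetric circle grid
    std::vector<cv::Point2f> centres;

    bool found = cv::findCirclesGrid(thermal, patternSize, centres,
                                     cv::CALIB_CB_ASYMMETRIC_GRID);
    if (!found) {
        // blob detection expects dark circles on a light background,
        // so invert and retry if the grid reads the other way round
        cv::bitwise_not(thermal, thermal);
        found = cv::findCirclesGrid(thermal, patternSize, centres,
                                    cv::CALIB_CB_ASYMMETRIC_GRID);
    }
    // feed 'centres' into calibrateCamera/stereoCalibrate exactly
    // as you would chessboard corners
    return found ? 0 : 1;
}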

If you use RGBDToolkit you can definitely get those extrinsics and intrinsics and plug them into vvvv. Essentially you’ll be doing a projected texture.
There’s an example of this working in openFrameworks at
https://github.com/obviousjim/ScreenLab0x01/tree/master/Renderer

particularly check out
ScreenLab0x01/Renderer/bin/data/shaders/unproject.vert
and the drawWireframe function in
https://github.com/obviousjim/ofxRGBDepth/blob/master/src/ofxRGBDRenderer.cpp
which binds that shader

thx a lot for those precious lines!!
I'll keep working on it and keep you posted; who knows, maybe I will succeed :)
tiouss
g.

just to be sure I understand correctly:
from the rgbdtoolkit I have:
depthCalib.yml, which must be my depth sensor intrinsics
rgbcalib.yml, which is my camera intrinsics.

rotationDepthtoRgb.yml
rotationRgbtoDepth.yml
translationRgbtoDepth.yml
translationDepthToRgb.yml
these must be the extrinsics.
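(I'm guessing they load back with OpenCV's FileStorage, something like this - the node names inside the yml files are a guess, I'd check them in a text editor first:)

#include <opencv2/core/core.hpp>

int main()
{
    cv::Mat Krgb, rotation, translation;

    cv::FileStorage rgb("rgbcalib.yml", cv::FileStorage::READ);
    rgb["cameraMatrix"] >> Krgb;               // 3x3 rgb intrinsics

    cv::FileStorage rot("rotationRgbtoDepth.yml", cv::FileStorage::READ);
    rot["rotationMatrix"] >> rotation;         // 3x3

    cv::FileStorage tra("translationRgbtoDepth.yml", cv::FileStorage::READ);
    tra["translationVector"] >> translation;   // 3x1

    return 0;
}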

if I want to map my camera image onto my depth map, I just need RgbToDepth, right?

how can I combine the 3x1 translation and the 3x3 rotation matrices into one texture transform matrix?
I multiply the intrinsic and extrinsic matrices, and then:

once I have my transform matrix,
I plug it into the adapted shader from ScreenLab;
should my matrix be 4x4, because of the varying vec4 texCdNorm?
Or can I stay with my 3x3 matrix?
then I have my texture adapted, and I can send it into your runtime projector calibration? right? and magically it works and then I can finally sleep!

I'm really not sure of everything above; I'm trying to get some clues from the ofx code, which I haven't worked with yet.
it's been a lot of new stuff for my little brain,
Thx. I'm hanging in there ;)
g.

hey!

this is much easier to explain practically than descriptively.
i’m elliotwoods on skype if you’re around!
maybe we can work on an example patch together

elliot

hi!

about the calibration of the cam and depth sensor, a few questions because I'm really not familiar with opencv ;)
from the 3x1 translation and the 3x3 rotation matrix of the rgbdtoolkit, my 4x4 extrinsics should look like this:
extrin.transform = { rm0,  rm3,  rm6, 0.0f,
                     rm1,  rm4,  rm7, 0.0f,
                     rm2,  rm5,  rm8, 0.0f,
                     tv0, -tv1, -tv2, 1.0f };
so far so good.
By the way, I don't need to consider the DepthToRgb yml file? Using the RgbToDepth one is enough, right?

I haven't found a way to deal with the distortion coefficients; is it OK to multiply this 5x1 vector into the 3x3 intrinsics parameters? And do I still only need to consider my rgb intrinsics, not the depth intrinsics?

the final intrinsics matrix is 3x3? Is it OK to multiply it with the extrinsics matrix, which is 4x4?
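my best guess from the OpenCV docs is that the 5 distortion coefficients are not a matrix at all (the distortion is nonlinear), so the image gets undistorted once first and then only the plain 3x3 intrinsics are used - something like this (just a guess):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// after this call, the 3x3 Krgb alone describes the projection,
// and it can be padded out to 4x4 to multiply with the extrinsics
void undistortRgb(const cv::Mat& raw, cv::Mat& undistorted,
                  const cv::Mat& Krgb, const cv::Mat& distCoeffs)
{
    cv::undistort(raw, undistorted, Krgb, distCoeffs);
}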

And then the shader:

I implemented it in your pointcloud shader to render the mapped image with the real scene; maybe I should have made a second shader which just transforms the cam texture and then feeds into the pointcloud to be rendered, magically mapped.
I attached it; if you could just take a glance, that would be great.
thx again man !
big up.

attached file: the current shader, which I'm not sure does the job, and the output calibration file from the rgbdtoolkit (tested with a PS3 cam).

cam-kinect.rar (4.0 kB)

can anyone recommend a good HD camera that will work well with vvvv?

thanks

here's the latest version of the shader; still nothing, black image… I think I've missed some simple points… here's the final transform matrix between the ps3 cam and the kinect… I don't know if these values are correct…

any tips about first steps in shading world would be deeply welcomed ;)

thank you
ps3todepth.png

ps3kinect.fx (3.5 kB)

would anyone know about the shader issue? I think this is a basic general issue, not really specific to the kinect mapping problem.
Sorry, but I'm in a rush, and I would like to do this mapping in an official way ;) not with anchor/translation/size/blabla hacking, which is pretty dirty… but I don't have much time, and I'm in trouble :)

by the way,
the rgbdtoolkit has been updated, pretty nice stuff here! thanks to James George and Elliot Woods.
http://rgbdtoolkit.com/

thanks.

@xd_nitro - for nice results you'd probably want a DSLR using mini-HDMI out into a BlackMagic Intensity, considering you've probably got all of those to hand (except the mini-HDMI cable, maybe?)
that'd be 720p, but obviously much better quality than a smaller sensor/lens camera with fewer controls.
Also you might need to consider getting Magic Lantern in order to turn off the OSD.

@gundorf - you’re calibrating a PS3Eye to the kinect? eh? :)
the PS3Eye camera isn't really different in quality from the kinect camera at all, except possibly higher fps, and your mesh data is still going to be arriving at 30fps (unless you do some funky optical-flow-based mesh interpolation)

@elliotwoods
it's just an attempt to get the whole mapping configuration working before dealing with the thermal cam; but even like this, I'm pretty stuck right now.

@elliot

arh great idea. duh didn’t think of that. I even have a DSLR with a mini-HDMI! thanks