New Projection Mapping Technique?

Hey guys, REAL here.

I recently checked out a story on MIT students developing a touch screen you don’t actually touch, aka, it reads the gestures you make in front of the screen. After reading how they did it, I came up with this idea.

Would it be possible to have IR lights shining on the object (or objects) you’d like to project onto, with an IR camera feeding greyscale data to VVVV or other software, which then interprets the greyscale?

Darker the value = further away = larger image in that area
Lighter the value = closer = smaller image in that area

VVVV or other software would then distort the image you wish to project onto the surface accordingly, in a seamless manner.
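Not a VVVV patch, but the greyscale-to-distortion idea can be sketched in a few lines of numpy. The inverse-square inversion and the reference intensity `I_REF` below are illustrative assumptions, not calibrated values:

```python
import numpy as np

# Assumed calibration constant: the intensity the camera records at 1 m
# from the IR light. In a real rig you'd measure this; here it's made up.
I_REF = 200.0

def intensity_to_distance(grey):
    """Invert the inverse-square law: I = I_REF / r^2  =>  r = sqrt(I_REF / I)."""
    grey = np.clip(np.asarray(grey, dtype=float), 1.0, None)  # avoid divide-by-zero
    return np.sqrt(I_REF / grey)

def distance_to_scale(r, r_ref=1.0):
    """Farther surface = larger image in that area; scale grows with distance."""
    return r / r_ref

# Fake 2x2 IR camera frame: bright pixel = close, dark pixel = far.
frame = np.array([[200.0, 50.0],
                  [12.5, 200.0]])
r = intensity_to_distance(frame)   # [[1, 2], [4, 1]] metres under the model
scale = distance_to_scale(r)       # per-area enlargement factors
```

The `scale` map is the part the software would then feed into whatever warp/mesh distortion it applies to the projected image.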

Would this be possible?

For larger surfaces, you could have multiple IR lights the same distance away from the building and try to keep them pretty consistent, as the value of the IR on the building is the only thing that controls the distortion.

You could then, as a lesser priority, take the same idea as above and manually split each area accordingly. For instance, if you have a pyramid, the IR light system would distort the image so it appears flat, then you could split the pyramid into 4 different panels and choose which video goes onto which panel.
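The splitting step is really just per-panel masking: one mask per face, each mask selecting which video feed fills those pixels. A toy two-panel version (the masks and the constant-colour "video feeds" here are hand-made placeholders):

```python
import numpy as np

H, W = 4, 4  # tiny frame just for illustration

# Panel masks: which pixels belong to which face.
# In practice these would be drawn by hand or derived from the depth map.
left = np.zeros((H, W), dtype=bool)
left[:, :2] = True
right = ~left

video_a = np.full((H, W), 1.0)  # stand-ins for two video feeds
video_b = np.full((H, W), 2.0)

# Composite: each panel shows its assigned video.
out = np.where(left, video_a, 0.0) + np.where(right, video_b, 0.0)
```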

Please let me know if this is possible, an idea worth pursuing, or if anybody has thought of it or is willing to make it happen.

Thanks :)

intensity of light should diminish as 1/r^2
but the intensity recorded at the camera is subject to many conditions
such as
vignetting / compensation for vignetting at the edge of the camera image
noise on camera image
other sources (biggest issue)
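To see why noise and other sources hurt so much: inverting I = I0/r^2 turns a small intensity error into a proportional distance error (dr/r ≈ -dI/(2I)), and at longer range the signal itself is tiny, so the relative error grows. A quick numerical check (I0 is an arbitrary reference intensity, purely illustrative):

```python
import math

I0 = 1000.0  # assumed intensity at r = 1 (illustrative only)

def distance(intensity):
    # invert the inverse-square falloff: I = I0 / r^2
    return math.sqrt(I0 / intensity)

# At r = 4 the true intensity is I0/16 = 62.5; subtract a small noise offset.
true_I = I0 / 16
noisy_I = true_I - 5.0           # ~8% intensity error from sensor noise
r_true = distance(true_I)        # 4.0
r_noisy = distance(noisy_I)      # ~4.17, i.e. ~4% range error
```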

you would be better off projecting something meaningful in IR and then recording that back with the camera
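One standard way to "project something meaningful" is binary-coded structured light: project a series of stripe patterns, and each camera pixel's on/off sequence across the series identifies which projector column it sees, independent of absolute brightness. A toy 1-D version (the pattern count and width are arbitrary choices):

```python
import numpy as np

N_BITS = 4  # 4 stripe patterns -> 16 distinguishable projector columns

def patterns(width=16):
    """Stack of binary stripe patterns; pattern k carries bit k of the column index."""
    cols = np.arange(width)
    return np.array([(cols >> b) & 1 for b in range(N_BITS - 1, -1, -1)])

def decode(observed):
    """Turn each pixel's observed on/off sequence back into a projector column."""
    weights = 1 << np.arange(N_BITS - 1, -1, -1)
    return (observed * weights[:, None]).sum(axis=0)

pats = patterns()
decoded = decode(pats)  # a perfect camera recovers column indices 0..15
```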

worth a try, but you need a project that doesn’t have high accuracy requirements