projection-mapping, openCV, projector, projector node
Credits: elliotwoods, microdee (for soft shadows in runtime demo)
Tutorial on 3D projection mapping using the CalibrateProjector node (wrapping OpenCV's CalibrateCamera routine).
Use your mouse in the World renderer to move control points to features on the mesh (virtual object).
|Enter||Add new point|
|Tab||Select next point|
|Shift+Tab||Select last point|
Use your mouse in the world renderer to drag the control points coming out of the projector onto the corresponding features of the real object.
|Space||Reset point to world view camera|
|Tab||Select next point|
|Shift+Tab||Select last point|
Use the SaveViewProjection node at the bottom of the patch to save the View and Projection matrices (i.e. the result of the calibration process). These can then be loaded into other patches with the LoadViewProjection node (packaged in the download).
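As an aside, the calibration result is just two 4x4 matrices, so it is easy to persist and reload outside of vvvv too. A minimal numpy sketch (the plain-text layout and file name here are my own assumptions, not the format the vvvv nodes use):

```python
import os
import tempfile

import numpy as np

def save_view_projection(path, view, projection):
    # Stack the two 4x4 matrices into one 8x4 array and write plain text,
    # so a calibration result can be re-used by another patch/program.
    np.savetxt(path, np.vstack([view, projection]))

def load_view_projection(path):
    data = np.loadtxt(path)
    return data[:4], data[4:]

# Round-trip an (arbitrary, assumed) calibration result.
view = np.eye(4)
projection = np.diag([1.8, 2.4, -1.0, 1.0])
path = os.path.join(tempfile.gettempdir(), "calibration.txt")
save_view_projection(path, view, projection)
loaded_view, loaded_projection = load_view_projection(path)
```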
What I'd describe as '3D projection mapping' is the act of re-projecting a virtual 3D object onto its real-world counterpart using a video projector*. All of the features of the real object that are visible from the projector's point of view then have an image projected onto them, and this image is 'extracted' from the corresponding surfaces of the virtual counterpart object.
* thereby defining '2D projection mapping' as lining up 2D shapes in a projector image with real world features in the projector's line of sight.
A projector has the same optical properties as a camera: a 3D scene is projected onto a 2D plane (camera), or an image on a 2D plane is projected onto a 3D scene (projector). This comes from the Principle of Reversibility, whereby in optics reversing any path of light results in a valid system.
Imagine a film camera on a tripod in front of a table which is covered in objects. When we take a photo, light hits the objects and is scattered; rays that are scattered directly towards the camera's aperture are captured on the film. Each minute section of the film is looking through the aperture at one part of the scene, and therefore captures light from that part of the scene only.
Presume we don't move the film at all; it stays in the camera as it was when we took the photo. If we then develop the film inside the camera, the image of the scene will appear on the film. If we open the aperture now, each section of the film is still looking through the aperture at its corresponding section of the scene. Imagine we shine light onto the film so that it glows: it scatters light from each section of the film according to the intensity pattern of the image. This light then travels back through the aperture and hits the section of the scene corresponding to each part of the image. The image of the scene will be 'projection mapped' back onto the scene (or perhaps the negative image).
In the film example, the 'projector' and camera conveniently share the exact same optics:
If we want to match a virtual camera with a real video projector, then we must create a camera that shares all these above properties with the projector.
Conveniently, the standard model of a video projector is to have no lens distortion (i.e. if you shine the image onto a flat surface, you will always get a rectangular image). Therefore we omit this property, and can very effectively define a projector with standard computer graphics camera matrices (view and projection matrices).
The remaining properties can be categorised into 2 sets:
|Extrinsics||World Transform||Translation <3>, Rotation <3>|
|Intrinsics||Projection Transform||Focal length (XY) <2>, Lens offset (XY) <2>|
This gives us 10 degrees of freedom for our virtual camera.
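To make the bookkeeping concrete, here is a sketch of how those 10 parameters could be assembled into view and projection matrices. The values, axis conventions, and OpenGL-style projection below are my own illustrative assumptions, not taken from the patch:

```python
import numpy as np

# The 10 degrees of freedom (illustrative values, not a real calibration):
rx, ry, rz = 0.0, 0.2, 0.0     # rotation, Euler angles in radians   <3>
tx, ty, tz = 0.1, -0.3, 2.0    # translation                         <3>
fx, fy     = 1.8, 2.4          # focal length, normalised            <2>
sx, sy     = 0.0, -0.4         # lens offset, normalised             <2>

def view_matrix(rx, ry, rz, tx, ty, tz):
    """Extrinsics: world-to-camera transform (column-vector convention)."""
    def rot(axis, angle):
        # Rotation about a single axis (sign convention is arbitrary here).
        c, s = np.cos(angle), np.sin(angle)
        R = np.eye(3)
        i, j = [(1, 2), (0, 2), (0, 1)][axis]
        R[i, i] = R[j, j] = c
        R[i, j], R[j, i] = -s, s
        return R
    V = np.eye(4)
    V[:3, :3] = rot(2, rz) @ rot(1, ry) @ rot(0, rx)
    V[:3, 3] = [tx, ty, tz]
    return V

def projection_matrix(fx, fy, sx, sy, near=0.1, far=100.0):
    """Intrinsics: OpenGL-style perspective matrix with a lens offset."""
    return np.array([
        [fx, 0.0, -sx, 0.0],
        [0.0, fy, -sy, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Sanity check: with an identity pose, a point straight ahead of the
# camera lands at the lens-offset position in normalised device coords.
P = projection_matrix(fx, fy, sx, sy)
V = view_matrix(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
clip = P @ V @ np.array([0.0, 0.0, -2.0, 1.0])
ndc = clip[:3] / clip[3]
```

Note how the lens offset simply shifts the image in normalised device coordinates, which is why projectors with heavy lens shift still fit this camera model.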
We can either enter these 10 parameters (above) manually, as in how to project on 3d geometry, or we can find a mechanism that will calculate them for us.
The previous automatic route for this was Pade projection mapping (which required a custom vertex shader). The CalibrateProjector node uses OpenCV's CalibrateCamera function to perform this automatic solving for us, and provides standard View and Projection matrices.
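To give a feel for what such a solver computes, here is a simplified numpy stand-in for the calibration step: a direct linear transform that recovers a 3x4 camera matrix from 2D-3D correspondences. CalibrateCamera additionally estimates distortion and does nonlinear refinement; all data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: non-planar 3D feature points (stand-ins for mesh features).
pts3d = rng.uniform(-1.0, 1.0, size=(8, 3))

# Ground-truth "projector": intrinsics K and pose [R | t].
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.1, -0.2, 4.0])
P_true = K @ np.hstack([R, t[:, None]])

def project(P, pts):
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    x = (P @ homog.T).T
    return x[:, :2] / x[:, 2:]

pts2d = project(P_true, pts3d)  # the '2D control points' in the image

def dlt(pts3d, pts2d):
    """Recover a 3x4 projection matrix from 2D-3D correspondences."""
    rows = []
    for (x, y, z), (u, v) in zip(pts3d, pts2d):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
    # The solution is the null vector of the stacked system, i.e. the
    # last right singular vector of its SVD.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

P_est = dlt(pts3d, pts2d)
max_err = np.abs(project(P_est, pts3d) - pts2d).max()  # reprojection error
```

In the patch, the 3D points are the features you pick on the virtual mesh, and the 2D points are the control points you drag in the projector image.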
The general benefit of the automatic route is that it allows a more immediate interaction between the person calibrating and the result that they want to achieve. By manipulating control points on the mesh and in the projector, the operator can more quickly and confidently achieve an accurate mapping, and fine tune the mapping by adding extra control points in problem areas, without compromising the existing data entered.
If you encounter a problem selecting points in the World view where selecting a point on the front of an object instead selects a point on the back of the object, then try the following. (This can happen if your graphics card doesn't have the same depth buffer settings available as mine.)
Youtube says: "This video has been removed because it is too long" :(
we love you voice on video Elliot...
same problem here
yep. cheers for notice!
noted and currently uploading on another account which can go > 15mins
Very nice tutorial, thanks for your time :)
where are the plugins for the tables?
@microdee - these are sneakily included in the latest OpenCV plugin pack (May 2012).
Should be moved to a separate download in future.
Using your soft shadow contrib in the runtime example.
so nice ! cheers for the reconstructme info
very nice thanks elliot! which scanner did you use in the example?
thank u Elliot for the great contribution and for the tutorial also!
@elliotwoods: oooh ok. hey what an honor!:D
update: @elliotwoods: there's no ValuesTableBuffer.dll in the opencv contrib, however i found the source on your github. is there a compiled version somewhere?
Will include. In asakusa restaurant. Will post corrections to opencv dll's from xd_nitro's studio tomorrow
ack. actually it's going to be much easier to wait until i'm back on my own PC
has to be the weekend :(
@microdee - the filename had changed to VVVV.Nodes.SpreadTableValue.dll
perhaps you were looking at the old tutorial patch?
@tekcor : Asus Xtion Live (equivalent to a kinect camera)
Got some problems with points in World Projector.
Sometimes when I try moving the points in World Projector they are kind of sticky, like to a grid or something, and when I create a point it does not appear at the cursor but somewhere else.
Am I missing something?
Anyway this is just great thanks so much for this contribution.
Makes life much easier!!!!!
you are my personal Hero!!!
this is great!!!
thank you elliot!!!
Hello all and thank you Elliot for this piece of technical and helpful art.
It works pretty well.
Does someone have any idea how to display a picture in the background of the Projection renderer?
I've tried many ways to do it but my background always gets distorted.
It would be helpful to put a picture in the background, in order to experiment without having to switch on the video projector.
Edit: I've found a way to put a picture in the background, but it's dirty, I can't show this... I'm sure you guys know how to do it the clean way
Thanks for the tip, I was looking at the billboard side and it works until I add the fourth calibration point, then the camera projection shifts my background picture. My trick is to render the Projection renderer as a DX9Texture and blend it with my background FileTexture on a FullscreenQuad in a new DirectX Renderer window.
@newemka - did you make sure that Billboard was set to WithinNormalisedProjection? (also I think there's a separate node for this.) Either way, glad you found a solution that works for you!
Hi, thanks for the nice tut and solution Elliot.
I found a strange behaviour. I'm using beta 28 and it works very well; I had to repatch some stuff and change the keyboard subpatch.
Moving the patch to another folder, you get all nodes working apart from CalibrateCamera. I found that if you have your project folder containing a folder with the OpenCV plugin in it and a folder with the patches inside, it seems to work again. It does not matter that you have OpenCV already added in the vvvv root; without doing this, it did not work for me. Just in case you have a similar problem. Cheers
confirmed. had these path troubles too with beta <28.
also if you get this msg using the CalibrateCamera node:
"OpenCV: For non-planar calibration rigs the initial intrinsic matrix must be specified"
set the Flags enum to
finally tried this and the projector node 3.0 is just WOW :o
I have the same issues as color.
Keyboard subpatch only working on <28 (beside 28alpha).
I got those keyboard problems too... need to replace the Keyboard (Window) with Keyboard (Global)...
Hi, I am new to the whole vvvv scene and am trying to get this to work, but am having problems with ValuesTableBuffer.dll, I cannot seem to find it anywhere.
Could someone please help me figure this out?
@Elliot please feel free to remove the DX11 version if not appropriate; tried to contact you but did not reach you. Works well on my side.
Thanks for keep updating this!
Great Tutorial!!! Thanks Elliot.
I'm having a problem with the world projector:
I can't create points, or move the model in it.
Instead of the regular mouse pointer, I have only a blinking red square that allows me to set only one point.
I'd appreciate someone's help on this matter.
uploaded a screenshot of this issue
Please, update link on OpenCV plugins, it is not available now.
@ShmulikF: It's because the keyboard subpatch is broken. Fix the KeyMatch nodes and connect them to the respective IOs within the same subpatch. It should work.
I desperately need help on a simple projector calibration.. Does anyone have Elliot's patch running on his machine? You can hit me on skype (crustea).
Any help would be awesome :)
I just can't get the CalibrateCamera node to work. It doesn't solve and flashes red while manipulating points in the renderer.
@colorsound, I don't really understand what your folder structure is. Could you please describe what exactly you did to make it work again?
Just a question: is there anyone with this patch working with vvvv 30.2?
Thanks for this tutorial, has opened up a whole new world for me.. I attended one of your classes at MAD Lab the other year and we touched on 3d mapping at the end.. I'd never quite managed to figure it out until this so thanks!
I've created a patch to hook up the iPad with touchOSC and it allows the alignments to be done remotely which is great for larger scale mapping projects, and you can also use it to control the show..
It's not really in a shareable state, but maybe I'll get it there one day..
Just wanted to say thanks :)
I have a problem with this tutorial. I'm missing the file VVVV.Nodes.OpenCV.dll. I searched the addons but this file is missing. Where can I download it?
Just been trying this.
I reach the end of calibration (with a mesh from ReconstructMe) and the calculated view is facing in a completely different direction from the scene. I've tried several times, with a few vvvversions.
I was very precise last time and the reprojection error was only about 1.5px, but still the view faces the wrong way.
ok, it seems the view is inverted in the z direction somehow!
have 'fixed' with a scale z of -1
another issue is vvvv freezing every couple of seconds while moving markers with the mouse. seems to be something to do with the tables
I fixed this in the may 2012 version with 30.2 by changing the coordinates pin of the CalibrateCamera node inside of CalibrateProjector from "vvvv" to "OpenCV".
@ mrboni yes I ended up doing the same (also with calibrate camera). It seemed to work for my purposes but I'm not sure if scaling the Z at -1 is 100% correct. Also from memory I think I did that on the perspective transform.
Would love to hear from Elliot about this, I thought he mentioned in his workshop at Node13 that some of the OpenCV stuff might be working with an openFrameworks-style matrix.