The concept

Projection mapping - sometimes called Spatial Augmented Reality - is the practice of projecting media onto a surface that is generally non-planar. The projection is warped so that the visuals align with the physical objects. There are several ways to achieve this, but here I consider a 3D mapping technique where a reasonably accurate 3D model of the projection surface has been created.
Physical projection surface (left) and a 3D model of the object (right)
A virtual camera in Unity is then calibrated to mimic the behaviour of the projector, so visuals captured in Unity can be output to the projector and align with the physical object. This technique is similar to the one used by Mapamok; however, by building the system in Unity, it's hoped that new content can be authored more easily.
The method

A virtual camera in Unity has the intrinsic and extrinsic matrices of the physical projector applied to it. These matrices are calculated using OpenCV's calibrateCamera function, from manually acquired point correspondences between points on the virtual 3D model and the matching points in the projector's view of the physical object.
One of the major hurdles was the different coordinate systems used by Unity and OpenCV. Unity uses a left-handed coordinate system, while OpenCV expects right-handed. This can be overcome by converting to right-handed before sending the point correspondences to OpenCV, and likewise flipping one of the axes in OpenCV's results.
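The post doesn't say which axis is flipped, so the sketch below assumes the z coordinate (Unity's forward axis) is negated; the helper names are mine. The useful property is that the same flip is its own inverse, so applying it to OpenCV's results converts them straight back to Unity's convention.

```python
# Assumption: handedness is switched by negating z. Any single axis flip
# works, as long as the same flip is applied to OpenCV's outputs.

def unity_to_opencv(p):
    """Convert a left-handed Unity point to a right-handed OpenCV point."""
    x, y, z = p
    return (x, y, -z)

def opencv_to_unity(p):
    """Inverse conversion: negating one axis undoes itself."""
    x, y, z = p
    return (x, y, -z)

# Round-tripping a point is a no-op, which is a handy sanity check.
p = (1.0, 2.0, 3.0)
print(opencv_to_unity(unity_to_opencv(p)))  # → (1.0, 2.0, 3.0)
```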
The translation and rotation components of the extrinsic matrix are applied separately to the Unity camera. The translation is fairly simple, but the rotation must be converted into ZXY Euler angles (the ZXY order is important) before it can be used. The intrinsic matrix is applied by replacing the camera's projection matrix with the calculated intrinsic matrix.
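The Euler conversion can be illustrated in pure Python. This is a sketch under the assumption that Unity's convention applies the rotations in Z, then X, then Y order, i.e. the matrix factors as R = Ry·Rx·Rz; the function names are mine, and the result is verified by recomposing the matrix from known angles.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zxy(r):
    """Decompose r = Ry * Rx * Rz into (x, y, z) angles in radians,
    matching a Z-then-X-then-Y application order."""
    x = math.asin(max(-1.0, min(1.0, -r[1][2])))  # r[1][2] = -sin(x)
    y = math.atan2(r[0][2], r[2][2])              # sin(y)cos(x), cos(y)cos(x)
    z = math.atan2(r[1][0], r[1][1])              # cos(x)sin(z), cos(x)cos(z)
    return x, y, z

# Sanity check: compose a matrix from known angles, then recover them.
x0, y0, z0 = math.radians(20), math.radians(-35), math.radians(50)
r = matmul(rot_y(y0), matmul(rot_x(x0), rot_z(z0)))
x, y, z = euler_zxy(r)
print(round(math.degrees(x)), round(math.degrees(y)), round(math.degrees(z)))
# → 20 -35 50
```

Note this decomposition degenerates when cos(x) is near zero (gimbal lock), where y and z are no longer uniquely determined.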
After calibration, the model can be textured, animated and generally messed with to create some cool effects. Here's a very basic demo...