The second thing I am spending time on this week is adding the light location as a parameter that can be optimized over in the calibration. This involved adding a new flag indicating whether we treat the light location as known; if we treat it as unknown, we optimize for the light location in steps 1 & 3 of the 3-step optimization.
To make this task as ‘easy’ as possible, I am only optimizing over translation, rotation, focal length, Gaussian, and light location (everything I did for WACV, plus the light location), and I have not changed the RANSAC protocol to account for the light being unknown. I also initialized the light location to the correct light location. This blog post focuses on the results of this calibration run.
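As a rough sketch of how such a flag changes the optimization, the light location can simply be appended to the flattened parameter vector when it is treated as unknown. The function and parameter names below are my own illustration, not the actual calibration code:

```python
import numpy as np

def pack_params(translation, rotation, focal, gauss_sigma,
                light_xyz, light_known):
    """Flatten calibration parameters into one vector for the optimizer.

    If the light location is known, it is held fixed and excluded
    from the vector; otherwise its 3 coordinates are appended.
    """
    base = np.concatenate([translation, rotation, [focal, gauss_sigma]])
    if light_known:
        return base
    return np.concatenate([base, light_xyz])

def unpack_params(x, light_known, known_light_xyz=None):
    """Inverse of pack_params: recover the individual parameters."""
    translation, rotation = x[0:3], x[3:6]
    focal, gauss_sigma = x[6], x[7]
    light_xyz = known_light_xyz if light_known else x[8:11]
    return translation, rotation, focal, gauss_sigma, light_xyz
```

With this layout, the same residual function can serve both modes: the optimizer sees an 8-dimensional vector when the light is known and an 11-dimensional one when it is not.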
The main thing I notice is that the camera and light are each able to move slightly, as if each were attached to one end of a string running over a pulley in the middle.
The upper image shows a bird's-eye view of the setup, with the monitor on the left and the glitter on the right. The bottom image shows the setup viewed from the side of the table (looking towards the drones). The dotted-line frustum is the GT (ground-truth) camera location and zoom, while the red-outlined frustum is the optimized camera location and zoom. The blue circle is the GT location of the center of the square of light displayed on the monitor, and the magenta circle is the optimized location of the center of the light.
The main thing I notice here is that the optimized camera is millimeters closer to the glitter, along the lens's optical axis. Meanwhile, the optimized light is millimeters farther from the glitter, almost straight back (and slightly lower).
Here we can perhaps see why this is the case (again from a bird's-eye view of the setup). We are treating the light as a point light source with some Gaussian falloff (which may not be the most accurate representation of our light). So, while all of the rays may fall within some small ‘square of light’ on the monitor, they actually more-or-less intersect close to a point a little behind, and in this case even below, the point on the monitor where the light actually was.
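One way to check where the rays actually converge is standard linear least-squares triangulation: solve for the single point minimizing the sum of squared distances to all of the rays. The sketch below is my own illustration of that computation, not the calibration code itself:

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Find the point minimizing the sum of squared distances to a set
    of rays, each given by an origin p and a direction d.

    For a unit direction d, the squared distance from point x to the
    ray is ||(I - d d^T)(x - p)||^2, so the minimizer solves the
    normal equations  sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)          # ensure unit direction
        M = np.eye(3) - np.outer(d, d)     # projector orthogonal to d
        A += M
        b += M @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

Running this on the reflected rays and comparing the result to the center of the square of light on the monitor would make the "intersection point behind and below the light" effect quantitative rather than visual.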
This leads me to believe that I need to revisit how I am using the Gaussian representation of our light, and think about how we can represent the light more accurately. Or maybe we need an even smaller light source (closer to an actual point light). Something to think about for the re-implementation of the setup!