Light-Camera-Glitter Simulation

In one of my last blog posts, I discussed the preliminary results of trying to optimize for both the camera & light locations simultaneously. I have been working on a simulation of our geometry to try to understand this better. To recap, it seems that there is a slight ambiguity in the position of the light and camera, in that they can move slightly along two rays and still give a lower error. One thought is that it is an error in how we are modeling the light and the gaussian receptive field of the glitter. Another thought is that it may have to do with the fact that our light is not a point light source, but rather a small square; so we don’t quite know what part of the light a glitter piece sees. More likely, it is some combination of these two issues.

In this blog post I want to highlight where I am in the simulation, and what it shows so far. I simulate 10,000 pieces of glitter that lie on a plane, each with a surface normal corresponding to some random screen location. 500 of them (chosen at random), however, have surface normals that allow the glitter to see the simulated point light and the simulated camera. For those that ‘see’ the light, I give intensity values of 1, and give intensity values of 0 to all of the rest.

I then compute the predicted intensities for each glitter piece using a few different gaussian values for the light and the glitter receptive field. Below, the non-yellow rays are the pieces of glitter that have actual intensity values of 0, but predicted intensity values between 0.1 and 1.

The above images are made using a light gaussian size of 5 and a glitter receptive field of 40. As the predicted intensity gets higher, the rays get lighter (and farther away from the camera), which is exactly what we would expect to happen in this system.
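To make the intensity model concrete, here is a minimal sketch of one way such a prediction could be computed. It assumes (my assumption, not necessarily what the real simulation code does) that the predicted intensity falls off as a Gaussian of the distance between where a glitter piece’s reflected camera ray lands on the monitor and the center of the light, with the light gaussian and the receptive field combined into one effective width:

import numpy as np

def predicted_intensity(reflected_hits, light_center, light_sigma=5.0, rf_sigma=40.0):
    # reflected_hits: (N, 2) monitor coordinates where each glitter piece's
    # reflected camera ray lands; light_center: (2,) center of the light.
    # Combine the light gaussian and the receptive field into one effective
    # width (an assumption made only for this sketch).
    sigma = np.hypot(light_sigma, rf_sigma)
    d2 = np.sum((reflected_hits - light_center) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))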

Next, I am adding in the actual ‘optimization’ part of the simulation. As a first pass, I will just optimize for everything (including light and camera) and see what happens. In doing that, I hope to understand what parameters to tune, and I can start fiddling with the way in which I am predicting intensities and how I am modeling receptive fields.

Exploring 3D Light Location as an Optimization Parameter

The second thing I am spending time on this week is adding the light location as a parameter that can be optimized over in the calibration. This involved adding a new flag indicating whether we treat the light location as known; if we treat it as unknown, then we optimize for the light location in steps 1 & 3 of the 3-step optimization.

In order to make this task as ‘easy’ as possible, I am only optimizing for translation, rotation, focal, gaussian and light location (everything I did for WACV + light location), and I have not changed the RANSAC protocol to include the fact that the light is unknown. I also initialized the light location to be the correct light location. This blog post will focus on the results of this calibration run.
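For concreteness, here is a rough sketch of how that flag could change the parameter vector handed to the optimizer; the names pack_params and residuals are hypothetical, and the real 3-step optimization has more structure than this:

import numpy as np
from scipy.optimize import least_squares

def pack_params(T, R, focals, sigma, light, light_known):
    # Stack the parameters that are always optimized; append the 3D light
    # location only when we treat it as unknown.
    p = np.concatenate([T, R, focals, [sigma]])
    return p if light_known else np.concatenate([p, light])

# Usage (with a hypothetical residual function):
# p0 = pack_params(T0, R0, f0, sigma0, light_gt, light_known=False)
# fit = least_squares(residuals, p0, args=(measurements, light_known))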

The main thing I notice is that the camera and light seem to be able to both move slightly, as if they were each attached to a string, one on each end, that was on a pulley in the middle.

The upper image shows a birds-eye view of the setup, with the monitor on the left and the glitter on the right. The bottom image shows the view of the setup looking at it from the side of the table (looking towards the drones). The dotted-line frustum is the GT camera location and zoom, while the red-outlined frustum is the optimized camera location and zoom. The blue circle is the GT location of the center of the square of light displayed on the monitor, and the magenta circle is the optimized location of the center of the light.

The main thing I notice here is that the optimized camera is millimeters closer to the glitter, along the axis in line with the lens. Meanwhile, the optimized light is millimeters farther from the glitter, almost straight back (and slightly lower).

Here we can maybe see why this is the case (again from a birds-eye view of the setup). We are treating the light as a point light source with some gaussian (which may not be the most accurate representation of our light). So, while all of the rays may fall within some small ‘square of light’ on the monitor, they actually more-or-less intersect close to a point a little further behind, and in this case even below, the point on the monitor where the light actually was.
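One way to check where the reflected rays ‘more-or-less intersect’ is to solve for the least-squares point closest to all of them. A minimal sketch (the array shapes are my assumption):

import numpy as np

def nearest_point_to_rays(origins, directions):
    # Least-squares point p minimizing the summed squared distance to the
    # rays (o_i, d_i): solve sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i.
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, di in zip(origins, d):
        M = np.eye(3) - np.outer(di, di)
        A += M
        b += M @ o
    return np.linalg.solve(A, b)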

This leads me to believe that I need to take a look at the method by which I am using the gaussian representation of our light, and think about how we can represent our light more accurately. Or maybe we need an even smaller light source (more like an actual point light). Something to think about for the re-implementation of the setup!

Correcting for Checkerboard Thickness

One of the potential (definite) sources of error that we found in our exploration of camera calibration over the last several months is the thickness of the checkerboard – the checkerboard does not lie in the same plane as the glitter sheet, even when we place the checkerboard directly on the glitter. This is a factor that I had not considered at all, and it was brought to my attention recently. This is something that we will address in the next iteration of the calibration setup, but for now I want to try to correct for this in the measurements I have. In order to compute the camera location in 3D world coordinates, we implement the following process…

1. Take ~25 pictures of the checkerboard in various/random orientations, 1 of which is a picture of the checkerboard sitting straight and flush against the glitter:

2. Find 3 points on the checkerboard that define two orthogonal axes, in order to establish a coordinate system for the checkerboard, shown in green/red (red just to help me know which is the upper left):

3. Map these 3 points into our glitter coordinate system using a homography:

4. Compute the 3D world coordinates of these 3 points – now these 3D coordinates are in the plane of the glitter, which is ~3mm behind the plane of the checkerboard. Herein lies the problem.

Gut Check: We know the squares are 24.5mm x 24.5mm. When I compute the 3D locations of the 3 points shown above, I get the x-distance from upper left to upper right to be 98.8mm (the actual distance on the checkerboard is 98mm). I get the y-distance from upper left to lower left to be 74.0653mm (the actual distance on the checkerboard is 73.5mm). I also compute the dot product between the two ‘axis’ vectors formed by the 3 points, and get -0.0021 (so they are almost orthogonal).
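The gut check itself is just a couple of norms and a dot product; here is a sketch with stand-in coordinates (the real values come from step 4):

import numpy as np

# Stand-in 3D world coordinates (mm) for the three checkerboard points.
upper_left = np.array([0.0, 0.0, 0.0])
upper_right = np.array([98.8, 0.0, 0.0])
lower_left = np.array([0.0, -74.0653, 0.0])

u = upper_right - upper_left   # expected ~4 squares = 98mm
v = lower_left - upper_left    # expected ~3 squares = 73.5mm
print(np.linalg.norm(u), np.linalg.norm(v))
print(np.dot(u / np.linalg.norm(u), v / np.linalg.norm(v)))  # ~0 if orthogonal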

Solution

The MATLAB Camera Calibration Toolbox gives me a location of the camera relative to the checkerboard. I then do a change of coordinate system in order to get this camera location in world coordinates. So, if the checkerboard is being treated as being in the plane of the glitter sheet (which it is), then my relative location of the camera will be computed to be ~3mm closer to the glitter sheet than it actually is.

I think I can just subtract ~3mm from the x coordinate of the 3D location of the checkerboard points, and then compute the camera location in 3D world coordinates as I was before. The y-coordinate and the z-coordinate don’t change since the checkerboard is flush with the glitter sheet (and the glitter sheet lies flat in the y-z plane). So then this effectively gives a camera location that is ~3mm behind where the checkerboard gives as the location.
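Concretely, the correction I have in mind is just a shift of the checkerboard points along the x-axis before computing the camera location. A sketch, assuming an ~3mm board and points stored as an (N, 3) array of [x, y, z] in mm:

import numpy as np

BOARD_THICKNESS_MM = 3.0  # approximate checkerboard thickness

def correct_checkerboard_points(points_world, thickness=BOARD_THICKNESS_MM):
    # The glitter lies in the y-z plane, so only the x coordinate of the
    # checkerboard points needs to move; subtracting the thickness pushes
    # the computed camera location ~3mm back from the glitter.
    corrected = np.asarray(points_world, dtype=float).copy()
    corrected[:, 0] -= thickness
    return corrected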

My understanding then is that I will need to re-run characterization using this new camera location, and then use this new characterized data to run calibration. This seems too simple — I’ll end my post here and leave it open to comments about my reasoning.

Full Camera Calibration

Initializations:

  • Translation: Ray intersection of consistent glitter
  • Rotation: Looking along the world coordinate system x-axis ([\frac{\pi}{2}, \frac{-\pi}{2}, \frac{\pi}{2}])
  • Focals: [10000, 10000] — this is the middle of the range of the lens we are using
  • Image Center: The actual center of the calibration test image
  • Distortion: [0, 0] — a different idea is to do a quick & dirty grid search over possible distortion values and choose the best as the initialization
  • Skew: Trying various fixed values… the next portion of this post focuses on this

Below is a table with results for the calibration run using different fixed values for the skew. All other initializations are as described above.

As you can see, the values for translation, rotation, focal and gaussian (all of the parameters from the original calibration) are relatively consistent for each experiment. However, there is quite a bit of variation in the skew and the distortion parameters.

During the second and last steps of the optimization, I am computing the ‘CALTag Error’, in which I compute the reprojected image coordinates of the CALTags and the consistent glitter, undistort them, and then compare these to their actual image coordinates. This is where the distortion parameter gets used. During this same ‘CALTag Error’ process, I am using the calibration matrix K=\begin{bmatrix}f1 & s & cx\\0 & f2 & cy\\0 & 0 & 1\end{bmatrix} where s = skew and (cx, cy) = image center.
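As a minimal sketch of how I think of this error (the function names are mine, the inputs are hypothetical (N, 2) arrays, and the undistortion step is skipped):

import numpy as np

def calibration_matrix(f1, f2, s, cx, cy):
    # K = [[f1, s, cx], [0, f2, cy], [0, 0, 1]], with s = skew and
    # (cx, cy) = image center.
    return np.array([[f1, s, cx],
                     [0.0, f2, cy],
                     [0.0, 0.0, 1.0]])

def caltag_error(reprojected_px, actual_px):
    # Average pixel difference between the (already undistorted) reprojected
    # points and their actual image coordinates.
    return float(np.mean(np.linalg.norm(reprojected_px - actual_px, axis=1)))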

For now, I am only considering the CALTags in the ‘CALTag Error’, not the actual glitter coordinates as well. The reprojections of the CALTags and the glitter are shown below for the calibration with a fixed skew of 9 — the two numbers on top are the average difference between the GT and calculated CALTags & glitter, respectively.

…the final CALTag error (with only CALTag reprojections) came out to be 3.9078.

Next, I tried considering both the CALTags and the glitter in the ‘CALTag Error’. The reprojections of the CALTags and the glitter are shown below for the calibration with a fixed skew of 9 — the two numbers on top are the average difference between the GT and calculated CALTags & glitter, respectively.

The results for this are as follows:
T: [632.2, 426.1, 177.8]
R: [1.3681, -1.0523, 1.0032]
F: [10517, 10577]
\sigma: 58.69
k1,k2: [-0.0107, 0.0651]
cx,cy: [4131, 2879]

…and the final CALTag error (including both caltag reprojections and glitter reprojections) came out to be 0.576 (average pixel difference across all points).

Deep Wheat: t-SNE Plots & Confusion Matrix of 132_Tra_132_Tes Dataset

I reviewed the Deep Wheat dataset last week and re-arranged the training and testing datasets. In the original dataset, many cultivars are missing data for dates 4, 5, 7, 8, 10, and 12, so I removed the data from those dates. There are also many cultivars missing too much data, so I removed them. Then I removed the boundary cultivars with too much data, which finally gives a new dataset with 264 cultivars and 9 dates. I took half of the cultivars, plus the 1st rep of the other half, as training data, and the 2nd rep of that other half as testing data.

exp_name: g_8_ep_200_R50_132tra_132tes
model: ResNet-50
loss function: EPSHN loss
group size: 8
epochs: 200
data: 132 cultivar training & testing data

The 1-NN result on the testing data is 0.009 for cultivar (just a bit better than chance, 1/132 = 0.0076) and 0.20 for date (about 2 times chance, 1/9 = 0.11). It seems that after removing the cultivars and dates with incomplete data, it becomes harder for the network to distinguish different cultivars and dates. The confusion matrix of the training data shows an obvious diagonal, but the confusion matrix of the testing data is a mess.

Confusion matrix of 1-NN results on the testing data
Confusion matrix of 1-NN results on the training data
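For reference, a minimal sketch of how a 1-NN result like this could be computed from the embeddings (the arrays are hypothetical, and this is not necessarily the exact evaluation code used here):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def one_nn_accuracy(query_emb, query_labels, ref_emb, ref_labels):
    # Label each query embedding with the label of its nearest reference
    # embedding, then report the fraction of correct labels.
    nn = NearestNeighbors(n_neighbors=1).fit(ref_emb)
    _, idx = nn.kneighbors(query_emb)
    pred = np.asarray(ref_labels)[idx[:, 0]]
    return float(np.mean(pred == np.asarray(query_labels)))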

A Mini Hyperparameter Grid Search

The Wide Residual Network (WRN) is an improved version of ResNet with fewer but wider layers. According to its repo, it outperforms ResNet-1001 (yes, 1001, not 101) on several datasets, so I decided to give it a try on Sorghum. At first I just tried it with uncontrolled settings and got contradictory results (a better loss, but lower validation recall). Then I ran a very small grid search to compare the two networks; the only change between settings is the learning rate decay.
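A rough sketch of the grid (reconstructed from my notes; the embedding size, learning rate, and scheduler settings here are placeholders rather than the exact values used):

import itertools
import torch
from torchvision import models

archs = {"wrn50_2": models.wide_resnet50_2, "resnet101": models.resnet101}
decay_options = {"decay": True, "no_decay": False}

for (arch, build), (tag, use_decay) in itertools.product(archs.items(),
                                                         decay_options.items()):
    net = build(num_classes=128)  # placeholder embedding dimension
    opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    sched = (torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)
             if use_decay else None)
    print("run:", f"{arch}_{tag}")
    # train_and_validate(net, opt, sched)  # hypothetical training loop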

The validation recall, loss, and the 2 loss terms over epochs

The most striking line is the baby blue one, which is ResNet-101 without learning rate decay. But that is not reasonable, since the two settings are identical for the first 30 epochs apart from randomness (the selection of mini-batches), yet the recall@5 differs by about 10%. And from my previous training runs, ResNet-101 has roughly the same recall as the red line. Did it just get lucky?

Setting the baby blue line aside, the WRN (Wide Residual Network) does have a lower loss, but it does not outperform on the validation set. This matches the initial training result; it is probably overtrained. WRN-50-2 has 68.9M parameters, and ResNet-101 has 44.5M parameters. Future work could be running a larger grid search on Pegasus; maybe a smaller WRN would give a better result.

Paper Idea: Are You Unknowingly Sharing Your Screen?

There was recently a paper made famous because a low-resolution image of Obama was “upsampled” into a high-resolution white man. That result reflects a combination of bias in the data sets and the algorithmic choice to show one example image rather than the distribution of possible images. The paper created quite a bit of controversy, including a widely publicized exchange between Yann LeCun and Timnit Gebru. I encourage everyone in the lab to read about this exchange.

But the rest of this post goes in another direction. What else can we do with this “upsampling” ability? Here is one crazy idea, ripped from the reality of our modern lives — we spend a huge amount of time in front of screens, in Zoom calls, for example. Are we unwittingly sharing what is on our screen? Or, to put this in the constructive setting, suppose I have a picture of you, taken from your webcam, while you are looking at a screen (that I can’t see directly). Could I reconstruct your screen based on what I can see through your webcam?

Probably there will be papers that try to do this based on direct reflections in your glasses or from your eye. I’m wondering if we can do something more, and the *first* thing I ask when I think about papers like this is, “Where could I get data for this?” Well, I’ve been on a rampage about reaction videos for the last 2 years. These are videos that people (often video-bloggers, or vloggers) share of themselves watching other videos, and a common tradition is to use video editing software to put the “being watched” video into the video of the vlogger who is watching and reacting to it.

This offers an amazing dataset. Each reaction video shows: (1) video taken from the webcam (or a camera) of the vlogger, and (2) in the inset, the video that is on the screen the vlogger is watching. So the question becomes: can we predict or estimate the inset image based on subtle changes in the main video? The YouTube video below gives one example (go to 6:10 if it doesn’t start there automatically):

Notice the reflection on the wall on the right captures the color of what is being shown on the screen. Also, the microphone itself changes color. Some of these are cues we might be able to learn geometrically (following, for example, the sparklevision paper):

Below is a list of other youtube videos where I see a clear “response” of something in the scene to the video that is being reacted to. In this case, in the glasses of the vlogger (which move around and might be difficult to use):

and then, as if to troll me, there’s literal glitter (sequins) in the background of the next vlogger. By eye I don’t actually see the glitter responding to the screen, but it must be, right? (And I’m going to write some code to check for this.)

and in the below video I see reflections in the game controllers on the table:

So what are “practical” applications of this? Well, you could try to exactly reconstruct an unknown image, but the resolution of that is not likely to be good enough to, for example, read their e-mail. But I’m struck by noticing that we spend our days in front of many screens like the one below. What is actually showing on the screen that each of these people is looking at?

This is a nice question because you can ask it without completely reconstructing the image. In Zoom there is a default “big” picture that Zoom selects to show based on who is talking. (a) Which of these 9 people are looking at that screen? (b) For each person, are they showing the grid of all 9, or which 1 of the 9 are they looking at? (c) Can you improve your answer to (b) by looking at a time sequence? (d) Can you index other popular web pages, live-streams, or media so that you can detect other things that might be showing on someone’s screen?

I’d be excited to see any other videos, or classes of videos, anyone can find that show more of this. I think that watching reaction videos from gamers might be a win; they are used to sitting close to a large screen in an otherwise darkish room (sorry, stereotyping, I know!), which is likely to create the easiest datasets to use.

Current Issue and Potential Solutions in Dissimilarity Visualization

Issue:

The activation maps and the pooled vector (before the fc layer) in ResNet-50 are non-negative, so no component of the dot product can be negative.

Potential solutions:

1. Remove the ReLU layer before the pooling layer.

2. Based on the equation from Grad-CAM. For a network that ends with global average pooling followed by an fc layer, the class score can be written as

Y^c = \sum_k w^c_k \frac{1}{Z}\sum_i\sum_j A^k_{ij}

where w are the weights of the fc layer, Y^c is the score for category c, A^k is the k-th activation map, k is the channel, (i, j) is the spatial location, and Z is the number of spatial locations. This equation captures the goal of Grad-CAM: find a weight for each activation map and combine them.

In similarity visualization for an embedding, the goal is the same: we try to find ∂D/∂A, which is the contribution of each activation map to the distance between the two image vectors, where D is the dot product of fc_vect_a and fc_vect_b. Since ∂D/∂A = ∂D/∂Y * ∂Y/∂A (Y is the fc-layer vector), and based on the equation from Grad-CAM, we can get ∂D/∂A = fc_vect*W, where fc_vect and A come from different images.

So, we can use the vector after the fc layer and the W matrix to get a new vector (which can have negative values) to replace the original pooled activation map; see the sketch after this list.

3. Forward the vector at each spatial location of the activation map through the fc layer, and use the output.
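Here is a minimal sketch of option 2. The shapes and names are my assumptions: W is the (out_dim, channels) fc weight matrix, fc_vect_b is image b’s embedding after the fc layer, and act_a is image a’s (channels, H, W) activation map.

import torch

def channel_weights(fc_vect_b, W):
    # ∂D/∂Y_a = fc_vect_b and ∂Y_a/∂(pooled A_a) = W, so the signed weight
    # of image a's k-th activation map is (fc_vect_b @ W)[k].
    return fc_vect_b @ W  # shape: (channels,), can be negative

def dissimilarity_map(act_a, weights):
    # Weight image a's activation maps by the signed channel weights and
    # sum over channels, giving a Grad-CAM-style map without the
    # non-negativity problem.
    return torch.einsum('k,khw->hw', weights, act_a)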

Deep Wheat: t-SNE Plots of Different Coloring Methods & Std of Cultivars

This experiment was trained for 200 epochs; I made a t-SNE plot at epoch 100 to check whether we over-trained.

exp_name: g_8_ep_200_R50
model: ResNet-50
group size: 8
epochs: 200
data: 97 cultivar testing data

I colored the 3 cultivars that other samples are most likely to be predicted as in the t-SNE plot of epoch 200, and also made the “red vs. blue” t-SNE plots for the training and testing data.

I also computed the standard deviation of each cultivar in both the training and testing data. The average std over the training data is 10.191, with a min of 8.584 and a max of 17.640. The stds of those 3 special cultivars (217, 266, and 292) are 9.798, 10.427, and 11.361. It’s hard to tell whether they are “small and well clustered”.

For the training data:
array([10.46416187, 10.52323914, 10.40782642,  9.80121613, 10.91461849,
       10.59361935,  9.27608776, 10.63208294, 10.70333767,  9.64982319,
       10.78457069, 10.37274933,  9.98364162,  9.94653797,  9.72254467,
        9.2923727 , 10.25147915, 10.64735985, 10.51951218,  9.6249609 ,
       10.06981087,  9.56434917, 11.33858013, 10.03707123,  8.90496445,
       11.01442051, 10.0634861 , 10.67785645,  9.28373718,  9.8069067 ,
        8.74895096,  9.13527775, 10.45289898, 10.59641647, 10.1052494 ,
       12.39895058,  9.28281784, 10.66273308, 10.1367321 ,  9.09992504,
       10.6760788 ,  9.58821964, 10.77769184,  9.89109039,  9.76054573,
       10.47741604, 10.04706287, 10.15864277,  9.57283783,  9.16854095,
       10.62602806,  8.94312286,  9.55071926, 10.03430557,  9.92059803,
       10.5079565 , 10.24594975, 10.28643513, 11.30548   , 10.32584286,
       10.35098648,  9.36228371,  9.44154644, 10.26055527,  9.71262646,
        9.95606804, 10.14754963, 10.73396015,  8.83175564,  8.85721397,
        9.95898533, 10.3237114 ,  9.72339725, 10.00537491, 10.46583652,
        9.92854786, 10.02325344,  9.46943188, 12.16626072, 10.05992985,
        9.71245289,  9.77691841,  9.98790359,  9.84941006, 10.17197227,
        9.49926472, 10.93229866,  9.89042568, 10.04704285, 10.11202145,
        8.74681091,  9.77938843,  9.46885395,  9.62973976,  9.26524353,
       10.35689545, 10.31644821, 10.14037132,  8.87110901,  9.36151028,
       11.10306263, 11.20150185,  9.50447655,  9.8961916 , 10.21476841,
       10.30224895,  9.77637482, 10.64656258, 10.57614517, 10.79774189,
        9.24985695,  9.78373623, 11.80111408, 10.16897488,  9.2091856 ,
        9.65493393,  9.95381737, 11.11535645,  9.88694859,  9.53554153,
       10.16892147, 10.32589912, 10.41562176, 10.7139616 , 11.67827702,
        9.69061565,  9.33803082,  9.53231525, 10.19188213, 10.18854046,
        9.8159399 , 10.26011944, 10.29724884,  9.97103977,  9.67305756,
        9.7047348 , 11.54331398,  9.89892387,  9.73020649, 10.45486641,
        9.7187376 , 10.08184147, 10.53874493,  9.79509926, 10.14669132,
       10.35056114,  9.66332531,  9.39208794, 10.43498516, 10.5133009 ,
       10.57205105, 10.68231487, 10.11015797, 10.02883911,  9.97161293,
        9.02976131, 10.16350555, 11.41958523,  9.62627125, 10.69648933,
       11.61301708,  9.37502575,  9.04355717, 10.27496719,  9.30772495,
       10.48779106, 10.23538589, 10.22371006, 10.37346554,  9.86011314,
       10.34436226,  9.74267864, 10.44162178, 10.40956783,  9.29414654,
       10.05021477, 10.38432026, 10.26773739, 10.17112541,  9.01770878,
       10.28068447, 10.49031925, 11.42122269,  9.8182745 , 10.50577736,
       11.1862793 , 11.04720592, 10.3194685 , 10.29874992,  9.63099384,
       10.55593967,  9.96100903, 11.32965088, 10.1030798 , 10.15437603,
       10.72751522, 11.02680206, 10.17289352, 10.20807171, 10.5807972 ,
        9.82701492,  9.70990467,  9.47952938,  9.89400482,  9.90310192,
       10.88363266, 10.18148708,  9.6078186 , 11.3612957 ,  9.55789566,
       10.81572723, 10.1153183 ,  9.88606739,  9.82144833,  9.27096558,
       11.44052124, 10.06314659,  9.79760933, 10.63856602, 10.16060257,
        9.73304176, 10.69305706, 10.63834858,  9.93109226,  9.00567532,
        9.10076809, 11.36916447, 17.64044762,  9.75825214, 11.37934113,
        8.58387661,  9.7455349 ,  9.97937489, 11.80943584,  8.99349594,
       10.65017414, 11.27319622, 10.55690765,  9.8085041 ,  9.57181549,
       10.37087154, 10.07132339, 11.51591396, 10.98816109,  9.99708939,
        9.5718689 , 10.48200989,  9.7787447 , 11.26506901, 12.72904587,
        8.60349369, 10.42990398, 10.61195087, 10.49966335, 10.79094791,
        9.94118023, 10.96340847,  9.98089409, 10.97180176,  8.98347187,
        9.96456432,  8.73169136, 11.55494595,  9.42425919,  9.74652672,
       11.18292904, 10.42725754,  9.73459339, 12.27190781,  9.84189034,
        9.34319305,  9.17745876, 10.32913113, 10.85206032, 10.1724968 ,
       10.41303253, 10.36054325, 10.0679493 ,  9.92787743, 10.6046648 ,
       11.96834373, 12.10933113,  9.40503407, 10.526577  , 10.17176819,
       10.24804115,  9.4227066 , 10.24769497,  9.85063839, 10.28357792,
        9.91055489, 10.07267857, 11.36082458, 11.16549587,  9.50157356,
       10.33835506, 10.00465393])
For the testing data:
array([10.09661198,  9.5269556 ,  9.68397427, 10.92653275, 10.73310947,
        9.59925747,  9.71199703,  9.77644825, 10.23285961,  9.02428913,
        9.21716785, 11.85523224, 10.9337101 , 10.03600311,  9.44603062,
        9.91543293,  9.87280655, 10.11535931,  9.87183857, 10.0390377 ,
       10.87068272, 10.06609154, 10.34671307, 10.13275623,  9.83918476,
       10.12144756, 11.60314655, 13.00332069,  9.01407719,  9.10905933,
       10.55102444,  9.25766277, 10.28547096,  9.96294785,  9.73879814,
       10.35756779,  9.76302624, 10.07028675, 10.43801594,  9.91042614,
        9.38255119,  9.87288189, 11.13052464,  9.49461555, 10.38577747,
       10.26436234,  9.97051048,  9.2813549 , 10.22738838, 12.92765808,
       10.01099396, 12.18794727, 10.67965221, 11.53987885, 10.14023495,
       10.21092129, 10.68487072, 10.03306675, 10.95620728,  9.85312748,
       10.13000202, 10.13410473, 10.42385292, 10.51057148, 11.2904911 ,
       11.14164257, 10.1654644 ,  9.7957859 ,  9.86104488, 10.19400597,
        9.0341959 ,  9.65091038,  9.7457428 ,  9.54844666, 10.03637695,
       10.9693718 ,  9.86671925, 10.50609016,  9.17971325,  9.47025013,
       11.07761288, 11.22717667, 10.00571537, 10.14381218, 10.27927113,
        9.42927647, 10.02052784,  9.81192493,  9.4473629 ,  9.54242897,
        9.79098225, 10.72505283, 10.59535599, 10.49421978,  9.32136059,
        9.61936569,  9.69990921])
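For reference, here is one plausible way a per-cultivar std like the arrays above could be computed; the definition (RMS distance of each cultivar’s embeddings from the cultivar mean) and the arrays embeddings and labels are my assumptions, not necessarily what was used here.

import numpy as np

def per_cultivar_std(embeddings, labels):
    # embeddings: (N, d) array of embedding vectors, labels: (N,) cultivar ids.
    labels = np.asarray(labels)
    stds = {}
    for c in np.unique(labels):
        pts = embeddings[labels == c]
        stds[c] = float(np.sqrt(np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1))))
    return stds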