A 55-minute Primesense presentation on Vimeo talks about some internal details of the Kinect-Primesense 3D camera, such as calibration, middleware layer operation, the OpenNI programming interface, and more. One can see some problematic conditions for the depth sensor, such as at the 5:55-6:10 mark in the video:
primesense presentation @ kinect meetup from liubo on Vimeo.
What causes those problematic conditions?
A good question for the Primesense engineers. A part of the hand disappears when moved against the ceiling light; I'd guess it's blooming, but it's hard to say for sure.
With the narrow bandpass filter used, the ceiling light will not be visible at all.
The question is how well the visible light is rejected by the filter.
Another possibility is that the hand gets too close to the sensor, just as the presenter says.
By the way, another artifact is the shadow from the hand, with no IR pattern and hence no depth data. This is due to the difference in location between the IR projector and the sensor.
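To make that geometry concrete, here is a minimal Python sketch of the parallax that creates the shadow; the focal length, baseline, and distances below are illustrative assumptions, not actual device calibration values.

# Minimal sketch of the depth shadow caused by the offset between the
# IR projector and the sensor. All numbers are illustrative assumptions,
# not real Kinect calibration values.

def shadow_width_px(f_px, baseline_m, z_hand_m, z_bg_m):
    """Approximate width (in pixels) of the band where the hand blocks
    the projected pattern but the camera still sees the background,
    from the standard pinhole parallax relation."""
    return f_px * baseline_m * (1.0 / z_hand_m - 1.0 / z_bg_m)

# e.g. ~580 px focal length, 7.5 cm baseline, hand at 0.8 m, wall at 3 m:
print(shadow_width_px(580, 0.075, 0.8, 3.0))  # ~40 px wide shadow band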
The disappearance of the hand must come from saturation effects, as the presenter says, and from the depth computation algorithm. It is not single pixels that disappear, but rather large pixel clusters. This is a strong indication that the distance extraction requires information from adjacent pixels, which is, in fact, a big disadvantage of such a structured light method.
The disappearance comes from the operating principle of the Primesense device. It takes a reference image at a known distance during the setup phase. Then it looks for the local shift of the pattern in a patch due to optical parallax; this local shift corresponds to the relative depth change compared to the reference plane from the setup.
This is a very ingenious method, because in this way we can ignore the lens distortion in the depth computation: the lens distortion is already accounted for in the reference image.
So when the hand is too close, the local shift can exceed the search radius fixed by the settings (a larger search radius needs more computation time, etc.). In this case, the hand disappears.
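As a rough illustration of that patch search (not PrimeSense's actual implementation), here is a Python sketch using normalized cross-correlation against the stored reference image; the patch size, search radius, and acceptance threshold are made-up parameters.

import numpy as np

def patch_shift(live, ref, x, y, patch=9, radius=32, min_score=0.5):
    """Find the horizontal shift of the pattern patch centered at (x, y)
    in the live IR image relative to the reference image, searching only
    within +/- radius pixels. Returns None when no candidate clears the
    score threshold -- e.g. when the real shift exceeds the radius.
    Assumes (x, y) lies at least patch//2 pixels from the image border."""
    h = patch // 2
    p = live[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-9)
    best_shift, best_score = None, min_score
    for s in range(-radius, radius + 1):
        if x + s - h < 0 or x + s + h + 1 > ref.shape[1]:
            continue  # candidate patch would fall outside the reference
        q = ref[y - h:y + h + 1, x + s - h:x + s + h + 1].astype(np.float64)
        q = (q - q.mean()) / (q.std() + 1e-9)
        score = (p * q).mean()  # normalized cross-correlation
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift  # shift in pixels, proportional to relative depth

When the true shift exceeds the radius, the function finds no acceptable match and reports no depth, which is the "hand disappears" case; a larger radius widens the usable range at the cost of more correlation tests per pixel, matching the computation-time trade-off mentioned above.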
Another possible reason is that the laser pattern projection cannot form a sharp image when the distance is too short. You can verify this with a presentation laser pointer that has diffractive pattern generation: when it is too close, the pattern does not form.
-yang ni
Yang:
If you could make the hardware do whatever you want, how could this problem be solved?
Well, I think that when the shift is too big, it can no longer be assumed to be "local". So in this case, you have to take the lens distortion into account.
-yang ni
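For a sense of what taking the distortion into account might involve, here is a hedged sketch of undistorting pixel coordinates with a two-term Brown-Conrady radial model before matching; the intrinsics and coefficients are hypothetical, and this is not claimed to be what PrimeSense actually does.

def undistort_point(u, v, fx, fy, cx, cy, k1, k2):
    """Map a distorted pixel (u, v) to approximately undistorted pixel
    coordinates. fx, fy, cx, cy, k1, k2 are hypothetical intrinsics and
    radial distortion coefficients."""
    # normalize to the camera plane
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    # exact inversion of the forward distortion needs a few fixed-point
    # iterations; a single division is enough for a sketch
    x_u, y_u = x / scale, y / scale
    return x_u * fx + cx, y_u * fy + cy

Matching in undistorted coordinates would keep the shift-to-depth relation valid even when the shift is too large to be treated as local.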