Thursday, December 29, 2011

Microsoft Proposes Double Helix PSF for Depth Sensing

Microsoft patent application US20110310226 "Use of wavefront coding to create a depth image" by Scott McEldowney proposes a fresh idea to acquire image depth information.

Here is the original description:

"[A] 3-D depth camera system includes an illuminator and an imaging sensor. The illuminator creates at least one collimated light beam, and a diffractive optical element receives the light beam, and creates diffracted light beams which illuminate a field of view including a human target. The image sensor provides a detected image of the human target using light from the field of view but also includes a phase element which adjusts the image so that the point spread function of each diffractive beam which illuminated the target will be imaged as a double helix. [A] ...processor ...determines depth information of the human target based on the rotation of the double helix of each diffractive order of the detected image, and in response to the depth information, distinguishes motion of the human target in the field of view."

Actually, it's much easier to understand this idea in pictures. Below is the illuminator with a diffractive mask 908:


There is another mask 1002 on the sensor side:


Below is the proposed double-helix PSF as a function of distance. One can see that the angle of the line connecting the two points changes as a function of depth:
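To get an intuition for such a PSF, here is a minimal toy sketch: two Gaussian lobes whose connecting line rotates with depth. The linear angle-vs-depth law, the 45°/m rotation rate, and all other parameters are illustrative assumptions, not values from the application:

```python
import numpy as np

def double_helix_psf(depth_m, size=64, separation=8.0, sigma=2.0,
                     deg_per_meter=45.0):
    """Toy double-helix PSF: two Gaussian lobes whose connecting
    line rotates with depth. The linear rotation law and every
    parameter here are illustrative, not taken from the patent."""
    angle = np.deg2rad(deg_per_meter * depth_m)
    # Lobe centers, placed symmetrically about the optical axis
    dx = 0.5 * separation * np.cos(angle)
    dy = 0.5 * separation * np.sin(angle)
    y, x = np.mgrid[:size, :size] - size / 2.0
    psf = (np.exp(-((x - dx) ** 2 + (y - dy) ** 2) / (2 * sigma ** 2)) +
           np.exp(-((x + dx) ** 2 + (y + dy) ** 2) / (2 * sigma ** 2)))
    return psf / psf.sum()  # normalize to unit energy
```

Plotting `double_helix_psf(d)` for a few depths `d` reproduces the rotating two-spot pattern shown in the patent figures.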


The orientation angle of the PSF points depends on the wavelength (not shown here, see the application) and on the distance (shown below):


From this angle the object distance can be calculated - this is the idea. Microsoft gives an example image and shows how it changes with distance in what looks like a Wide-VGA sensor plane:
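Inverting the angle-to-depth mapping is the simple part. A hedged sketch, assuming the two PSF lobe centroids have already been located and assuming a made-up linear rotation rate of 45°/m (the real calibration curve would come from the optics):

```python
import math

def depth_from_lobes(p1, p2, deg_per_meter=45.0):
    """Estimate depth from the orientation of the line through the
    two PSF lobe centroids (x, y), in pixels. The linear law and the
    45 deg/m constant are illustrative assumptions only."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Orientation of an undirected line, folded into [0, 180) degrees
    angle_deg = math.degrees(math.atan2(dy, dx)) % 180.0
    return angle_deg / deg_per_meter

# Lobes on a horizontal line -> 0 deg -> depth 0 m
print(depth_from_lobes((30.0, 32.0), (34.0, 32.0)))  # → 0.0
# Lobes on a vertical line -> 90 deg -> depth 2 m
print(depth_from_lobes((32.0, 30.0), (32.0, 34.0)))  # → 2.0
```

In practice the lobe positions would be extracted per diffractive order from the detected image, and the angle-to-depth curve would be calibrated rather than assumed linear.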





Update: As noted in the comments, the University of Colorado, Denver has been granted patent US7705970 on a very similar idea. A figure from that patent is shown below:

7 comments:

  1. This method has been used in CD-ROM pickup heads. The focusing information is extracted from the elongation of the laser spot on a quadrature photodiode. For a single-point image, it's easy and straightforward. But for passive use, it'll be very difficult. Maybe this can be combined with a laser projector like Kinect??

    ReplyDelete
  2. Almost 20 years ago, Prof. Francis Devos and I filed a patent application on a 3D sensor made by adding a cylindrical lens between the main lens and the image sensor. In this configuration, a bright point is seen as an ellipse whose orientation changes with distance. This idea refreshed my memory!

    -yang ni

    ReplyDelete
  3. @ "For a single point image, it's easy and straight forward. But for passive use, it'll be very difficult. Maybe this can be combined with a laser projector like Kinect??"

    For images more complex than a single point, Microsoft's application proposes using two sensors: one to capture a reference picture, and another to determine the distance map from the rotation angle of the image elements, as per the pictures above.

    Microsoft's application specifically talks about a Kinect-style projector, it's not passive.

    @ "Prof. Francis Devos & me filed a patent application on a 3D sensor by adding a cylindric lens"

    Nice idea, but different from Microsoft's.

    ReplyDelete
  4. This has definitely been looked at before:

    http://www.freepatentsonline.com/7705970.pdf

    Where Figure 2 shows almost the exact same thing as Figure 13 in the Microsoft application. It also includes deconvolution with a reference image for non-point-like objects.

    ReplyDelete
  5. Thanks! Indeed, very similar idea. I added it to the post.

    ReplyDelete
  6. It is nice to see Microsoft's interest in using the Double-Helix PSF. The third figure in this post is from my 2008 paper in Optics Express: http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-16-26-22048

    ReplyDelete
  7. Indeed, looks very similar, except for the scale. What you measured in microns, Microsoft proposes to measure in meters.

    ReplyDelete

All comments are moderated to avoid spam.