
Sunday, June 21, 2020

Few More iPad LiDAR Pictures

SystemPlus Consulting publishes its Apple iPad Pro 2020 LiDAR module reverse engineering report with a few more pictures in addition to the many that have already been published:

"This rear 3D sensing module is using the first ever consumer direct Time-of-Flight (dToF) CMOS Image Sensor (CIS) product with in-pixel connection.

The 3D sensing module includes a new generation of Near Infrared (NIR) CIS from Sony with a Single Photon Avalanche Diode (SPAD) array. The sensor features 10 µm pixels and a resolution of 30 kilopixels. The in-pixel connection is realized between the NIR CIS and the logic wafer using hybrid Direct Bonding Interconnect technology, which is the first time Sony has used 3D stacking for its ToF sensors.

The LiDAR uses a vertical cavity surface emitting laser (VCSEL) coming from Lumentum. The laser is designed to have multiple electrodes connected separately to the emitter array. A new design with mesa contact is used to enhance wafer probe testing.

A wafer level chip scale packaging (WLCSP), five-side molded driver integrated circuit from Texas Instruments generates the pulse and drives the VCSEL power and beam shape. Finally, a new Diffractive Optical Element (DOE) from Himax is assembled on top of the VCSEL to generate a dot pattern.
"

12 comments:

  1. The information regarding the DOE from Himax is wrong; it should be TSMC.

  2. This comment has been removed by the author.

  3. Why a DOE in combination with time-of-flight? A DOE creates a spot pattern, as used in structured light, but that is not needed for time-of-flight.
    Can SystemPlus explain their thinking?

    Replies
    1. That thing also has me stumped. I think there is more behind that element than SystemPlus is either aware of or telling publicly. The spot pattern comes from the individual apertures of the VCSEL itself; the optics just focus and multiply them. However, Himax not only does DOEs, they also advertise some interesting liquid crystal elements on their website. How cool would a switchable diffuser or a beam-sweeper be? Having the energy of the laser focused on a few spots will give you less resolution but more range.

  4. A little surprised nobody on this forum or at SystemPlus has proposed this before, but here is my take on what Apple is doing:
    - If you look carefully, the VCSEL has 16x4 emitters but the projected pattern has 48x12 dots. So the lens does not just project the VCSEL pattern; it projects 9 copies (3x3) of the VCSEL pattern. This is probably what the DOE is doing.
    - Nobody seems to question why the VCSEL is (column) addressable. But look carefully at the photo of the VCSEL emitter. Then think what happens if you only light every 4th column at once. This will create a pattern that is sparser than the true pattern: only 1/4 of the dots will be lit, or 4x4 pixels. Combine this with the 3x3 pattern repeater, and you have a system where the entire FOV is lit, but by a very sparse 12x12 pixel emitter.
    - Then, you can scan temporally to achieve higher resolution, so each capture is 12x12 pixels but the final resolution will be something like 48x24 (double the row resolution, since each odd/even emitter column is offset).
    - So this would also correspond to Apple's talk of a LiDAR "scanner". Indeed, the emitter is being scanned, but electronically, not mechanically.
    - But why would you do this? Basically, I can find three reasons:
    1. To reduce total system peak power (while maximizing per-emitter power)
    2. To improve eye safety (while still maximizing emitter power), since a 9 mm aperture at 100 mm is used for eye safety limits
    3. Because the emitter is low resolution; in this case actually a 12x12 SPAD detector would be enough, since the rest is done by scanning the emitter

    The remaining question would be why anyone would use a 12x12 emitter and a detector with 100x more pixels. Well, the only reason I have been able to find is that even if the SPAD receiver is ~120x120 pixels, in reality something like 10x10 SPADs per spot are probably needed to handle dynamic range and get the performance needed. Most probably one must forget the idea that 1 SPAD pixel = 1 depth pixel. Otherwise I cannot really see the use of such a huge sensor with so few emitters.
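
    A quick numerical sketch of this scheme, purely my speculation and only assuming the numbers above (16x4 emitter array, 3x3 DOE tiling, 4-phase column scan):

```python
import numpy as np

# Speculative model of the scheme described above: a 4-row x 16-column
# VCSEL, a 3x3 DOE replicator, and a 4-phase column scan (every 4th
# column lit per phase). Nothing here is confirmed by the teardown;
# it only checks that the dot counts add up.

ROWS, COLS = 4, 16        # VCSEL emitter array (per the photo)
DOE_TILES = (3, 3)        # DOE projects 3x3 copies of the pattern
PHASES = 4                # every 4th column lit per phase

def lit_dots(phase):
    """Projected dot field (bool array) for one scan phase."""
    emitters = np.zeros((ROWS, COLS), dtype=bool)
    emitters[:, phase::PHASES] = True    # light every 4th column
    return np.tile(emitters, DOE_TILES)  # DOE makes 3x3 copies

full_field = np.zeros((ROWS * 3, COLS * 3), dtype=bool)
for phase in range(PHASES):
    field = lit_dots(phase)
    print(f"phase {phase}: {field.sum()} of {field.size} dots lit")
    full_field |= field

print("dots covered over all phases:", full_field.sum())  # 576 = 48x12
```

    Each phase lights a sparse 12x12 grid (144 dots), and the four phases together cover the full 48x12 = 576-dot pattern.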

    Any thoughts?

    Replies
    1. Good analysis.

      Having all VCSEL energy concentrated in small spots also helps with sunlight immunity. They probably process only those SPADs that have the spots on them and ignore those that receive only ambient light.
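
      That gating could be as simple as keeping only a calibrated list of SPAD coordinates where the spots are known to land; a toy sketch (the sensor size, spot coordinates, and count levels are all made up for illustration):

```python
import numpy as np

# Toy sketch of the gating idea: only SPAD pixels where laser spots are
# known (from calibration) to land get processed; the rest see ambient
# light only and are ignored. Sensor size, spot coordinates, and count
# levels are all made up for illustration.

spad_counts = np.random.poisson(2, size=(120, 120))     # ambient counts
spot_pixels = [(10, 14), (10, 46), (42, 14), (42, 46)]  # calibrated spot sites
for r, c in spot_pixels:
    spad_counts[r, c] += 80                             # laser return on top

gated = [spad_counts[r, c] for r, c in spot_pixels]     # keep spot pixels only
print("gated spot counts:", gated)                      # signal >> ambient
```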

    2. Indeed, all efforts would work towards improving sunlight immunity and, to a lesser degree, distance range.
      Since they can rely on hand movements as additional scanning to fill the gaps, having few and small emitting dots will also avoid distance ambiguity, multi-path effects, etc., to provide a high final resolution with little noise.

    3. The ST FlightSense sensors also have many more SPADs (16x16 + dark) than emitters (1). The way I see it, the iPad Pro solution packs 576 of those into one. It probably also helps a lot with alignment issues to have such a dense detector mesh.

    4. Can anyone suggest a good read on the 'eye safety' topic of LiDAR? Could it be desirable to use a wavelength where water has high absorption (absorption of the laser in the eye before the retina) and part of the sunlight is absorbed by the atmosphere? When looking at https://commons.wikimedia.org/wiki/File:Solar_Spectrum.png, it seems that between 1400 and 1500 nm there is no significant sunlight. Could it be desirable for LiDAR to use such a wavelength to overcome eye safety and sunlight problems?

    5. I don't know if it can be considered a "good read", but you will find what you need on eye safety in the international standard IEC 60825-1. Indeed, the admissible power depends on wavelength, with higher power admitted for longer wavelengths that are more strongly absorbed in the eye.

      1400 to 1550 nm is a "bit" more complicated on the sensor side, even if some automotive LiDARs are targeting this spectrum. There is also quite a big dip in sunlight at ground level due to atmospheric absorption around 940 nm.

  5. Agree, with a small adjustment: since, as described above, only 12x12 emitters may be lit at the same time, it would be like having 12x12 = 144 ST FlightSense arrays at a time. The pixel count would be 12x12*16*16 = 37k, which would indeed be close to the reported sensor size.
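
    Checking that arithmetic (assuming one 16x16 FlightSense-style sub-array per simultaneously lit dot, which is speculation):

```python
# Sanity check of the pixel count: 12x12 dots lit at a time, each
# hypothetically served by a 16x16 FlightSense-style SPAD sub-array.
dots_at_a_time = 12 * 12               # 144 simultaneously lit dots
spads_per_dot = 16 * 16                # ST FlightSense-style sub-array
print(dots_at_a_time * spads_per_dot)  # 36864, i.e. ~37 kpixels
```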

