In a paper titled "A LiDAR Camera with an Edge" in the IOP journal Measurement Science and Technology, Oguh et al. describe an interesting approach to turning a conventional global shutter CMOS image sensor into a LiDAR. The key idea is neatly explained by this passage in the paper: "... we recognize a simple fact: if the shutter opens before the arrival time of the photons, the camera will see them. Otherwise, the camera will not. Thus, if the shutter jitter range remains the same and its distribution is uniform, the average intensity of the object in many camera frames will be uniquely associated with the arrival time of the photons."
Abstract: "A novel light detection and ranging (LiDAR) design was proposed and demonstrated using just a conventional global shutter complementary metal-oxide-semiconductor (CMOS) camera. Utilizing the jittering rising edge of the camera shutter, the distance of an object can be obtained by averaging hundreds of camera frames. The intensity (brightness) of an object in the image is linearly proportional to the distance from the camera. The achieved time precision is about one nanosecond while the range can reach beyond 50 m using a modest setup. The new design offers a simple yet powerful alternative to existing LiDAR techniques."
Full paper (paywalled): https://iopscience.iop.org/article/10.1088/1361-6501/adcb5c
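To make the key idea concrete, here is a minimal Monte Carlo sketch in Python. It is not from the paper: the jitter window, frame count, and laser timing are assumed purely for illustration. With a uniformly jittered shutter edge, the fraction of frames in which a target shows up equals the probability that the shutter opens before the return arrives, which grows linearly with round-trip time and hence with distance:

```python
import numpy as np

# Illustrative numbers only; the paper's actual jitter range and
# frame counts may differ.
C = 3e8                # speed of light, m/s
JITTER_RANGE = 400e-9  # assumed uniform jitter window of the shutter edge, s
N_FRAMES = 1000        # camera frames averaged per depth estimate
rng = np.random.default_rng(0)

def mean_intensity(distance_m, reflectivity=1.0):
    """Average brightness of a target over many jittered-shutter frames.

    Each frame: the shutter's rising edge occurs at a uniformly jittered
    time t_open in [0, JITTER_RANGE]; the laser return arrives at
    t_arr = 2*d/c. If the shutter opens before the photons arrive, the
    frame records the target; otherwise it records nothing.
    """
    t_arr = 2.0 * distance_m / C
    t_open = rng.uniform(0.0, JITTER_RANGE, N_FRAMES)
    seen = t_open < t_arr              # each frame either sees the return or not
    return reflectivity * seen.mean()  # frame average -> P(t_open < t_arr)

# Mean intensity grows linearly with distance: P = (2*d/c) / JITTER_RANGE
for d in (5, 15, 30, 50):
    print(f"{d:3d} m -> mean intensity {mean_intensity(d):.3f}")
```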

Something here doesn't make sense. An iToF sensor needs an additional node to separate charge collection (unmodulated) from charge generation (modulated). This is not possible with a conventional global shutter sensor, unless you read out the frame after each illumination pulse. That means thousands of readouts, i.e. possibly more than a minute of effective "integration time", for every depth frame. If that's what the paper suggests (it's behind a paywall), then it's not very useful...
The exposure/integration was done on-chip at tens of kilohertz, while readout was at tens of frames per second.
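For a rough sense of scale, here is the back-of-the-envelope arithmetic behind this exchange. The specific rates are assumptions, chosen only to match the stated orders of magnitude ("tens of kHz", "tens of fps", "thousands of pulses"):

```python
# Compare the two operating modes discussed above (all numbers assumed).
exposure_rate_hz = 20_000  # on-chip exposure/accumulation cycles per second
readout_fps = 20           # full frame readouts per second
n_pulses = 1000            # illumination pulses integrated per depth frame

# Naive mode (the first commenter's worry): one readout per pulse.
naive_time_s = n_pulses / readout_fps
print(f"one readout per pulse: {naive_time_s:.0f} s per depth frame")  # 50 s

# On-chip accumulation: many exposure cycles between readouts.
cycles_per_frame = exposure_rate_hz / readout_fps
accum_time_s = n_pulses / exposure_rate_hz
print(f"{cycles_per_frame:.0f} exposures accumulated per readout; "
      f"{accum_time_s * 1e3:.0f} ms per depth frame")  # 1000; 50 ms
```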
In that case, it's not a global shutter sensor; it's a custom design...
That's interesting. So the Sony IMX252 is not a global shutter sensor? This is the sensor that was used.
I don't know much about the Sony IMX252. I believe it's a charge-domain global shutter sensor. In order to do hundreds or thousands of exposure cycles per readout, the sensor needs to support toggling TX and GRST that many times per readout, while being careful not to reset the storage node. It's a very non-standard way to operate a global shutter pixel, and I'd be very surprised if Sony offered it as a user-accessible feature in their sensor. However, since I don't know this sensor, maybe they do.
Technology advances fast. Yes, it is a global shutter sensor. Yes, it has been a standard function since the second-generation IMX sensors; it is part of the HDR toolbox. And yes, it is user-accessible. I found this: https://www.ximea.com/support/wiki/allprod/Multiple_exposures_in_one_frame
I remember that there was a similar paper presented at IISW.
Could you please provide a reference on this? Thanks.
Maybe you are referring to my paper from 2015?
https://imagesensors.org/papers/10.60928/lkfu-hsw0/
Thanks, Erez. The two are different, though: one is a specialized CMOS sensor (BTW, nice work! Was it commercialized?) while the current work is a new LiDAR method utilizing conventional global shutter sensors.
What was productized was an (almost) standard interline-transfer CCD that worked in the same way. It was integrated as an iToF sensor into the first-generation Microsoft HoloLens. The published work was an attempt to apply the same working principle to a CMOS image sensor. It was actually almost a standard global shutter pixel, just fabricated on an N-type substrate to allow the fast modulation operation.
Please correct me if necessary, but it seems to me that what is constant per object with this scheme is the product of reflectivity and distance. To sort out the distance alone, the starting frame for the detection of each object needs to be known.
You are correct, Dave. For every depth frame, two camera frames are needed: one with a fully open shutter and one with the jittering shutter. The ratio between them takes care of the reflectivity variations among targets. Thank you for your comment.
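A small sketch of how that ratio could cancel reflectivity, under the same assumed jitter model as the earlier snippet (this is not the paper's actual processing, just an illustration of the two-frame idea):

```python
import numpy as np

C = 3e8
JITTER_RANGE = 400e-9  # assumed uniform jitter window, s
rng = np.random.default_rng(1)

def two_frames(distance_m, reflectivity, n=500):
    """Return (mean jittered-shutter intensity, fully-open-shutter intensity)."""
    t_arr = 2.0 * distance_m / C
    t_open = rng.uniform(0.0, JITTER_RANGE, n)
    i_jitter = reflectivity * (t_open < t_arr).mean()  # scaled by P(seen)
    i_full = reflectivity                              # shutter always open
    return i_jitter, i_full

# Two targets at the same distance but very different reflectivity:
for rho in (0.9, 0.1):
    i_j, i_f = two_frames(distance_m=25.0, reflectivity=rho)
    ratio = i_j / i_f                        # reflectivity cancels out
    d_est = ratio * C * JITTER_RANGE / 2.0   # invert P = 2*d / (c * T)
    print(f"reflectivity {rho}: estimated distance {d_est:.1f} m")
```

Both targets come out near 25 m despite the 9x reflectivity difference, since the ratio depends only on the shutter-versus-arrival timing statistics.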
ResearchGate offers the paper (https://iopscience.iop.org/article/10.1088/1361-6501/adcb5c) without a paywall here: https://www.researchgate.net/publication/391247942_A_LiDAR_camera_with_an_edge