Friday, December 02, 2022

2023 International Solid-State Circuits Conference (ISSCC) Feb 19-23, 2023

ISSCC will be held as an in-person conference Feb 19-23, 2023 in San Francisco. 

An overview of the program is available here:

Some sessions of interest to the image-sensor audience are listed below:

Tutorial on "Solid-State CMOS LiDAR Sensors" (Feb 19)
Seong-Jin Kim, Ulsan National Institute of Science and Technology, Ulsan, Korea

This tutorial will present the technologies behind single-photon avalanche diode (SPAD)-based solid-state CMOS LiDAR sensors, which have emerged to enable level-5 autonomous vehicles and metaverse AR/VR in mobile devices. It will begin with the fundamentals of direct and indirect time-of-flight (ToF) techniques, followed by the structures and operating principles of three key building blocks: SPAD devices, time-to-digital converters (TDCs), and signal-processing units for histogram derivation. The tutorial will conclude with recent developments in on-chip histogramming TDCs, illustrated with state-of-the-art examples.
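The direct-ToF flow the tutorial describes can be sketched in a few lines: photon arrival times from repeated laser pulses are binned by a TDC into a histogram, and the peak bin gives the round-trip time and hence the distance. The following is a minimal illustrative sketch, not code from the tutorial; the TDC resolution (100 ps) and histogram depth (1024 bins) are assumed values for illustration only.

```python
import numpy as np

C = 299_792_458.0     # speed of light, m/s
BIN_WIDTH = 100e-12   # assumed TDC resolution: 100 ps per bin
N_BINS = 1024         # assumed histogram depth

def build_histogram(timestamps):
    """Bin photon arrival times (in seconds) into TDC histogram bins."""
    bins = np.clip((np.asarray(timestamps) / BIN_WIDTH).astype(int), 0, N_BINS - 1)
    return np.bincount(bins, minlength=N_BINS)

def depth_from_histogram(hist):
    """Peak bin -> round-trip time -> distance in metres."""
    t_round_trip = np.argmax(hist) * BIN_WIDTH
    return C * t_round_trip / 2.0

# Simulated target at 3 m (round trip ~20 ns): jittered signal returns
# plus uniformly distributed background photons from ambient light.
rng = np.random.default_rng(0)
signal = rng.normal(2 * 3.0 / C, 50e-12, size=500)
background = rng.uniform(0, N_BINS * BIN_WIDTH, size=200)
hist = build_histogram(np.concatenate([signal, background]))
print(f"estimated depth: {depth_from_histogram(hist):.2f} m")
```

Histogramming is what makes SPAD LiDAR robust to background light: ambient photons spread across all bins, while true returns pile up in one, so the peak survives even at low signal-to-background ratios.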

Seong-Jin Kim received a Ph.D. degree from KAIST, Daejeon, South Korea, in 2008 and joined the Samsung Advanced Institute of Technology to develop 3D imagers. From 2012 to 2015, he was with the Institute of Microelectronics, A*STAR, Singapore, where he was involved in designing various sensing systems. He is currently an associate professor at Ulsan National Institute of Science and Technology, Ulsan, South Korea, and a co-founder of SolidVUE, a LiDAR startup company in South Korea. His current research interests include high-performance imaging devices, LiDAR systems, and biomedical interface circuits and systems.

Session 5 on Image Sensors (Feb 20)

5.1 A 3-Wafer-Stacked Hybrid 15MPixel CIS + 1MPixel EVS with 4.6GEvent/s Readout, In-Pixel TDC and On-Chip ISP and ESP Function
Guo et al.
OmniVision Technologies

5.2 1.22µm 35.6Mpixel RGB Hybrid Event-Based Vision Sensor with 4.88µm-Pitch Event Pixels and up to 10K Event Frame Rate by Adaptive Control on Event Sparsity
Kodama et al.
Sony Semiconductor Solutions

5.3 A 2.97µm-Pitch Event-Based Vision Sensor with Shared Pixel Front-End Circuitry and Low-Noise Intensity Readout Mode
Niwa et al.
Sony Semiconductor Solutions

5.4 A 0.64µm 4-Photodiode 1.28µm 50Mpixel CMOS Image Sensor with 0.98e- Temporal Noise and 20Ke- Full-Well Capacity Employing Quarter-Ring Source-Follower
Kim et al.
Samsung Electronics

5.5 A 16.4kPixel 3.08-to-3.86THz Digital Real-Time CMOS Image Sensor with 73dB Dynamic Range
Liu et al.
Chinese Academy of Sciences

5.6 A 400 117.7dB-DR SPAD X-Ray Detector with Seamless Global Shutter and Time-Encoded Extrapolation Counter
Park et al.
Yonsei University

5.7 55pW/pixel Peak Power Imager with Near-Sensor Novelty/Edge Detection and DC-DC Converter-Less MPPT for Purely Harvested Sensor Nodes
Ahmed & Okuhara et al.
National University of Singapore

5.8 Dual-Port CMOS Image Sensor with Regression-Based HDR Flux-to-Digital Conversion and 80ns Rapid-Update Pixel-Wise Exposure Coding
Gulve et al.
University of Toronto

5.1, 5.2, 5.3, 5.6, 5.8 are also listed in the Demo Session on Monday, Feb. 20.


  1. It will be interesting to see the event cameras made by Sony (without Prophesee, it seems). Curious what exactly they will improve compared to their previous chips (the hybrid mode already seems interesting).

    1. This comment has been removed by a blog administrator.

    2. This is just a racist, unqualified comment. Sony will have a licensing deal with Prophesee and there will be some money flowing towards Prophesee for bringing them up to speed. What're you hoping for? That Sony acquires Prophesee? They raised over a hundred million dollars. A bit expensive of a purchase to justify in a market downturn...

  2. Could this be an adaptation of the DAVIS sensor idea? Sony did buy the startup whose team members worked on an early version.

    1. Some of the authors on the paper are indeed from the Zurich startup, so it does seem that way; we will have to see at ISSCC.

    2. A major difference in the proposed sensors from both Sony and OmniVision is that these are now hybrid vision sensors: unlike the DAVIS, which had only event readout, here the event pixels appear to be distributed throughout the array (an assumption, but very likely) while the rest are conventional RGB pixels. Also, the output seems to be monochrome events + RGB. It will be interesting to see which application(s) adopt these new sensors first, and whether the events will be used to trigger RGB readout.

  3. This was our (and, I think, the first and only previous) hybrid APS/DVS sensor. It combined three APS pixels with one DAVIS pixel in macropixels. It worked nicely, but the QE was really low because of the technology and filters. Nice PhD work by Chenghan.

    Li, Chenghan, Christian Brandli, Raphael Berner, Hongjie Liu, Minhao Yang, Shih-Chii Liu, and Tobi Delbruck. 2015. “Design of an RGBW Color VGA Rolling and Global Shutter Dynamic and Active-Pixel Vision Sensor.” In 2015 IEEE International Symposium on Circuits and Systems (ISCAS), 718–21.


All comments are moderated to avoid spam and personal attacks.