Monday, August 25, 2025

Sony 3-layer stacked sensor

Translated from baidu.com: https://baijiahao-baidu-com.translate.goog/s?id=1839758590887948034&wfr=spider&for=pc&_x_tr_sl=zh-CN&_x_tr_tl=de&_x_tr_hl=de&_x_tr_pto=wapp

In-depth: Sony's three-layer CIS changes the global sensor market

Source: AI Core World (Aug 7, 2025)

Sony is developing a three-layer image sensor

Sony Semiconductor Solutions (SSS) showcased a potentially groundbreaking three-layer image sensor design as part of a presentation to investors, the company's Imaging & Sensing Solutions (I&SS) division announced today. The design promises significant performance improvements.

Although Sony has used stacked sensors in several cameras, including its flagship a1 II, these sensors currently have a dual-layer structure. One layer is the photodiode layer responsible for capturing light, which contains all the light-sensitive pixels; the other layer is the transistor layer located below it, which is responsible for image processing tasks. Sony's core long-term goal is to introduce the crucial third layer in the image sensor stack. This essentially means an expansion of processing power and a leap in image quality.

All else being equal, more processing power at the sensor level naturally yields better image quality. Sony explains that increasing processing power at the sensor level will translate directly into improvements in several key performance areas: dynamic range, sensitivity, noise performance, power efficiency, readout speed, and resolution.

While adding sensor layers doesn't directly change the pixel resolution itself, it unlocks entirely new video recording modes by significantly improving the overall speed and performance of the sensor.

Image sensors remain a core pillar of Sony's strategy across diverse areas including mobile devices, automotive, industrial and cameras. Sony expects the camera-related sensor market to continue expanding at a compound annual growth rate of 9% through fiscal 2030, which indicates that Sony will continue to increase its investment in this field.
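
As a quick illustration of what that growth rate implies, here is a short sketch. The 9% CAGR and the fiscal 2030 horizon come from the article; the article does not state the base year, so fiscal 2024 is assumed here, and no base-year market size is assumed (only the total multiplier is computed):

```python
# What a 9% compound annual growth rate implies, assuming fiscal 2024
# as the base year (an assumption; the article does not state it).
base_year, target_year = 2024, 2030
cagr = 0.09

# Compounding: multiply by (1 + cagr) once per year in the window.
growth_factor = (1 + cagr) ** (target_year - base_year)
print(f"{growth_factor:.2f}x over {target_year - base_year} years")  # ~1.68x
```

In other words, a sustained 9% CAGR would make the market roughly two-thirds larger by fiscal 2030 than at the start of the window.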

Next-generation sensor technology will become a driving force for differentiation

Sony is focusing its R&D on multi-layer sensor stack architectures and advanced process nodes to improve sensitivity, dynamic range, power efficiency and readout speed – cutting-edge technologies that will directly power future Alpha and FX camera series. To achieve these goals, Sony plans to invest a total of nearly 930 billion yen in capital expenditures between 2024 and 2026, about half of which will be dedicated to the R&D and production of advanced image sensor processes.

As outlined in its long-term strategy, Sony is going all in on next-generation sensor technologies, including multi-layer stacked image sensors.

Sony's triple-stacked sensor, used in the Xperia 1 V and adopted by other mainstream smartphone models, significantly improves image quality. The architecture also supports multimodal sensing and on-chip artificial intelligence processing, marking an industry shift from simply pursuing resolution to intelligent sensing. A breakthrough in 22nm stacked logic targets ultra-low power consumption and expanded computing power, with FD-SOI technology expected to find application in neuromorphic sensing.

---------------------------------------------- 

Also covered by PetaPixel: https://petapixel.com/2025/07/30/sonys-triple-layer-image-sensor-promises-huge-performance-gains/ 

 Sony’s Triple-Layer Image Sensor Promises Huge Performance Gains

9 comments:

  1. https://mp.weixin.qq.com/s/rISu1smqyGniCZGkna2aaA and try chatgpt

    ReplyDelete
  2. https://mp.weixin.qq.com/s/rISu1smqyGniCZGkna2aaA this is the original article; try ChatGPT, not Baidu

    ReplyDelete
  3. When processing images, my CPU & GPU tend to heat up. Doesn't moving the processing right next to the sensor create a stronger thermal floor and more thermal noise? This simple assumption doesn't seem to match their claims of better sensitivity / noise.

    ReplyDelete
    Replies
    1. The ML / AI model deployed on your GPU or running on your CPU has on the order of a hundred times more compute than what will be offered at the sensor.

      The tasks will also be fairly simple, given the compute limitations on the sensor, compared to what currently runs on the CPU and GPU. So the heat dissipation is not comparable.

      However, you are right that bringing AI / ML / DLA onto the sensor silicon will add to the thermal floor and noise of the sensor. I myself would like to see the thermal floor and noise numbers before and after 'DLA on the sensor silicon'.

      Delete
  4. I remember from CIS history that the single-chip USB camera, single-chip NTSC camera and single-chip fingerprint sensor … all failed.

    ReplyDelete
  5. Sony used triple stacking long ago, with DRAM as the middle layer. OVT has also used triple silicon stacking for years. Is this instead meant as a triple layer on a single silicon die, not triple silicon stacking?

    ReplyDelete
    Replies
    1. If you look at the picture, it says "pixel transistors" in the middle layer. The 3-layer stacking here implies that the Cu-Cu bonding from the pixel layer to the transistor layer is at pixel pitch, at least. I am not aware of anyone doing this before in mass production.

      Delete
    2. The first picture shows two examples of triple layer: one has two logic/analog dies. As for pixel-pitch Cu-Cu: well, I'm not sure PD to pixel transistors uses Cu-Cu at all. Also, Cu-Cu per pixel at pixel pitch has long been used by OVT for their global shutter devices.

      Delete
  6. Would MRAM become a must vs. DRAM for the memory stack at the edge?

    ReplyDelete

All comments are moderated to avoid spam and personal attacks.