
Tuesday, May 12, 2020

Sony Unveils SWIR Sensors for Industrial Applications

Sony announces the upcoming release of two new SWIR image sensors for industrial equipment. The new sensors capture images in both the visible and the invisible short-wavelength infrared spectrum and achieve a compact size made possible by the industry's smallest 5μm pixel size.

The new products employ Sony's original SenSWIR technology, in which photodiodes are formed on an InGaAs compound semiconductor layer and connected by a Cu-Cu connection to the Si layer that forms the readout circuit.

When the InGaAs layer, which forms the light-receiving photodiodes, and the Si layer, which forms the readout circuit, are bonded using conventional bump connections, a certain minimum bump pitch must be maintained, which makes it difficult to achieve a pixel size as small as that of current industrial CMOS sensors. This had made miniaturization a serious challenge. Sony's new products, however, feature a smaller pixel pitch made possible by the Cu-Cu connection, resulting in the industry's smallest 5μm pixel size. This, in turn, makes it possible to reduce camera size while maintaining SXGA (IMX990) / VGA (IMX991) resolution, contributing to improved testing precision.
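
A quick back-of-the-envelope comparison (illustrative numbers only, not Sony's published die dimensions) shows what the pitch reduction buys: assuming a typical 15μm pitch for a conventional InGaAs VGA sensor, a full SXGA array at 5μm still fits in less than half the active area.

    # Back-of-the-envelope active-area comparison; pitches other than 5 um are assumptions.
    def active_area_mm2(width_px, height_px, pitch_um):
        """Active array area in mm^2 for a given resolution and pixel pitch."""
        return (width_px * pitch_um / 1000.0) * (height_px * pitch_um / 1000.0)

    # SXGA-class array (1280 x 1024) at the announced 5 um pitch.
    sxga_area = active_area_mm2(1280, 1024, 5.0)
    # Conventional InGaAs VGA array, assuming a typical 15 um pitch (not from the article).
    vga_area = active_area_mm2(640, 512, 15.0)

    print(f"SXGA @ 5 um:  {sxga_area:.1f} mm^2")   # ~32.8 mm^2
    print(f"VGA  @ 15 um: {vga_area:.1f} mm^2")    # ~73.7 mm^2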

Sony's original SWIR image sensor technology makes the top InP layer, which absorbs visible light, thinner, allowing that light to reach the InGaAs layer underneath and delivering high quantum efficiency even in the visible range. This design enables imaging across a broad range of wavelengths, from 0.4μm to 1.7μm, so a single camera can be used instead of the multiple cameras conventionally required to capture visible light and SWIR separately.
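
The wavelength range follows from the band gaps of the materials involved. A quick check with textbook room-temperature band-gap values (approximations, not figures from Sony's announcement) shows why thinning the InP layer matters: a thick InP layer would absorb most visible light before it reaches the InGaAs, while the InGaAs band gap sets the ~1.7μm upper limit.

    # Cutoff wavelength from band gap: lambda_c [nm] ~= 1240 / Eg [eV].
    # Band-gap values are textbook approximations, not Sony specifications.
    materials = {
        "Si (readout layer)":        1.12,  # cutoff ~1100 nm; transparent beyond that
        "InP (top layer)":           1.34,  # cutoff ~925 nm, i.e. it absorbs visible light
        "In0.53Ga0.47As (absorber)": 0.74,  # cutoff ~1675 nm, covering the SWIR band
    }

    for name, eg_ev in materials.items():
        cutoff_nm = 1240.0 / eg_ev
        print(f"{name}: absorption cutoff ~{cutoff_nm:.0f} nm")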

21 comments:

  1. Many will cry...

    Replies
    1. Many will cry, many will celebrate. This is a typical example of progress. If somebody shows up with a product that is like the iPod, it is bad for companies producing Walkmans. We will see this type of step forward in more places, because a paradigm shift in semiconductor production is underway. The 'chiplet revolution', the era of combining different components into advanced packages, has begun. Technologies like Cu-Cu or hybrid bonding will drive progress in the coming years. The importance of "smaller nanometer structures" for progress is decreasing; it gets more and more uneconomical to make all structures smaller. It's better to make only the core components with smaller structures and package components of different technologies together. Technology where you can have an interconnect every few μm is already available, as we see in this example or in the SPAD device in the iPad LiDAR, for which a teardown was published here recently.

  2. Any idea on the photodiode front end?

  3. Competitive prices for samples!

  4. Four very attractive features IMO: CMOS ROIC, 5μm pitch, compatible 1.3MP and VGA versions, plus 400-1700nm coverage.

  5. Impressive. They are probably working on an automotive version with denoise/dehaze on chip or in a companion chip. RGB needs a new demosaic. I wonder if the RGB spectral curves are similar to those of a standard sensor.

  6. I doubt this will or can scale to automotive price pressure. Sample prices are still in the thousands of dollars, competitive for machine vision but too much for automotive. I think the production process, especially the InGaAs layer, is a difficult and costly task even for Sony. Is it thinkable that this can scale into the tens of dollars per sensor? I doubt it. This is an attractive product for classical machine vision tasks, where the current sensors cost around $5000 (for VGA) and have two major drawbacks in my opinion: CCD-ish readout with an external ADC, and the big pixel pitch / small pixel count.

    For the camera integrator (still sometimes smaller companies, or "embedded"-type approaches where it is not a camera manufacturer that integrates the sensor but small teams inside big companies), it is a big argument when the ROIC is CMOS/Pregius style: you can derive your source code from the IMX code you already have, insert some ifs, and you're done. The lower the hurdle for integration, the higher the chance that the sensor gets applied where the problem would otherwise be solved in another way. In my opinion the sensor manufacturers put too much stress on image quality, which is important, no doubt, but often it is good enough, whereas the interface and usability of the sensor get too little focus, especially in smaller companies. I think poor integrability costs the "classical" InGaAs companies more business than the good image quality brings them. If they put 5% less energy into optimizing image quality and spent it on integrability, I think they would sell more.

    On the customer/integrator side, projects get delayed or fail because the small teams fall behind in integrating the sensor (or estimate too much effort) because it is complex to apply. I had such an example working with a 3D sensor in the past, where the sensor was unnecessarily difficult to handle in its details, not because it had to be, but simply because the supplier considered integration trivial. I had a similar experience with an InGaAs sensor at the datasheet level, but there we did not even start. Reading the datasheet of a "classic" InGaAs sensor (marketed for machine vision), I had problems understanding what I actually had to do and trouble estimating the effort (which is one of the bases for deciding to start a project or not). The focus in the datasheet was less on how to apply the sensor and more on details that are usually hidden during integration. The datasheet and register layout of the IMX990, in contrast, are Pregius-ish: I get a gut feeling of what I have to do in a short time (the same effect applies when I read a datasheet from e2v, OnSemi, CMOSIS, Gpixel or others that are around). The interfaces and tasks are somewhat similar for all of these sensors. The InGaAs sensor datasheet and interface I came to know deviated a lot from the sensors I knew before.
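
    To make the "derive it from the IMX code and insert some ifs" point concrete, here is a deliberately simplified and entirely hypothetical bring-up sketch. None of the register names, addresses, or values below come from any Sony datasheet; the point is only that a familiar register-write workflow keeps the integration effort small.

        # Hypothetical sensor bring-up sketch; all register names/addresses are
        # invented placeholders, NOT taken from an IMX990/Pregius datasheet.
        REG_STANDBY  = 0x0000
        REG_EXPOSURE = 0x0010
        REG_GAIN     = 0x0014

        def init_sensor(bus, model, exposure_lines, gain_code):
            """Reuse one init path for a familiar CMOS sensor and a new SWIR part."""
            bus.write(REG_STANDBY, 1)            # enter standby before reconfiguring
            bus.write(REG_EXPOSURE, exposure_lines)
            bus.write(REG_GAIN, gain_code)
            if model == "swir":                  # the "insert some ifs" part
                bus.write(0x0100, 1)             # placeholder: enable a SWIR-specific mode
            bus.write(REG_STANDBY, 0)            # start streaming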

  7. Congratulations Sony, impressive!
    Again a nice example of how technology made for the mass market (Cu-Cu stacking) changes the game in a niche market (SWIR imaging in this case).
    I'm curious to see the read noise spec.

  8. Very impressive results. Depending on the volume price, it could really be a market breakthrough for SWIR imaging. I am looking forward to seeing more specs and which companies will offer cameras based on these sensors.

    Replies
    1. Lucid Vision Labs certainly...

  9. OK, good news for a niche market,
    but except maybe automotive, I don't see much interest from other applications.
    Why would a consumer use case be looking for SWIR?
    So this will never go further than niche industrial.
    Automotive haze removal? OK, maybe, but still far from the volumes required to get below $10 per sensor.

    Replies
    1. From my point of view, people saying SWIR is not interesting because it is a niche market are totally wrong. There are plenty of applications out there for SWIR.
      Why do you need to address the consumer mass market when your sensor is ~5k€? We are not speaking of $10 sensors for smartphones... With "only" 10,000 sensors sold, which is quickly doable in the next couple of years given the huge growth of SWIR, we are at around 50 million euros a year. That is not bad for a new family of sensors, even for Sony.
      Automotive is not a market to address yet with the current technology. But one should not forget that Sony mentioned in its paper that a 1.5µm pitch is feasible for them... Maybe that will be the market for next-gen sensors in a few years.

    2. Exactly! And it is also a circular process: the technology that enables this kind of sensor also requires this kind of sensor. In hybrid bonding, and in some advanced semiconductor packaging technologies in general, one key effect that makes this possible is that silicon becomes transparent above 1100nm. You can implement active alignment for stacking because you can see metal structures of the bottom die/wafer through the top die/wafer. The demand in the semiconductor processing environment alone for this kind of sensor is in the thousands, in my opinion, and many of these applications do not get realized at the moment because there are still(!) cheaper alternatives, or because the current technology step can still live without it. Also because 15µm is too big and VGA is not enough (the demand for megapixels scales with the precision with which you want to measure positions and with the size of the scene you want to cover; with VGA you have to move to various spots to cover the scene, and this makes the whole story simply too slow). And because the SWIR application is only one part, not the whole story: you need the VIS as well. I think the advance in semiconductor processing, the move from structural scaling (it gets harder and harder to go from 7 to 5 to 3 to 2nm) to advanced combinations of die components in one package, will drive the demand for SWIR sensors as well, because the more you do this, the fewer alternatives you have to looking through silicon.

  10. Great demonstration of stacking technology. First, CMOS sensors, then SPAD (iPad) and event-driven sensors. And now InGaAs sensors being stacked. Sony is at its best, and trying to create revenue streams other than CIS.

    A giant entering a niche market should not be considered as "many cries". Sometimes, a niche market (that has a potential to grow) needs big companies and their mass-market methodologies to penetrate a lot of untouched applications and grow the overall market. This is definitely a + for the market in general.

    -AA

  11. Hi friends
    I'm working on active imaging systems and need a sensor with these specifications:
    1. Pixel pitch: 4-5.5µm
    2. Frame rate @640×512: >240fps
    3. Quantum efficiency @915nm: >30%
    4. Exposure time: <1 ms
    Would you help me, please?
    Best regards

    Replies
    1. Well... it seems the IMX991 hits your spec, right? 5µm pixel, the frame rate for 512 lines should be about 260fps in the fastest output mode, QE is >30% at 915nm, and a minimum exposure time of <1ms is possible.
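
      A trivial sanity check is to line the two lists up programmatically. The IMX991 numbers below are simply the figures quoted in this reply, not values taken from a datasheet.

        # Requirements from the question above vs. the figures quoted in this reply.
        required = {"pitch_um": (4.0, 5.5), "fps_at_640x512_min": 240,
                    "qe_at_915nm_min": 0.30, "exposure_ms_max": 1.0}

        claimed = {"pitch_um": 5.0, "fps_at_640x512": 260,       # as stated above
                   "qe_at_915nm": 0.30, "min_exposure_ms": 1.0}  # ">30%" and "<1 ms" taken at the boundary

        ok = (required["pitch_um"][0] <= claimed["pitch_um"] <= required["pitch_um"][1]
              and claimed["fps_at_640x512"] >= required["fps_at_640x512_min"]
              and claimed["qe_at_915nm"] >= required["qe_at_915nm_min"]
              and claimed["min_exposure_ms"] <= required["exposure_ms_max"])
        print("meets the listed requirements:", ok)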

  12. I received a newsletter from a distributor of InGaAs sensors today announcing that they cut their prices in half (for a QVGA InGaAs sensor). I guess this news has some link with the product in the above posting. It's funny to see prices suddenly drop by 50%. It tells you something about the state of the market if such a drastic list-price reduction is suddenly possible without any kind of negotiation. I mean... has the supplier's cost to manufacture the sensor dropped by 50% due to a technical breakthrough in manufacturing? I guess the answer is no. We just see the amount of margin that was possible in this segment up to now.

  13. How can we see the spectrum all the way from 400nm to 1700nm? What would the image format be?


All comments are moderated to avoid spam and personal attacks.