Thursday, May 28, 2020

Not Only Sony: Attollo Introduces SWIR Sensor with 5um Pixel Pitch

Attollo Engineering introduces the Phoenix, a 640 x 512 SWIR camera built around what the company claims is the industry’s smallest VGA SWIR sensor, with 5 µm InGaAs pixels.

"The Attollo Phoenix SWIR camera is a VGA-format (640 x 512), uncooled SWIR camera featuring the industry’s smallest SWIR VGA sensor, with a 5 µm pixel pitch. The Phoenix captures snapshot SWIR imagery using Attollo Engineering’s high-performance InGaAs detector material, and the extremely small pixel pitch enables more pixels on target with a short-focal-length optic. The Phoenix’s sensor is designed specifically to support broadband imaging along with day and night laser see-spot and range-gated imaging capabilities.

The high-performance 640 x 512 InGaAs camera, with its 5 µm pixel pitch, has a spectral response from 1.0 µm to 1.65 µm, more than 99.5% pixel operability, and greater than 70% quantum efficiency. Selectable frame rates include 30 Hz, 60 Hz, 120 Hz, and 220 Hz, with windowing available. The Phoenix offers a global-shutter imaging mode with preset and user-defined integration times down to 0.1 µs, plus sync-in (for low-latency see-spot and range-gating) and sync-out triggering options. Other specifications include onboard processing with non-uniformity corrections (NUCs) and bad pixel replacement.
"
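The "more pixels on target" claim is simple geometry: at a fixed focal length, the number of pixels a target subtends scales inversely with pixel pitch. A minimal sketch of that arithmetic, using a small-angle pinhole model and made-up target, range, and lens numbers (not Attollo specifications):

```python
# Illustrative sketch: pixels-on-target vs. pixel pitch (pinhole model,
# small-angle approximation; all numbers are made up for illustration).

def pixels_on_target(target_m, range_m, focal_mm, pitch_um):
    """Pixels subtended by a target of size target_m at range range_m."""
    image_size_um = target_m * focal_mm * 1000.0 / range_m  # image of target, in um
    return image_size_um / pitch_um

# A 2.3 m target at 1 km through a 100 mm lens:
print(pixels_on_target(2.3, 1000.0, 100.0, 15.0))  # ~15 pixels at 15 um pitch
print(pixels_on_target(2.3, 1000.0, 100.0, 5.0))   # 46 pixels at 5 um pitch
```

Tripling the pitch density triples the linear sampling of the target, which is why a 5 µm sensor can use a shorter, lighter optic for the same identification range.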

9 comments:

  1. Very interesting. I wonder what the sensor performance is like, especially the dark current and readout noise.

    ReplyDelete
  2. There are more players than just Sony or Attollo in the InGaAs domain... The question is what's the underlying technology? The interesting thing about Sony is that it's hybrid-bonded to the ROIC, which allows for different performance trade-offs, more aggressive scaling, and low cost.

    Is Attollo using hybrid bonding, micro-bump bonding or hetero-epitaxial growth?

    ReplyDelete
    Replies
    1. More players with 5 µm pixel size?

      Delete
    2. Good point - maybe not so many with 5 µm pitch, especially not commercially available today. The Sony paper has a 2012 reference on 5 µm InGaAs by Teledyne Judson Technologies. NIT, at 7 µm, isn't far off either. I don't know the latest technology node of microbumps, but there are papers on 5 µm microbumps out there that date back to the early 2010s. In terms of hetero-epitaxy there are a few papers on small pixels as well - the latest by IMEC at SPIE in April, "Image sensors for low cost infrared imaging and 3D sensing".

      Delete
  3. Are these parts export/ITAR controlled? If so, you now have one supplier on each side of the world with such 5 µm VGA imagers.

    ReplyDelete
  4. With modern 3D packaging technologies available, going down to 5 µm is not a big challenge. The main challenge is readout noise: going from a 15 µm pitch to a 5 µm pitch, the signal is reduced almost 10X. Even with the most advanced pixel frontend and a very small integration capacitor, it is difficult to get the readout noise below 20 e-. If we follow the image sensor scaling-down rule, a 5 µm pixel should have 1-2 e- readout noise, which is very difficult to reach. Besides, the small pitch requires a more advanced CMOS process, where the low-frequency noise is even worse.
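The scaling argument in this comment is worth spelling out: signal scales with pixel area, so shrinking the pitch 3X cuts the signal 9X, and read noise would have to shrink by the same factor to preserve per-pixel SNR. A rough sketch with the comment's own numbers (illustrative only):

```python
# Rough sketch of the pixel-scaling arithmetic in the comment above
# (the 15 um, 5 um, and 20 e- figures come from the comment; the rest
# assumes equal fill factor and QE at both pitches).

old_pitch_um = 15.0
new_pitch_um = 5.0

# Photon collection scales with pixel area, so signal per pixel drops by:
area_ratio = (old_pitch_um / new_pitch_um) ** 2
print(area_ratio)  # 9.0 -- the "almost 10X" signal reduction

# Keeping the same per-pixel SNR would require read noise to shrink by
# the same factor; 20 e- at the old pitch corresponds to roughly:
equivalent_noise_e = 20.0 / area_ratio
print(round(equivalent_noise_e, 1))  # 2.2 -- near the 1-2 e- scaling target
```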

    ReplyDelete
    Replies
    1. As stated in a comment before (below the Sony SWIR announcement post), in my opinion image sensor and camera manufacturers have a "wrong" bias towards image quality versus arguments like integratability or usability ("wrong" meaning "it costs them business"). In many applications image quality is good enough even if it is a magnitude worse, because some applications can scale illumination up and exposure time down. Some image sensor manufacturers lose applications because they overestimate the importance of image quality and underestimate the importance of usability - and of having a datasheet from which you can estimate the effort and risk of integration in a short time.

      In my opinion it's not so much the 5 µm pitch that makes the Sony SWIR sensor attractive but the Pregius-style CMOS ROIC. This has a huge impact on the effort and risk estimates people like myself give before projects get started, and those estimates are the basis for deciding to go for it or to go a different way. I don't even look deeply at image quality figures for such first estimates, you know? I know the image quality will most likely be good enough.

      Also, the proximity electronics has a huge impact on image quality (a stable supply, for example). Image sensor suppliers tend to focus on the pixel frontend and leave the electronics around the sensor out of focus, as something trivial. In my past 20 years, the cause of an image quality problem was in almost all cases outside the pixel/ADC level.

      Delete
    2. You are right about the proximity-electronics problem and the high risk of committing to a particular sensor. But the infrared sensor and camera industries are far behind modern micro-electronics :). New players such as Sony, and others from the CMOS side, will shortly provide all the integrated functions needed to ease sensor integration.
      From that point on, image quality and readout noise will come into account and the comparison will be even more straightforward, since people can compare immediately. Before, as you said, image quality depended on the board electronics; now all these problems will go away.

      Delete
    3. One relatively small thing that would help a lot is a reference design. Most or all image sensor companies offer eval boards. Compared to creating the image sensor, the eval board is relatively easy to design; most do it anyhow or outsource it to a third party. If image sensor companies offered components of an eval board as a reference design for, let's say, $10,000 - including FPGA IP cores to drive the sensor and/or the BOM of components and/or schematics - this would lower the barrier to integrating the sensor. Even FPGA test bench components would help. There are not only camera manufacturers; there are companies that integrate sensors as components of machines, microscopes, etc. Such companies (that do not have camera design as a core business) tend to have small groups designing cameras. Being able to buy part of the project (you can put the cost into the project budget) increases the probability of success. It is by far easier to get $10k or $20k into a project budget than to get one month of an electronics designer's time when you have to admit the risk that in the end it takes three months.

      A company designing an industrial camera does something easy compared to the company designing the image sensor. But take an InGaAs sensor (just order-of-magnitude numbers...): the image sensor company spends, let's say, $1,500 to produce the sensor. It adds margin to cover its costs and sells the sensor for $5,000. The camera manufacturer adds power supply, housing, and communication and sells it for $15,000. This gets integrated into a machine with optics and illumination, and the component is then sold to the end customer for $50,000. Or - in most cases - not sold, because it is too expensive. Isn't this annoying to image sensor companies? All the complexity and technical beauty is in the sensor; most of the price of a camera is in the easy part around it.

      If you sold a reference design, a machine builder with an "embedded vision" team would use the sensor with higher probability, because the effort and risk of integration would be more calculable. If you can take a tested FPGA IP core, you don't have to fix the type of bugs that cost you months. Also, when companies offer 'image sensor families', the big advantage is not that you can reuse the same PCB but that you can reuse firmware components. A PCB is isolated in one electronic component, whereas firmware and software are reused a lot. Using similar concepts - e.g. in the LVDS outputs, how data is framed and distributed over channels - makes it a lot easier to use the sensor, because you can reuse or only slightly modify base FPGA building blocks instead of starting over each time.

      Delete

All comments are moderated to avoid spam and personal attacks.