Monday, October 31, 2022

dToF Sensor with In-pixel Processing

In a recent preprint (https://arxiv.org/pdf/2209.11772.pdf), Gyongy et al. describe a new 64×32-pixel SPAD-based direct time-of-flight sensor with in-pixel histogramming and processing capability.

Abstract
3D flash LIDAR is an alternative to the traditional scanning LIDAR systems, promising precise depth imaging in a compact form factor, and free of moving parts, for applications such as self-driving cars, robotics and augmented reality (AR). Typically implemented using single-photon, direct time-of-flight (dToF) receivers in image sensor format, the operation of the devices can be hindered by the large number of photon events needing to be processed and compressed in outdoor scenarios, limiting frame rates and scalability to larger arrays. We here present a 64 × 32 pixel (256 × 128 SPAD) dToF imager that overcomes these limitations by using pixels with embedded histogramming, which lock onto and track the return signal. This reduces the size of output data frames considerably, enabling maximum frame rates in the 10 kFPS range or 100 kFPS for direct depth readings. The sensor offers selective readout of pixels detecting surfaces, or those sensing motion, leading to reduced power consumption and off-chip processing requirements. We demonstrate the application of the sensor in mid-range LIDAR.
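
To make the principle concrete, here is a minimal sketch (mine, not code from the paper) of what an in-pixel histogramming dToF pixel effectively computes: photon arrival times are binned into a histogram, the peak bin is taken as the laser return, and depth follows from d = c·t/2. The bin width, bin count, and simulated photon data below are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0          # speed of light (m/s)
BIN_WIDTH_S = 250e-12      # hypothetical in-pixel histogram bin width (250 ps)
N_BINS = 64                # hypothetical number of histogram bins

def depth_from_timestamps(timestamps_s):
    """Histogram photon arrival times and convert the peak bin to a depth in metres."""
    edges = np.arange(N_BINS + 1) * BIN_WIDTH_S
    hist, _ = np.histogram(timestamps_s, bins=edges)
    peak_bin = np.argmax(hist)                    # strongest return wins
    t_flight = (peak_bin + 0.5) * BIN_WIDTH_S     # bin-centre round-trip time
    return 0.5 * C * t_flight                     # halve for the round trip

# Example: uniform background photons plus a return from a ~1.5 m target (~10 ns round trip)
rng = np.random.default_rng(0)
background = rng.uniform(0, N_BINS * BIN_WIDTH_S, 200)
laser_return = rng.normal(10.1e-9, 0.1e-9, 100)
print(f"estimated depth: {depth_from_timestamps(np.concatenate([background, laser_return])):.2f} m")
```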

Saturday, October 29, 2022

TechInsights Webinar on Hybrid Bonding Technologies Nov 15-16

https://www.techinsights.com/webinar/hybrid-bonding-technology

Hybrid bonding technology is rapidly becoming a standard approach in chipmaking due to its ability to increase connection densities.
This webinar will:
  • Examine different hybrid bonding approaches implemented in recent devices
  • Discuss key players currently using this technology
  • Look to the future of hybrid bonding, discussing potential wins – and pitfalls – to come.
This presentation compiles content from TechInsights’ subject matter experts in Memory, Image Sensor, and Logic, and from Engineers specializing in a variety of reverse engineering techniques. Many of these experts will be on hand for the live Q&A session following the presentation.

 
A preview of the topics that will be discussed:

Advanced Logic

  • Chip-on-Wafer (CoW) hybrid bonding technology was first seen in the AMD Ryzen 7.
  • Stacking memory directly on the processor greatly increases the available cache memory.
  • This is a milestone for system-technology co-optimization (heterogeneous 3D scaling) as described in the International Roadmap for Devices and Systems (IRDS) More Moore roadmap.
 
Image Sensors

  • Wafer-to-Wafer (W2W) hybrid bonding has been seen in Sony devices since 2016.
  • Bond pitches as small as 2.2 µm are common, and the trend points to pitches as small as 1.4 µm.
  • Direct bond interconnect will ultimately enable digital pixels with in-pixel ADCs and stacking of three or more wafers.
 
Memory

  • Hybrid bonding is often used in High Bandwidth Memory (HBM) and 3D Xtacking applications.
  • Hybrid bonding will be one of the most important enablers of high-density memory.
  • Further scaling, greater cost effectiveness, fewer defects, and solutions to thermal issues are still required.
 

 

Friday, October 28, 2022

Samsung announces 200MP ISOCELL HPX

From: https://r2.community.samsung.com/t5/Galaxy-S/Samsung-announces-200MP-ISCOCELL-HPX-sensor/td-p/12437363

Samsung has announced the ISOCELL HPX, a new 200MP sensor, in China. This follows the June announcement of the ISOCELL HP3 200MP sensor. The ISOCELL HPX has a 0.56-micron pixel size, which can reduce the camera module area by 20%, making the smartphone body thinner and smaller.

Samsung also employed Advanced DTI (Deep Trench Isolation) technology, which not only separates each pixel individually but also increases sensitivity to capture crisp and vivid images. In addition, the Super QPD autofocus solution gives the ISOCELL HPX ultra-fast and ultra-precise autofocus.



Tetra-Pixel technology in ISOCELL HPX sensor

Additionally, the Tetra-pixel (16-pixels-in-one) technology used in this new sensor gives a positive shooting experience in low light. With the help of this technology, the ISOCELL HPX automatically switches between three different lighting modes depending on the available light: in a well-lit environment, the pixel size is maintained at 0.56 microns (μm), rendering 200 million pixels; in a low-light environment, pixels are combined into 1.12-micron (μm) pixels, rendering 50 million pixels; and in very low light, 16 pixels are combined into 2.24-micron (μm) pixels, rendering 12.5 million pixels.
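
As a quick back-of-the-envelope check (my arithmetic, not Samsung's material), the three modes follow directly from 1×, 2×2 and 4×4 binning of the 0.56 µm pixels:

```python
# Effective pixel pitch and resolution for each binning mode of a 0.56 um, 200 MP array.
base_pitch_um = 0.56
base_mp = 200

for binning, label in [(1, "full resolution"), (2, "2x2 binning"), (4, "4x4 binning")]:
    pitch = base_pitch_um * binning        # pitch grows linearly per axis
    mp = base_mp / (binning ** 2)          # pixel count drops by the binning factor squared
    print(f"{label:15s}: {pitch:.2f} um pixels, {mp:g} MP")
# full resolution: 0.56 um pixels, 200 MP
# 2x2 binning    : 1.12 um pixels, 50 MP
# 4x4 binning    : 2.24 um pixels, 12.5 MP
```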

According to Samsung, this technology enables the ISOCELL HPX to deliver a positive shooting experience in low light and to reproduce images as sharply as possible, even when the light source is constrained.

The ISOCELL HPX supports seamless dual HDR shooting in 4K and FHD modes and can capture 8K video at 30 fps. Depending on the shooting environment, Staggered HDR, according to Samsung, captures the shadows and bright lights in a scene at three different exposures: low, medium, and high. It then combines all three exposures to produce HDR images and videos of the highest quality.
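
For readers unfamiliar with the idea, below is a highly simplified exposure-fusion sketch of what a staggered-HDR pipeline does conceptually. It is not Samsung's algorithm; the function name, exposure ratios, and weighting scheme are illustrative assumptions.

```python
import numpy as np

def merge_staggered_hdr(short, medium, long_, rel_exposures=(1.0, 2.0, 4.0)):
    """Merge three exposures (float arrays scaled 0..1) into a linear HDR estimate."""
    ldr_frames = (short, medium, long_)
    # scale each frame to a common radiance estimate (value / relative exposure time)
    radiance = [f / t for f, t in zip(ldr_frames, rel_exposures)]
    # hat weighting: trust pixels near mid-range, distrust clipped or very dark pixels
    weights = [np.clip(1.0 - np.abs(f - 0.5) * 2.0, 1e-3, None) for f in ldr_frames]
    return sum(w * r for w, r in zip(weights, radiance)) / sum(weights)

# toy example: the second pixel clips in the medium and long exposures,
# so the merged value is dominated by the short (low) exposure
short = np.array([0.2, 0.9])
medium = np.array([0.4, 1.0])
long_ = np.array([0.8, 1.0])
print(merge_staggered_hdr(short, medium, long_))
```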

Additionally, it enables the sensor to render the image at over 4 trillion colours (14-bit colour depth), which is 64 times more than Samsung’s forerunner’s 68 billion colours (12-bit colour depth).
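
The "64 times" figure is simple bit arithmetic (my check, not Samsung's): 14 bits per colour channel over three channels versus 12 bits per channel.

```python
colours_14bit = 2 ** (14 * 3)   # ~4.4 trillion distinct RGB values
colours_12bit = 2 ** (12 * 3)   # ~68.7 billion distinct RGB values
print(f"{colours_14bit:.2e} vs {colours_12bit:.2e}, ratio = {colours_14bit // colours_12bit}x")
# ratio is 2**(3*(14-12)) = 64, matching the claim above
```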

There’s no official wording from the firm regarding the availability. We should know in the coming weeks.

Another source covering this news: https://www.devicespecifications.com/en/news/17401435


Wednesday, October 26, 2022

VISION Stuttgart Videos

Videos from the recent machine vision trade fair VISION Stuttgart are now available online.

Opening, Camera Technology, Robot Vision, Software & Deep Learning, Optics and Illumination

3D, Hyperspectral imaging, Vision Processing, Camera Technology, Software & Deep Learning, Standards

Hyperspectral imaging, Camera Technology, Software and Deep Learning, Vision Processing, Optics and Illumination





Monday, October 24, 2022

Atomos announces 8K video sensor

Source: https://announcements.atomos.com/4207684

Atomos completes development of world class 8K video sensor and is exploring commercialisation
[Oct 5, 2022]

Highlights:
Atomos announces that it has completed development of a world class 8K video sensor to allow video cameras to record in 8K ultra high resolution

8K Ultra HD televisions are already in the market from Samsung, Sony and LG, but 8K content has been lagging

The Company is exploring opportunities to commercialise its unique IP and is in discussion with several camera makers

Atomos Limited (‘ASX:AMS’, ‘Atomos’ or the ‘Company’) is pleased to announce that it has completed development of a world class 8K video sensor which allows cameras to record in 8K ultra high definition.

Atomos acquired the intellectual property rights and technical team from broadcast equipment firm Grass Valley five years ago to develop a leading-edge 8K video sensor.

8K video has four times the resolution of 4K video and allows video creators much greater flexibility when zooming in or cropping their shots during editing, as the resulting shot maintains sharp resolution.
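
The "four times the resolution" figure follows from the standard UHD pixel counts (a quick check of my own, not part of the Atomos announcement):

```python
pixels_8k = 7680 * 4320      # 8K UHD, ~33.2 MP
pixels_4k = 3840 * 2160      # 4K UHD, ~8.3 MP
print(pixels_8k / pixels_4k)  # 4.0 -> 8K captures four times as many pixels as 4K
```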

There are several 8K televisions already in the market from Samsung, Sony and LG, and 8K gaming consoles are expected soon. 8K content, however, has been lagging because, outside of big camera makers such as Sony, owning 8K sensor technology is extremely rare.

Development of Atomos’ 8K sensor is now complete. The Company is actively exploring opportunities for commercialisation and is in discussion with several camera makers who are showing great interest.

Friday, October 21, 2022

IC Insights article on CMOS Image Sensors Market

 https://www.icinsights.com/news/bulletins/CMOS-Image-Sensors-Stall-In-Perfect-Storm-Of-2022/

CMOS Image Sensors Stall in ‘Perfect Storm’ of 2022


For most of the last two decades strong growth in CMOS image sensors pushed this product category to the top of the optoelectronics market, in terms of sales volume, generating over 40% of total opto-semiconductor annual revenues. In 2022, however, the CMOS image sensor market category is on track to suffer its first decline in 13 years with sales expected to fall 7% to $18.6 billion and unit shipments projected to drop 11% to 6.1 billion worldwide, according to IC Insights’ August 3Q Update of The McClean Report service (Figure 1).

The projected 2022 decline in CMOS image sensors comes after two years of meager sales growth in 2020 (+4%) and 2021 (+5%).  This year’s sales drop reflects overall weakness in consumer smartphones and portable computers with digital cameras for video conferencing following an upsurge in demand for Internet connections and online conferencing capabilities during the Covid-19 virus pandemic.  The 3Q Update forecast shows a modest recovery in CMOS image sensors next year with market revenues growing 4% to $19.3 billion and then rising 13% in 2024 to reach a new record high of $21.7 billion.

In addition to weak demand in mainstream consumer camera cellphones and portable computers, CMOS image sensors have been negatively impacted by deteriorating global economic conditions resulting from high inflation and spiking energy costs caused by the Russian war in Ukraine as well as U.S. trade bans on China, recent Covid-19 virus lockdowns in Chinese manufacturing centers, and slowing growth in the number of cameras being packed inside of new smartphones.  Some high-end smartphone models contain five or more cameras, but the average in most handsets has stayed at three (one on the front, facing the user for “selfie” photos and two main cameras on the backside of phones).  IC Insights’ 3Q Update Report says some managers in China have described the image sensor market conditions as a “perfect storm,” combining a slowdown in mainstream mid-range smartphone shipments and an unanticipated pause in the increase of embedded cameras being designed in new handsets.

CMOS image sensor market leader Sony—which accounted for about 43% of CMOS image sensor sales worldwide in 2021—reported a 12.4% sequential decline in image sensor dollar-volume revenues (-2% in Japanese yen) during the company’s fiscal 1Q23 quarter, ended in June 2022.  In the first half of calendar 2022, Sony struggled to match image-resolution requirements for camera phones and its CMOS image sensor sales to leading Chinese system manufacturers were lowered by U.S. trade bans.  Sony still believes excess inventories of phones and image sensors will be reduced by early 2023 and market conditions will “normalize” in the second half of its current fiscal year (ending next March).

Nearly two-thirds of CMOS image sensors are used in cellphones, and that share is expected to fall to about 45% by 2026, according to The McClean Report’s 3Q Update.  A slow-but-steady recovery in CMOS image sensors is forecast to be driven by a new upgrade buying cycle of smartphones and more embedded cameras being added in other systems, especially for automotive automation capabilities, medical applications, and intelligent security networks.  The 3Q Update shows CMOS image sensor sales rising at a CAGR of 6.0% between 2021 and 2026 to reach $26.9 billion in the final year of the forecast.  CMOS image sensor shipments are forecast to grow at a CAGR of 6.9% between 2021 and 2026, reaching 9.6 billion units.
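
As a rough cross-check of the forecast (my arithmetic, not IC Insights' model), compounding the stated CAGRs from the 2021 base values implied by the 2022 declines lands close to the quoted 2026 figures:

```python
# implied 2021 bases from the 2022 figures and their year-on-year declines
revenue_2021 = 18.6 / (1 - 0.07)     # ~$20.0B
units_2021 = 6.1 / (1 - 0.11)        # ~6.9 billion units

revenue_2026 = revenue_2021 * 1.060 ** 5   # 6.0% CAGR over five years
units_2026 = units_2021 * 1.069 ** 5       # 6.9% CAGR over five years
print(f"2026 revenue ~${revenue_2026:.1f}B (article: $26.9B)")
print(f"2026 units   ~{units_2026:.1f}B  (article: 9.6B)")
```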

Monday, October 10, 2022

Videos du jour 2022-09-28: Teledyne, amsOSRAM, IEEE Sensors



Teledyne e2v's Topaz series of industrial CMOS sensors includes 2MP (1920 x 1080) and 1.5MP (1920 x 800) resolution devices. The sensors use state-of-the-art low-noise, global-shutter pixel technology to offer powerful solutions and enable compact designs for many applications.





Optimom™ 2M is the first in a range of MIPI CSI-2 optical modules. Powered by Teledyne e2v’s proprietary image sensor technology, Optimom has been thoughtfully designed to ensure there is minimum development effort required for vision-based embedded systems for robotics, logistics, drones, or laboratory equipment. Find out more https://imaging.teledyne-e2v.com/optimom




Time-of-Flight (ToF) sensors from ams OSRAM enable highly accurate distance measurement and 3D mapping and imaging. They are based on a proprietary SPAD (Single-Photon Avalanche Diode) pixel design and time-to-digital converters (TDCs) with an extremely narrow pulse width. They measure, in real time, the direct time-of-flight (dToF) of a 940nm VCSEL (laser) emitter's infrared light reflected from an object. Accurate distance measurements are used in many applications, e.g. presence detection, obstacle avoidance, and ranging.






Title: Processing and Characterisation of an Ultra-Thin Image Sensor Chip in Flexible Foil System
Author: Shuo Wang, Jan Dirk Schulze Spüntrup, Björn Albrecht, Christine Harendt, Joachim Burghartz
Affiliation: Institut für Mikroelektronik Stuttgart IMS CHIPS, Germany

Abstract: Unlike most image sensors, which are planar and inflexible, in this work an ultra-thin image sensor is realized as a Hybrid System in Foil (HySiF) using Chip-Film Patch technology, a concept for high-performance, ultra-thin flexible electronics. In order to characterize this image sensor embedded in foil, an adapter board for the Advantest 93000 SoC test system was developed. This paper demonstrates the production process of the HySiF and its behavior and performance. In addition, the applications and future work of this bendable image-sensor-in-foil system are discussed.


Friday, October 07, 2022

ESA-ESTEC Space & Scientific CMOS Image Sensors Workshop 2022

Registration and other information: https://atpi.eventsair.com/workshop-on-cmos-image-sensors/registration-page/Site/Register

CNES, ESA, AIRBUS DEFENCE & SPACE, THALES ALENIA SPACE, and SODERN are pleased to invite you to submit an abstract to the 7th “Space & Scientific CMOS Image Sensors” workshop, to be held at ESA-ESTEC on November 22nd and 23rd, 2022, within the framework of the Optics and Optoelectronics COMET (Communities of Experts).

The aim of this workshop is to focus on CMOS image sensors for scientific and space applications.

Although this workshop is organized by members of the space community, it is widely open to other professional imaging applications, such as machine vision, medical, Advanced Driver Assistance Systems (ADAS), and broadcast (UHDTV), which drive the development of new pixel and sensor architectures for high-end applications.

Furthermore, we would like to invite Laboratories and Research Centres which develop Custom CMOS image sensors with advanced smart design on-chip to join this workshop.

Topics

Abstracts shall preferably address one or more of the following topics:

  • Pixel design (low lag, linearity, FWC, MTF optimization, high quantum efficiency, large pitch pixels)
  • Electrical design (low noise amplifiers, shutter, CDS, high speed architectures, TDI, HDR)
  • On-chip ADC or TDC (in pixel, column, …)
  • On-chip processing (smart sensors, multiple gains, summation, corrections)
  • Electron multiplication, avalanche photodiodes
  • Photon-counting, quanta image sensors
  • Time resolving detectors (gated, time-correlated single-photon counting)
  • Hyperspectral architectures
  • Materials (thin film, optical layers, dopant, high-resistivity, amorphous Si)
  • Processes (backside thinning, hybridization, 3D stacking, anti-reflection coating)
  • Optical design (micro-lenses, trench isolation, filters)
  • Large size devices (stitching, butting)
  • CMOS image sensors with recent space heritage (in-flight performance)
  • High speed interfaces
  • Focal plane architectures

Tutorial Topics

Event-based sensors, SPADs

Industry exhibition

There are a limited number of small stands available for industry exhibitors. If you are interested in exhibiting at the Workshop, please contact the organisers.

Abstract submission

Please send a short abstract (one A4 page maximum, in Word or PDF format) giving the title, the authors' names and affiliations, and the subject of your talk to the organising committee (e-mail addresses are given hereafter).

Workshop format & official language

Presentations at the workshop will be oral. The official language for the workshop is English.

Slide submission

After abstract acceptance notification, the author(s) will be requested to prepare their presentation in PDF or PowerPoint format, to present it at the workshop, and to provide a copy to the organising committee with authorization to make it available to all attendees and on-line for the CCT members.

Calendar

13th September 2022 - Deadline for abstract submission

4th October 2022 - Author notification & preliminary programme

8th November 2022 - Final programme

22nd-23rd November 2022 - Workshop

Organising committee

 Alex MATERNE  CNES  alex.materne@cnes.fr   +33 5.61.28.15.44
 Nick NELMS  ESA  nick.nelms@esa.int   +31 71 565 8110
 Kyriaki MINOGLOU  ESA  kyriaki.minoglou@esa.int   +31 71 565 3797
 Serena RIZZOLO  Airbus Defence & Space  Serena.rizzolo@airbus.com   +33 5.62.19.62.77
 Stéphane DEMIGUEL  Thalès Alenia Space  stephane.demiguel@thalesaleniaspace.com   +33 4.92.92.61.89
 Aurelien VRIET  SODERN  aurelien.vriet@sodern.fr   +33 1.45.95.70.00

Wednesday, October 05, 2022

Calumino raises $10.3m for AI-based thermal sensing

From Geospatial World: https://www.geospatialworld.net/news/calumino-10-3mn-funding-ai-thermal-sensing-platform/

Calumino, the developer and manufacturer of a proprietary next-generation thermal sensor technology and AI, today announced its $10.3M Series A funding. The funding round is led by Celesta Capital and Taronga Ventures, with additional participation from Egis Technology and others. Calumino’s innovation offers the first-ever intelligent sensing platform, an aggregator of new and valuable data points on human presence, activity, hazards, and the environment.


As the world’s first thermal sensor to combine A.I. with high performance image sensing, privacy protection, and affordability, Calumino’s platform enables new benefits for a broad range of applications. This includes smart building management, pest control, safety and security, healthcare, and more. The Calumino thermal sensor has been natively designed with a sufficiently low resolution to protect an individual’s privacy, which in turn fills the current market gap between intrusive IP cameras and low performance motion sensors.


Sensing temperatures rather than light, the Calumino thermal sensor maps environments, assets, and individuals and has the ability to detect human presence, activity, and posture. It can also differentiate humans from animals and detect hazardous hotspots, fires, water leaks, and other anomalies. This unique data is essential for saving energy, increasing business operation efficiencies, increasing security and safety, improving life quality, and saving lives.


The Series A funding follows the successful commercialization of Calumino’s technology in the areas of commercial building management and pest control. Most recently, the company has entered the Japanese market with a Mitsubishi Electric subsidiary as a strategic partner, launching its innovative pest control product “Pescle” based upon the Calumino thermal sensor + AI.


“We are incredibly excited about this partnership and plan to roll this product out globally with our partners,” said Marek Steffanson, Founder and CEO of Calumino. “No other technology can differentiate between humans and rodents reliably, in darkness, affordably, intelligently, and with very low data bandwidth – but this application is just the beginning. Our technology is creating an entirely new space in the market and we are incredibly grateful to our investors for their support as we continue to scale production and enable the next generation of intelligent sensing to solve important problems.”


With its Series A proceeds, Calumino plans to expand existing applications and create new use cases. The team also plans to further invest in research and development, as well as expand its global team including new offices in Europe, Taiwan, Japan, and the United States.


“Calumino’s unique technology is helping to drive the proliferation of IoT – the intelligence of things – and enabling for the first time intelligent thermal imaging that is cost-effective, privacy-protecting, and scalable to mass markets,” said Nicholas Brathwaite, Founding Managing Partner of Celesta Capital. “Celesta is excited to offer our financial and intellectual capital support to help Calumino pursue their bold ambitions in becoming the ultimate IoT technology.”


“Calumino’s affordable and intelligent technology is changing the standard of how we live, work and play in real assets. The unique data and insights that Calumino is able to provide will enable asset owners to create safe, secure, and healthy environments using market-leading technology”, said Sven Sylvester, Investment Director at Taronga Ventures.

Monday, October 03, 2022

Prophesee closes EUR 50 million Series C

Prophesee closes €50M Series C round with new investment from Prosperity7 to drive commercialization of revolutionary neuromorphic vision technology;

Becomes EU’s most well-funded fabless semiconductor startup

Link: https://www.prophesee.ai/2022/09/22/prophesee-closes-50million-fundraising-round/

PARIS, September 22, 2022 – Prophesee, the inventor of the world’s most advanced neuromorphic vision systems, today announced the completion of its Series C round of funding with the addition of a new investment from Prosperity7 ventures. The round now totals €50m, including backing from initial Series C investors Sinovation Ventures and Xiaomi. They join an already strong group of international investors from North America, Europe and Japan that includes Intel Capital, Robert Bosch Venture Capital, 360 Capital, iBionext, and the European Investment Bank.

With the investment round, Prophesee becomes the EU’s most well-funded fabless semiconductor startup, having raised a total of €127M since its founding in 2014.

Prosperity7 Ventures, the diversified global growth fund of Aramco Ventures (a subsidiary of Saudi Aramco), is on a constant search for transformative technologies and innovative business models. Its mission is to invest in disruptive technologies with the potential to create next-generation technology leaders and bring prosperity on a vast scale. It currently has $1B under management and holds diversified investments across various sectors, including deep tech and bio-science companies.

 “Gaining the support of such a substantial investor as Prosperity7 adds another globally-focused backer that has a long-term vision and understanding of the requirements to achieve success with a deep tech semiconductor investment. Their support is a testament to the progress achieved and the potential that lies ahead for Prophesee. We appreciate the rigor which they, and all our investors, have used in evaluating our technology and business model and are confident their commitment to a long-term relationship will be mutually beneficial,” said Luca Verre, co-founder and CEO of Prophesee.

The round builds further momentum for Prophesee in accelerating the development and commercialization of its next-generation hardware and software products, as well as positioning it to address new and emerging market opportunities and further scale the company. The support from its investors strengthens its ability to develop business opportunities across key ecosystems in semiconductors, industrial, robotics, IoT and mobile devices.

Event cameras address the challenges of applying computer vision in innovative ways

 “Prophesee is leading the development of a very unique solution that has the potential to revolutionize and transform the way motion is captured and processed.” noted Aysar Tayeb, the Executive Managing Director at Prosperity7. “The company has established itself as a clear leader in applying neuromorphic methods to computer vision with its revolutionary event-based Metavision® sensing and processing approach. With its fundamentally differentiated AI-driven sensor solution, its demonstrated track record with global leaders such as Sony, and its fast-growing ecosystem of more than 5,000 developers using its technology, we believe Prophesee is well-positioned to enable paradigm-shift innovation that brings new levels of safety, efficiency and sustainability to various market segments, including smartphones, automotive, AR/VR and industrial automation.” Aysar further emphasized “Prophesee and its unique team hit the criteria we are constantly searching for in startup companies with truly disruptive, life-changing technologies.”

Prophesee’s approach to enabling machines to see is a fundamental shift from traditional camera methods and aligns directly with the increasing need for more efficient ways to capture and process the dramatic increase in the volume of video input. By utilizing neuromorphic techniques to mimic the way the human brain and eye work, Prophesee’s event-based Metavision technology significantly reduces the amount of data needed to capture information. Among the benefits of the sensor and AI technology are ultra-low latency, robustness to challenging lighting conditions, energy efficiency, and low data rate. This makes it well-suited for a broad range of applications in industrial automation, IoT, and consumer electronics that require real-time video data analysis while operating under demanding power consumption, size and lighting constraints.
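
To illustrate why event-based sensing produces so little data, here is a toy model (my sketch, not Prophesee's implementation) of how an event camera behaves: each pixel emits an event only when its log intensity changes by more than a contrast threshold, instead of outputting every pixel of every frame.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Return (t, x, y, polarity) tuples for per-pixel log-intensity changes."""
    events = []
    ref = np.log1p(frames[0].astype(float))          # per-pixel reference level
    for t, frame in enumerate(frames[1:], start=1):
        logf = np.log1p(frame.astype(float))
        diff = logf - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = logf[y, x]                   # update reference after firing
        # pixels that did not change keep their reference and emit nothing
    return events

# toy scene: only one pixel brightens between two frames, so only one event is produced
frame0 = np.zeros((4, 4))
frame1 = frame0.copy()
frame1[2, 1] = 1.0
print(events_from_frames([frame0, frame1]))   # [(1, 1, 2, 1)]
```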

Prophesee has gained market traction with key partners around the world who are incorporating its technology into sophisticated vision systems for use cases in smartphones, AR/VR headsets, factory automation and maintenance, and science and health research. Its partnership with Sony has resulted in a next-generation HD vision sensor that combines Sony’s CMOS image sensor technology with Prophesee’s unique event-based Metavision® sensing technology. It has established commercial partnerships with leading machine vision suppliers such as Lucid, Framos, Imago and Century Arks, and its open-source model for accessing its software and development tools has enabled a fast-growing community of 5,000+ developers using the technology in new and innovative ways.