Wednesday, September 28, 2022

Arducam's New ToF Camera Module for Embedded Applications

- Real-time point cloud and depth map output
- Resolution: 240×180 @ 30fps on Raspberry Pi 4 / CM4
- Up to 4m measuring distance
- Onboard 940nm laser for both indoor and outdoor use; no external light source needed
- V4L2-based video kernel device
- C/C++/Python SDK for userland depth map output, with example source code
- ROS ready
- 38 × 38mm board size
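The depth-map-to-point-cloud step the module advertises can be sketched in a few lines. This is a minimal, self-contained illustration, not Arducam's SDK: the pinhole intrinsics `fx`, `fy`, `cx`, `cy` are made-up placeholders that would come from calibrating the real camera.

```python
import numpy as np

# Back-project a ToF depth map into a point cloud with a pinhole model.
# 240x180 matches the module's advertised frame size; the intrinsics
# below are placeholder values, not real calibration data.
W, H = 240, 180
fx = fy = 200.0          # placeholder focal lengths, in pixels
cx, cy = W / 2, H / 2    # placeholder principal point

def depth_to_point_cloud(depth_m):
    """depth_m: (H, W) array of metric depths. Returns (H*W, 3) XYZ points."""
    v, u = np.mgrid[0:H, 0:W]        # per-pixel row/column coordinates
    z = depth_m
    x = (u - cx) * z / fx            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Fake a flat wall 2 m away to exercise the function.
depth = np.full((H, W), 2.0)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)        # (43200, 3)
print(cloud[:, 2].max())  # 2.0 -- depths carried through unchanged
```

The same math applies whatever driver delivers the frame; only the intrinsics change per camera.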

Kickstarter link: https://www.kickstarter.com/projects/arducam/time-of-flight-tof-camera-for-raspberry-pi

Monday, September 26, 2022

Alpsentek "hybrid" vision sensor for HDR imaging

From the VISION 2022 exhibition in Stuttgart:

https://www.messe-stuttgart.de/vision/en/visitors/exhibitor-index#/408938/news/news.8e4a0ca4-5e68-418c-9f12-22b682433854

AlpsenTek® launches the ALPIX-Eiger™, a fusion vision sensor for high-end imaging 

AlpsenTek®, a leading developer of fusion vision sensors, announced the launch of the ALPIX-Eiger™ fusion vision sensor chip, designed specifically for high-end imaging applications. Built on its patented Hybrid Vision™ technology, the ALPIX-Eiger™ fuses image sensing and event sensing at the pixel level, enabling simultaneous output of both image and event streams.

The ALPIX-Eiger™ combines a patented chip architecture and pixel design with advanced 3D stacking and backside illumination (BSI). With a pixel size of just 1.89µm×1.89µm and a resolution of 8.0 megapixels, it has the smallest pixel and the highest resolution of any image sensor with event-sensing capability in the industry, making it well suited to small smart devices such as mobile phones and action cameras.

High performance

The ALPIX-Eiger™ retains the strengths of conventional image sensors, ensuring full image quality and rich detail, while also enabling event sensing through a patented in-pixel mixed-signal (digital-analog) processing design. Each pixel works independently to detect light changes. Microsecond response times, a high equivalent frame rate (around 5000fps), high dynamic range (110dB), and low data redundancy help the sensor capture more information and enhance image quality. Compared with previous event camera solutions, the event stream output by the ALPIX-Eiger™ carries color information, which aids color reconstruction of the image and achieves better quality photo and video capture.

Wide Applications

In practical applications, intelligent imaging devices equipped with ALPIX-Eiger™ chips can achieve high-end functions such as de-blurring, high frame rate, and super-resolution to facilitate the development of more visual applications. The HDR performance and instantaneous response of the ALPIX-Eiger™ also allow the device to obtain better imaging results at night in scenarios with extreme light and dark contrasts.


Other recent posts about Alpsentek:

https://image-sensors-world.blogspot.com/2022/06/alpsentek-vision-sensor-startup-raises.html

https://image-sensors-world.blogspot.com/2021/11/interview-with-ceo-of-ruisi-zhixin.html

http://image-sensors-world.blogspot.com/2021/10/chinese-startup-ruisizhixin-develops.html

Friday, September 23, 2022

ST's new ToF sensor uses a metalens

A recent EE Journal article suggests that the second-generation time-of-flight proximity sensor from STMicroelectronics uses metalenses designed by Metalenz, a Harvard spinoff.

Original article here: https://www.eejournal.com/article/time-of-flight-sensors-trilobites-and-tunable-optics-what-an-unlikely-combo/

Some excerpts below:

Time of Flight Sensors, Trilobites, and Tunable Optics – What an Unlikely Combo!

STMicroelectronics has added a new member to its line of VL53 FlightSense TOF (time of flight) distance/ranging sensors, but this new sensor takes a radical departure from the previous generation by replacing a conventional lensing system with metalenses, developed in conjunction with a startup company called Metalenz and based on technology originally developed in a Harvard University metamaterial lab. According to Metalenz, this is the first commercial product to incorporate its metalens technology.

The resulting TOF sensor can achieve more than double the range – as much as 4m in indoor settings – or it can operate at half the power consumption relative to ST’s previous generation of VL53 TOF sensors. The expanded range and reduced power consumption arise from a combination of a more efficient VCSEL driver circuit and the improved light-gathering ability of the metalens covering the SPAD array. ST’s announcement did not specify a minimum target distance for the VL53L8 sensor, but its predecessor, the VL53L5CX sensor, has a minimum ranging distance of 2cm and an apparent resolution of 1mm, with ±15mm accuracy in the 20-200mm range and 4-11% in the 201-4000mm range, depending on ambient lighting.

Like the earlier VL53L5CX sensor, the new VL53L8 sensor can determine distance to a target or multiple targets simultaneously using either 16 zones at 60Hz or 64 zones at 15Hz, as observed through a 43.5° x 43.5° square field of view. An integrated microcontroller in the TOF sensor delivers range-to-target information directly over an I2C or SPI serial interface and can generate interrupts to wake a host processor when each distance reading is made. Even with all it packs inside, this is a physically tiny sensor, measuring 6.4 x 3.0 x 1.75 mm. It’s supplied as a single, factory-aligned, reflowable component so you can drop it into a variety of products, even small portable ones.
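A quick sanity check on the figures quoted above: the two multizone modes trade spatial resolution for update rate while keeping the aggregate measurement rate constant, and the square field of view fixes the angular pitch of each zone.

```python
import math

# Check the VL53L8 multizone numbers: both modes deliver the same
# aggregate ranging rate, and the 43.5-degree square FoV sets the
# angular pitch per zone.
fov_deg = 43.5
modes = {"4x4 @ 60 Hz": (16, 60), "8x8 @ 15 Hz": (64, 15)}

for name, (zones, hz) in modes.items():
    per_side = math.isqrt(zones)          # 4 or 8 zones per side
    pitch = fov_deg / per_side            # degrees of FoV per zone
    print(f"{name}: {zones * hz} ranges/s, ~{pitch:.1f} deg per zone")
```

Both modes work out to 960 range measurements per second; the 64-zone mode simply slices the same field of view roughly twice as finely in each axis.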


A recent press-release from Metalenz confirms that they have indeed partnered with ST on the new VL53L8 proximity sensor.

Press release: https://www.metalenz.com/the-worlds-first-metasurfaces-have-arrived-on-the-market/

Boston, MA and Geneva, Switzerland – June 9, 2022 – Metalenz, the first company to commercialize meta-optics, and STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, today announce that ST’s currently released VL53L8 direct Time-of-Flight (dToF) sensor is the highly anticipated market debut of the meta-optics devices developed through their partnership, which was disclosed in June 2021.

Metalenz’s Harvard-born, meta-optics technology can replace existing complex and multi-element lenses and provide additional functionality with a single meta-optic embedded in time-of-flight (ToF) modules from ST, the leading company in supplying 3D sensing modules. The introduction of Metalenz technology in these modules brings performance, power, size, and cost advantages to a multitude of consumer, automotive, and industrial applications. This marks the first time metasurface technology is commercially available and being used in consumer devices.

Unlike traditional molded and curved lenses, Metalenz’s novel optics are completely planar. Planar metasurface optics are now being manufactured on silicon wafers alongside electronics in ST’s semiconductor front-end fabs for the first time. The meta-optics collect more light, provide multiple functions in a single layer, and enable new forms of sensing in smartphones and other devices, while taking up less space. Metalenz’s flat-lens technology replaces certain existing optics in ST’s FlightSense™ ToF modules, which serve applications such as smartphones, drones, robots, and vehicles. In these, ST has sold more than 1.7 billion units to date.

“More than a decade of foundational research has brought us to this point. Market deployment of our meta-optics makes this the first metasurface technology to become commercially available,” said Rob Devlin, co-founder and CEO of Metalenz. “ST’s technology, manufacturing expertise, and global reach allow us to impact millions of consumers. We have multiple wins that mark the first application of our platform technology and we are now designing entire systems around its unique functionality. Our meta-optics enable exciting new markets and new sensing capabilities in mobile form factors and at a competitive price.”


Thursday, September 22, 2022

CIS market to exceed $30B by 2027

Optics.org news: https://optics.org/news/13/9/18

13 Sep 2022

Yole Intelligence says that Sony is looking to consolidate its leading position as the market begins a new growth cycle.

Analysts at Yole Intelligence, the France-based market research company, are predicting that the market for CMOS image sensors is about to begin a new phase of growth - and will be worth more than $31 billion by 2027.

If correct, it would represent a compound annual growth rate (CAGR) of nearly 7 per cent, with the sector recovering from a slump caused by US sanctions on Chinese vendors.

That rate of growth is expected to be propelled primarily by applications in smart phones - the largest single end market for the devices - as well as emerging use cases in the automotive sector and security imaging.

Solid growth

Yole says that the market for CMOS image sensors grew nearly 25 per cent in 2019 - with buyers partly responsible for creating a bubble as they stockpiled devices in advance of those sanctions taking effect.

In 2020 the rate of growth slowed to just over 7 per cent, before the combination of sanctions and supply-chain effects resulting from the Covid-19 pandemic saw growth drop to just 2.8 per cent in 2021.

However, a recovery appears to have begun in the closing quarter of 2021, which is said to have been the best ever in terms of CMOS image sensor production. The market ended up being worth $21.3 billion that year, thinks Yole.

“A new growth cycle is now expected, supported by opportunities in mobile and other markets such as automotive and security imaging,” stated the firm. “In the coming years the [CMOS image sensor] industry growth will at least match the general semiconductor [industry] growth rate, reaching $31.4 billion by 2027 with a 6.7 per cent CAGR.”

According to the analyst team’s figures, mobile phone applications accounted for $13.4 billion in sales in 2021, almost two-thirds of the total market for CMOS image sensors. That sub-sector is expected to grow at a CAGR of more than 6 per cent through 2027, reaching close to $20 billion.
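The forecast figures are easy to cross-check. A small sketch verifying that the stated 2021 totals and CAGRs land on the 2027 numbers (the $20 billion mobile target is the approximate figure quoted above):

```python
# Cross-check Yole's forecast: does growing the 2021 totals at the
# stated CAGRs reach the 2027 figures?
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

total = cagr(21.3, 31.4, 6)    # whole CIS market, $B, 2021 -> 2027
mobile = cagr(13.4, 20.0, 6)   # mobile sub-sector, $B, ~$20B target
print(f"total market CAGR: {total:.1%}")   # ~6.7%, matching the quote
print(f"mobile CAGR: {mobile:.1%}")        # ~6.9%, i.e. "more than 6%"
```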

Yole analyst Florian Domengie observed: “The [CMOS image sensor] ecosystem is still dominated by historical leaders. Sony, Samsung, OmniVision, and STMicroelectronics are all strong players in mobile and consumer markets.”

Market shares

The analyst firm’s figures indicate that Sony claimed the leading market share of 39 per cent in 2021, putting it ahead of Samsung (22 per cent), OmniVision (13 per cent), and STMicroelectronics (6 per cent).

However, that Sony figure is historically low, and the Japanese technology giant is now making efforts to increase it significantly.

“As a consequence of Covid-19 and the US Huawei ban, Sony’s market share faltered in 2021,” Yole explains. “That allowed its competitors to raise their game and play technology catch up.

“However, in June 2022, Sony confirmed its ambition to reclaim its market share, aiming at 60 per cent by 2025. This announcement should reinforce production capacity and research and development [spending] in the coming years.”

In its latest quarterly results announcement, Sony said that its sales of image sensors had been impacted by a slump in the Chinese smart phone sector, but added that it was expecting growing adoption of CMOS sensors with larger die sizes and higher resolution.

"In addition, due to an easing supply and demand equilibrium for logic semiconductors, it has become possible to gradually increase the production of high-value-added image sensors, the production of which was previously restricted due to supply constraints," Sony also pointed out.

The Yole report suggests that the leading Chinese manufacturers - OmniVision, GalaxyCore, and SmartSens - have responded well to the Huawei sanctions, and are now outperforming the competition.

Although its corporate headquarters is in Silicon Valley, and it used to be listed on the Nasdaq, OmniVision was acquired by a consortium of Chinese private equity investors in 2016 - and has since been sold to Shanghai-based Will Semiconductor.

Yole says that of the other Chinese players GalaxyCore is strengthening its position, approaching $1 billion in annual revenues and ranking fifth in the market, just ahead of US-based Onsemi.

Next comes Korea’s SK Hynix with a 3 per cent market share, and SmartSens with 2 per cent. Japan’s Canon and US defense group Teledyne round out the top ten suppliers, with Hamamatsu and Panasonic earning similar shares in 2021.


Monday, September 19, 2022

Lumotive secures funding from Samsung Ventures

Original news article: https://www.autonomousvehicleinternational.com/news/sensors/lumotive-secures-funding-for-lcm-beam-steering-chips.html

Lumotive secures funding for LCM beam steering chips

Lumotive, which is developing its Light Control Metasurface (LCM) beam steering chips, says it has secured a round of strategic funding led by Samsung Ventures. The investment includes contributions from new, strategic investors such as Himax Technologies, as well as Bill Gates, Quan Funds and MetaVC Partners.

Lumotive is planning to use the funding to accelerate the development and customer delivery of optical semiconductor devices that enable the next generation of lidar sensors. More than two dozen companies are currently engaged with Lumotive to utilize the high-performance, small-form-factor LCM chips to develop next-generation systems for autonomy, automation and augmented reality (AR) markets.

“Our optical semiconductor solutions are making it possible for a number of markets to quickly and cost-effectively implement advanced sensing capabilities in industry-changing products,” said Dr Sam Heidari, CEO of Lumotive. “With the LCM technology, Lumotive is uniquely positioned to be able to address the broad range of requirements across consumer, automotive and industrial sectors. Samsung Ventures is well known for identifying companies with disruptive technologies. We are very excited to partner with them as we deliver scalable products enabling Lidar 2.0 across diverse market segments by addressing power, cost and size requirements of consumer products as well as the performance needs of automotive products.”

Friday, September 16, 2022

FLIR's Periodic Table of Machine Vision Sensors

FLIR has published an updated version of their "periodic table" of machine vision sensors: https://www.flir.com/discover/iis/machine-vision/sensor-periodic-table/

The full table features CCD and CMOS image sensors across various manufacturers including Sony, e2v, Onsemi, SHARP, Canon, CMOSIS, OmniVision, Gpixel and Aptina.

Updated for 2022 - The Teledyne machine vision sensor periodic table is a useful resource for system designers looking to quickly compare sensor specifications including resolution, pixel size, frame rates and optical formats. Now with more than 100 widely used machine vision sensors including third generation Sony Pregius, fourth generation Sony Pregius S, e2v, onsemi, OmniVision, CMOSIS, and GPixel, this periodic table also visually differentiates CCD, CMOS rolling and CMOS global shutter sensors.
 

With so many sensors to choose from, we understand that it could be tricky to keep track of them. This handy resource organizes currently available machine vision sensors in an easy-to-understand colour coded periodic table with an overview of important specifications. We suggest printing this free poster and pinning it up on your wall for easy reference.

Monday, September 12, 2022

OmniVision three-layer stacked sensor

From Businesswire: "OMNIVISION Announces World’s Smallest Global Shutter Image Sensor for AR/VR/MR and Metaverse".

OmniVision has announced the industry’s first and only three-layer stacked BSI global shutter (GS) image sensor. The OG0TB is the world’s smallest image sensor for eye and face tracking in AR/VR/MR and Metaverse consumer devices. With a package size of just 1.64mm x 1.64mm, it has a 2.2µm pixel in a 1/14.46-inch optical format (OF). The CMOS image sensor features 400×400 resolution and ultra-low power consumption, ideal for some of the smallest and lightest battery-powered wearables, such as eye goggles and glasses. Ultra-low power consumption is critical for these battery-powered devices, which can have 10 or more cameras per system. The OG0TB BSI GS image sensor consumes less than 7.2mW at 30 frames per second (fps).

SANTA CLARA, Calif.--(BUSINESS WIRE)--OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog, and touch & display technology, today announced the industry’s first and only three-layer stacked BSI global shutter (GS) image sensor. The OG0TB is the world’s smallest image sensor for eye and face tracking in AR/VR/MR and Metaverse consumer devices. With a package size of just 1.64mm x 1.64mm, it has a 2.2µm pixel in a 1/14.46-inch optical format (OF). The CMOS image sensor features 400x400 resolution and ultra-low power consumption, ideal for some of the smallest and lightest battery-powered wearables, such as eye goggles and glasses.

“OMNIVISION is leading the industry by developing the world’s first three-layer stacked global shutter pixel technology and implementing it in the smallest GS image sensor with uncompromising performance,” said David Shin, staff product marketing manager – IoT/Emerging at OMNIVISION. “We pack all of these features and functions into the world’s smallest ‘ready-to-go’ image sensor, which provides design flexibility to put the camera in the most ideal placement on some of the smallest and slimmest wearable devices.” Shin adds, “Ultra-low power consumption is critical for these battery-powered devices, which can have 10 or more cameras per system. Our OG0TB BSI GS image sensor consumes less than 7.2mW at 30 frames per second (fps).”
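Taking the quoted figures at face value, a quick back-of-envelope check of the sensor's power budget:

```python
# Back-of-envelope check on the OG0TB power claim: energy per frame,
# and the camera power budget for a headset with 10 such sensors
# (both figures come straight from the quote above).
power_mw = 7.2
fps = 30

energy_per_frame_mj = power_mw / fps    # mW / (frames/s) = mJ per frame
total_10_cams_mw = 10 * power_mw
print(f"{energy_per_frame_mj:.2f} mJ per frame")    # 0.24 mJ
print(f"{total_10_cams_mw:.0f} mW for 10 cameras")  # 72 mW
```

At roughly a quarter of a millijoule per frame, even a ten-camera system stays well under 100 mW of sensor power, which is what makes the battery-powered wearable claim plausible.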

The worldwide market for AR/VR headsets grew 92.1% year over year in 2021, with shipments reaching 11.2 million units, according to new data from the International Data Corporation (IDC) Worldwide Quarterly AR/VR Headset Tracker. New entrants as well as broader adoption from the commercial sector will propel the market further as headset shipments are forecast to grow 46.9% year over year in 2022. In fact, IDC expects this market to experience double-digit growth through 2026 as global shipments of AR/VR headsets surpass 50 million units by the end of the forecast, with a 35.1% compounded annual growth rate (CAGR).

OMNIVISION is supporting the growing market for AR/VR headsets by introducing new products such as the OG0TB GS image sensor, which features the company’s most advanced technology:

- It is built on OMNIVISION’s PureCel®Plus-S stacked-die technology.
- It features a three-layer stacked sensor with a 2.2µm pixel in a 1/14.46-inch OF to achieve 400x400 resolution.
- Nyxel® technology enables the best quantum efficiency (QE) at the 940nm NIR wavelength for sharp, accurate images of moving objects.
- The sensor’s high modulation transfer function (MTF) enables sharper images with greater contrast and more detail, which is especially important for enhancing decision-making processes in machine vision applications.
- The sensor supports a flexible interface, including MIPI with multi-drop, C-PHY, SPI, etc.

The OG0TB GS image sensor will be available for sampling in Q3 2022 and in mass production in 2H 2023.


PS: It is worth noting that Sony claimed the "world's first 3-layer stacked CIS" back in 2017 with its ISSCC paper "A 1/2.3inch 20Mpixel 3-layer stacked CMOS Image Sensor with DRAM" (DOI: 10.1109/ISSCC.2017.7870268). The three layers consisted of photodiodes, DRAM memory, and a mixed-signal ISP. But that was a rolling shutter sensor, whereas this one from OmniVision is a global shutter sensor.

PPS: Readers of the blog who know of any journal or conference publication about OmniVision's new design, please share it in the comments below!


Friday, September 09, 2022

STMicro and trinamiX collaboration on face authentication

https://www.yolegroup.com/industry-news/stmicroelectronics-and-trinamix-collaborate-on-behind-oled-face-authentication-solution-to-be-showcased-live-at-ifa-2022/

STMicroelectronics and trinamiX collaborate on behind-OLED face-authentication solution to be showcased live at IFA 2022

  • Companies will demonstrate full facial authentication solution for smartphone integration and for applications behind OLED screens
  • Solution combines high-performance near-infrared global-shutter image sensor from ST and sophisticated trinamiX algorithm
  • Certified for use in mobile payments according to IIFAA, Android™, and FIDO standards


STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, and trinamiX, a wholly owned subsidiary of BASF SE and pioneer of new biometric technologies, today announced their collaboration on a reference design for face authentication. The solution operates behind an OLED screen and at the security level required for mobile payments. A demonstration of this system will first be presented live at IFA 2022 in Berlin on September 2-6.

The joint development and reference design for smartphone OEMs is a full system implementation that integrates illumination, a camera module that combines ST’s global-shutter image sensor with enhanced near-infrared (NIR) sensitivity (VD56G3), and trinamiX’s patent-protected algorithms running on the processor. The system offers a contactless, fast, and reliable authentication method for integration into smartphones and other products requiring user authentication. The solution’s strength lies in a unique technology, which uses skin detection to verify a user’s liveness. In addition to verifying the user’s identity, it effectively differentiates between skin and other materials, to recognize fake presentations like photos, hyper-realistic masks, and deepfakes.

"The collaboration with ST provides us with very small, high-performance image sensors at a competitive price point. This is particularly important for our products in the consumer electronics sector," said Stefan Metz, Head of Smartphone Business Asia at trinamiX. “Furthermore, trinamiX Face Authentication can fully operate behind OLED while maintaining the highest security levels. If required, the high NIR sensitivity of ST’s image sensors supports the easy integration of our solution behind display.” According to Metz, smartphone manufacturers are thus offered a powerful, attractive package: “During the development of our smartphone reference design, we focused on particularly compact hardware sizes without compromising the performance.”

"ST’s advanced image sensors use the company’s process technologies that enable class-leading pixel size while offering both high sensitivity and low crosstalk, delivering significant improvements in performance, size, and system integration. The collaboration with trinamiX provides ST with additional opportunities to extend our support to technologies, use cases, and ecosystems addressing the thriving under-display market in Personal Electronics and beyond," said David Maucotel, Head of the Personal Electronics, Industrial and Mass Market Product Business Line at ST’s Imaging Sub-Group.

In 2021, trinamiX Face Authentication was approved for Android integration and certified according to the high biometric security requirements of Android Biometric Class 3, IIFAA Biometric Face Security Test Requirement, and FIDO Level C – the FIDO alliance’s soon-to-be top standard.

A demonstration of the joint system for face authentication will debut at IFA 2022, taking place in Berlin, Germany on September 2-6. Customer presentations as well as appointments during the fair can be requested at info@trinamiX.de.

Wednesday, September 07, 2022

Yole interview with OmniVision's marketing director

https://www.yolegroup.com/player-interviews/security-imaging-industry-omnivision-delivers-state-of-the-art-performance-products/

Security has become the largest CMOS image sensor market segment after mobile and computing devices. From 2021 to 2027, according to Yole Intelligence’s latest report, Imaging for Security 2022, revenue is expected to increase from $2.1 billion to $3.6 billion at a 9% Compound Annual Growth Rate (CAGR). 2020 and 2021 were exceptional years for the security CIS segment, and IP security cameras still provide a major growth opportunity. 

Security imaging is sustained by the growing need for security everywhere: in consumer, commercial, and infrastructure monitoring applications, driven by the increasing adoption of home Internet of Things (IoT) solutions, the demand for video analytics in retail and monitored buildings, the need for more touchless access control solutions, the development of public surveillance in cities and for critical infrastructure. The rise of video analytics, edge computing, and the development of AI vision processors enable a wider range of products.

Florian Domengie, Senior Analyst in the Imaging team at Yole Intelligence, had the opportunity to talk with Devang Patel, Marketing Director of the IoT/Emerging Segment at OMNIVISION, about the company's recent activities and current trends in the field of security imaging.

Florian Domengie (FD): Please introduce yourself and OMNIVISION to our readers.

Devang Patel (DP): I am OMNIVISION’s Marketing Director and have a long history in the semiconductor industry in various roles. My roles have focused on product management/planning, strategic marketing, and partnerships.
OMNIVISION is a global fabless semiconductor organization that develops advanced digital imaging, analog, and touch & display solutions for multiple applications and industries, including mobile phones; security and surveillance; automotive; computing; medical; and emerging applications. Its award-winning innovative technologies enable a smoother human/machine interface in many of today’s commercial devices.

FD: What new products has OMNIVISION released recently? Which applications are you targeting with these new products?
 

DP: From the security side, our company continues to be at the leading edge in providing discrete, energy-efficient power management solutions as well as the best interface protection products on the market. Indeed, thanks to the emergence of the Internet of Things (IoT), surveillance cameras are no longer limited to enterprise applications such as airports, train stations, banks, and office buildings. Instead, they have become an integral part of retail establishments, smart cities, and smart homes for the purpose of gathering and analyzing Big Data.
Here are a few examples of products developed by our experts:
- OMNIVISION’s OS03B10 CMOS image sensor brings high-quality digital images and high-definition (HD) video to security surveillance, IP, and HD analog cameras in a 3-megapixel (MP) 1/2.7-inch optical format (OF). It features a 2.5 micron (µm) pixel based on OMNIVISION’s OmniPixel®3-HS technology. The high-performance, cost-effective solution uses high-sensitivity frontside illumination (FSI) to detect objects better than the human eye for true-to-life color reproduction in bright and dark conditions.
- OMNIVISION’s OS02H10 is a 2.9µm, 1080p image sensor that provides a high-value option for adding the premium near-infrared (NIR), ultra-low light, and high dynamic range (HDR) performance of its Nyxel® and PureCel®Plus technologies to high-volume, mainstream surveillance cameras with AI functionality. It offers multiple HDR options for the best quality 1080p still and video captures of fast-moving objects at 60 frames per second (fps), plus an ultra-low power mode that consumes 97.7% less power than the normal mode to support long battery life.
- OMNIVISION’s OS04C10 is a 2.0µm-pixel, 4MP-resolution image sensor for both IoT and home security cameras. Paired with the designer’s selected platform, it can enable a system’s ultra-low power mode for battery-powered cameras with AI functionality. It provides a high 2688 x 1520 resolution with a 16:9 aspect ratio while adding the premium NIR, ultra-low light, and SNR1 performance of its Nyxel® and PureCel®Plus technologies, and offers multiple HDR options for the highest quality 4MP still and video captures of fast-moving objects at 60fps. Built on the PureCel®Plus pixel architecture for a superior low-noise design, it achieves an SNR1 that is 150% better than OMNIVISION’s prior-generation OV4689 4MP mainstream security sensor.


FD: There has been very significant growth in CMOS imaging products for the security market these last two years. How do you explain this evolution? What benefits do these bring to this market specifically?

DP: There are many factors. We see that the home security market is growing, including DIY battery-powered types of cameras. You can basically install them by yourself. The number of companies getting into this specific product line is growing.
The second factor is infrastructure. Lots of cities worldwide are adding artificial intelligence to their surveillance. For example, looking at intersections, train stations, and airports, we have seen that the city surveillance infrastructure needs growth supported by government initiatives.
The third factor is AI, which is the big thing that brings higher-resolution cameras into this market.
At Yole Intelligence, part of Yole Group, we have noticed increasing opportunities in all the security imaging market segments: consumer, commercial, and infrastructure.

FD: Which types of applications are becoming popular, and what will sustain growth in the security market in the years to come?

DP: I think commercial and infrastructure are steadier markets, and we expect them to continue to grow. If you look at the CAGR, infrastructure slowed down a little during the pandemic. On the consumer side, the need for the “smart home” has increased. We have seen a CAGR of about 20% for “smart home”, while the traditional consumer/commercial security CAGR is about 11%. As a result, we expect both the commercial and infrastructure security imaging segments to continue to grow in the coming years.

FD: Have you seen an increasing penetration of 3D sensing into surveillance and security applications?

DP: So far, it has not been huge. We have seen 3D sensing mentioned for some authentication use cases, but it’s mainly applied to indoor access.

FD: As a leading supplier of CMOS image sensors for security imaging, how do you see the competitive landscape and market demand develop? Is there any geographic differentiation between Europe, America, and Asia?

DP: On the overall landscape, we see that the 1080p resolution market is very competitive and is essentially replacing 720p, which used to be the low end. If you look at the product portfolio from OMNIVISION, as well as our competitors, you will see that more and more 1080p cost-sensitive products are being brought into the market.
In terms of geographical differentiation, 1080p seems to be the norm across the globe. Some applications like doorbells and smart home cameras are looking for higher resolution so that they can deploy AI; some applications at the very high end would even deploy 4K2K resolutions. When it comes to city or street surveillance, high resolution is needed for counting the number of people or vehicles or for zooming in for detail. However, that market is small compared to 1080p.
While the race for smaller pixels and increased resolution is still ongoing for mobile, it seems less important for security applications where image quality is preferred.

FD: What is the trend in sensor resolution for security applications? What are the most critical performance parameters?

DP: As we mentioned previously, the resolution trend is not as pronounced in security compared to the smartphone market. The three key buckets we see in security are firstly 1080p, which is roughly 2MP; then 4 or 5MP; and at the high end, 4K2K. From a volume point of view, 1080p has by far the lion’s share, 4 and 5MP come next, and 4K2K is a very high-end niche.
The key parameter in security is still low-light sensitivity: how good is your camera in low light? Typically, a larger pixel is used; in the security market today, the smallest pixel we see is 1.45μm. Smaller pixels are slowly being deployed in security, but the majority are still large pixels.
Lower power consumption is critical for numerous applications, particularly battery-powered consumer security cameras.

FD: How can you address this with your products?

DP: To enable longer battery power, we have a unique solution called always-on architecture. Essentially, we provide a total system approach that involves a sensor with our own video processor that enables very low-power system solutions. We are taking our low-power architecture to a new level with a product we will launch later this year.

FD: There is a trend to bring more video analytics and Artificial Intelligence into security camera products. What is OMNIVISION’s view on this?

DP: Video analytics and AI are hot topics in the industry. We expect video analytics and AI technology, which used to be only at the high end a few years ago, to come down to mid-range or entry-level.
One of the key requirements on the sensor side is providing higher resolution so that you can do video analytics and AI functions simultaneously. To address this need, we have a portfolio of 4 and 5MP image sensors, all the way up to 4K2K, for our customers.

FD: Is there any other message you would like to share with our readers and the industry?

DP: In summary, we see 1080p as the dominant resolution for the foreseeable future. A higher resolution is needed for video analytics and AI applications. Always-on is one of the key features demanded for low-power battery cameras.
Low-light pixel performance is still one of the key criteria for security cameras. At OMNIVISION, we have a product line that addresses low light performance that goes from 1080p all the way to 4K2K. We also provide low-power video processors enabling a long-lasting battery solution.

Monday, September 05, 2022

CineD tests ARRI ALEXA 35 cinematography sensor

In June, this blog shared the announcement of ARRI's new cinematography sensor ALEXA 35. 

Last week CineD published a "lab test" of this sensor. CineD is an independent website that reviews the latest advances in cinematography technology.

https://www.cined.com/arri-alexa-35-lab-test-rolling-shutter-dynamic-range-and-latitude-plus-video/

Friday, September 02, 2022

EETimes Europe article on emergence of consumer and automotive SWIR imaging

An article in EETimes Europe from August 24, 2022 argues that huge changes are happening in the consumer and automotive SWIR imaging industry. Some excerpts below.

https://www.eetimes.eu/how-smartphones-will-disrupt-the-swir-imaging-industry

How Smartphones Will Disrupt the SWIR Imaging Industry

August 24, 2022 Axel Clouet and Eric Mounier

Sensing SWIR radiation requires imagers based on other materials, making them orders of magnitude more expensive than silicon-based imagers. Therefore, SWIR’s use today is limited to specific applications in defense, industry, or research.

... [A] pull from the consumer market is inspiring unprecedented changes in the SWIR industry, with the emergence of new technologies and the entrance of game-changing players who may enable market and technology disruption.

A newer technology, based on quantum dots (QDs), is emerging as a lower-cost alternative to InGaAs. ... with a manufacturing process that is compatible with CMOS, allowing cost reductions by orders of magnitude.

QD technology is still emerging, with the first commercial products released in 2018 for the industry by SWIR Vision Systems.

SWIR’s technology development will be accelerated by the entrance of game-changing players: Sony released its first commercial SWIR imager in 2020, and in 2021, STMicroelectronics announced the development of SWIR imagers based on QDs. ... [both are] leading companies in the consumer and automotive silicon-based imaging industry. Sony introduced a manufacturing method based on copper-to-copper bonding, inherited from its know-how in silicon-based imaging, to make InGaAs SWIR imagers. STMicroelectronics published initial results for its SWIR imagers based on QD technology ... demonstrated high sensitivity, optimized at about 1.4 µm.

[Yole Intelligence] expect[s] the number of industrial cameras to increase significantly in the coming years, thanks to price decreases linked to QD technology penetration and the introduction of new manufacturing processes for InGaAs. These segments could represent a US$828 million market in 2027 at the camera level.

[Since] an artificial SWIR source needs to be used in combination with the imaging system, the SWIR source market should benefit from the growth of the SWIR imaging market. SWIR edge-emitting diode lasers (EELs) are widely used today in the telecommunications market, ... SWIR vertical-cavity surface-emitting lasers (VCSELs) should strongly benefit from the emerging consumer and automotive SWIR markets.

Wednesday, August 31, 2022

Gpixel announces new global shutter GSPRINT 4502 sensor

Gpixel press release on August 17, 2022:

Gpixel expands high-speed GSPRINT image sensor series with a 2/3” 2.5 MP 3460 fps global shutter GSPRINT4502


Gpixel announces a high-speed global shutter image sensor, GSPRINT4502, a new member of the GSPRINT series taking high speed imaging to another level.


GSPRINT4502 is a 2.5 Megapixel (2048 x 1216), 2/3” (φ10.7 mm), high speed image sensor designed with the latest 4.5 µm charge domain global shutter pixel. It achieves more than 30 ke- charge capacity and less than 4 e- rms read noise, with dynamic range of 68 dB which can be expanded using a multi-slope HDR feature. Utilizing an advanced 65 nm CIS process with light pipe and micro lens technology, the sensor achieves >65% quantum efficiency and < -92 dB parasitic light sensitivity.

GSPRINT4502 can achieve extremely high frame rates up to 3460 fps in 8-bit mode, 1780 fps in 10-bit mode or 850 fps in 12-bit mode, all at full resolution. With 2×2 on-chip charge binning, full well capacity can be further increased to 120 ke- and frame rate to 10,200 fps. GSPRINT4502 supports vertical and horizontal regions of interest for higher frame rates. GSPRINT4502 is perfect for high-speed applications including 3D laser profiling, industrial inspection, high speed video and motion analysis.

Data output from GSPRINT4502 is through 64 pairs of sub-LVDS channels running at 1.2 Gbps each. Flexible output channel multiplexing modes make it possible to reduce the frame and data rate, making the sensor compatible with all available camera interface options. GSPRINT4502 is packaged in a 255-pin uPGA ceramic package and will be offered in sealed and removable glass lid versions.
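As a reader's back-of-envelope sanity check (not part of the press release), the quoted full-resolution frame rates do fit within the quoted 64 x 1.2 Gbps aggregate interface bandwidth, ignoring protocol overhead:

```python
# Does each quoted full-resolution frame rate fit within the quoted
# 64 x 1.2 Gbps sub-LVDS aggregate bandwidth? (Overhead ignored.)
width, height = 2048, 1216
interface_gbps = 64 * 1.2      # 76.8 Gbit/s aggregate

for bits, fps in [(8, 3460), (10, 1780), (12, 850)]:
    payload_gbps = width * height * bits * fps / 1e9
    print(f"{bits:>2}-bit @ {fps:>4} fps -> {payload_gbps:5.1f} Gbit/s "
          f"({payload_gbps / interface_gbps:.0%} of interface)")
```

The 8-bit mode is the most demanding, using roughly 90% of the raw interface capacity, which is consistent with 3460 fps being the ceiling.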

“The market reaction to the GSPRINT high-speed image sensor family provides evidence that a growing number of applications require higher frame rates,” said Wim Wuyts, Chief Commercial Officer of Gpixel. “We are excited to continue to expand the portfolio to bring these high frame rates to more applications.”

GSPRINT4502 engineering samples can be ordered today for delivery in October, 2022. 

About the GSPRINT sensor family

The GSPRINT series is Gpixel’s high-speed global shutter product family, including the 21 MP GSPRINT4521, 10 MP GSPRINT4510 and 2.5 MP GSPRINT4502. The GSPRINT technology will be used to expand the sizes and resolutions available in the family in the future. To learn more about the GSPRINT series, please contact us at: info@gpixel.com
 

About Gpixel

Gpixel provides high-end customized and off-the-shelf CMOS image sensors for industrial, professional, medical, and scientific applications. Gpixel’s standard products include the GMAX and GSPRINT global shutter, fast frame rate sensors, the GSENSE and GLUX high-end scientific CMOS image sensor series, the GL series of line scan imagers, the GLT series of TDI line scan imagers and the GTOF series of iTOF imagers. Gpixel’s broad portfolio of products utilizes the latest technologies to meet the ever-growing demands of the professional imaging market.

Monday, August 29, 2022

2023 International Image Sensors Workshop - Call for Papers

The 2023 International Image Sensors Workshop (IISW) will be held in Scotland from 22-25 May 2023. The first call for papers is now available at this link: 2023 IISW CFP.



FIRST CALL FOR PAPERS

ABSTRACTS DUE DEC 9, 2022
 

2023 International Image Sensor Workshop

Crieff Hydro Hotel, Scotland, UK

22-25 May, 2023


The 2023 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. Now in its 35th year, the workshop will return to an in-person format. The event is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is the tradition, the 2023 workshop will emphasize an open exchange of information among participants in an informal, secluded setting beside the Scottish town of Crieff. The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.

Papers on the following topics are solicited:

Image Sensor Design and Performance
CMOS imagers, CCD imagers, SPAD sensors
New and disruptive architectures
Global shutter image sensors
Low noise readout circuitry, ADC designs
Single photon sensitivity sensors
High frame rate image sensors
High dynamic range sensors
Low voltage and low power imagers
High image quality; Low noise; High sensitivity
Improved color reproduction
Non-standard color patterns with special digital processing
Imaging system-on-a-chip, On-chip image processing

Pixels and Image Sensor Device Physics
New devices and pixel structures
Advanced materials
Ultra-miniaturized pixel development, testing, and characterization
New device physics and phenomena
Electron multiplication pixels and imagers
Techniques for increasing QE, well capacity, reducing crosstalk, and improving angular response
Front side illuminated, back side illuminated, and stacked pixels and pixel arrays
Pixel simulation: Optical and electrical simulation, 2D and 3D, CAD for design and simulation, improved models

Application Specific Imagers
Image sensors and pixels for range sensing: LIDAR, TOF, RGBZ, structured light, stereo imaging, etc.
Image sensors with enhanced spectral sensitivity (NIR, UV, IR)
Sensors for DSC, DSLR, mobile, digital video cameras and mirror-less cameras
Array imagers and sensors for multi-aperture imaging, computational imaging, and machine learning
Sensors for medical applications, microbiology, genome sequencing
High energy photon and particle sensors (X-ray, radiation)
Line arrays, TDI, Very large format imagers
Multi and hyperspectral imagers
Polarization sensitive imagers

Image sensor manufacturing and testing
New manufacturing techniques
Backside thinning
New characterization methods
Defects & leakage current

On-chip optics and imaging process technology
Advanced optical path, Color filters, Microlens, Light guides
Nanotechnologies for Imaging
Wafer level cameras
Packaging and testing: Reliability, Yield, Cost
Stacked imagers, 3D integration
Radiation damage and radiation hard imager



ORGANIZING COMMITTEE

General Workshop Co-Chairs
Robert Henderson – The University of Edinburgh
Guy Meynants – Photolitics and KU Leuven

Technical Program Chair
Neale Dutton – ST Microelectronics

Technical Program Committee
Jan Bogaerts - GPixel, Belgium
Calvin Yi-Ping Chao - TSMC, Taiwan
Edoardo Charbon - EPFL, Switzerland
Bart Dierickx - Caeleste, Belgium
Amos Fenigstein - TowerJazz, Israel
Manylun Ha -  DB Hitek, South Korea
Vladimir Korobov - ON Semiconductor, USA
Bumsuk Kim - Samsung, South Korea
Alex Krymski - Alexima, USA
Jiaju Ma - Gigajot, USA
Pierre Magnan - ISAE, France
Robert Daniel McGrath - Goodix Technology, US 
Preethi Padmanabhan - AMS-Osram, Austria
Francois Roy - STMicroelectronics, France
Andreas Suess - Omnivision Technologies, USA

IISS Board of Directors
Boyd Fowler – OmniVision
Michael Guidash – R.M. Guidash Consulting
Robert Henderson – The University of Edinburgh
Shoji Kawahito – Shizuoka University and Brookman Technology
Vladimir Koifman – Analog Value
Rihito Kuroda – Tohoku University
Guy Meynants – Photolitics
Junichi Nakamura – Brillnics
Yusuke Oike – Sony (Japan)
Johannes Solhusvik – Sony (Norway)
Daniel Van Blerkom – Forza Silicon-Ametek
Yibing Michelle Wang – Samsung Semiconductor

IISS Governance Advisory Committee:
Eric Fossum - Thayer School of Engineering at Dartmouth, USA
Nobukazu Teranishi - University of Hyogo, Japan
Albert Theuwissen - Harvest Imaging, Belgium / Delft University of Technology, The Netherlands

Wednesday, August 24, 2022

Surprises of Single Photon Imaging

[This is an invited blog post by Prof. Andreas Velten from University of Wisconsin-Madison.]

When we started working on single photon imaging, we anticipated having to do away with many established concepts in computational imaging and photography. Concepts like exposure time, well depth, motion blur, and many others don’t make sense for single photon sensors. Despite this expectation, we still encountered several unexpected surprises.

Our first surprise was that SPAD cameras, which are typically touted for low-light applications, have an exceptionally large dynamic range and therefore outperform conventional sensors not only in dark but also in very bright scenes. Due to their hold-off time, SPADs reject a growing fraction of photons at higher flux levels, resulting in a nonlinear response curve. The classical light flux is usually estimated by counting photons over a certain time interval. One can instead measure the time between photons, or the time a sensor pixel waits for a photon in the active state. This further increases dynamic range, so that the saturation flux level is above the safe operating range of the detector pixel and far above eye-safety levels. The camera does not saturate. [1][2][3]
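This nonlinear response can be made concrete with a small simulation. The sketch below uses assumed parameters (it is an illustration, not taken from the referenced papers): it models a free-running SPAD with a non-paralyzable hold-off time and shows that inverting the response curve recovers flux levels far beyond the raw counting saturation point:

```python
import numpy as np

rng = np.random.default_rng(0)

def spad_detections(flux, dead_time, duration):
    """Simulate a free-running SPAD: Poisson photon arrivals filtered
    by a non-paralyzable dead (hold-off) time."""
    n = rng.poisson(flux * duration)
    arrivals = np.sort(rng.uniform(0.0, duration, n))
    detections, last = [], -np.inf
    for t in arrivals:
        if t - last >= dead_time:
            detections.append(t)
            last = t
    return detections

dead_time = 150e-9   # 150 ns hold-off (assumed, typical order of magnitude)
duration = 1e-3      # 1 ms observation window
for flux in [1e5, 1e7, 1e9]:  # incident photons per second
    rate = len(spad_detections(flux, dead_time, duration)) / duration
    # Non-paralyzable dead time gives rate = flux / (1 + flux * dead_time);
    # inverting it recovers the flux well past the counting saturation point.
    flux_est = rate / (1.0 - rate * dead_time)
    print(f"true {flux:11.0f}/s  counted {rate:11.0f}/s  corrected {flux_est:11.0f}/s")
```

Raw counting saturates near 1/dead_time (about 6.7 million counts per second here), while the corrected estimate tracks the true flux across four orders of magnitude, which is the essence of the inter-photon-timing argument above.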

The second surprise was that single photon cameras, without further computational improvements, are of limited use in low-light imaging situations. In most imaging applications, motion of the scene or camera demands short exposure times, well below 1 second, to avoid motion blur. Light levels low enough to present a challenge to current CMOS sensors result in low photon counts even for a perfect camera. The image looks noisy not because of a problem introduced by the sensor, but because of Poisson noise due to light quantization. The low-light capabilities of SPADs only come to bear when long exposure times are used or when motion can be compensated for. Luckily, motion compensation strategies inspired by burst photography and event cameras work exceptionally well for SPADs due to the absence of readout noise and inherent motion blur. [4][5][6]
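The Poisson limit is easy to verify numerically. This illustrative sketch (not from the referenced papers) draws ideal, noise-free photon counts and confirms that even a perfect camera cannot beat SNR = sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(1)

# An ideal, noise-free photon-counting camera still sees Poisson (shot) noise:
# with a mean of N photons per pixel, the best achievable SNR is sqrt(N).
for mean_photons in [1, 10, 100, 10_000]:
    counts = rng.poisson(mean_photons, size=100_000)  # 100k ideal pixels
    snr = counts.mean() / counts.std()
    print(f"{mean_photons:>6} photons/pixel: SNR = {snr:7.2f}"
          f"  (sqrt(N) = {np.sqrt(mean_photons):7.2f})")
```

At 10 photons per pixel even a perfect sensor delivers an SNR of only about 3, which is why short-exposure low-light frames look noisy regardless of the sensor.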

Finally, we assumed early on that single photon sensors have an inherent disadvantage due to larger energy consumption: they either need internal amplification, like the SPAD, or high frame rates, like QIS and qCMOS, both of which result in higher power consumption. We learned that the internal amplification process in SPADs makes up a small and decreasing portion of the overall energy consumption of a SPAD camera. The lion's share is spent transferring and storing the large data volumes that result from individually processing every single photon. To address the power consumption of SPAD cameras, we therefore need to find better ways to compress photon data close to the pixel and to be more selective about which photons to process and which to ignore. Even the operation of a conventional CMOS camera can be thought of as a type of compression: photons are accumulated over an exposure time, and only the total is read out after each frame. The challenge for SPAD cameras is to use their access to every single photon and combine it with more sophisticated ways of data compression implemented close to the pixel. [7]

As we transition imaging to widely available high resolution single photon cameras, we are likely in for more surprises. Light is made up of photons. Light detection is a Poisson process. Light and light intensity are derived quantities that are based on ensemble averages over a large number of photons. It is reasonable to assume that detection and processing methods that are based on the classical concept of flux are sub-optimal. The full potential of single photon capture and processing is therefore not yet known. I am hoping for more positive surprises.

References 

[1] Ingle, A., Velten, A., & Gupta, M. (2019). High flux passive imaging with single-photon sensors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6760-6769). [Project Page]

[2] Ingle, A., Seets, T., Buttafava, M., Gupta, S., Tosi, A., Gupta, M., & Velten, A. (2021). Passive inter-photon imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8585-8595). [Project Page]

[3] Liu, Y., Gutierrez-Barragan, F., Ingle, A., Gupta, M., & Velten, A. (2022). Single-photon camera guided extreme dynamic range imaging. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1575-1585). [Project Page]

[4] Seets, T., Ingle, A., Laurenzis, M., & Velten, A. (2021). Motion adaptive deblurring with single-photon cameras. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1945-1954). [Interactive Visualization]

[5] Ma, S., Gupta, S., Ulku, A. C., Bruschini, C., Charbon, E., & Gupta, M. (2020). Quanta burst photography. ACM Transactions on Graphics (TOG), 39(4), 79-1. [Project Page]

[6] Laurenzis, M., Seets, T., Bacher, E., Ingle, A., & Velten, A. (2022). Comparison of super-resolution and noise reduction for passive single-photon imaging. Journal of Electronic Imaging, 31(3), 033042.

[7] Gutierrez-Barragan, F., Ingle, A., Seets, T., Gupta, M., & Velten, A. (2022). Compressive Single-Photon 3D Cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 17854-17864). [Project Page]

 

About the author:

Andreas Velten is Assistant Professor at the Department of Biostatistics and Medical Informatics and the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison and directs the Computational Optics Group. He obtained his PhD with Prof. Jean-Claude Diels in Physics at the University of New Mexico in Albuquerque and was a postdoctoral associate of the Camera Culture Group at the MIT Media Lab. He has been included in the MIT TR35 list of the world's top innovators under the age of 35 and is a senior member of NAI, OSA, and SPIE, as well as a member of Sigma Xi. He is a co-founder of OnLume, a company that develops surgical imaging systems, and Ubicept, a company developing single photon imaging solutions.



Monday, August 22, 2022

amsOSRAM announces new sensor Mira220

  • New Mira220 image sensor’s high quantum efficiency enables operation with low-power emitter and in dim lighting conditions
  • Stacked chip design uses ams OSRAM back side illumination technology to shrink package footprint to just 5.3mm x 5.3mm, giving greater design flexibility to manufacturers of smart glasses and other space-constrained products
  • Low-power operation and ultra-small size make the Mira220 ideal for active stereo vision or structured lighting 3D systems in drones, robots and smart door locks, as well as mobile and wearable devices

Press Release: https://ams-osram.com/news/press-releases/mira220

Premstaetten, Austria (14th July 2022) -- ams OSRAM (SIX: AMS), a global leader in optical solutions, has launched a 2.2Mpixel global shutter visible and near infrared (NIR) image sensor which offers the low-power characteristics and small size required in the latest 2D and 3D sensing systems for virtual reality (VR) headsets, smart glasses, drones and other consumer and industrial applications.

The new Mira220 is the latest product in the Mira family of pipelined high-sensitivity global shutter image sensors. ams OSRAM uses back side illumination (BSI) technology in the Mira220 to implement a stacked chip design, with the sensor layer on top of the digital/readout layer. This allows it to produce the Mira220 in a chip-scale package with a footprint of just 5.3mm x 5.3mm, giving manufacturers greater freedom to optimize the design of space-constrained products such as smart glasses and VR headsets.

The sensor combines excellent optical performance with very low-power operation. The Mira220 offers a high signal-to-noise-ratio as well as high quantum efficiency of up to 38% as per internal tests at the 940nm NIR wavelength used in many 2D or 3D sensing systems. 3D sensing technologies such as structured light or active stereo vision, which require an NIR image sensor, enable functions such as eye and hand tracking, object detection and depth mapping. The Mira220 will support 2D or 3D sensing implementations in augmented reality and virtual reality products, in industrial applications such as drones, robots and automated vehicles, as well as in consumer devices such as smart door locks.

The Mira220’s high quantum efficiency allows device manufacturers to reduce the output power of the NIR illuminators used alongside the image sensor in 2D and 3D sensing systems, reducing total power consumption. The Mira220 itself draws very little power: only 4mW in sleep mode, 40mW in idle mode, and 350mW at full resolution and 90fps. By providing for low system power consumption, the Mira220 enables wearable and portable device manufacturers to save space by specifying a smaller battery, or to extend run-time between charges.
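For a sense of scale, the quoted power figures translate into a simple energy budget. The battery capacity and duty cycle below are hypothetical illustration values, not ams OSRAM data:

```python
# Rough energy budget from the quoted power figures (350 mW at 90 fps,
# 4 mW sleep). Battery capacity and duty cycle are assumed example values.
streaming_w, sleep_w, fps = 0.350, 0.004, 90

energy_per_frame_mj = streaming_w / fps * 1e3
print(f"~{energy_per_frame_mj:.1f} mJ per full-resolution frame at 90 fps")

battery_wh = 1.0   # assumed small wearable cell (~270 mAh at 3.7 V)
duty = 0.05        # assumed: streaming 5% of the time, sleeping otherwise
avg_w = duty * streaming_w + (1.0 - duty) * sleep_w
print(f"average draw {avg_w * 1e3:.1f} mW -> about {battery_wh / avg_w:.0f} h per charge")
```

Under these assumptions the low sleep power dominates battery life, which is the design point the press release emphasizes for wearables.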

“Growing demand in emerging markets for VR and augmented reality equipment depends on manufacturers’ ability to make products such as smart glasses smaller, lighter, less obtrusive and more comfortable to wear. This is where the Mira220 brings new value to the market, providing not only a reduction in the size of the sensor itself, but also giving manufacturers the option to shrink the battery, thanks to the sensor’s very low power consumption and high sensitivity at 940nm,” said Brian Lenkowski, strategic marketing director for CMOS image sensors at ams OSRAM.

Superior pixel technology

The Mira220’s advanced back-side illumination (BSI) technology gives the sensor very high sensitivity and quantum efficiency with a pixel size of 2.79μm. Effective resolution is 1600px x 1400px and maximum bit depth is 12 bits. The sensor is supplied in a 1/2.7” optical format.

The sensor supports on-chip operations including external triggering, windowing, and horizontal or vertical mirroring. The MIPI CSI-2 interface allows for easy interfacing with a processor or FPGA. On-chip registers can be accessed via an I2C interface for easy configuration of the sensor.

Digital correlated double sampling (CDS) and row noise correction result in excellent noise performance.

ams OSRAM will continue to innovate and extend the Mira family of solutions, offering customers a choice of resolution and size options to fit various application requirements.

The Mira220 NIR image sensor is available for sampling. More information about Mira220.


Mira220 image sensor achieves high quantum efficiency at 940nm to allow for lower power illumination in 2D and 3D sensing systems
Image: ams

The miniature Mira220 gives extra design flexibility in space-constrained applications such as smart glasses and VR headsets
Image: OSRAM



Friday, August 19, 2022

Gigajot article in Nature Scientific Reports

Jiaju Ma et al. of Gigajot Technology, Inc. have published a new article titled "Ultra‑high‑resolution quanta image sensor with reliable photon‑number‑resolving and high dynamic range capabilities" in Nature Scientific Reports.

Abstract:

Superior low-light and high dynamic range (HDR) imaging performance with ultra-high pixel resolution are widely sought after in the imaging world. The quanta image sensor (QIS) concept was proposed in 2005 as the next paradigm in solid-state image sensors after charge coupled devices (CCD) and complementary metal oxide semiconductor (CMOS) active pixel sensors. This next-generation image sensor would contain hundreds of millions to billions of small pixels with photon-number-resolving and HDR capabilities, providing superior imaging performance over CCD and conventional CMOS sensors. In this article, we present a 163 megapixel QIS that enables both reliable photon-number-resolving and high dynamic range imaging in a single device. This is the highest pixel resolution ever reported among low-noise image sensors with photon-number-resolving capability. This QIS was fabricated with a standard, state-of-the-art CMOS process with 2-layer wafer stacking and backside illumination. Reliable photon-number-resolving is demonstrated with an average read noise of 0.35 e- rms at room temperature operation, enabling industry-leading low-light imaging performance. Additionally, a dynamic range of 95 dB is realized due to the extremely low noise floor and an extended full-well capacity of 20k e-. The design, operating principles, experimental results, and imaging performance of this QIS device are discussed.
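The 0.35 e- rms figure is what makes photon-number resolving "reliable": if the read-out is the true count plus Gaussian read noise, rounding to the nearest integer recovers the count whenever the noise stays within ±0.5 e-. A short sketch of this standard relation (an illustration, not taken from the paper):

```python
import math

# QIS read-out model: (true photon count k) + Gaussian read noise (sigma, in e-).
# Rounding to the nearest integer is correct whenever |noise| < 0.5 e-,
# which for zero-mean Gaussian noise has probability erf(0.5 / (sigma*sqrt(2))).
def p_correct(sigma_e):
    return math.erf(0.5 / (sigma_e * math.sqrt(2.0)))

for sigma in [0.15, 0.35, 0.50, 1.00]:
    print(f"read noise {sigma:.2f} e- rms -> P(correct photon count) = {p_correct(sigma):.3f}")
```

At 0.35 e- rms each read resolves the photon number correctly roughly 85% of the time, while at 0.5 e- and above the counting degrades quickly, which is why deep sub-electron read noise is the practical threshold for photon-number resolving.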

Ma, J., Zhang, D., Robledo, D. et al. Ultra-high-resolution quanta image sensor with reliable photon-number-resolving and high dynamic range capabilities. Sci Rep 12, 13869 (2022).

This is an open access article: https://www.nature.com/articles/s41598-022-17952-z.epdf

Wednesday, August 17, 2022

RMCW LiDAR

Baraja is an automotive LiDAR company headquartered in Australia that specializes in pseudo-random modulation continuous wave LiDAR technology, which it calls "RMCW". A blog post by Cibby Pulikkaseril (Founder & CTO of Baraja) compares and contrasts RMCW with the more commonly known FMCW and ToF LiDAR technologies.

tl;dr: There is good reason to believe that pseudo-random modulation can provide robustness in environments where multiple LiDARs are trying to transmit and receive over a common shared channel (free space).

Full blog article here: https://www.baraja.com/en/blog/rmcw-lidar

Some excerpts:

Definition of RMCW

Random modulated continuous wave (RMCW) LiDAR is a technique published in Applied Optics by Takeuchi et al. in 1983. The idea was to take a continuous wave laser and modulate it with a pseudo-random binary sequence before shooting it out into the environment; the returned signal would be correlated against the known sequence, and the delay would indicate the range to target.

Benefits

... correlation turns the pseudo-random signal, which, to the human eye, looks just like noise, into a sharp pulse, providing excellent range resolution and precision. Thus, by using low-speed electronics, we can achieve the same pulse performance used by frequency-modulated continuous wave (FMCW) LiDARs... .
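The correlation idea can be sketched in a few lines. Everything below (chip rate, code length, echo strength) is an assumed toy example, not Baraja's implementation: a weak, delayed echo of a ±1 pseudo-random code is recovered by circular correlation, whose peak gives the round-trip delay and hence the range:

```python
import numpy as np

rng = np.random.default_rng(42)

chip_rate = 100e6                # assumed 100 Mchip/s -> 1.5 m range resolution
n_chips = 2048
code = rng.integers(0, 2, n_chips) * 2 - 1     # +/-1 pseudo-random binary sequence

true_delay_chips = 137                         # round-trip delay to recover
echo = 0.3 * np.roll(code, true_delay_chips)   # weak, delayed return ...
echo = echo + rng.normal(0.0, 1.0, n_chips)    # ... buried in noise / ambient light

# Circular cross-correlation against the known code: the peak marks the delay.
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
est_delay = int(np.argmax(corr))

c = 3e8  # speed of light, m/s
print(f"estimated delay {est_delay} chips -> range {est_delay / chip_rate * c / 2:.1f} m")
```

Even though the per-sample signal is well below the noise floor here, the correlation peak stands far above the background, illustrating why the technique is robust to interference from uncorrelated sources.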

... incredible immunity to interference, and this can be dialed in by software.

RMCW vs. FMCW vs. ToF

... [FMCW and RMCW] are fundamentally different modulation techniques. FMCW LiDAR sensors will modulate the frequency of the laser light, a relatively complicated operation, and then attempt to recognize the modulation in the return signal from the environment.

Both RMCW and FMCW LiDAR offer extremely high immunity from interfering lasers – compared to conventional ToF LiDARs, which are extremely vulnerable to interference.

Spectrum-Scan™ + RMCW is also able to produce instantaneous velocity information per-pixel, also known as Doppler velocity ... something not [natively] possible with conventional LiDAR... .

 

Baraja's Spectrum OffRoad LiDARs are currently available.


Spectrum HD25 LiDAR samples (specs in image below) will be available in 2022.