Saturday, September 26, 2020

DVS Pixel Bias Self-Calibration

Tobi Delbruck and Zhenming Yu from the University of Zurich publish a video presentation explaining their ISCAS 2020 paper "Self-Calibration of Wide Dynamic Range Bias Current Generators."

"Many neuromorphic chips now include on-chip, digitally programmable bias generator circuits. So far, precision of these generated biases has been designed by transistor sizing and circuit design to ensure tolerable statistical variance due to threshold mismatch. This paper reports the use of an integrated measurement circuit based on a spiking neuron and a scheme for calibrating each chip's set of biases against the smallest of all the biases from that chip. That way, the averaging across individual biases improves overall matching both within a chip and across chips. This paper includes measurements of generated biases, the method for remapping bias values towards more uniform values, and measurements across 5 sample chips. With the method presented in this paper, 1σ mismatch of subthreshold currents is decreased by at least a factor of 3. The firmware implementation completes calibration in about a minute and uses about 1kB of flash storage of calibration data."
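The remapping idea can be reduced to a toy sketch (the names below are invented; in the paper's firmware, each bias current is measured by an on-chip spiking-neuron circuit, which here is simply given as input). Re-deriving every bias from the chip's smallest measured bias means all biases share a single reference, which is how averaging improves matching within a chip and across chips:

```python
# Toy sketch of bias remapping (hypothetical names, not the paper's firmware):
# pick the chip's smallest measured bias current as the reference and
# regenerate every other bias from its design ratio to that reference.
def remap_biases(measured_currents, design_ratios):
    ref = min(measured_currents)             # smallest bias on this chip
    return [ref * r for r in design_ratios]  # corrected bias current targets
```

For example, a chip whose smallest bias measures 1 nA and whose design ratios are 1:5:20 would be remapped to 1 nA, 5 nA, and 20 nA, regardless of how the larger biases individually mismatched.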

CIS Prices in China

I came across an article talking about CIS prices in China in the second half of 2018. This somewhat outdated info gives an indication of how low the prices can be for a fairly complicated device:


Speaking of Galaxycore, the company was asked to compare its sensors' performance with others on the market while preparing its IPO. Galaxycore claims it is similar to that of the market leaders.

Brigates has changed its name to Ruixin Microelectronics and is preparing an IPO at a valuation of 1B yuan. The company lost money in 2017-2018, then turned a profit in 2019:

"In the past three years, Ruixin Micro's revenue was 52.197 million yuan, 146 million yuan and 253 million yuan respectively. In 2017 and 2018, net losses were 15.1624 million yuan and 279 million yuan respectively. In 2019, it turned losses into profits, with a net profit of 52.117 million yuan. Based on this, Ruixin Micro finally chose a listing standard with an estimated market value of not less than 1 billion yuan, a positive net profit in the most recent year, and an operating income of not less than 100 million yuan."

Meanwhile, Omnivision's parent company Will Semi has seen its share price rise 20x since its IPO 3 years ago:

Broadcom Expands ToF Sensor Lineup

Broadcom keeps quietly expanding its ToF sensor lineup:

Friday, September 25, 2020

Automotive News: Cepton, Ouster, Voyant Photonics, Mobileye

BusinessWire: Cepton announces its automotive-grade lidar – the Vista-X90, priced at less than $1000 for high volume applications. It is said to set a new benchmark for high performance at low power in a compact form factor. Weighing less than 900 g, the Vista-X90 achieves up to 200 m range at 10% reflectivity with an angular resolution of 0.13° and power consumption of <12W. The sensor supports frame rates of up to 40 Hz.

With a width of 120 mm, depth of 110 mm and a front-facing height of <45 mm, Vista-X90 is compact and embeddable. Its 90° x 25° field of view, combined with its directional, non-rotational design allows seamless vehicle integration - such as in the fascia, behind the windshield or on the roof.
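A back-of-envelope point-rate estimate from the quoted specs, assuming the 0.13° angular resolution applies on both axes (the press release does not state this explicitly):

```python
# Rough point-rate estimate for the Vista-X90 (assumes uniform 0.13° sampling
# in both axes, which is an assumption, not a quoted spec).
h_pts = round(90 / 0.13)   # ~692 samples across the 90° horizontal FOV
v_pts = round(25 / 0.13)   # ~192 lines across the 25° vertical FOV
points_per_frame = h_pts * v_pts
points_per_second = points_per_frame * 40   # at the maximum 40 Hz frame rate
print(points_per_frame, points_per_second)  # 132864 5314560
```

That is roughly 5M points per second at full frame rate, in the same ballpark as other high-end automotive lidars.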

Vista-X90 has a licensable design architecture powered by Cepton’s patented Micro Motion Technology (MMT) – a frictionless, mirrorless, rotation-free lidar architecture. Cepton has licensed its technology to the world’s largest automotive headlamp Tier 1, Koito, which has non-exclusive rights to manufacture and sell Cepton’s lidar technology for automotive applications, using key modules supplied by Cepton.

“We are excited to disrupt the industry with the Vista-X90, which is the most cost-effective, high-performance lidar in the world for automotive applications,” said Jun Pei, Cepton’s CEO. “Automotive lidars have historically had either low performance at acceptable cost or claimed high performance while being too expensive for many OEM programs. The Vista-X90 fundamentally changes the game by bridging that divide and delivering the optimal mix of performance, power, reliability and cost. This is an integral part of our plan to make lidar available as an essential safety device in every consumer vehicle in the world.”

The Vista-X90 is targeted for production in 2022 and beyond, and samples can be made available upon request.


Yole publishes an interview with Ouster CEO Angus Pacala "A rising star in the LiDAR landscape – An interview with Ouster." A few quotes:

"Since our founding in 2015, Ouster has secured over 800 customers and $140 million in funding. Our headquarters is in San Francisco, and we have offices in ​Paris, Hamburg, Frankfurt, Hong Kong, and Suzhou.

At the time, nobody thought that you could use VCSELs and SPADs to make a high-performance sensor, but we figured out how to do it and patented our approach. The design relies on these two chips with the lasers and detectors, and a lens in front of each chip. In that way, digital lidar really resembles a digital camera."

Voyant Photonics publishes a Medium article "LiDAR-on-a-Chip is Not a Fool’s Errand" with bold claims on its FMCW LiDAR capabilities:

"With every pulse our FMCW LiDAR receives reflectance and polarization measurements that let it differentiate pavement from metal, hands from coffee mugs, street signs from rubber tires, and of course a pot hole from a plastic bag. We expect to read painted markings on asphalt, in total darkness, far past where cameras could. FMCW Lidar is like a sensor from Star Trek. It can tell you where something is, how fast it’s moving, and also what it is made of.

Voyant’s devices are not science fiction. They fit in your hand. They are real. We plan on producing more of them by the end of 2022 than Velodyne has sold across all its products in the last 13 years.

High resolution 3D data generated by LiDAR turns scene analysis and real-time navigation tasks into a 5th grade geometry problem. No need for AI algorithms that simulate a cerebral cortex."


EETimes' Junko Yoshida reports that the major Chinese car company Geely adopts the Mobileye EyeQ5 chip with 11 cameras for ‘Hands-Free’ ADAS. There is no radar or lidar in the system. The cameras are split into 7 long-range and 4 close-range ones.

The Mobileye SuperVision system consists of two Mobileye EyeQ5 SoCs and is the first public design win for the EyeQ5 chip in a production car.

Thursday, September 24, 2020

Microsoft ToF Group Pursues Industrial Applications with SICK

SICK AG is working with Microsoft to enable the development of commercial industrial 3D cameras and related solutions, which will be compatible with a Microsoft ecosystem built on top of Microsoft depth, Intelligent Cloud, and Intelligent Edge platforms. Selected customers are already testing SICK cameras that incorporate Microsoft ToF depth technology.

SICK and Microsoft are expanding 3D ToF technologies in the context of Industry 4.0, to bring state of the art technologies to SICK’s 3DToF Visionary-T camera product line, and make it smarter, using Azure Intelligent Cloud and Intelligent Edge capabilities.

SICK’s latest industrial 3DToF camera Visionary-T Mini is expected to be available for sale in early 2021, while prototypes are already available now. Visionary-T Mini incorporates another variant of Microsoft’s 3D ToF technology with an impressive dynamic range and a resolution of ~510 x 420 pixels. It will offer extended performance and advanced on-device processing infrastructure and tools not currently available with Azure Kinect DK, including, but not limited to: 24/7 robustness, industrial interfaces, enhanced resolution with sharper depth images and enhanced depth quality.


Thanks to RW for the link!

Lessons Learned the Hard Way

Tobi Delbruck from the University of Zurich, Switzerland, publishes a confession session "Lessons Learned the Hard Way" that he organized at ISCAS 2020. Many of the confessions are related to camera technology. Most of them are covered in the video, but some appear only in the conference paper.


Wednesday, September 23, 2020

ADI Partners with Microsoft on ToF Imaging

BusinessWire: Analog Devices announces a strategic collaboration with Microsoft to leverage Microsoft’s 3D ToF sensor technology.

“Our customers want depth image capture that ‘just works’ and is as easy as taking a photo,” said Duncan Bosworth, GM, Consumer Business Unit, Analog Devices. “Microsoft’s ToF 3D sensor technology used in the HoloLens mixed-reality headset and Azure Kinect Development Kit is seen as the industry standard for time-of-flight technologies. Combining this technology with custom-built solutions from ADI, our customers can easily deploy and scale the next generation of high-performance applications they demand, out of the box.”

“Analog Devices is an established leader in translating physical phenomena into digital information,” said Cyrus Bamji, Microsoft Partner Hardware Architect, Microsoft. “This collaboration will expand market access of our ToF sensor technology and enable the development of commercial 3D sensors, cameras, and related solutions, which will be compatible with a Microsoft ecosystem built on top of Microsoft depth, Intelligent Cloud, and Intelligent Edge platforms.”

Tuesday, September 22, 2020

High-Speed SPAD Sensor

The University of Edinburgh publishes an OSA Optica paper "High-speed 3D sensing via hybrid-mode imaging and guided upsampling" by Istvan Gyongy, Sam W. Hutchings, Abderrahim Halimi, Max Tyler, Susan Chan, Feng Zhu, Stephen McLaughlin, Robert K. Henderson, and Jonathan Leach.

"In recent years, single-photon avalanche diode (SPAD) arrays with picosecond timing capabilities have emerged as a key technology driving these systems forward. Here we report a high-speed 3D imaging system enabled by a state-of-the-art SPAD sensor used in a hybrid imaging mode that can perform multi-event histogramming. The hybrid imaging modality alternates between photon counting and timing frames at rates exceeding 1000 frames per second, enabling guided upscaling of depth data from a native resolution of 64×32 to 256×128. The combination of hardware and processing allows us to demonstrate high-speed ToF 3D imaging in outdoor conditions and with low latency. The results indicate potential in a range of applications where real-time, high throughput data are necessary. One such example is improving the accuracy and speed of situational awareness in autonomous systems and robotics."
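The guided-upsampling step can be sketched generically: the high-resolution photon-counting (intensity) frames steer how low-resolution depth samples are blended at each output pixel. The code below is a plain joint-bilateral-style filter, not the authors' algorithm, and is only an illustration of the general technique:

```python
import numpy as np

# Generic joint-bilateral upsampling sketch (not the paper's exact method):
# upscale a low-res depth map using a high-res intensity image as a guide,
# weighting each neighbouring depth sample by the similarity between the
# output pixel's intensity and the intensity at that sample's centre.
def guided_upsample(depth_lo, guide_hi, scale, sigma_i=0.1):
    H, W = guide_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y // scale, x // scale   # nearest low-res sample
            ws, vs = 0.0, 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = yl + dy, xl + dx
                    if 0 <= ny < depth_lo.shape[0] and 0 <= nx < depth_lo.shape[1]:
                        # guide intensity at the low-res sample's centre
                        gy = min(ny * scale + scale // 2, H - 1)
                        gx = min(nx * scale + scale // 2, W - 1)
                        w = np.exp(-((guide_hi[y, x] - guide_hi[gy, gx]) ** 2)
                                   / (2 * sigma_i ** 2))
                        ws += w
                        vs += w * depth_lo[ny, nx]
            out[y, x] = vs / ws
    return out
```

With a 4x scale this takes a 64×32 depth map to 256×128, matching the resolutions quoted in the abstract; the key point is that depth edges get aligned to intensity edges rather than being blurred by plain interpolation.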

Thanks to IG for the link!

Ams Presents its 3D Sensing Solutions

Ams publishes a couple of videos about its 3D sensing solutions:

Assorted News: Egis Sues Goodix, LeddarTech Acquires Phantom Intelligence Assets

TaipeiTimes: Taiwan-based fingerprint sensor company Egis has filed an optical fingerprint patent infringement lawsuit with Beijing’s Intellectual Property Court against Goodix, the China-based world's largest fingerprint sensor manufacturer. Egis has requested the court to immediately stop Goodix’s practice and destroy all problematic products. It is also seeking 90M yuan (US$13.24M) in damages.

GlobeNewsWire: LeddarTech acquires the assets of Phantom Intelligence, including all of its intellectual property and technology. Founded in 2011, Phantom Intelligence is recognized for its expertise in signal processing and LiDAR technology that protects vulnerable road users (VRUs) and improves the safety and fluidity of travel by offering solutions that enable reliable advanced driver assistance systems (ADAS).

“The Phantom Intelligence acquisition further advances our strategy to aggregate and consolidate automotive sensing technologies, enabling LeddarTech to offer comprehensive solutions to our customers at lower costs,” stated Frantz Saintellemy, President and COO of LeddarTech. “At LeddarTech we have, despite the COVID 19 pandemic, accelerated our drive towards serving multiple transportation and vehicle markets with end-to-end sensing solutions that provide our ADAS and AD customers a faster path to market at lower costs.”


Lynred and PI Tutorials

Lynred starts publishing tutorials on IR imaging. The first part covers infrared detection basics.

Teledyne Princeton Instruments publishes a tutorial "The Fundamentals Behind Modern Scientific Cameras."

Monday, September 21, 2020

Assorted News: Sense Photonics, MIPI, Pamtek, CSET, CoreDAR, Samsung

LaserFocusWorld publishes an interview with Shauna McIntyre, the new CEO of Sense Photonics. A few quotes:

"We have core flash lidar technology in the laser emitter, the detector array, and the algorithms and software stack. The proprietary laser emitter is based on a large VCSEL array, which provides high, eye-safe optical output power for long-range detection and wide field-of-view at a low cost point that is game-changing. Because the emitter’s wavelength is centered around 940 nm, our detector array can be based on inexpensive CMOS technology for low cost, and we get the added benefit of lower background light from the sun for a higher signal-to-noise ratio. From an architecture perspective, we intentionally chose a flash architecture because of its simple camera-like global shutter design, scalability to high-volume manufacture, the benefit of having no moving parts, and most importantly, it enables low cost.

Our laser array is a network consisting of thousands of VCSELs interconnected in a way that provides short pulses of high-power light. In keeping with our philosophy of design simplicity and high performance for our customers, we actuate the array to generate a single laser flash rather than adding complexity and cost associated with a multi-flash approach."


MIPI officially releases A-PHY interface for automotive applications. A-PHY v1.0 offers:
  • High reliability: Ultra-low packet error rate (PER) of 10⁻¹⁹ for unprecedented performance over the vehicle lifetime
  • High resiliency: Ultra-high immunity to EMC effects in demanding automotive conditions
  • Long reach: Up to 15 meters
  • High performance: Data rate as high as 16 Gbps with a roadmap to 48 Gbps and beyond; v1.1, already in development, will provide a doubling of the high-speed data rate to 32 Gbps and increase the uplink data rate to 200 Mbps
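To put the quoted packet error rate in perspective, a rough lifetime calculation (the packet size is an assumed value, not part of the MIPI announcement):

```python
# Rough illustration of the quoted 1e-19 PER: expected packet errors over a
# 15-year vehicle lifetime on a fully loaded 16 Gbps link. The packet size
# is an assumption for illustration, not a MIPI-specified figure.
bits_per_packet = 16_000                      # assumed ~2 KB packets
packets_per_second = 16e9 / bits_per_packet   # 1e6 packets/s at full rate
lifetime_seconds = 15 * 365 * 24 * 3600
expected_errors = packets_per_second * lifetime_seconds * 1e-19
print(expected_errors)  # ~4.7e-05, far less than one error per lifetime
```

Under these assumptions, a link running flat out for 15 years would be expected to drop well under one packet, which is what "unprecedented performance over the vehicle lifetime" is getting at.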

Pamtek publishes a nice video about automated SEM sample preparation. Now we can see how all these beautiful pixel cross-sections are actually made:

While we are on the subject of reverse engineering, CSET publishes its estimates of wafer costs for different processes (for logic wafers, not including image sensor-specific extensions). RetiredEngineer posts a comparison table from the report:

CoreDAR publishes a video of its ToF device use cases:


AndroidAuthority publishes an interview with Jinhyun Kwon, VP of image sensor marketing at Samsung. One of the questions was which is better: a high-resolution sensor with small pixels, or a lower-resolution sensor of the same optical format with larger pixels?

“Samsung’s thinking on the matter is simple — both approaches have their own merits and as a semiconductor solutions provider, our goal is to have the best solutions in either, ultra-high resolution with smaller pixels and relatively lower resolution with bigger pixels, available to our customers,” Kwon responds. “Big-pixel image sensors may not offer super-high resolutions of 100+MP, but are able to offer ultra-fast auto-focusing using Dual Pixel technology and brighter results.”

On 8K video: "We expect 8K will take a similar path as 4K did, offering 60fps and HDR. For high resolution videos, at least 60fps is necessary for smooth and seamless motion, and HDR to record scenes in various lighting environments without loss of image information.

Currently FHD 240fps is becoming a common feature on devices and there are products that can support FHD resolution at up to 480fps or 960fps, allowing super slow motion shots. While we may not see 4K featuring 480fps or 960fps any time soon, due to high cost and power consumption, 4K 240fps could be something we can expect for the time being," Kwon says.

Sunday, September 20, 2020

DVS Company Celepixel Acquired by Will Semiconductor

QQ.com, DayDayNews: Will Semiconductor has quietly acquired Celepixel, a Shanghai-based developer of dynamic vision sensors (DVS). Will Semi already owns Omnivision and Superpix.

Upconversion Device for THz, LWIR, and MWIR Imaging

Phys.org: the Physical Review paper "Molecular Platform for Frequency Upconversion at the Single-Photon Level" by Philippe Roelli, Diego Martin-Cano, Tobias J. Kippenberg, and Christophe Galland from EPFL and the Max Planck Institute proposes a way to convert photons with wavelengths from 60um down to 3um into 0.5-1um ones.

"Direct detection of single photons at wavelengths beyond 2um under ambient conditions remains an outstanding technological challenge. One promising approach is frequency upconversion into the visible (VIS) or near-infrared (NIR) domain, where single-photon detectors are readily available. Here, we propose a nanoscale solution based on a molecular optomechanical platform to up-convert photons from the far- and mid-infrared (covering part of the terahertz gap) into the VIS-NIR domain. We perform a detailed analysis of its outgoing noise spectral density and conversion efficiency with a full quantum model. Our platform consists in doubly resonant nanoantennas focusing both the incoming long-wavelength radiation and the short-wavelength pump laser field into the same active region. There, infrared active vibrational modes are resonantly excited and couple through their Raman polarizability to the pump field. This optomechanical interaction is enhanced by the antenna and leads to the coherent transfer of the incoming low-frequency signal onto the anti-Stokes sideband of the pump laser. Our calculations demonstrate that our scheme is realizable with current technology and that optimized platforms can reach single-photon sensitivity in a spectral region where this capability remains unavailable to date."


"A promising way to further reduce the gated dark-count level consists in designing an array of molecular converters, sufficiently distant from each other so as not to interact by near-field coupling. We assume that the array is illuminated by a spatially coherent IR signal and optical pump beam, which is achievable when using a high f-number lens due to the subwavelength dimensions of the antennas. The key advantage of this scheme is that the anti-Stokes fields of thermal origin from different antennas would not exhibit any mutual phase coherence; they will add up incoherently in the far field. On the contrary, the up-converted (sum-frequency) anti-Stokes fields would be phase coherent and interfere constructively in specific directions, in analogy with a phased emitter array [44,45].

Considering a simple linear array, as demonstrated in Appendix F, this effect would jointly decrease the thermal contribution to the dark-count rate and dilute the intracavity photon number per device, enabling single-photon operation with improved sensitivity. A configuration with multiple converters within the IR spot could alternatively be leveraged for on-chip IR multiplexing [46–49] with distinct converters responding to distinct IR frequencies by the proper choice of molecular vibrations and antenna design, thereby bypassing the limited detection bandwidth of a single converter. This subwavelength platform benefits from the coherent nature of the conversion process and opens the route to IR spectroscopy, IR hyperspectral imaging, and recognition technologies."
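The conversion itself obeys photon energy conservation: the up-converted anti-Stokes photon carries the pump energy plus the incoming IR photon energy, so 1/λ_out = 1/λ_pump + 1/λ_IR. A quick numerical check (the pump wavelength here is an arbitrary assumption for illustration):

```python
# Photon energy conservation in sum-frequency upconversion:
# 1/lambda_out = 1/lambda_pump + 1/lambda_ir (all in the same units).
# The pump wavelength used below is an assumption, not from the paper.
def upconverted_wavelength_um(pump_um, ir_um):
    return 1.0 / (1.0 / pump_um + 1.0 / ir_um)

print(upconverted_wavelength_um(0.8, 10.0))  # a 10 um photon with a 0.8 um
                                             # pump lands near 0.74 um
```

This is why even very long-wavelength (THz, far-IR) photons end up only slightly blueshifted from the pump: the output always sits in the VIS-NIR band where silicon single-photon detectors work.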

Thanks to TG for the link!

Saturday, September 19, 2020

Samsung Registers ISOCELL Vizion Trademark for ToF and DVS Products

LetsGoDigital noticed that Samsung has registered the ISOCELL Vizion trademark:

"Samsung ISOCELL Vizion trademark description: Time-of-flight (TOF) optical sensors for smartphones, facial recognition system comprising TOF optical sensors, 3D modeling and 3D measurement of objects with TOF sensors; Dynamic Vision Sensor (DVS) in the nature of motion sensors for smartphones; proximity detection sensors and Dynamic Vision Sensor (DVS) for detecting the shape, proximity, movement, color and behavior of humans."

Friday, September 18, 2020

PRNU Pattern is Not Unique Anymore

University of Florence, FORLAB, and AMPED Software, Italy, publish an interesting arxiv.org paper "A leak in PRNU based source identification? Questioning fingerprint uniqueness" by Massimo Iuliani, Marco Fontani, and Alessandro Piva.

"Photo Response Non Uniformity (PRNU) is considered the most effective trace for the image source attribution task. Its uniqueness ensures that the sensor pattern noises extracted from different cameras are strongly uncorrelated, even when they belong to the same camera model. However, with the advent of computational photography, most recent devices of the same model start exposing correlated patterns thus introducing the real chance of erroneous image source attribution. In this paper, after highlighting the issue under a controlled environment, we perform a large testing campaign on Flickr images to determine how widespread the issue is and which is the plausible cause. To this aim, we tested over 240000 image pairs from 54 recent smartphone models comprising the most relevant brands. Experiments show that many Samsung, Xiaomi and Huawei devices are strongly affected by this issue. Although the primary cause of high false alarm rates cannot be directly related to specific camera models, firmware nor image contents, it is evident that the effectiveness of PRNU-based source identification on the most recent devices must be reconsidered in light of these results. Therefore, this paper is to be intended as a call to action for the scientific community rather than a complete treatment of the subject."
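At its core, PRNU source attribution is a correlation test between an image's noise residual and a camera's reference fingerprint; the paper's finding is that this test now produces false matches across distinct devices of the same model. A minimal sketch of the test (real pipelines extract residuals with wavelet denoising and use PCE statistics rather than plain correlation, and the threshold here is arbitrary):

```python
import numpy as np

# Minimal sketch of PRNU-based source attribution (not the paper's pipeline):
# normalized cross-correlation between an image's noise residual and a
# camera's reference fingerprint; a high value suggests the same sensor.
def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute(residual, fingerprint, threshold=0.05):
    # threshold is an illustrative value, not a calibrated decision statistic
    return ncc(residual, fingerprint) > threshold
```

The paper's observation is precisely that computational photography makes fingerprints of different same-model devices correlated, so residuals from camera B can exceed the threshold against camera A's fingerprint, breaking the uniqueness assumption.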

Melexis Reports 1M Automotive ToF Sensors Shipped

Melexis publishes an article on its history in automotive ToF imaging, starting from its cooperation with the Free University of Brussels (VUB) and continuing with the VUB spin-off Softkinetic, which was acquired by Sony and renamed Sony Depthsensing Solutions.

So far, Melexis has shipped 1M automotive ToF sensors:

"At Melexis, we are proud of having designed the first automotive qualified ToF sensor IC with our first generation MLX75023. This proves our capability to not only design but also produce the new technology in line with stringent automotive quality standards. It is therefore with great pleasure that we are the first to have reached in 2019 the impressive milestone of having more than 1 million ToF image sensor ICs on the road."


Melexis also announces a QVGA ToF sensor, the MLX75027, and publishes a "ToF Basics" tutorial.

Thursday, September 17, 2020

Yole on AI and Mass Surveillance

Yole Developpement report "Artificial intelligence and mass surveillance are pushing the camera and computing market to $38B in revenue by 2025" says: 

"Over the past five years, China has invested heavily in building its mass surveillance system. The development of this “Skynet” network has greatly benefited native Chinese companies HiKVision and Dahua Technologies. In fact, they quickly became #1 and #2, respectively."

Melexis ToF Sensor Presentation

Melexis publishes its Autosens September 2020 presentation on driver and in-cabin monitoring with its new ToF QVGA sensor MLX75026: