Thursday, January 26, 2023

Towards a Colorimetric Camera - Talk from EI 2023 Symposium

Tripurari Singh and Mritunjay Singh of Image Algorithmics presented a talk titled "Towards a Colorimetric Camera" at the recent Electronic Imaging 2023 symposium. They show that for low-light color imaging, a long/medium/short (LMS) color filter that more closely mimics human color vision outperforms the traditional RGB Bayer pattern.

Wednesday, January 25, 2023

Jabil Inc. collaboration with ams OSRAM and Artilux

Link: https://www.jabil.com/news/swir-3d-camera-prototype.html

ST. PETERSBURG, FL – January 18, 2023 – Jabil Inc. (NYSE: JBL), a leading manufacturing solutions provider, today announced that its renowned optical design center in Jena, Germany, is currently demonstrating a prototype of a next-generation 3D camera with the ability to seamlessly operate in both indoor and outdoor environments up to a range of 20 meters. Jabil, ams OSRAM and Artilux combined their proprietary technologies in 3D sensing architecture design, semiconductor lasers and germanium-silicon (GeSi) sensor arrays based on a scalable complementary metal-oxide-semiconductor (CMOS) technology platform, respectively, to demonstrate a 3D camera that operates in the short-wavelength infrared (SWIR), at 1130 nanometers.

Steep growth in automation is driving performance improvements for robotic and mobile automation platforms in industrial environments. The industrial robot market is forecast to grow at over 11% compound annual growth rate to over $35 billion by 2029. The 3D sensor data from these innovative depth cameras will improve automated functions such as obstacle identification, collision avoidance, localization and route planning — key applications necessary for autonomous platforms. 

“For too long, industry has accepted 3D sensing solutions limiting the operation of their material handling platforms to environments not impacted by the sun. The new SWIR camera provides a glimpse of the unbounded future of 3D sensing where sunlight no longer impinges on the utility of autonomous platforms,” said Ian Blasch, senior director of business development for Jabil’s Optics division. “This new generation of 3D cameras will not only change the expected industry standard for mid-range ambient light tolerance but will usher in a new paradigm of sensors capable of working across all lighting environments.”

“1130nm is the first of its kind SWIR VCSEL technology from ams OSRAM, offering enhanced eye safety, outstanding performance in high sunlight environments, and skin detection capability, which is of critical importance for collision avoidance when, for example humans and industrial robots interact,” says Dr. Joerg Strauss, senior vice president and general manager at ams OSRAM for business line visualization and sensing. “We are excited to partner with Jabil to make the next-generation 3D sensing and machine vision solutions a reality.”

Dr. Stanley Yeh, vice president of platform at Artilux, concurs, “We are glad to work with Jabil and ams OSRAM to deliver the first mid-range SWIR 3D camera with the use of near infrared (NIR)-like components such as CMOS-based sensor and VCSEL. It's a significant step toward the mass-adoption of SWIR spectrum sensing and being the leader of CMOS SWIR 2D/3D imaging technology.”
For nearly two decades, Jabil’s optical division has been recognized by leading technology companies as the premier service provider for advanced optical design, industrialization and manufacturing. Our Optics division has more than 170 employees across four locations. Jabil’s optics designers, engineers and researchers specialize in solving complex optical problems for its customers in 3D sensing, augmented and virtual reality, action camera, automotive, industrial and healthcare markets. Additionally, Jabil customers leverage expertise in product design, process development, testing, in-house active alignment (from Kasalis, a technology division of Jabil), supply chain management and manufacturing.

More information and test data can be found at www.jabil.com/3DCamera



Tuesday, January 24, 2023

CIS market news 2022/2023

A recent Will Semiconductor report that includes some news about Omnivision (Howell): https://tech.ifeng.com/c/8MXij5vF1lP

It is worth noting that in December 2022, Howell Group, a subsidiary of Will Semiconductor (Weir), issued an internal letter announcing cost-control measures with the goal of reducing costs by 20% in 2023.

In the internal letter, Howell Group said, "The current market situation is very serious. We are facing great market challenges, and prices, inventory and supply chains are under great pressure. Therefore, we must carry out cost control, with the goal of reducing costs by 20% in 2023."

To achieve this goal, Howell Group also announced: a freeze on all hiring, with departing employees not replaced; salary cuts for senior managers; a work stoppage during the Spring Festival in all regions of the group; discontinuation of quarterly bonuses and all other forms of bonuses; strict control of expenditure; and reduced NRE spending on some R&D projects.

Howell Group said, "These measures are temporary, and we believe that business-level improvements will occur in the second half of next year, because we have a new product layout in the consumer market, while automotive and emerging markets are rising steadily. We will reassess the situation at the end of the first quarter of next year (2023)."


More related news from Counterpoint Research: https://www.counterpointresearch.com/global-smartphone-cis-market-revenues-shipments-dip-2022/

Global Smartphone CIS Market Revenues, Shipments Dip in 2022

  • In 2022, global smartphone image sensor shipments were estimated to drop by mid-teens YoY.
  • Global smartphone image sensor revenues were down around 6% YoY during the year.
  • Sony was the only major vendor to achieve YoY revenue growth, thanks to Apple’s camera upgrades.
  • Both Sony and Samsung managed to improve their product mix.

Compare Omnivision’s sales with those of its peers in this graphic:



Thursday, January 19, 2023

European Defense Fund project for next gen IR sensors

From Wiley industry news: https://www.wileyindustrynews.com/en/news/eu19m-project-set-enable-next-generation-ir-sensors

€19M project set to enable next-generation IR sensors

13.01.2023 - A four-year defense project, led by Lynred, is the first to see EU infrared product manufacturers jointly acquire access to advanced CMOS technology to design new infrared sensors.
A ten-member consortium aims to gain European sovereignty in producing high-performance IR sensors for future defense systems. Lynred, a French provider of high-quality infrared detectors for the aerospace, defense and commercial markets, leads HEROIC, a European Defence Fund project aimed at developing highly advanced electronic components for next-generation infrared (IR) sensors, while consolidating the supply chain of these state-of-the-art products in Europe.

High Efficiency Read-Out Integrated Circuit (HEROIC) is a four-year project starting January 2023 with a budget in the order of 19 million euros, of which the European Defence Fund is contributing €18M. 
 
HEROIC is the first collaboration of its kind to bring together European IR manufacturers, several of whom are competitors, to strategically tackle a common problem. The project’s main objectives are to increase access to, and dexterity in, using a new European-derived advanced CMOS technology that offers key capabilities in developing the next generations of high-performance infrared sensors – these will feature smaller pixels and advanced functions for defense applications. One overall aim is to enable Europe to gain technological sovereignty in producing high-performance IR sensors.
 
Consortium members include three IR manufacturers: AIM (DE), project leader Lynred (FR), and Xenics (BE); four system integrators: Indra (ES), Miltech Hellas (GR), Kongsberg (NO) and PCO SA (PL); a component provider: Ideas, an IC developer (NO), as well as two research institutions CEA-Leti (FR), and the University of Seville (ES).
 
“Lynred is proud to collaborate on this game-changing project aimed at securing European industrial sovereignty in the design and supply of IR sensors,” said David Billon-Lanfrey, chief strategy officer at Lynred. “This project represents the first phase for European IR manufacturers to gain access to a superior CMOS technology compatible with various IR detectors and 2D/3D architectures, and equally importantly, make it available within a robust EU supply chain.”

Acquiring the latest advanced CMOS technology with a node that no consortium partner has had an opportunity to access is pivotal to the sustainable design of a next-generation read-out integrated circuit (ROIC). Its commonly specified platform will allow each consortium partner to pursue its respective technological roadmap and more effectively meet the higher performance expectations of post-2030 defense systems.

“The HEROIC project will enable AIM to develop advanced ROICs based on European silicon CMOS technology, as an important building block in its next-generation IR sensors,” said Rainer Breiter, vice-president, IR module programs, at AIM. “We are looking forward to working together with our partners in this common approach to access the latest advanced CMOS technology.”

IR sensors are used to detect, recognize and identify objects or targets during the night and in adverse weather and operational conditions. They are at the center of multiple defense applications: thermal imagers, surveillance systems, targeting systems and observation satellites.

Next-generation IR systems will need to exhibit longer detection, recognition and identification ranges, and offer larger fields of view and faster frame rates. This will require higher resolution formats encompassing further reductions in pixel pitch sizes down from today’s standard 15 μm and 10 μm to 7.5 μm and below. This will need to be obtained without increasing the small footprint of the IR sensor, thus maintaining reasonable system costs and mechanical/electrical interfaces. These requirements make the qualification of a new CMOS technology mandatory to achieving higher performance at the IR sensor level.

"Xenics sees the HEROIC project as a cornerstone for its strategy of SWIR development for defense applications,” said Paul Ryckaert, CEO of Xenics. “Thanks to this project, the consortium partners will shape the future of European CMOS developments and technologies for IR sensors.”


Tuesday, January 17, 2023

Videos du Jour Jan 17, 2023: Flexible image sensors, Samsung ISOCELL, Hamamatsu



Flexible Image Sensor Fabrication Based on NIPIN Phototransistors

Hyun Myung Kim, Gil Ju Lee, Min Seok Kim, Young Min Song
Gwangju Institute of Science and Technology, School of Electrical Engineering and Computer Science;

We present a detailed method to fabricate a deformable lateral NIPIN phototransistor array for curved image sensors. The phototransistor array, with an open mesh form composed of thin silicon islands and stretchable metal interconnectors, provides flexibility and stretchability. The electrical properties of the fabricated phototransistor are characterized using a parameter analyzer.





ISOCELL Image Sensor: Ultra-fine Pixel Technologies | Samsung

ISOCELL has evolved to bring ultra-high resolution to our mobile photography. 
Learn more about Samsung's ultra-fine pixel technologies.
http://smsng.co/Pixel_Technologies



Photon counting imaging using Hamamatsu's scientific imaging cameras - TechBites Series

With our new photon number resolving mode, the ORCA-Quest enables photon counting resolution across a full 9.4 megapixel image. See the camera in action and learn how photon number imaging pushes quantitative imaging to a new frontier.


Friday, January 13, 2023

Advantages of a one-bit quanta image sensor

In an arXiv preprint, Prof. Stanley Chan of Purdue University writes:

The one-bit quanta image sensor (QIS) is a photon-counting device that captures image intensities using binary bits. Assuming that the analog voltage generated at the floating diffusion of the photodiode follows a Poisson-Gaussian distribution, the sensor produces either a “1” if the voltage is above a certain threshold or “0” if it is below the threshold. The concept of this binary sensor has been proposed for more than a decade and physical devices have been built to realize the concept. However, what benefits does a one-bit QIS offer compared to a conventional multi-bit CMOS image sensor? Besides the known empirical results, are there theoretical proofs to support these findings? The goal of this paper is to provide new theoretical support from a signal processing perspective. In particular, it is theoretically found that the sensor can offer three benefits: (1) Low-light: One-bit QIS performs better at low-light because it has a low read noise and its one-bit quantization can produce an error-free measurement. However, this requires the exposure time to be appropriately configured. (2) Frame rate: One-bit sensors can operate at a much higher speed because a response is generated as soon as a photon is detected. However, in the presence of read noise, there exists an optimal frame rate beyond which the performance will degrade. A closed-form expression of the optimal frame rate is derived. (3) Dynamic range: One-bit QIS offers a higher dynamic range. The benefit is brought by two complementary characteristics of the sensor: nonlinearity and exposure bracketing. The decoupling of the two factors is theoretically proved, and closed-form expressions are derived.

Pre-print available here: https://arxiv.org/pdf/2208.10350.pdf


The paper argues that, if implemented correctly, there are three main benefits:

1. Better SNR in low light

2. Higher speed (frame rate)

3. Better dynamic range

This paper has many interesting technical results and insights. It provides a balanced view in terms of the regimes where single-photon quanta image sensors provide benefits over conventional image sensors.
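
As a rough, back-of-the-envelope illustration of the sensing model described in the abstract (and not code from the paper itself), the short Python sketch below simulates a one-bit QIS pixel: photon counts are drawn from a Poisson distribution, Gaussian read noise is added, the result is thresholded to a single bit, and the underlying exposure is then recovered from the fraction of "1" frames. The parameter values (read noise, threshold, flux) are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

def simulate_one_bit_qis(flux, n_frames, read_noise=0.25, threshold=0.5):
    """Simulate binary frames from a single one-bit QIS pixel.

    flux       : mean photons per binary frame (exposure)
    n_frames   : number of binary frames captured
    read_noise : std. dev. of Gaussian read noise (electrons)
    threshold  : comparator threshold (electrons)
    """
    photons = rng.poisson(flux, size=n_frames)                  # Poisson photon arrivals
    voltage = photons + rng.normal(0.0, read_noise, n_frames)   # Poisson-Gaussian signal
    return (voltage > threshold).astype(np.uint8)               # one-bit quantization

def estimate_flux(bits):
    """Invert the bit density to recover the flux, neglecting read noise:
    P(bit = 1) ~ 1 - exp(-flux)  =>  flux ~ -log(1 - mean(bits))."""
    p1 = np.clip(bits.mean(), 1e-6, 1 - 1e-6)
    return -np.log(1.0 - p1)

true_flux = 0.3                                  # photons per frame (low light)
bits = simulate_one_bit_qis(true_flux, n_frames=5000)
print("bit density:", bits.mean(), "estimated flux:", estimate_flux(bits))

Varying the per-frame exposure and read noise in this toy model hints at the low-light and dynamic-range trade-offs that the paper analyzes rigorously.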

Wednesday, January 11, 2023

Startup Funding News from Semiconductor Engineering

Link: https://semiengineering.com/startup-funding-december-2022/#Sensors

Fortsense received hundreds of millions of yuan (CNY 100.0M is ~$14.3M) in Series C1 financing led by Chengdu Science and Technology Venture Capital, joined by BAIC Capital, Huiyou Investment, Shanghai International Group, Shengzhong Investment, and others. The company develops optical sensing chips, including 3D structured light chips for under-screen fingerprint sensors and time-of-flight (ToF) sensors for facial recognition in mobile devices. Funding will be used for development of single-photon avalanche diode (SPAD) lidar chips for automotive applications. Founded in 2017, it is based in Shenzhen, China.

PolarisIC raised nearly CNY 100.0M (~$14.3M) in pre-Series A financing from Dami Ventures, Innomed Capital, Legend Capital, Nanshan SEI Investment, and Planck Venture Capital. PolarisIC makes single-photon avalanche diode (SPAD) direct time-of-flight (dToF) sensors and photon counting low-light imaging chips for mobile phones, robotic vacuums, drones, industrial sensors, and AGV. Funds will be used for mass production and development of 3D stacking technology and back-illuminated SPAD. Based in Shenzhen, China, it was founded in 2021.

VicoreTek received nearly CNY 100.0M (~$14.3M) in strategic financing led by ASR Microelectronics and joined by Bondshine Capital. The startup develops image processing and sensor fusion chips, AI algorithms, and modules for object avoidance in sweeping robots. It plans to expand to other types of service robots and AR/MR, followed by the automotive market. Funds will be used for R&D and mass production. Founded in 2019, it is based in Nanjing, China.

Greenteg drew CHF 10.0M (~$10.8M) in funding from existing and new investors. The company makes heat flux sensors for applications ranging from photonics, building insulation, and battery characterization to core body temperature measurement in the form factor of wearables. Funds will be used for R&D into medical applications in the wearable market and to scale production capacity. Founded in 2009 as a spin off from ETH Zurich, it is based in Rümlang, Switzerland.

Phlux Technology raised £4.0M (~$4.9M) in seed funding led by Octopus Ventures and joined by Northern Gritstone, Foresight Williams Technology Funds, and QUBIS Innovation Fund. Phlux develops antimony-based infrared sensors for lidar systems. The startup claims its architecture is 10x more sensitive and offers 50% more range than equivalent sensors. It currently offers a single-element sensor that is retrofittable into existing lidar systems and plans to build an integrated subsystem and array modules for a high-performance sensor toolkit. Other applications for the infrared sensors include satellite communications internet, fiber telecoms, autonomous vehicles, gas sensing, and quantum communications. Phlux was also recently awarded an Innovate UK project with QLM Technology to develop a lidar system for monitoring greenhouse gas emissions. A spin-out of Sheffield University founded in 2020, it is based in Sheffield, UK.

Microparity raised tens of millions of yuan (CNY 10.0M is ~$1.4M) in pre-Series A+ funding from Summitview Capital. Microparity develops high-performance direct time-of-flight (dToF) single photon detection devices, including single-photon avalanche diodes (SPAD), silicon photomultipliers (SiPM), and SiPM readout ASICs for consumer electronics, lidar, medical imaging, industrial inspection, and other applications. Founded in 2017, it is based in Hangzhou, China.

Yegrand Smart Science & Technology raised pre-Series A financing from Zhejiang Venture Capital. Yegrand Smart develops photon pickup and Doppler lidar equipment for measuring vibration. Founded in 2021, it is based in Hangzhou, China.

Monday, January 09, 2023

Electronic Imaging 2023 Symposium (Jan 15-19, 2023)

The symposium has many co-located conferences with talks and papers of interest to the image sensors community. Short courses on 3D imaging, image sensors and camera calibration, image quality quantification, and ML/AI for imaging and computer vision are also being offered.

Please visit the symposium website at https://www.imaging.org/site/IST/IST/Conferences/EI/EI2023/EI2023.aspx for the full program. Some interesting papers and talks are listed below.

Evaluation of image quality metrics designed for DRI tasks with automotive cameras, Valentine Klein, Yiqi LI, Claudio Greco, Laurent Chanas, and Frédéric Guichard, DXOMARK (France)

Driving assistance is increasingly used in new car models. Most driving assistance systems are based on automotive cameras and computer vision. Computer Vision, regardless of the underlying algorithms and technology, requires the images to have good image quality, defined according to the task. This notion of good image quality is still to be defined in the case of computer vision as it has very different criteria than human vision: humans have a better contrast detection ability than image chains. The aim of this article is to compare three different metrics designed for detection of objects with computer vision: the Contrast Detection Probability (CDP) [1, 2, 3, 4], the Contrast Signal to Noise Ratio (CSNR) [5] and the Frequency of Correct Resolution (FCR) [6]. For this purpose, the computer vision task of reading the characters on a license plate will be used as a benchmark. The objective is to check the correlation between the objective metric and the ability of a neural network to perform this task. Thus, a protocol to test these metrics and compare them to the output of the neural network has been designed and the pros and cons of each of these three metrics have been noted.
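
For readers unfamiliar with these metrics, the sketch below computes a simplified contrast-to-noise figure in the spirit of CSNR on two uniform patches (for example, plate background versus character strokes). The exact definitions of CDP, CSNR and FCR follow the cited references and differ in detail from this toy calculation, and the patch statistics here are synthetic.

import numpy as np

def contrast_snr(patch_a, patch_b):
    """Simplified contrast-to-noise ratio between two uniform patches
    (a toy stand-in for CSNR; not the exact definition used in the paper)."""
    contrast = abs(patch_a.mean() - patch_b.mean())                      # signal difference
    noise = np.sqrt(0.5 * (patch_a.var(ddof=1) + patch_b.var(ddof=1)))   # pooled noise std
    return contrast / noise

rng = np.random.default_rng(1)
dark = rng.normal(40, 5, size=1000)    # darker plate background (synthetic)
light = rng.normal(90, 5, size=1000)   # brighter character strokes (synthetic)
print("contrast-to-noise ~", contrast_snr(dark, light))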

 

Designing scenes to quantify the performance of automotive perception systems, Zhenyi Liu1, Devesh Shah2, Alireza Rahimpour2, Joyce Farrell1, and Brian Wandell1; 1Stanford University and 2Ford Motor Company (United States)

We implemented an end-to-end simulation for perception systems, based on cameras, that are used in automotive applications. The open-source software creates complex driving scenes and simulates cameras that acquire images of these scenes. The camera images are then used by a neural network in the perception system to identify the locations of scene objects, providing the results as input to the decision system. In this paper, we design collections of test scenes that can be used to quantify the perception system’s performance under a range of (a) environmental conditions (object distance, occlusion ratio, lighting levels), and (b) camera parameters (pixel size, lens type, color filter array). We are designing scene collections to analyze performance for detecting vehicles, traffic signs and vulnerable road users in a range of environmental conditions and for a range of camera parameters. With experience, such scene collections may serve a role similar to that of standardized test targets that are used to quantify camera image quality (e.g., acuity, color).

 A self-powered asynchronous image sensor with independent in-pixel harvesting and sensing operations, Ruben Gomez-Merchan, Juan Antonio Leñero-Bardallo, and Ángel Rodríguez-Vázquez, University of Seville (Spain)

A new self-powered asynchronous sensor with a novel pixel architecture is presented. Pixels are autonomous and can harvest or sense energy independently. During the image acquisition, pixels toggle to a harvesting operation mode once they have sensed their local illumination level. With the proposed pixel architecture, most illuminated pixels provide an early contribution to power the sensor, while low illuminated ones spend more time sensing their local illumination. Thus, the equivalent frame rate is higher than that offered by conventional self-powered sensors that harvest and sense illumination in independent phases. The proposed sensor uses a Time-to-First-Spike readout that allows trading between image quality and data and bandwidth consumption. The sensor has HDR operation with a dynamic range of 80 dB. Pixel power consumption is only 70 pW. In the article, we describe the sensor’s and pixel’s architectures in detail. Experimental results are provided and discussed. Sensor specifications are benchmarked against the state of the art.
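
A toy model may help illustrate why a Time-to-First-Spike readout naturally yields a high dynamic range: if every pixel fires once it has integrated a fixed amount of charge, brighter pixels spike early and dimmer ones late, and intensity can be reconstructed as the inverse of the spike time. The sketch below is a generic illustration of the readout principle, not the circuit described in the paper; the threshold and spike times are made up.

import numpy as np

def ttfs_to_intensity(spike_times, q_threshold=1.0):
    """Reconstruct pixel intensity from a time-to-first-spike readout.

    spike_times : per-pixel first-spike times (s)
    q_threshold : integrated charge that triggers a spike (arbitrary units)
    Intensity is proportional to q_threshold / spike_time."""
    return q_threshold / np.asarray(spike_times, dtype=float)

# Spike times spanning four decades map to a ~80 dB intensity range
times = np.array([1e-4, 1e-3, 1e-2, 1e-1])
print(ttfs_to_intensity(times))   # [10000.  1000.   100.    10.]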

KEYNOTE: Deep optics: Learning cameras and optical computing systems, Gordon Wetzstein, Stanford University (United States)

Neural networks excel at a wide variety of imaging and perception tasks, but their high performance also comes at a high computational cost and their success on edge devices is often limited. In this talk, we explore hybrid optical-electronic strategies to computational imaging that outsource parts of the algorithm into the optical domain or into emerging in-pixel processing capabilities. Using such a co-design of optics, electronics, and image processing, we can learn application-domain-specific cameras using modern artificial intelligence techniques or compute parts of a convolutional neural network in optics with little to no computational overhead. For the session: Processing at the Edge (joint with ISS).

Computational photography on a smartphone, Michael Polley, Samsung Research America (United States)

Many of the recent advances in smartphone camera quality and features can be attributed to computational photography. However, the increased computational requirements must be balanced with cost, power, and other practical concerns. In this talk, we look at the embedded signal processing currently applied, including new AI-based solutions in the signal chain. By taking advantage of increasing computational performances of traditional processor cores, and additionally tapping into the exponentially increasing capabilities of the new compute engines such as neural processing units, we are able to deliver on-device computational imaging. For the session: Processing at the Edge (joint with ISS).

Analog in-memory computing with multilevel RRAM for edge electronic imaging application, Glenn Ge, TetraMem Inc. (United States)

Conventional digital processors based on the von Neumann architecture have an intrinsic bottleneck in data transfer between processing and memory units. This constraint increasingly limits performance as data sets continue to grow exponentially for the various applications, especially for the Electronic Imaging Applications at the edge, for instance, the AR/VR wearable and automotive applications. TetraMem addresses this issue by delivering state-of-the-art in-memory computing using our proprietary non-volatile computing devices. This talk will discuss how TetraMem’s solution brings several orders of magnitude improvement in computing throughput and energy efficiency, ideal for those AI fusion sensing applications at the edge. For the session: Processing at the Edge (joint with ISS).

Processing of real time, bursty and high compute iToF data on the edge (Invited), Cyrus Bamji, Microsoft Corporation (United States)

In indirect time of flight (iToF), a depth frame is computed from multiple image captures (often 6-9 captures) which are composed together and processed using nonlinear filters. iToF sensor output bandwidth is high and, inside the camera, special-purpose DSP hardware significantly improves power, cost and the shuffling around of large amounts of data. Usually only a small percentage of depth frames need application-specific processing and the highest-quality depth data, both of which are difficult to compute within the limited hardware resources of the camera. Due to the sporadic nature of these compute requirements, hardware utilization is improved by offloading this bursty compute to outside the camera. Many applications in the industrial and commercial space have a real-time requirement and may even use multiple cameras that need to be synchronized. These real-time requirements, coupled with the high bandwidth from the sensor, make offloading the compute purely into the cloud difficult. Thus, in many cases the compute edge can provide a goldilocks zone for this bursty, high-bandwidth and real-time processing requirement. For the session: Processing at the Edge (joint with ISS).
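
As background for readers new to iToF, the sketch below shows the textbook four-phase demodulation that turns correlation samples into a depth value. The pipeline described in the talk uses more captures (6-9) and nonlinear filtering, so this is only a simplified illustration, and the modulation frequency and sample values are made up.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth_4phase(a0, a90, a180, a270, f_mod=20e6):
    """Textbook 4-phase iToF demodulation (simplified; phase wrapping beyond
    the unambiguous range c / (2 * f_mod) is not handled here).

    a0..a270 : correlation samples at 0/90/180/270 degree phase offsets
    f_mod    : modulation frequency in Hz
    Returns (depth_m, amplitude)."""
    i = a0 - a180                                   # in-phase component
    q = a90 - a270                                  # quadrature component
    phase = np.mod(np.arctan2(q, i), 2 * np.pi)     # round-trip phase delay
    amplitude = 0.5 * np.sqrt(i**2 + q**2)          # modulation amplitude
    depth = C * phase / (4 * np.pi * f_mod)         # phase -> distance
    return depth, amplitude

# Synthetic target at 3 m, 20 MHz modulation, offset 100, amplitude 50
phi = 4 * np.pi * 20e6 * 3.0 / C
samples = [100 + 50 * np.cos(phi - k * np.pi / 2) for k in range(4)]
print(itof_depth_4phase(*samples))   # ~ (3.0, 50.0)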

A 2.2um three-wafer stacked back side illuminated voltage domain global shutter CMOS image sensor, Shimpei Fukuoka, OmniVision (Japan)

Due to the emergence of machine vision, augmented reality (AR), virtual reality (VR), and automotive connectivity in recent years, the necessity for chip miniaturization has grown. These emerging, next-generation applications, which are centered on user experience and comfort, require their constituent chips, devices, and parts to be smaller, lighter, and more accessible. AR/VR applications especially demand smaller components due to their primary application towards wearable technology, in which the user experience would be negatively impacted by large features and bulk. Therefore, chips and devices intended for next-generation consumer applications must be small and modular, to support module miniaturization and promote user comfort. To enable the chip miniaturization required for technological advancement and innovation, we developed a 2.2μm pixel pitch Back Side Illuminated (BSI) Voltage Domain Global Shutter (VDGS) image sensor with three-wafer stacked technology. Each wafer is connected by Stacked Pixel Level Connection (SPLC) and the middle and logic wafers are connected using a Back side Through Silicon Via (BTSV). The separation of the sensing, charge storage, and logic functions to different wafers allows process optimization in each wafer, improving overall chip performance. The peripheral circuit region is reduced by 75% compared to the previous product without degrading image sensor performance. For the session: Processing at the Edge (joint with COIMG).

A lightweight exposure bracketing strategy for HDR imaging without access to camera raw, Jieyu Li1, Ruiwen Zhen2, and Robert L. Stevenson1; 1University of Notre Dame and 2SenseBrain Technology (United States)

A lightweight learning-based exposure bracketing strategy is proposed in this paper for high dynamic range (HDR) imaging without access to camera RAW. Some low-cost, power-efficient cameras, such as webcams, video surveillance cameras, sport cameras, mid-tier cellphone cameras, and navigation cameras on robots, can only provide access to 8-bit low dynamic range (LDR) images. Exposure fusion is a classical approach to capture HDR scenes by fusing images taken with different exposures into an 8-bit tone-mapped HDR image. A key question is what the optimal set of exposure settings is to cover the scene dynamic range and achieve a desirable tone. The proposed lightweight neural network predicts these exposure settings for a 3-shot exposure bracketing, given the input irradiance information from 1) the histograms of an auto-exposure LDR preview image, and 2) the maximum and minimum levels of the scene irradiance. Without the processing of the preview image streams, and the circuitous route of first estimating the scene HDR irradiance and then tone-mapping to 8-bit images, the proposed method gives a more practical HDR enhancement for real-time and on-device applications. Experiments on a number of challenging images reveal the advantages of our method in comparison with other state-of-the-art methods qualitatively and quantitatively.
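
The classical exposure-fusion baseline mentioned in the abstract is available in OpenCV as Mertens fusion; the minimal sketch below fuses a hypothetical 3-shot LDR bracket (the file names are placeholders). The paper's actual contribution, predicting which three exposures to capture in the first place, is not reproduced here.

import cv2
import numpy as np

def fuse_exposures(paths):
    """Classical Mertens exposure fusion of an 8-bit LDR bracket
    (the baseline the paper builds on, not its learning-based bracketing)."""
    images = [cv2.imread(p) for p in paths]           # 8-bit BGR frames
    fused = cv2.createMergeMertens().process(images)  # float32 result, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# Hypothetical 3-shot bracket at -2 / 0 / +2 EV (placeholder file names)
result = fuse_exposures(["ev_minus2.jpg", "ev_0.jpg", "ev_plus2.jpg"])
cv2.imwrite("fused.jpg", result)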

Monday, January 02, 2023

ESPROS LiDAR Tech Day Jan 30, 2023

Information and registration: https://www.espros.com/tof-lidar-technology-day-2023/

The TOF & LiDAR Technology Day, powered by ESPROS, is aimed at giving engineers and designers a valuable, hands-on, informative dive into the huge potential of TOF and LiDAR applications and ecosystems. Participants are assured of an eye-opening immersion into the ever-expanding world of Time-of-Flight and LiDAR.

Expert speakers will be on hand to guide and inform everyone taking part: Danny Kent, PhD, Co-Founder & President, Mechaspin; Beat De Coi, CEO & Founder of ESPROS Photonics AG; Len Cech, Executive Director, Safety Innovations at Joyson Safety Systems; and Kurt Brendley, COO & Co-Founder, PreAct.
The TOF & LIDAR Technology Day takes place on January 30th, 2023 in San Carlos, California, USA.




Friday, December 30, 2022

News: Xenics acquired by Photonis; Omnivision to cut costs

Xenics acquired by Photonis

https://www.imveurope.com/news/xenics-bought-photonis-infrared-tech

Infrared imager maker Xenics has been acquired by Photonis, a manufacturer of electro-optic components.

Photonis’ components are used in the detection and amplification of ions, electrons and photons for integration into a variety of applications such as night vision optics, digital cameras, mass spectrometry, physics research, space exploration and many others. The addition of Xenics will bring high-end imaging products to Photonis’ B2B customers.

Jérôme Cerisier, CEO of Photonis, said: “We are thrilled to welcome Paul Ryckaert and the whole Xenics team in Photonis Group. With this acquisition, we are aiming to create a European integrated leader in advanced imaging in high-end markets. We will together combine our forces to strengthen our position in the infrared imaging market.”

Xenics employs 65 people across the world and is headquartered in Leuven, Belgium.
Paul Ryckaert, CEO of Xenics, said: “By combining its strengths with the ones of Photonis Group, Xenics will benefit from Photonis expertise and international footprint which will allow us to accelerate our growth. It is a real opportunity to boost our commercial, product development and manufacturing competences and bring even more added value to our existing and future customers.” 

[Post title has been corrected as of January 8. Thanks to the commenters for pointing it out. Apologies for the error. --AI]

Wednesday, December 28, 2022

Videos of the day [TinyML and WACV]

Event-based sensing and computing for efficient edge artificial intelligence and TinyML applications
Federico CORRADI, Senior Neuromorphic Researcher, IMEC

The advent of neuro-inspired computing represents a paradigm shift for edge Artificial Intelligence (AI) and TinyML applications. Neurocomputing principles enable the development of neuromorphic systems with strict energy and cost reduction constraints for signal processing applications at the edge. In these applications, the system needs to accurately respond to the data sensed in real-time, with low power, directly in the physical world, and without resorting to cloud-based computing resources.
In this talk, I will introduce key concepts underpinning our research: on-demand computing, sparsity, time-series processing, event-based sensory fusion, and learning. I will then showcase some examples of a new sensing and computing hardware generation that employs these neuro-inspired fundamental principles for achieving efficient and accurate TinyML applications. Specifically, I will present novel computer architectures and event-based sensing systems that employ spiking neural networks with specialized analog and digital circuits. These systems use an entirely different model of computation than our standard computers. Instead of relying upon software stored in memory and fast central processing units, they exploit real-time physical interactions among neurons and synapses and communicate using binary pulses (i.e., spikes). Furthermore, unlike software models, our specialized hardware circuits consume low power and naturally perform on-demand computing only when input stimuli are present. These advancements offer a route toward TinyML systems composed of neuromorphic computing devices for real-world applications.
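
To make the "binary pulses" idea concrete, here is a minimal leaky integrate-and-fire neuron in Python: the membrane potential decays over time, incoming spikes add charge, and an output spike is emitted (and the potential reset) only when a threshold is crossed, so computation happens only when input events arrive. This is a generic textbook illustration with made-up parameters, not IMEC's specific hardware or circuits.

import numpy as np

def lif_neuron(input_spikes, tau=20.0, v_thresh=1.0, w=0.3, dt=1.0):
    """Minimal leaky integrate-and-fire neuron driven by a binary spike train.

    input_spikes : array of 0/1 input events per time step
    tau          : membrane time constant (ms)
    v_thresh     : firing threshold
    w            : synaptic weight added per input spike
    dt           : time step (ms)"""
    v, out = 0.0, []
    decay = np.exp(-dt / tau)
    for s in input_spikes:
        v = v * decay + w * s        # leak, then integrate the incoming spike
        if v >= v_thresh:            # threshold crossing -> emit output spike
            out.append(1)
            v = 0.0                  # reset membrane potential
        else:
            out.append(0)
    return np.array(out)

spikes_in = (np.random.default_rng(2).random(50) < 0.3).astype(int)  # random input events
print(lif_neuron(spikes_in))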