Friday, February 13, 2026

CIS startup MetaSilicon raises over $43m

https://www.startupresearcher.com/news/metasilicon-secures-over-usd43-2-million-in-series-a-funding

MetaSilicon, a designer of high-dynamic CMOS image sensors, has successfully closed its A+ financing round, securing over $43.2 million. The funding, led by a consortium of prominent investors, is earmarked for accelerating research and development efforts. This strategic capital infusion will bolster the company's dual-track strategy targeting both the automotive and consumer electronics markets.

Strategic Investment and Market Confidence
The round was jointly led by Ceyuan Capital, Wuxi Industrial Investment, and the FAW Hongqi Private Equity Fund, signaling strong confidence in MetaSilicon's vision. A diverse group of new investors, including Innovation Works and CSC Financial, also participated in the financing. Existing shareholder GRC SinoGreen demonstrated continued support by increasing its investment, underscoring the company's promising trajectory.

Rapid Growth and Commercial Success
Since its inception, MetaSilicon has demonstrated remarkable growth, with its revenue soaring from just a few million yuan in 2023 to nearly $28.8 million in 2025. This financial achievement is complemented by significant operational scale, as the company has shipped over 75 million chips to date. This rapid expansion has established MetaSilicon as one of the fastest-growing image sensor design firms in the industry.

Dual-Track Market Domination
The company's success is built on a dual-track strategy that effectively serves two major technology sectors. In consumer electronics, MetaSilicon has delivered nearly 100 projects for industry giants such as Samsung, Xiaomi, and OPPO. This broad adoption by leading brands highlights the quality and competitiveness of its sensor technology in a highly demanding market.
Simultaneously, MetaSilicon has made significant inroads into the smart automotive industry, a key area for future growth. Its 1.3-megapixel and 3-megapixel automotive-grade sensors have passed rigorous validation with over 20 OEMs and Tier 1 suppliers. The company has established deep collaborations, notably with FAW Hongqi, achieving mass production for critical in-vehicle systems.

Advancing Automotive Sensor Technology
These automotive chips are already being integrated into essential applications like Advanced Driver-Assistance Systems (ADAS), in-cabin monitoring, and electronic rearview mirrors. This widespread implementation in production vehicles confirms the reliability and performance of MetaSilicon's technology. The company's ability to secure pre-installation contracts signifies its trusted position within the automotive supply chain.

Looking ahead, MetaSilicon is developing a next-generation 8-megapixel automotive CIS chip to meet the demands of advanced autonomous driving. This high-performance sensor is specifically designed for high-end ADAS, prioritizing superior night vision, high dynamic range, and anti-interference capabilities. The company plans to begin market promotion for this innovative product in 2026, reinforcing its technological leadership.

This successful A+ financing round marks a significant milestone for MetaSilicon, providing the necessary resources to fuel its next phase of innovation. According to founder and chairman Liu Canyi, the capital will be pivotal in deepening R&D investment and enhancing product value for customers. With a proven track record and a clear vision for the future, MetaSilicon is well-positioned to solidify its leadership in the competitive image sensor market. 

Grass Valley needs an Engineer in The Netherlands

Grass Valley Nederland B.V.

Hardware-Sensor Engineer - Breda, Netherlands - Link

Thursday, February 12, 2026

IR sensor tech firm Senseeker acquires Axis Machine

Santa Barbara, California (February 11, 2026) - Senseeker Corp, a leading innovator in digital infrared image sensing technology, can now respond to customer requirements more quickly and thoroughly following the acquisition of Axis Machine (Santa Barbara, California) by Senseeker Machining Company (SMC).

Senseeker Machining Company will continue to support and grow Axis Machine's established customer base, built up over 20+ years of delivering high-quality machined parts. The acquisition will enable Senseeker to further grow its mechanical component lines and to reduce the lead time on machined parts used in Senseeker's programs and its portfolio of industry-standard commercial cryogenic test equipment for testing infrared focal plane arrays.

SMC will continue to operate from the existing machine shop facility, located at 81 David Love Place, just a short walk from the Senseeker Corp headquarters in Santa Barbara. The SMC facility is equipped with several 3-Axis and 4-Axis CNC Machining Centers, Lathes and Multi-Axis Milling Equipment to be able to maintain a high throughput of work. A Mitutoyo DCC-CMM, optical comparator and a full range of precision inspection tools are used for quality control. SMC also runs industry standard CAD and CNC programming software.

“Bringing high-quality machining capability to Senseeker is an important step in the evolution of the company’s unique lateral business model. Senseeker’s cryogenic Sensor Test Unit product lines have grown significantly in recent years and this acquisition will help accelerate delivery times,” said Kenton Veeder, CEO of Senseeker. “Additionally, our mechanical engineering has expanded across our program portfolio and our new machining capability will help us build better mechanical systems through tight coupling between machining and engineering. We are excited to build SMC into a high-quality machining organization for existing shop customers and new sensor community customers alike.”

https://senseeker.com/news/PR-20260211.htm 

Monday, February 09, 2026

Paper on 3D-stacked InGaAs/InP SPAD

In a "hot-off-the-press" paper in Optics Express titled "Room-temperature, 96×96 pixel 3D-stacked InGaAs/InP SPAD sensor with complementary gating for flash LiDAR", Yildirim et al. from EPFL/Fraunhofer/FBH write:

A room-temperature 3D-stacked flash LiDAR sensor is presented for the short-wave infrared (SWIR). The 96×96 InGaAs-InP SPAD array in the top tier is biased by a circuit in the bottom tier that implements complementary cascoded gating at the pixel level to control noise and afterpulsing. The bottom-tier chip is fabricated in a 110-nm CMOS technology. The sensor is tested with a 1550 nm laser operating at 100 µW to 3.1 mW average power. The SPADs are gated with 3 ns pulses with 500 ps skew. Intensity images and depth maps are shown both indoors and outdoors at 10 m in 120 klux background light, with telemetry up to 100 m at better than 2% accuracy.
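As background on how a gated flash-LiDAR pixel turns photon timestamps into range, here is a generic sketch under ideal timing, not the authors' implementation; the gate parameters and jitter used below are illustrative:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_histogram(timestamps_ns, gate_open_ns=0.0, gate_len_ns=3.0):
    """Estimate depth from SPAD photon timestamps (ns) inside the gate window.

    Photons outside the gate are rejected; the signal peak is taken as the
    median of in-gate arrivals to suppress uncorrelated background counts.
    """
    t = np.asarray(timestamps_ns)
    in_gate = t[(t >= gate_open_ns) & (t < gate_open_ns + gate_len_ns)]
    if in_gate.size == 0:
        return None
    tof = np.median(in_gate) * 1e-9   # round-trip time, s
    return C * tof / 2.0              # one-way distance, m

# Simulated return from a target at 10 m (round trip ~66.7 ns) plus timing jitter
rng = np.random.default_rng(0)
true_tof_ns = 2 * 10.0 / C * 1e9
hits = rng.normal(true_tof_ns, 0.1, size=200)
print(depth_from_histogram(hits, gate_open_ns=60.0, gate_len_ns=10.0))  # ~10.0 m
```

Sweeping the gate-open delay across successive laser pulses is one common way such sensors cover a long range with a short gate.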


Proposed complementary optical gating pixel for InGaAs SPADs (a) arranged in a 96×96 array (b) and its timing diagram (c).

Micrograph of the bottom tier (a) and 3D-stacked chip micrograph (b). Illustration of the indium bump bonding scheme (c).
 


Outdoor flash LiDAR images with 120 klux background sunlight. The scene, intensity image, and depth image are shown for 3 m (a-c) and 10 m (d-f).

Friday, February 06, 2026

Passive SPAD simulator and dataset

Preprint: https://arxiv.org/abs/2601.12850

In a preprint titled "Accurate Simulation Pipeline for Passive Single-Photon Imaging" Suonsivu et al. write:

Single-Photon Avalanche Diodes (SPADs) are new and promising imaging sensors. These sensors are sensitive enough to detect individual photons hitting each pixel, with extreme temporal resolution and without readout noise. Thus, SPADs stand out as an optimal choice for low-light imaging. Due to the high price and limited availability of SPAD sensors, the demand for an accurate data simulation pipeline is substantial. Indeed, the scarcity of SPAD datasets hinders the development of SPAD-specific processing algorithms and impedes the training of learning-based solutions. In this paper, we present a comprehensive SPAD simulation pipeline and validate it with multiple experiments using two recent commercial SPAD sensors. Our simulator is used to generate the SPAD-MNIST, a single-photon version of the seminal MNIST dataset, to investigate the effectiveness of convolutional neural network (CNN) classifiers on reconstructed fluxes, even at extremely low light conditions, e.g., 5 mlux. We also assess the performance of classifiers exclusively trained on simulated data on real images acquired from SPAD sensors at different light conditions. The synthetic dataset encompasses different SPAD imaging modalities and is made available for download. 

The dataset download link is here: https://boracchi.faculty.polimi.it/Projects/SPAD-MNIST.html

This is based on work presented at the Synthetic Data for Computer Vision Workshop at the European Conference on Computer Vision in 2024.
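The Bernoulli forward model at the heart of most passive-SPAD simulators can be sketched in a few lines. This is a simplified illustration, not the authors' pipeline; the `pde`, `dcr`, and exposure values are assumed for the example:

```python
import numpy as np

def simulate_spad_frames(flux, n_frames, t_exp=1e-3, pde=0.25, dcr=100.0, rng=None):
    """Simulate binary SPAD frames from a photon-flux map.

    flux : 2D array of photon arrival rates (photons/s per pixel).
    Each frame, a pixel fires with p = 1 - exp(-(pde*flux + dcr) * t_exp),
    the standard Poisson-arrival model with photon-detection efficiency
    `pde` and dark-count rate `dcr` (both illustrative values).
    """
    rng = rng or np.random.default_rng(0)
    p = 1.0 - np.exp(-(pde * flux + dcr) * t_exp)
    return (rng.random((n_frames, *flux.shape)) < p).astype(np.uint8)

def estimate_flux(frames, t_exp=1e-3, pde=0.25, dcr=100.0):
    """Maximum-likelihood flux estimate by inverting the Bernoulli model."""
    mean = frames.mean(axis=0).clip(1e-6, 1 - 1e-6)
    rate = -np.log(1.0 - mean) / t_exp          # total arrival rate per pixel
    return np.maximum(rate - dcr, 0.0) / pde    # remove dark counts, rescale

flux = np.full((8, 8), 5000.0)                   # flat 5 kphotons/s test scene
frames = simulate_spad_frames(flux, n_frames=2000)
print(estimate_flux(frames).mean())              # close to 5000
```

Averaging many binary frames and inverting the saturating exponential is what lets a SPAD recover linear flux despite each pixel reporting only 0 or 1 per frame.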

 

Wednesday, February 04, 2026

Samsung's US fab for iPhone CIS

TheElec reported in August 2025 that Samsung plans to use its Austin, Texas fab to make sensors for future iPhones:

Samsung to form smartphone image sensor line in Austin for Apple

3-layer stacked CMOS image sensor to power iPhone 18 in 2026

The plan ... seems to be a response to tariffs on South Korea-made semiconductors that the Trump Administration plans to impose.

If all goes to plan, it will mark the first time that Samsung is manufacturing CIS in the US.

The CIS is made with wafer-to-wafer hybrid bonding ... requires precise process control and only Sony and Samsung have commercialized it.

Monday, February 02, 2026

Canon's weighted photon counting SPAD array

In June 2025 Canon announced an HDR SPAD sensor that performs weighted counting (as opposed to simply accumulating photon counts): https://global.canon/en/news/2025/20250612.html

Canon develops High Dynamic Range SPAD sensor with potential to detect subjects even in low-light conditions or environments with strong lighting contrasts thanks to unique technology

TOKYO, June 12, 2025—Canon Inc. announced today that it has developed a 2/3" SPAD sensor featuring approximately 2.1 megapixels and a high dynamic range of 156dB. Thanks to a unique circuit technology, it realizes high dynamic range, low power consumption, and the ability to mitigate flickering from LED lights. Canon will continue further technological development and aims to start mass production.

 SPAD sensors employ a principle called photon counting, which detects each photon (light particle) entering a pixel and counts the incident number of photons. This sensor does not take in any noise during the readout process, making it possible to capture a clear image of subjects. Also, it can measure the distance to the subject at high speed with excellent timing precision.

However, due to limitations in processing speed, when the number of incident photons exceeds a certain threshold under high-illuminance conditions, conventional SPAD sensors have difficulty separating individual photons during readout, which causes the acquired image to white out. In addition, such sensors consume a large amount of power, because each photon-counting event independently consumes power.

On the other hand, Canon's newly developed SPAD sensor uses a unique technology called "weighted photon counting." Based on the fact that the rate at which photons reach the sensor correlates with illuminance, this technology measures the time it takes for the first photon to reach the pixel within a certain time frame, then estimates the total number of photons that will arrive at the pixel over that period. As a result, the image does not white out, because the large number of photons is accurately estimated rather than counted individually, allowing the subject to be captured clearly.

While the conventional SPAD sensor actually counts all incident photons one by one, the new method estimates the total number of incident photons within a certain timeframe based on the time it takes for the first incident photon to arrive. As a result, the new sensor achieves a high dynamic range of 156 dB, approximately five times higher than the previous sensor. At the same time, this approach reduces the power consumption per pixel by roughly 75% by reducing the frequency of photon detections. In addition, this technology also mitigates the flickering that occurs when capturing light from LEDs such as traffic lights.
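The estimation principle can be illustrated with a toy model (a sketch, not Canon's circuit): photon arrivals at rate λ are Poisson, so the first arrival time within a gate is exponentially distributed with mean 1/λ, and λ can be recovered from first-arrival times alone, without counting every photon:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_photon_flux_estimate(lam, t_frame=1e-3, n_trials=10_000):
    """Estimate photon rate `lam` (photons/s) from first-arrival times only.

    Arrivals are Poisson with rate lam, so the time to the first photon is
    exponentially distributed with mean 1/lam. Pooling many gate periods,
    the unbiased rate estimate is lam_hat = (n - 1) / sum(t).
    """
    t_first = rng.exponential(1.0 / lam, size=n_trials)
    t_first = np.minimum(t_first, t_frame)   # no photon arrived before frame end
    return (n_trials - 1) / t_first.sum()

for lam in (1e4, 1e6, 1e8):                  # low to very high illuminance
    print(lam, first_photon_flux_estimate(lam))
```

The brighter the light, the earlier the first photon arrives, which is exactly the relationship the press release's figure caption describes; a single timed detection per gate replaces millions of individual counts at high illuminance.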

Canon anticipates that this new sensor will have a wide variety of applications, such as surveillance, onboard vehicle equipment, and industrial use. For instance, it is expected to be applied to autonomous driving and advanced driver-assistance systems. As autonomous driving technology advances, the demand for onboard sensors is increasing. At the same time, as many countries tighten related safety standards, there is a need for advanced sensor technology to ensure the safety of autonomous driving. However, the CMOS sensors currently used in vehicles are known to have visibility issues in environments with strong contrasts between bright and dark scenes, such as tunnel exits, or in extremely low-light conditions. Canon has addressed these issues by combining new features with conventional SPAD sensors, which excel in low-light shooting.

Canon announced this new sensor technology on June 12, 2025 at the 2025 Symposium on VLSI Technology and Circuits held in Kyoto, Japan.

  •  While conventional SPAD sensors count all incident photons one by one, the newly developed SPAD sensor uses a unique technology called weighted photon counting that estimates the total number of incident photons within a certain period of time based on the detection of the first incident photon. This greatly widens the range of photon counts that can be measured.
  •  This technology can also mitigate flickering when light from LEDs such as traffic lights is captured.

 

Weighted photon counting enables photon detection in both high and low levels of illuminance
 
With excellent high dynamic range performance of 156dB, a clear image is captured including bright and dark subjects

Simplified illustration of the weighted photon counting technique. The earlier the arrival of the first incident photon, the brighter the incident light.

Friday, January 30, 2026

Sony releases image stabilizer chip

Link: https://www.sony-semicon.com/en/products/lsi-ic/stabilizer.html

The Stabilizer Large-Scale Integration (LSI) CXD5254GG chip combines an image sensor and 6-axis inertial measurement unit (IMU) to perform electronic image stabilization (EIS), removing vibrations and maintaining a level horizon in the video input from the image sensor, and outputting the stabilized image. The advanced algorithm for attitude control reduces blurs caused by camera vibrations and achieves both real-time horizon stabilization and suppression of “jello effect” video distortion. The Stabilizer LSI is also equipped with Sony’s unique contrast improvement feature, the intelligent Picture Controller (iPC). Together with the stabilizing features, it enables the camera to clearly capture objects or information that could not be previously recognized due to vibrations.

The CXD5254GG creates new imaging value that conventional camera technologies cannot achieve, enabling applications across a wide range of fields including broadcasting, sports entertainment, security, and robotics. In addition to the CXD5254GG itself, a choice of compact camera modules combining the IMX577 sensor and lens is also available for broadcasting/video production applications, meeting a wide range of user needs.

The product performs a wide range of signal processing including high-precision blur correction via EIS, horizon maintenance, suppression of the jello effect, and lens distortion correction. We also provide established stabilizer sample parameters, derived from a variety of actual applications including onboard cameras, dashboard cameras, wearable devices, first-person view (FPV) drones, remote-controlled (RC) cars, and fixed-point cameras, backed by Sony’s many years of expertise and know-how. These sample parameter configurations can be optimized for specific applications to maximize the potential of the CXD5254GG’s stabilizing performance.
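As a rough illustration of the kind of computation EIS performs (a generic pinhole-camera sketch, not Sony's algorithm; the intrinsics and gyro rates below are made up), a gyro sample can be integrated into a rotation and inverted into a stabilizing homography:

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Small-angle rotation matrix from one gyro rate sample (rad/s, 3-vector)."""
    wx, wy, wz = np.asarray(omega, dtype=float) * dt  # integrated angle over dt
    # First-order approximation R = I + [theta]x, valid for small angles
    return np.array([[1.0, -wz,  wy],
                     [ wz, 1.0, -wx],
                     [-wy,  wx, 1.0]])

def stabilizing_homography(R, fx, fy, cx, cy):
    """Homography that undoes a pure camera rotation R under a pinhole model."""
    K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
    return K @ np.linalg.inv(R) @ np.linalg.inv(K)

# A 0.01 rad/frame roll about the optical axis, 1080p-ish intrinsics (assumed)
R = rotation_from_gyro([0.0, 0.0, 0.01], dt=1.0)
H = stabilizing_homography(R, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
# Warping each frame's pixel coordinates by H counter-rotates the image;
# a rolling-shutter sensor would need one such H per row to fix jello.
```

Per-row correction with time-varying rotations is also how jello-effect suppression is typically handled; a dedicated LSI like this one keeps that warp pipeline off the host processor.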


 

Wednesday, January 28, 2026

EETimes Prophesee article


A few quotes:

“We have the sensor, defined use cases, and the full-stack demonstration, [including] machine learning models to software integration in platforms such as Raspberry Pi,” Ferré said. “What probably [has been] missing is the scale of the business and demonstration of value.”

“Our technology is fantastic, but the way to make money with it…probably needed a bit of tuning, so this is what we’re doing,” he added.

“I’ve been on the phone with one of our integrators for Electronic Supervision System cameras, and they said, ‘we’ve never sold so many evaluation kits in so many industries—drones, manufacturing’. There’s traction [here]…this is huge.”

When asked about acquisition potential—given the recent SynSense-iniVation merger, and myriad market heavyweights—he replied: “We’re talking to very powerful players. They are not looking to buy us.”

Monday, January 26, 2026

Sony's global shutter image sensor in JSSC

In a recent paper titled "A 5.94-μm Pixel-Pitch 25.2-Mpixel 120-Frames/s Full-Frame Global Shutter CMOS Image Sensor With Pixel-Parallel 14-bit ADC", Sakakibara et al. from Sony Semiconductor Solutions (Japan) write:

We present a 25.2-Mpixel, 120-frames/s full-frame global shutter CMOS image sensor (CIS) featuring pixel-parallel analog-to-digital converters (ADCs). The sensor addresses the limitations of conventional rolling shutters (RSs), including motion distortion, flicker artifacts, and flash banding, while maintaining image quality suitable for professional and advanced amateur photography. A stacked architecture with 3-µm-pitch Cu-Cu hybrid bonding enables more than 50 million direct connections between the pixel array and the ADC circuits. The pixel-parallel single-slope ADCs operate with a comparator current of 25 nA and use a positive-feedback (PFB) scheme with noise-bandwidth control using an additional 11.4-fF capacitor, achieving 2.66 e⁻rms (166.8 µVrms) random noise (RN) at 0-dB gain with a REF slope of 2161 V/s. The 5.94-µm pixel pitch accommodates 30-bit latches designed under SRAM rules in a 40-nm CMOS process. Noise analysis reveals that in subthreshold operation, the dominant noise contributors are the comparator current, REF slope, and second-stage load capacitance. The sensor delivers 14-bit resolution, a 75.5-dB dynamic range (DR), and 120-frames/s operation at a power consumption of 1545 mW. A figure of merit of 0.083 e⁻rms·pJ/step is comparable to state-of-the-art RS sensors. These results demonstrate that pixel-parallel ADC technology can be scaled to tens of megapixels while preserving high image quality and energy efficiency, enabling motion-artifact-free imaging in battery-powered consumer cameras.
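The single-slope conversion the abstract describes can be modeled behaviorally. In this sketch only the 2161 V/s REF slope comes from the paper; the clock rate and pixel voltages are assumed for illustration:

```python
import numpy as np

def single_slope_adc(v_pix, v_start=0.0, slope=2161.0, f_clk=50e6, n_bits=14):
    """Behavioral model of a single-slope ADC shared across pixels.

    A reference ramp rises at `slope` V/s (2161 V/s is the REF slope quoted
    in the paper; clock rate and voltage range here are illustrative).
    Each pixel's comparator trips when the ramp first exceeds its sampled
    voltage, at which point the pixel latches the running counter value.
    """
    v = np.asarray(v_pix, dtype=float)
    t_cross = (v - v_start) / slope              # time for ramp to reach v
    code = np.floor(t_cross * f_clk).astype(int) # counter value at crossing
    return np.clip(code, 0, 2**n_bits - 1)       # saturate to n-bit range

# Three pixel voltages; darker pixels cross earlier and get lower codes
codes = single_slope_adc([0.0001, 0.1, 0.35])
print(codes)
```

Because every pixel shares the one ramp and counter, the per-pixel hardware reduces to a comparator plus latches, which is what makes a 25-Mpixel pixel-parallel array feasible at a 5.94-µm pitch.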






Full paper link [behind paywall]: https://ieeexplore.ieee.org/document/11219086