Wednesday, February 28, 2018

LiDAR Videos

Three new LiDAR videos have been published on Youtube today. AutoSens publishes Yole Développement analyst Pierre Cambou's presentation on the LiDAR market:

The video has been taken off-line. It will be re-posted here when it becomes available again.

Update: a shortened version of the video has been re-instated:



Waymo publishes a video of the self-driving experience from its imaging systems' point of view:



SOSLab shows its "Hybrid Scanning" LiDAR demo:

AutoSens Detroit 2018

The AutoSens Detroit conference, to be held on May 14-17, 2018, announces its agenda with rich image sensing content:

Near-Infrared QE Enhancing Technology for Automotive Applications
Boyd Fowler
CTO, OmniVision Technologies, Inc.
• Why is near infrared sensitivity important in automotive machine vision applications?
• Combining thicker epi, deep trench isolation and surface scattering to improve quantum efficiency in CMOS image sensors, while still retaining excellent spatial resolution.
• Improving the performance of CMOS image sensors for in-cabin monitoring and external night-time imaging.

Challenges, opportunities and deep learning for thermal cameras in ADAS and autonomous vehicle applications
Mike Walters, VP of Product Management for Uncooled Thermal Cameras, FLIR Systems
• Deep learning analytic techniques including full scene segmentation, an AI technique that enables ADAS developers to create full scene classification of every pixel in the thermal image.

The emerging field of free-form optics in cameras, and its use in automotive
Li Han Chan, CEO, DynaOptics

Panel discussion: how many cameras are enough?
Tom Toma, Global Product Manager, Magna Electronics
Sven Fleck, Managing Director, SmartSurv Vision Systems GmbH
Patrick Denny, Senior Expert, Valeo
• OEM design engineer – can we make sensors a cool feature not an ugly bolt-on?
• Retail side – how to make ADAS features sexy?
• Tier 1 – minimal technical requirements
• Outside perspective – learning from an industry where safety sells (B2C market)

A review of relevant existing IQ challenges
Uwe Artmann
CTO/Partner, Image Engineering

Addressing LED flicker
Brian Deegan, Senior Expert - Vision Research Engineer, Valeo Vision Systems
• Definition, root cause and manifestations of LED flicker
• Impact of LED flicker for viewing and machine vision applications
• Initial proposals for test setup and KPIs, as defined by P2020 working group
• Preliminary benchmarking results from a number of cameras

CDP – contrast detection probability
Marc Geese, System Architect for Optical Capturing Systems, Robert Bosch

Moving from legacy LiDAR to Next Generation iDAR
Barry Behnken, VP of Engineering, AEye
• How can OEMs and Tier 1s leverage iDAR to not just capture a scene, but to dynamically perceive it?
• Learn how iDAR optimizes data collection, allowing for situational configurability at the hardware level that enables the system to emulate legacy systems, define regions of interest, focus on threat detection and/or be programmed for variable environments.
• Learn how this type of configurability will optimize data collection, reduce bandwidth, improve vision perception and intelligence, and speed up motion planning for autonomous vehicles.

Enhanced Time-Of-Flight – a CMOS full solution for automotive LIDAR
Nadav Haas, Product Manager, Newsight Imaging
• The need for a real 3D solid state lidar solution to overcome challenges associated with lidar.
• Enabling very wide dynamic range by means of standard processing tools, to amplify very weak signals to achieve high SNR and accurately detect objects with high resolution at long range.
• Eliminating blinding by mitigating or blocking background sunlight, random light from sources in other cars, and secondary reflections.
• Enabling very precise timing of the transmitted and received pulses, essential to obtain the desired overall performance.

Panel discussion: do we have a lidar bubble?
Abhay Rai, Director Product Marketing: Automotive Imaging, Sony Electronics
• Do we even need lidar in AV?
• Which is the right combo: lidar + cornering radar, or no lidar, just radar + camera?
• How many sensors are the minimum for autonomous driving?
• Are image sensors and cameras fit for autonomous driving?

All-weather vision for automotive safety: which spectral band?
Emmanuel Bercier, Project Manager, AWARE Project
• The AWARE (All Weather All Roads Enhanced vision) French publicly funded project aims to develop a low-cost sensor that fits automotive requirements and enables vision in all poor-visibility conditions.
• Evaluation of the relevance of four different spectral bands: Visible RGB, Visible RGB Near-Infrared (NIR) extended, Short-Wave Infrared (SWIR) and Long-Wave Infrared (LWIR).
• Outcome of two test campaigns in outdoor natural conditions and in artificial fog tunnel, with four cameras recording simultaneously.
• Presentation of the detailed results of this comparative study, focusing on detection of pedestrians, vehicles, traffic signs and lanes.

Automotive Sensor Design Enablement: a discussion of multiple design enablement tools/IP to achieve smart Lidar
Ian Dennison, Senior Group Director R&D, Cadence Design Systems
• Demands of advanced automotive sensors, driving design of silicon photonics, MEMS, µW/RF, advanced node SoC, and advanced SiP.
• Examining design enablement requirements for automotive sensors that utilize advanced design fabrics, and their integration.

Role of Specialty Analog Foundry in Enabling Advanced Driver Assistance Systems (ADAS) and Autonomous Driving
Amol Kalburge, Head of the Automotive Program, TowerJazz
• Driving improvements in device level figures of merit to meet the technical requirements of key ADAS sensors such as automotive radar, LiDAR and camera systems.
• Optimizing the Rdson vs. breakdown voltage trade-off to enable the higher bus voltages of future hybrid/EV systems.
• Presenting an overview of advanced design enablement and design services capabilities required for designers to build robust products: design it once, design it right.

Tuesday, February 27, 2018

Himax Presents its Smartphone 3D Sensing Solution

GlobeNewswire: Himax presents Android smartphone samples equipped with its 3D sensing total solution with face recognition capability. The solution is now ready for mass production.

SLiM, Himax’s structured light based 3D sensing total solution, which the Company jointly announced with Qualcomm last August, brings together Qualcomm’s 3D algorithm with Himax’s design and manufacturing capabilities in optics and NIR sensors, as well as know-how in 3D sensing system integration. The Qualcomm/Himax solution is claimed to be by far the best performing 3D sensing and face recognition total solution available for the Android smartphone market right now.

The key features of the Himax SLiM™ 3D sensing total solution include:
  • Dot projector: More than 33,000 invisible dots, the highest count in the industry, projected onto the object to build the most sophisticated 3D depth map among all structured light solutions
  • Depth map accuracy: Error rate of < 1% within the entire operation range of 20cm-100cm
  • Face recognition: Enabled by the most sophisticated 3D depth data to build unique facial map that can be used for instant unlock and secure online payment
  • Indoor/outdoor sensitivity: Superior sensing capability even under total darkness or bright sunlight
  • Eye safety: Certified for IEC 60825 Class 1, the international laser product standard which governs laser product safety under all conditions of normal use with naked eyes
  • Broken glass detection: Patented mechanism in the dot projector whereby the laser is shut down instantaneously in the event of broken glass in the projector
  • Power consumption: Less than 400mW for projector, sensor and depth decoding combined, making it the lowest power consuming 3D sensing device by far among all structured light solutions
  • Module size: The smallest structured light solution in the market, ideal for embedded and mobile device integration

“3D sensing is among the most significant new features for smartphones. We are pleased to announce that our SLiM total solution is now ready for mass production. It outperforms all the peers targeting Android market in each and all aspects of engineering. We are working with multiple tier-1 Android smartphone makers, on target to launch 3D sensing on their premium smartphones starting the first half of 2018,” said Jordan Wu, President and CEO of Himax.

Mediatek P60 Features Triple ISP

PRNewswire: Mediatek's flagship P60 application processor features a triple ISP and an AI processor:

"Compared to the previous Helio P series, MediaTek Helio P60's three image signal processors (ISPs) increase power efficiency by using 18 percent less power for dual-cameras set-ups. By combining the Helio P60's incredible camera technology with its powerful Mobile APU, users can enjoy AI-infused experiences in apps with real-time beautification, novel, real-time overlays, AR/MR acceleration, enhancements to photography, real-time video previews and more."

Monday, February 26, 2018

Leica Enters 3D ToF Imaging with PMD

BusinessWire: Leica Camera AG and pmdtechnologies announce a strategic alliance to jointly develop and market 3D ToF sensing camera solutions for mobile devices. The geographical proximity of the two companies allows a particularly fast and efficient coordination during development, testing and optimization of the lenses for the 3D sensor systems.

During the last months, Leica designed a dedicated state-of-the-art optical lens for pmd’s recently announced new 3D depth sensing imager for mobile devices. By decreasing the f-number by 25% and simultaneously decreasing the height of the pmd module by 30% (to a module size of 11.5x7x4.2mm), the dedicated lens for pmd’s latest 3D ToF pixel and imager generation delivers a significant improvement over past lenses. As the Leica lens is optimized for a wavelength of 940nm, it enables ambient light robustness. With a depth data accuracy of 1%, the system is expected to reach best-in-class performance despite the miniaturization of pixel, imager and module. First samples of the new lens will be available in May 2018.
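
The release does not quantify what the faster lens buys in light collection, but lens throughput scales roughly as the inverse square of the f-number, so a 25% reduction admits about 1.8x more light. A one-line check of that textbook relation (the normalization is arbitrary):

```python
# Light gathered by a lens scales roughly as 1/N^2, where N is the f-number.
old_n = 1.0           # normalized f-number of the previous lens (arbitrary)
new_n = 0.75 * old_n  # 25% reduction, per the press release
gain = (old_n / new_n) ** 2
print(f"Relative light gain: {gain:.2f}x")  # -> 1.78x
```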

“The co-work between Leica and pmd has as the result the most sophisticated and smallest optic design, which pmd used so far. The co-work with Leica aligned perfectly with our mission to miniaturize 3D depth sensing without sacrificing data quality so that 3D depth sensing can be put into any device and make 3D depth sensing ubiquitous. We are looking forward to the mobile device opportunities, which the super-small 3D depth sensing modules, which use Leica’s optic, will enable. And we are more than happy that with Leica we found a top-class partner, who will join us on this exciting journey,” stated Jochen Penne, Executive Board Member of pmdtechnologies ag.

Markus Limberger, COO of Leica Camera AG said: “The cooperation between pmd and Leica is an excellent example of how two globally leading companies combine their core competencies to drive market oriented innovation efficiently. The foremost position of pmdtechnologies in Time-of-flight sensor technology and Leica’s expertise in cutting edge optical design were used to develop a very compact and powerful lens, which fits perfect to the specific requirements and the uncompromising quality of the new 3D sensor generation of pmd.”

16um Time-Gated SPAD Pixels Achieve 61% FF

OSA Optics Express publishes Heriot-Watt University's paper "High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor" by Ximing Ren, Peter W. R. Connolly, Abderrahim Halimi, Yoann Altmann, Stephen McLaughlin, Istvan Gyongy, Robert K. Henderson, and Gerald S. Buller.

"A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array."

SmartSens Unveils SmartClarity

PRNewswire: SmartSens launches the 5MP 1/2.7-inch SC5235 BSI sensor. The new sensor runs 5MP (2608H x 1960V) at 25 fps and supports an interline HDR synthesis algorithm that expands DR up to 100dB. It can be used in security surveillance systems, IP cameras, car digital video recorders, sports cameras and video conference systems.

SmartSens Technology is also launching the NIR-enhanced edition, the SC5238. It extends the performance of the SC5235 with process optimizations that improve QE in the 850nm-940nm band. Moreover, the SC5238 can run at 30 fps and supports a 4MP, 50 fps mode for 16:9 video. Both chips are expected to go into mass production in March 2018.

Samsung Announces 3-Layer ISOCELL Fast Sensor

BusinessWire: Samsung introduces the 3-stack ISOCELL Fast 2L3. The 1.4-μm 12MP image sensor with 2Gb of integrated LPDDR4 DRAM delivers fast data readout speeds for super-slow motion and sharper still photographs with less noise and distortion.

“Samsung’s ISOCELL image sensors have made great leaps over the generations, with technologies such as ISOCELL for high color fidelity and Dual Pixel for ultra-fast autofocusing, bringing the smartphone camera ever closer to DSLR-grade photography,” said Ben K. Hur, VP of System LSI marketing at Samsung Electronics. “With an added DRAM layer, Samsung’s new 3-stack ISOCELL Fast 2L3 will enable users to create more unique and mesmerizing content.”

Conventional image sensors are constructed with two silicon layers: a pixel array layer that converts light information into an electric signal, and an analog logic layer that processes the electric signal into digital code. The digital code is then sent via MIPI interface to the device’s mobile processor for further image tuning before being saved to the device’s DRAM. While all these steps are done instantaneously to implement features like zero-shutter lag, capturing smooth super-slow-motion video requires image readouts at a much higher rate.

The 2Gb LPDDR4 DRAM layer is attached below the analog logic layer. With this integration, the image sensor can quickly store a large number of frames captured at high speed in the sensor’s DRAM layer before sending them out to the mobile processor and then to the device’s DRAM. This not only allows the sensor to capture a full-frame snapshot at 1/120 of a second but also to record super-slow-motion video at up to 960fps.
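
Some rough arithmetic shows why the burst length is limited and why frames must stream out quickly. Only the 2Gb buffer size and the 12MP resolution come from the announcement; the bit depth and the slow-motion frame size below are assumptions for illustration:

```python
# Back-of-envelope capacity of the 2Gb on-sensor DRAM buffer.
DRAM_BITS = 2 * 1024**3        # 2 Gb buffer (from the announcement)
FULL_FRAME_PIX = 12_000_000    # 12MP full frame (from the announcement)
BITS_PER_PIXEL = 10            # assumed RAW bit depth

full_frames = DRAM_BITS // (FULL_FRAME_PIX * BITS_PER_PIXEL)
print(f"~{full_frames} full-resolution frames fit in the buffer")   # ~17

slowmo_pix = 2_000_000         # assumed 1080p-class slow-motion frame
slowmo_frames = DRAM_BITS // (slowmo_pix * BITS_PER_PIXEL)
print(f"~{slowmo_frames} frames = {slowmo_frames / 960:.2f}s at 960fps")
```

At the assumed bit depth the buffer holds only a dozen or so full-resolution frames, so the 960fps bursts are presumably captured at reduced resolution.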

By storing multiple frames in a split second, the sensor can support 3-Dimensional Noise Reduction (3DNR) when shooting in low light, as well as real-time HDR imaging, and can detect even the slightest hint of movement for automatic instant slow-motion recording.

The image sensor is also equipped with Dual Pixel technology, which allows each and every one of the sensor's 12M pixels to employ two photodiodes that work as PDAF agents.

The ISOCELL Fast 2L3 is currently in mass production.

Sunday, February 25, 2018

Samsung Galaxy S9 Imaging and Vision Features

The Samsung Galaxy S9 presentation seems to be built mostly around the phone's cameras and its imaging and vision features:

Magic Leap to Raise Another $400M

TradeArabia quotes an FT report that Saudi Arabia’s sovereign wealth fund is in discussions to invest $400M in Magic Leap at a valuation of $6B. This is said to be an extension of the October 2017 funding round in which the company raised $502M. The Saudi investment would bring the total raised capital to $2.3B.

Magic Leap is said to be developing its own silicon, optics, operating system, and applications, which explains the unprecedented scale of the fundraising.

Saturday, February 24, 2018

Omnivision Paper on 2nd Generation Stacking Technology

MDPI Special Issue on the 2017 International Image Sensor Workshop publishes the Omnivision paper "Second Generation Small Pixel Technology Using Hybrid Bond Stacking" by Vincent C. Venezia, Alan Chih-Wei Hsiung, Wu-Zang Yang, Yuying Zhang, Cheng Zhao, Zhiqiang Lin, and Lindsay A. Grant.

"In this work, OmniVision’s second generation (Gen2) of small-pixel BSI stacking technologies is reviewed. The key features of this technology are hybrid-bond stacking, deeper back-side, deep-trench isolation, new back-side composite metal-oxide grid, and improved gate oxide quality. This Gen2 technology achieves state-of-the-art low-light image-sensor performance for 1.1, 1.0, and 0.9 µm pixel products. Additional improvements on this technology include less than 100 ppm white-pixel process and a high near-infrared (NIR) QE technology."

Friday, February 23, 2018

Yole on Automotive Sensing

Yole Développement releases its "Sensors for Robotic Vehicles 2018" report:

"As far as we know, each robotic vehicle will be equipped with a suite of sensors encompassing Lidars, radars, cameras, Inertial Measurement Units (IMUs) and Global Navigation Satellite Systems (GNSS). The technology is ready and the business models associated with autonomous driving (AD) seem to match the average selling prices for those sensors. We therefore expect exponential growth of AD technology within the next 15 years, leading to a total paradigm shift in the transportation ecosystem by 2032. This will have huge consequences for high-end sensor and computing semiconductor players and the associated system-level ecosystems as well.

...in 2022 we expect sensor revenues to reach $1.6B for Lidar, $44M for radar, $0.6B for cameras, $0.9B for IMUs and $0.1B for GNSS. The split between the different sensor modalities may not stay the same for the 15 years to come. Nevertheless the total envelope for sensing hardware should reach $77B in 2032, while, for comparative purposes, computing should be in the range of $52B."

TowerJazz Update on its CIS Business

SeekingAlpha: TowerJazz Q4 2017 earnings report has an update on the foundry's image sensor business:

"For CMOS image sensor we use the 300 millimeter 65 nanometer capability to develop unique high dynamic range and extremely high sensitivity pixels with very low dark current for the high-end digital SLR and cinematography and broadcasting markets.

In these developments, we've included our Fab 2 stitching technology to enable large full frame sensors. In addition, we developed a unique family of state-of-the-art global shutter pixels ranging from 3.6 micron down to 2.5 micron, to date the smallest in the world, with extremely high shutter efficiency, using the unique dual light pipe technology already developed at TPSCo for high quantum efficiency and high image uniformity.

And lastly within the CIS regime, we've pushed the limits of our X-ray die size, developing a one-die-per-wafer stitched X-ray sensor to produce, on a 300 millimeter wafer, a 21 cm x 21 cm imager. All of the above technologies have been or are being implemented in our CIS customers' next generation products and are ramping or are planned to begin ramping this year, with some additional next year.

Our image sensor end markets, including medical, machine vision, digital SLR camera, cinematography and security among others, represented about 15% of our corporate revenues, or $210 million, and provided the highest margins in the company. We are offering the most advanced global shutter pixel for the industrial sensor market with a 2.8 micron global shutter pixel on a 110 nanometer platform, the smallest global shutter pixel in the world already in manufacturing. Additionally, as mentioned, we have a 2.5 micron state-of-the-art global shutter pixel in development on the 65 nanometer, 300 millimeter platform with several leading customers, allowing high sensor resolution for any given sensor size and enabling TowerJazz to further grow its market leadership.

We also offer single photon avalanche diodes, which are state-of-the-art technology, and ultra-fast global shutter pixels for automotive LiDAR based on the time-of-flight principle, answering automotive market needs. We have engaged with several customers in the development of their automotive LiDAR and expect to be a major player in this market in the coming years.

During 2017, we announced a partnership with Yuanchen Microelectronics for backside illumination manufacturing in Changchun, China, that provides the BSI process for CIS 8 inch wafers manufactured by TowerJazz, to increase our service to our worldwide customer base in mass production. We will be ready for this mass production early in the second half of this year, with multiple customers already having started their product designs.

In addition, we developed backside illumination and stacked wafer technology on 12 inch wafers in the Uozu factory, serving as a next generation platform for the high-end photography and high-end security markets. We now offer both BSI and column-level stacked wafer PDKs to our customers.

We are investing today in three main directions: next generation global shutter technology for the industrial sensor market, backside illumination stacked wafers for the high-end photography market, and special pixel technology for the automotive market."

An earlier presentation shows the company's CIS business in a graphical format:

Thursday, February 22, 2018

Automotive Videos

ULIS publishes a Youtube demo of its thermal sensors' usefulness in ADAS applications. One can see how hot the car tires become on the highway, while staying cool in city driving:



Sensata praises Quanergy LiDAR performance:

Wednesday, February 21, 2018

Denso Vision Sensor for Improved Night Driving Safety

DENSO has developed a new vision sensor that detects pedestrians, cyclists, road signs, driving lanes and other road users at night. Working in conjunction with a millimeter-wave radar sensor, the new vision sensor allows automobiles to automatically activate emergency braking when obstacles are identified, helping reduce accidents and improve overall vehicle safety. It is featured in the 2018 Toyota Alphard and Vellfire, which were released in January this year.

It improves night vision by using a unique lens specifically designed for low-light use, and a solid-state imaging device with higher sensitivity. An improved white-line detection algorithm and road-edge detection algorithm also broaden the operating range of lane-keeping assistance and lane departure alert functions, while a 40% size reduction from previous models reduces costs and makes installation easier.

Recognition by human eyes vs. recognition by the vision sensor

Chronocam Changes Name to Prophesee, Raises More Money

GlobeNewswire: Chronocam, said to be the inventor of the world’s most advanced neuromorphic vision system, is now Prophesee, a branding and identity transformation that reflects the company's expanded vision for revolutionizing how machines see.

Prophesee SA (formerly Chronocam) receives the initial tranche of its Series B financing round, which will total $19M. Led by a new unnamed strategic investor from the electronics industry, the round also includes staged investments from Prophesee’s existing investors: 360 Capital Partners, Supernova Invest, iBionext, Intel Capital, Renault Group, and Robert Bosch Venture Capital. The latest round builds on the $20M Prophesee has raised over the past three years, and will allow it to accelerate the development and industrialization of the company’s image sensor technology.

The roots of Prophesee’s technology run deep into areas of significant achievements in vision, including the breakthrough research carried out by the Vision Institute (CNRS, UPMC, INSERM) on the human brain and eye during the past 20 years, as well as by CERN, where it was instrumental in the discovery of the invisible Higgs Boson, or “The God Particle” in 2012 after more than 30 years of research. Early incarnations of the Prophesee technology helped in the development of the first industry-grade silicon retina which is currently deployed to restore sight to the blind.

Thanks to its fast vision processing equivalent to up to 100,000 fps, Prophesee’s bio-inspired technology enables machines to capture scene changes not previously possible in machine vision systems for robotics, industrial automation and automotive.

Its HDR of more than 120dB lets systems operate and adapt effectively in a wide range of lighting conditions. It sets a new standard for power efficiency with operating characteristics of less than 10mW, opening new types of applications and use models for mobile, wearable and remote vision-enabled products.
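
To make the event-based idea concrete, here is a toy simulation of the data such a sensor produces: instead of frames, each pixel emits a timestamped event whenever its log intensity changes by more than a contrast threshold. The threshold and the log-intensity model are generic to event cameras, not Prophesee's actual parameters:

```python
import numpy as np

def events_from_frames(frames, timestamps, theta=0.2):
    """Toy event-camera model: emit (x, y, t, polarity) whenever a
    pixel's log intensity moves by more than `theta` since the last
    event at that pixel. `theta` is an illustrative assumption.

    frames     -- (N, H, W) array of intensity frames
    timestamps -- (N,) frame capture times
    """
    ref = np.log(frames[0].astype(float) + 1e-6)  # per-pixel reference
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(float) + 1e-6)
        diff = log_i - ref
        fired = np.abs(diff) >= theta
        ys, xs = np.nonzero(fired)
        events += [(x, y, t, 1 if diff[y, x] > 0 else -1)
                   for y, x in zip(ys, xs)]
        ref[fired] = log_i[fired]                 # reset where fired
    return events
```

Because only changing pixels report, a mostly static scene generates almost no data, which is what makes the sub-10mW operating point and the 100,000 fps-equivalent temporal resolution possible.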

“Our event-based approach to vision sensing and processing has resonated well with our customers in the automotive, industrial and IoT sectors, and the technology continues to achieve impressive results in benchmarking and prototyping exercises. This latest round of financing will help us move rapidly from technology development to market deployment,” said Luca Verre, co-founder and CEO of Prophesee. “Having the backing of our original investors, plus a world leader in electronics and consumer devices, further strengthens our strategy and will help Prophesee win the many market opportunities we are seeing.”

Prophesee AI-based neuromorphic vision sensor

Interview with Nobukazu Teranishi

Nikkei publishes an interview with Nobukazu Teranishi, inventor of the pinned photodiode, who was recently awarded the Queen Elizabeth Prize for Engineering.

"Now... except for Sony, which leads the world in the image sensor sector, Japanese companies have fallen behind, particularly in the semiconductor industry.

Teranishi said that changes are necessary for Japan to continue to compete globally.

He also suggested that engineers and technical experts should be held in higher esteem in Japan.

"Excellent engineers are a significant asset. Companies overseas shouldn't be able to lure them out of Japan just with better salaries. If they are that valuable, their value should to be recognized in Japan as well," he said.

Determining salaries by how long people have been at the company seems like "quite a rigid structure," he said.

He added that engineers get little recognition for the work they do, with individual names rarely mentioned within the company or in the media.

Looking ahead to the future of image sensors, Teranishi feels one peak has been reached, with around 400 million phones produced annually that incorporate his technology. Next, he says, is the era of "images that you don't see."

For facial recognition and gesture input for games, he said, "No one sees the image but the computer is processing information. So there are many cases where a human doesn't see the image."

Tuesday, February 20, 2018

IR-Enhancing Surface Structures Compared

IEEE Spectrum: IEEE Transactions on Electron Devices (TED) publishes a UC Davis and W&WSens Devices invited paper on light-bending microstructures that enhance PD QE and IR sensitivity: "A New Paradigm in High-Speed and High-Efficiency Silicon Photodiodes for Communication—Part I: Enhancing Photon–Material Interactions via Low-Dimensional Structures" by Hilal Cansizoglu, Ekaterina Ponizovskaya Devine, Yang Gao, Soroush Ghandiparsi, Toshishige Yamada, Aly F. Elrefaie, Shih-Yuan Wang, and M. Saif Islam.

"[Saif] Islam and his colleagues came up with a silicon structure that makes photodiodes both fast and efficient by being both thin and good at capturing light. The structure is an array of tapered holes in the silicon that have the effect of steering the light into the plane of the silicon. “So basically, we’re bending light 90 degrees,” he says."


The paper compares the proposed approach with other surface structures for IR sensitivity enhancement:

Monday, February 19, 2018

Corephotonics and Sunny Ship Millions of Dual Camera Modules to Oppo, Xiaomi and Others

Optics.org: Corephotonics has partnered with Sunny Optical to bring to market a variety of solutions based on the company’s dual camera technologies. Under this agreement, Sunny has already shipped millions of dual cameras powered by Corephotonics IP to various smartphone OEMs, including Xiaomi, OPPO and others.

The new partnership combines Sunny’s automatic manufacturing capacity, quality control and optical development capabilities with Corephotonics’ innovation in optics, camera mechanics and computational imaging. This strategic license agreement covers various dual camera products, including typical wide + tele cameras, as well as various folded dual camera offerings, allowing an increased zoom factor, optical stabilization and a reduced module height.

The partnership allows Sunny to act as a one-stop-shop dual camera vendor, providing customized dual camera designs in combination with well-optimized software features. The collaboration leverages Sunny's manufacturing lead and strong presence in the Chinese dual-camera market.

“Sunny Optical has the powerful optical development capability and automatic lean manufacturing capacity. We have experimented with virtually all dual camera innovations introduced in recent years, and have found Corephotonics dual camera technologies to have the greatest contribution in camera performance and user experience. Just as important is the compliance of their dual camera architecture with high volume production and harsh environmental requirements,” said Cerberus Wu, Senior Marketing Director of Sunny Optical.

"We are deeply impressed by Sunny's dual camera manufacturing technologies, clearly setting a new benchmark in the thin camera industry," added Eran Briman, VP of Marketing & Business Development at Corephotonics. “The dual camera modules produced under this collaboration present smartphone manufacturers with the means to distinguish their handsets from those of their rivals through greatly improved imaging capabilities, as well as maximum flexibility and customizability."

EETimes Reviews ISSCC 2018

EETimes' Junko Yoshida publishes a review of the ISSCC 2018 image sensor session, covering the Sony motion detecting event-driven sensor:


Microsoft 1MP ToF sensor:


Toshiba 200m-range LiDAR:


and much more...

Saturday, February 17, 2018

Materials of 3rd Workshop on Image Sensor and Systems Published

The Image Sensor Society web site has published most of the papers from the 3rd International Workshop on Image Sensor and Systems (IWISS2016) held at the Tokyo Institute of Technology in November 2016. There were 18 invited papers and 20 posters presented at the Workshop, mostly from Japan and Korea.

Thanks to NT for the pointer!

Friday, February 16, 2018

LIN-LOG Pixel with CDS

MDPI Special Issue on the 2017 International Image Sensor Workshop publishes the NIT paper "QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout" by Yang Ni.

"In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout."


"The readout noise was measured at 2.2 LSB, which is 268 µV. Taking into account the source follower gain, the temporal noise on the floating diffusion was estimated at 335 µV. With a floating diffusion node capacitance estimated from design at 4 fF, the noise electron number is 12.3 electrons. The temporal noise in the logarithmic regime was measured at 6 LSB, which represents 34 electrons inside the buried photodiode. From this Johnson noise, the photodiode capacitance can be estimated at 6.2 fF which is quite close to the estimation from the layout."

Thursday, February 15, 2018

DALSA Discusses Facial Recognition

Teledyne DALSA starts publishing a series of articles on facial recognition science. The first part discusses fairly generic issues, such as the resolution that humans use for the facial recognition task. It's all dynamic:

"The ganglion cells in the human retina can produce the equivalent of a 600 megapixel image, but the nerve that connects to the retina can only transmit about one megapixel."

"Analysts predict that the global facial recognition market is expected to grow from USD 4.05 Billion in 2017 to USD 7.76 Billion by 2022. Companies are very interested in the possibilities of facial recognition technologies and global security concerns are driving interest in better biometric systems."

ISSCC Review: Sony, TSMC, NHK, Toshiba, Microsoft, TU Delft, FBK

Albert Theuwissen continues his review of ISSCC 2018 presentations. The second part includes the Sony 3.9MP, 1.5um pixel pitch event-driven sensor:

"The overall resolution of 3.9 Mpixels is reduced to on 16×5 macro pixels. In this “macro” pixel mode, the power consumption is drastically reduced as well, and the sensor behaves in a sort of sleeping mode. Once the sensor detects any motion in the image (by means of frame differencing), the device wakes up and switches to the full resolution mode."

TSMC presents its 13.5MP 1.1um pixel sensor, and NHK unveils an 8K 36MP 480fps sensor for slow-mo sports shooting at the upcoming Tokyo Olympics.

The third part of the review starts with the Toshiba hybrid LiDAR that is enhanced by a Smart Accumulation Mode which, basically, tracks subjects in the depth domain. As long as it works, the detection range can reach 200m, but it relies on a lot of intelligence inside what is supposed to be just a dumb sensor delivering "food for thought" to the main CPU or NPU.

Microsoft presented an evolution of its ToF sensor used in Kinect-2: higher resolution, smaller pixels, BSI, higher QE, better shutter efficiency, etc. Compared with the previous Microsoft design, AGC has been added to the pixel, while background light suppression has been removed.

TU Delft and FBK presented SPAD designs. The FBK one is aimed at entangled-photon microscopy, where resolution increases by a factor of N, with N the number of mutually entangled photons.

Albert Theuwissen concludes his review on an optimistic note:

"Take away message : everything goes faster, lower supply voltages, lower power consumption, stacking is becoming the key technology, and apparently, the end of the developments in our field is not yet near ! The future looks bright for the imaging engineers !!!"

Panasonic 8K GS OPF Sensor

Panasonic has developed an 8K (36MP), 60fps global shutter sensor with 450ke- saturation and a sensitivity modulation function. The new CMOS sensor uses an organic photoconductive film (OPF).

"By utilizing this OPF CMOS image sensor's unique structure, we have been able to newly develop and incorporate high-speed noise cancellation technology and high saturation technology in the circuit part. And, by using this OPF CMOS image sensor's unique sensitivity control function to vary the voltage applied to the OPF, we realize global shutter function. The technology that simultaneously achieves these performances is the industry's first."

The new technology has the following advantages:
  • 8K resolution, 60fps framerate, 450ke- saturation and GS function are realized simultaneously.
  • Switching between high sensitivity mode and high saturation mode is possible using gain switching function.
  • The ND filter function can be realized steplessly by controlling the voltage applied to the OPF.

This Development is based on the following technologies:
  1. "OPF CMOS image sensor design technology", in that, the photoelectric-conversion part and the circuit part can be designed independently.
  2. "In-pixel capacitive coupled noise cancellation technique" which can suppress pixel reset noise at high speed even at high resolution
  3. "In-pixel gain switching technology" that can achieve high saturation characteristics
  4. "Voltage controlled sensitivity modulation technology" that can adjust the sensitivity by changing the voltage applied to the OPF.

Panasonic holds 135 Japanese patents and 83 overseas patents (including pending) related to this technology.

Wednesday, February 14, 2018

Analyst: Himax/Qualcomm 3D Sensing Platform Struggles in China

Barron's quotes financial analyst Jun Zhang of Rosenblatt saying:

"As we look across the landscape it appears to us that HIMX continues to struggle to find OEMs to incorporate its solution as our latest industry research suggests OPPO is working with Orbbec, Xiaomi with O-film (002456-SZ:NR) & Mantis Vision for its Mi7 Plus and Huawei on an internal solution. We also be- lieve other tier-2 and 3 OEMs are targeting a 2019 launch for their phones. On the conference call, management commented that its 3D sensing solution will be ready for mass production in Q2 but did not announce any design wins. Based on the long lead times for 3D sensing modules, Himax should have needed to have already secured a design-win if to be part of any solution."

SeekingAlpha earnings call transcript has the company's CEO Jordan Wu predicting: "3D sensing will be our biggest long term growth engine and, for 2018, a major contributor to both revenue and profits, consequently creating a more favorable product mix for Himax starting the second half of 2018."

Teledyne Announces Readiness of Wafer Level Packaged IR Sensors

BusinessWire: Teledyne DALSA completed the qualification of its wafer-level-packaged Vanadium Oxide (VOx) microbolometer process for LWIR imaging.

Teledyne DALSA’s manufacturing process, located in its MEMS foundry in Bromont, Quebec, bonds two 200 mm wafers precisely and under high vacuum, forming an extremely compact 3D stack. This technology eliminates the need for conventional chip packaging, which can account for 75% or more of the overall device cost.

“This is an important milestone in our journey to bring a credible price/performance VOx solution to market,” said Robert Mehrabian, Chairman, President and CEO of Teledyne. “With the qualification process complete we will now begin ramping up production lines for a 17-micron pixel 320×240 (QVGA) device, closely followed by a 17-micron 640×480 (VGA), with longer-term plans to introduce a highly compact 12-micron detector family.”

ISSCC 2018 Review - Sony, Panasonic, Samsung

Albert Theuwissen publishes a review of ISSCC papers, starting with the Sony BSI-GS CMOS imager with pixel-parallel 14b ADC: "One can make a global shutter in a CMOS sensor in the charge domain, in the voltage domain, but also in the digital domain. The latter requires an ADC per pixel (also known as DPS: digital pixel sensor). And this paper describes such a solution: a stacked image sensor with a single ADC per pixel."

Panasonic organic sensor: "The paper claims that the reset noise is lowered by a factor of 10, while the saturation level is increased by a factor of 10 (but the high saturation mode cannot be combined with the low noise level)."

Samsung 24MP CIS with 0.9um pixel: "All techniques mentioned are not new, but their combination for a 0.9 um is new."

Omnivision HDR Promotional Video

Omnivision publishes HDR marketing video:

Update: Omnivision removed the video and sent me the following update:

"The video was made public by mistake. Once we finalize the video we will re-publish and share with OmniVision’s customers and media contacts."

Tuesday, February 13, 2018

Sony Presents GS Sensor with ADC per Pixel

Sony presents a 1.46MP stacked BSI CMOS sensor with global shutter and a newly developed low-power pixel-parallel ADC that converts the analog signals from all pixels, exposed simultaneously, to digital signals in parallel.

The inclusion of nearly 1,000 times as many ADCs compared to the traditional column-parallel ADC architecture means an increased demand for current. Sony addressed this issue by developing a compact 14-bit A/D converter which is said to boast the industry's best performance in low-current operation: a FoM of 0.24 e-·nJ/step, defined as (power consumption × noise) / (number of pixels × frame rate × 2^ADC resolution).
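
The stated definition translates directly into code; the function below is just a transcription of that formula, with units left to the caller (supplying power in nW, i.e. nJ/s, and noise in electrons yields the FoM in e-·nJ/step):

```python
def adc_fom(power, noise_e, n_pixels, frame_rate_hz, adc_bits):
    """ADC figure of merit as defined in Sony's announcement:
    (power x noise) / (pixels x frame rate x 2^resolution).
    The announced value for this sensor is 0.24 e-*nJ/step."""
    return (power * noise_e) / (n_pixels * frame_rate_hz * 2 ** adc_bits)
```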

Each pixel on the top chip connects to the logic chip below through a Cu-Cu connection, a technology Sony put into mass production as a world-first in January 2016.

Main Features:
  • Low-current, compact pixel-parallel A/D converter
    In order to curtail power consumption, the new converter uses comparators that operate with subthreshold currents, resulting in the low current, compact 14-bit ADC. This overcomes the issue of the increased demand for current due to the inclusion of nearly 1,000 times as many ADCs in comparison with the traditional column ADC.
  • Cu-Cu (copper-copper) connection
    To achieve the parallel A/D conversion for all pixels, Sony has developed a technology which makes it possible to include approximately three million Cu-Cu (copper-copper) connections in one sensor. The Cu-Cu connection provides electrical continuity between the pixel and logic substrate, while securing space for implementing as many as 1.46 million A/D converters, the same number as the effective megapixels, as well as the digital memory.
  • High-speed data transfer construction
    Sony has developed a new readout circuit to support the massively parallel digital signal transfer required in the A/D conversion process using 1.46 million A/D converters, making it possible to read and write all the pixel signals at high speed.