Wednesday, July 31, 2019

MEMS and Imaging Summit

The MEMS and Imaging Summit, to be held in Grenoble, France on Sept. 25-27, 2019, publishes a part of its agenda:

  • Considerations of Optical Fingerprint and 3D Face Recognition Sensors for Cellular Security Applications, Avi Strum, SVP, TowerJazz
  • Sensors in Mixed Reality, Sunil Acharya, Senior Director of Sensor Development Hololens, Microsoft
  • A Touch of Finnish Sense, Antti Vasara, President & CEO, VTT
  • Discovering New Dimensions: Our Vision to Sense the World, Christian Herzum, Senior Director 3D-Sensing & Discretes, Infineon Technologies
  • 7.2um Pixel Event-Based Vision Sensor with Frame Readout, Raphael Berner, Head of chip design, Insightness AG
  • PIXCURVE: a Global Approach for Curved Optical Components, David Henry, Head of Packaging and Assembly Laboratory, Optics and Photonics Division, CEA-Leti
  • Wafer Scale Image Sensors: New Developments and Technology Challenges for the X-ray Market, Thalis Anaxagoras, Founder, ISDI
  • Industrial Atomic Layer Deposition for Image Sensors, Mikko Söderlund, Director, Technical Sales Europe, Beneq
  • Excellence in Microlens Imprint Lithography and Wafer-Stacking, Reinhard Voelkel, CEO, SUSS MicroOptics SA
  • AI Close to the Sensor: an Approach for Energy Efficient and Real Time AI, Bram Senave, Business Development Manager, Easics
  • Organic Image Sensors for Fingerprint Acquisition in Smartphone Display and NIR Development for Hybrid CMOS Imager, Benjamin Bouthinon, Optics Manager, ISORG
  • 3D Depth Sensing: Time-of-Flight Technology and Application, Morin Dehan, Technology Engineer, Sony Depthsensing Solutions
  • Electrowetting Based Liquid Lenses – a Novel Technology that Improves Imaging Device Performance, Frederic Laune, Corning
  • Hyperspectral Imaging for Wafer Inspection, Jan Makowski, CEO / CTO, LuxFlux

LeddarTech Settles LiDAR Patent Dispute with Phantom Intelligence

GlobeNewswire: LeddarTech announces that its patent infringement case against Phantom Intelligence was settled favorably in the Federal Court of Canada.

In December 2015, LeddarTech challenged Phantom Intelligence over the illegal use of its patented LiDAR sensing technology protected by Canadian Patent No. 2,710,212, entitled “Detection and Ranging Methods and Systems” and relating to systems and methods for acquiring an optical signal and converting it into the digital domain. The patent was filed in 2007 and granted in 2014. It's said to be a core driver of LeddarTech’s IP and is practiced in all of its LiDAR products for ADAS and autonomous driving (AD) applications. The settlement, whose amount, terms, and conditions are confidential, was reached under a court-sponsored mediation process, following which Phantom Intelligence became a customer of LeddarTech.

“LeddarTech is proud of its years of innovation as a pioneer in LiDAR technology for ADAS and AD,” stated Pierre Olivier, CTO at LeddarTech. “We have always been confident that we would be successful in the defense of our intellectual property, and we will continue to be vigilant in protecting it.”

LeddarTech's patent portfolio consists of 72 patents: 52 granted and 20 pending. The company intends to continue monitoring the market for possible infringements of its proprietary LiDAR technology.

Tuesday, July 30, 2019

Sony Re-Orgs, Renames Image Sensor Division, Reports Sales Growth

Sony reports results for the quarter ended June 30, 2019. Probably as a response to the calls to spin off its semiconductor business, Sony re-arranges its reporting, so that there is no Semiconductor Solutions division anymore. Image sensors become a part of the "Imaging & Sensing Solutions" segment. It's not immediately clear what the other products in this new segment are; I was unable to find any official statement on that. A European site, "Sony Image Sensing Solutions," talks about industrial cameras with no mention of image sensors.


Update: Sony's official earnings call transcript explains the name change. It sounds like a reply to the spin-off proposals:

• From this quarter, we have changed the name of the Semiconductors segment to Imaging & Sensing Solutions (“I&SS”).
• Now I will explain the background and reasoning behind the change in name of the segment.
• The portion of Semiconductors segment revenue that comes from image sensors has been increasing every year, is expected to be approximately 85% of the segment this fiscal year and is expected to increase even more going forward.
• Image sensors are hybrids between analog and digital semiconductors, and, in terms of technology and business model, differ from logic LSI and memory, which most people think of when they hear the word semiconductors.
• Compared with logic LSI and memory, which require frequent capacity upgrades to maintain competitiveness due to quickly evolving process miniaturization, image sensors do not require regular, large capital investments because products can be differentiated through improvements in functionality and the addition of new features without having to upgrade production capacity.
• Moreover, since the image sensors business is focused on custom products that are differentiated through features and functionality, and because we have expanded our customer base the last several years and obtained a large share of the market, we have established a business model that experiences less impact from fluctuations in the market known as the silicon cycle.
• Over the last 10 years, we have achieved an extremely high level of compound annual sales growth at 17%, primarily from smartphone applications, and we have made significant investments to increase capacity as a result. However, we expect the investment requirements of this business to decrease significantly as the acute increase in demand transitions to a milder growth trajectory.
• The strategy for future growth in the I&SS segment is to develop AI sensors which make our sensors more intelligent by combining artificial intelligence with the sensors themselves.
• Development of these sensors will require us to leverage not only the strength of the hardware technology in the I&SS segment, such as the stacking of sensors on logic and copper-to-copper connections, but also the AI technology and diverse application technology in other parts of Sony, so our efforts in this area will span the entire Sony Group.
• We think that AI and sensing will be used across a wide range of applications such as autonomous driving, IoT, games and immersive entertainment. As such, we think there is a possibility that image sensors will evolve from the hardware they are today to solutions and platforms as visual data and sensing information is processed in a sophisticated manner inside sensors.
• The image sensor business is important because it is one of the pillars of the growth strategy of the Sony Group. We changed the name of the segment this time to assist your understanding of the characteristics and future strategy of this business, which I just explained.



Also, Sony gives a few more details on the CIS business:

• FY19 Q1 sales increased 14% year-on-year to 230.7 billion yen and operating income increased 20.4 billion yen to 49.5 billion yen, primarily due to a significant increase in image sensor sales for mobile devices.
• Demand for our image sensors continues to be strong and our market share of image sensors for mid-range and high-end models of major smartphone makers remains high, due to adoption of multiple sensors per camera and growing demand for high value-added sensors made using large die-sizes.
• We are currently utilizing 100% of our internal capacity.
• However, concerns about the impact of trade issues in the second half of the fiscal year remain. We have already been conservative when forecasting the impact of these issues, but, because we want to evaluate the risks over the course of the first half of the fiscal year, we have made no changes to our April forecast.


Update #2: Yahoo: “It surprised us that the image sensor business was really strong,” said Masahiro Wakasugi, an analyst with Bloomberg Intelligence. “We thought there might be a risk that Huawei could be cutting some orders. We think going forward the image sensor business will be one of the key contributors to positive sales and earnings results for Sony.”

Monday, July 29, 2019

TowerJazz Updates on its CIS Business

SeekingAlpha publishes a transcript of the TowerJazz Q2 2019 earnings call. A few updates on the image sensor business:

"Looking into our CMOS image sensor business, our largest application and market is the industrial market. As previously discussed, we have seen a pullback, which our customers attribute to the trade war. It is starting to pick up now with new projects, many of which are targeted towards large screen display inspection using very high resolution global shutter sensors. All of our new projects are based on our state-of-the-art global shutter pixels in our 65-nanometer, 12-inch line in Uozu. We expect these projects to ramp towards the end of next year. Orders for present products are forecast by customers to recover with wafer starts beginning during the fourth quarter of this year.

We have won a large face recognition sensor project for smartphones. It will be based on indirect time-of-flight technology and will use state-of-the-art stacking technology, utilizing our 300-millimeter, 65-nanometer platform. In parallel, for mobile applications, we are working with 3 leading fingerprint companies for under OLED and under LCD optical sensors, based upon our unique pixel technology. These projects are expected to begin to ramp in 2020, utilizing our well-established 0.18 micron, 200-millimeter CIS technology.

In the high-end photography area, we're moving along with the next-generation stacking sensor project, partnering with an undisputed leader in the market, targeted to ramp in 2021. Medical and dental x-ray demand has remained stable with strong margins. We see an increased demand for large CMOS-based panels. We're now in the final qualification stages of new products with one of the leading providers. Additionally, we are fully qualified and started to ship single-die wafer-scale medical x-ray sensors on 300-millimeter with 2 additional customers planning final product tapeout in the fourth quarter of this year.

...we have decided to accelerate our planned expansion and to allocate $100 million to increase the capacity of our 300-millimeter Uozu fab in Japan. Equipment should begin to arrive in this center, with most to all tools expected to be qualified during the first half of 2020. This investment not only increases our 300-millimeter wafer capacity but will drive additional benefits that tie to new and large 200-millimeter partnership activities. At image sensing, most of the capacity growth there is indeed at the end of 2020 and 2021, 2022.
"

Image Sensors with Frustrated Charge Transport

Journal of Applied Physics publishes a paper "Organic photodetectors with frustrated charge transport for small-pitch image sensors" by Z. Ma and C. K. Renshaw from University of Central Florida, Orlando, FL.

"We demonstrate a frustrated organic photodetector (F-OPD) that utilizes frustrated charge-transport to quench forward-bias current and provide a low-current, light-independent OFF state. Photocurrent is collected efficiently with −3 V reverse-bias recovering the sensitive OPD response with higher than 10-bit dynamic range. This intrinsic switching mechanism eliminates the need for thin-film transistors (TFTs) to provide readout control in high-resolution image sensors. Eliminating TFTs simplifies fabrication processing, improves fill-factor, and enables higher resolution image sensors on nonplanar, stretchable, or large-area substrates for a variety of imaging applications. We simulate image sensors and show that the performance is limited by the OFF state uniformity experimentally observed across 45 devices. We simulate performance in a 900-pixel array and show that the demonstrated F-OPDs can scale into megapixel arrays with a noise-equivalent power of less than 0.6 mW/cm2 and a dynamic range of more than 6-bits; better uniformity can substantially improve this performance for large arrays."


"The F-OPD utilizes a blocking layer integrated with the OPD to frustrate charge collection and provide a low-current OFF state under forward bias. A few volts of reverse bias switches the pixel into a conducting ON state where the OPD photocurrent is efficiently collected."


"We have demonstrated F-OPDs utilizing frustrated charge-transport to enable transistor-free pixels for organic image sensors. A blocking layer at the anode reduces forward-bias current a thousand-fold and provides a low-current, light-independent OFF state; meanwhile, a few volts of reverse-bias recovers the high sensitivity and dynamic range typical of OPDs. The F-OPD operates like conventional passive pixels but elimination of the readout transistor avoids (1) allocation of pixel area to the transistor, (2) definition of subpixel features for drain/source/gate/insulator/channel, and (3) additional gate interconnects spanning the circuit. Pixel functionality is defined by a single, monolithic stack to allow nearly 100% FF and small-pixel-pitch using minimal fabrication steps. Combined with high-resolution transfer patterning for organic circuits,12 F-OPDs could enable scaling OISs to a less than 10 μm pixel-pitch limited only by edge effects and lateral photoconductive leakage. This streamlined processing can also reduce cost for curved, flexible, lightweight, large-area, and/or attritable OISs."
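The thousand-fold forward-bias suppression is what makes transistor-free readout plausible. A toy column-readout model (our illustration under a simple linear-leakage assumption, not the paper's code) sums the selected pixel's photocurrent with the OFF leakage of every unselected pixel sharing the column:

```python
import numpy as np

# Toy model of passive-matrix readout with F-OPD pixels (illustrative
# assumption, not from the paper): the selected row is reverse-biased into
# its conducting ON state, while all other pixels on the column sit in the
# forward-bias OFF state and contribute only a small leakage current.

def column_read(photo_current, off_leak, selected_row):
    """Total column current: selected photocurrent plus summed OFF leakage."""
    n = len(photo_current)
    return photo_current[selected_row] + off_leak * (n - 1)

rng = np.random.default_rng(0)
n_rows = 1000
photo = rng.uniform(1e-9, 1e-6, n_rows)  # per-pixel photocurrent, A (assumed)

off_leak = 1e-9  # assumed OFF-state leakage per pixel, ~1000x below ON
signal = photo[42]
total = column_read(photo, off_leak, 42)
print(f"signal {signal:.3e} A, column total {total:.3e} A")
```

With these assumed numbers, 1,000 rows of leakage aggregate to roughly a full-scale photocurrent, which is why the paper ties array scalability to OFF-state uniformity.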

Sunday, July 28, 2019

Automotive and Security Markets

IFNews: A Credit Suisse report on the Asia semiconductor market has a part about Kingpak's CIS packaging business for ON Semi. Kingpak uses its proprietary wirebonding technology to make the tiny iBGA packages, which have been qualified to AEC-Q100 Grade 1. "Kingpak holds a strong position in the automotive CIS packaging market and the company should be able to sustain its leading position as the company is the only supplier certified by AEC-Q100 Grade 1 qualification."

Saturday, July 27, 2019

Imaging through Noise with Quantum Illumination

ResearchGate, Arxiv.org: The University of Glasgow, UK, paper "Imaging Through Noise With Quantum Illumination" by Thomas Gregory, Paul-Antoine Moreau, Ermes Toninelli, and Miles J. Padgett proposes a detection technique that preferentially selects photon-pair events over isolated background events. This is somewhat similar to what SPAD designers do to reject sunlight, but not exactly the same:

"The contrast of an image can be degraded by the presence of background light and sensor noise. To overcome this degradation, quantum illumination protocols have been theorised (Science 321 (2008), Physics Review Letters 101 (2008)) that exploit the spatial correlations between photon-pairs. Here we demonstrate the first full-field imaging system using quantum illumination, by an enhanced detection protocol. With our current technology we achieve a rejection of background and stray light of order 5 and also report an image contrast improvement up to a factor of 5.5, which is resilient to both environmental noise and transmission losses. The quantum illumination protocol differs from usual quantum schemes in that the advantage is maintained even in the presence of noise and loss. Our approach may enable laboratory-based quantum imaging to be applied to real-world applications where the suppression of background light and noise is important, such as imaging under low-photon flux and quantum LIDAR."


"We have demonstrated a quantum illumination protocol to perform full-field imaging achieving a contrast enhancement through the suppression of both background light and sensor noise. Structure within the thermal background illumination is potentially a-priori unknown and therefore cannot be suppressed with a simple ad-hoc background subtraction. Through resilience to environmental noise and losses, such a quantum illumination protocol should find applications in real-world implementations including quantum microscopy for low light-level imaging, quantum LIDAR imaging applications, and quantum RADAR. Improvements in detector technologies such as SPAD arrays capable of time-tagging events should enable time-of-flight applications to be realised and applied outside of the laboratory through the increased acquisition speed and time resolution that they enable."
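A minimal numerical sketch of the idea (a toy model we constructed, not the Glasgow group's code) accepts a signal-arm count only when the spatially correlated idler pixel fires in the same frame:

```python
import numpy as np

# Toy 1-D model (our construction, not the paper's): SPDC photon pairs land
# at spatially correlated pixels in the signal and idler arms, while
# background photons in the two arms arrive independently.

rng = np.random.default_rng(1)
n_pix, n_frames = 32, 5000
obj = np.zeros(n_pix)
obj[10:20] = 1.0                       # transmissive "object" region

pair_rate, bg_rate = 0.02, 0.10        # per pixel, per frame (assumed)
direct = np.zeros(n_pix)
gated = np.zeros(n_pix)
for _ in range(n_frames):
    pairs = rng.random(n_pix) < pair_rate * obj   # correlated pair events
    sig = pairs | (rng.random(n_pix) < bg_rate)   # signal arm + background
    idl = pairs | (rng.random(n_pix) < bg_rate)   # idler arm + background
    direct += sig                                 # classical accumulation
    gated += sig & idl                            # coincidence-gated counts

def contrast(im):
    """Michelson contrast between object (10:20) and empty (0:10) regions."""
    a, b = im[10:20].mean(), im[:10].mean()
    return (a - b) / (a + b + 1e-12)

print(f"direct {contrast(direct):.2f}, gated {contrast(gated):.2f}")
```

Because uncorrelated background photons coincide in both arms only by chance, gating suppresses them quadratically while pair events survive, so the gated image shows markedly higher contrast.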

Friday, July 26, 2019

Volvo LiDAR Cost Estimated at $22

Not all LiDARs are expensive. A TechInsights teardown report of the Volvo 31360888 Brake Assist LiDAR found in the V40 car estimates its manufacturing cost at only $22.61, although the retail price is more than $586:

Hynix Shifts Fab Capacity from Memory to CIS

KoreaTimes, KoreaHerald: SK Hynix said yesterday that it will convert part of its M10 DRAM fab in Icheon, Gyeonggi Province, to CIS production. “This is to reduce DRAM wafer capacity considering the DRAM demand environment and to strengthen the competitiveness of its CIS business,” said the company.


Thursday, July 25, 2019

ST Q2 Earnings Call: Structured Light vs ToF in Smartphones

SeekingAlpha: ST updates on its Q2 imaging business results:

"During Q2, we had at least an Imaging sensor and/or a MEMS device in all of the top 10 smartphones currently on this market. We also continue to earn design wins and ramp shipments for our time-of-flight sensor, analog products and RF products for 4G front-end modules.

Clearly, in Q2, the performance of growth which was higher than the midpoint of our range is mainly related to specialized imaging product.

Well, about specialized imaging also, again, I will not comment on our competitor, but I will comment, okay, the visibility we have and what ST is doing. Well, it is clear that you know that ST is a key player in the 3D sensing for the face recognition. Since 2017 second half, the unique foolproof technology is based on what we call structured light where ST is a key player, and we are still, let's say, growing in this kind of application.

In parallel, components in the smartphone, there is clearly some other components like ambient licensing or, let's say, Time-of-Flight based proximity sensor or autofocus assist or ranging sensor. And clearly, okay, I repeat that we have accumulated a huge volume in this Time-of-Flight and we continue. That's the reason why, as I told you, in the top 10 smartphones on the market today, we have either a special imaging sensor, including this Time-of-Flight or a MEMS sensor. Also, we disclosed to you during the recent quarter that now ST is important player on imaging licensing on the smartphone or, let's say, other wearable application. Now in term of trend of the industry, it is clearly that one trend we are seeing is introduction of indirect Time-of-Flight for the world-facing camera first, which certainly will be, let's say, a future competitive solution to address the depth map sensing. Now ST here, I confirm to you that we have a very strong road map with a very competitive and high-performing product that we will deliver to the market, whatever in iOS or Android phone. So, this is, let's say, the dynamic specific to ST we have on this application for the smartphone.

I confirm to you that we are in competition -- for the time, you have only one unique solution for face recognition 3D sensing, it is the structured light. The other, let's say, architecture in term of system are, let's say, less foolproof. Well -- and again, this is what we confirmed. We said to the Capital Market Day and we know that in the near future, certainly, architecture like structured light improved and indirect Time-of-Flight will be in competition. ST addressed the two technology architecture. And for sure, certainly, Sony is more addressing the indirect Time-of-Flight kind of architecture, and we will be in competition. But we do not see, let's say, dramatic or material change in the dynamic for the short term."

...for the time being, the structured light for the front-facing is a technology, okay, again, since H2 2017 and certainly will continue for a while. As we disclosed to you, we do believe that at a certain moment of time, indirect Time-of-Flight base architecture will certainly come up on this kind of application. Presently, as far as the performance is, let's say, consistent with the structured light. Presently, okay, some advantages in form factor or something like that. Well, this trend, okay, is confirmed and ST will compete overall on both architecture. And then you know that as, let's say, generic trend for semiconductor, the challenge will be always to reduce the form factor to improve the sector offering and to reduce the cost of ownership.

Then on the world-facing, it is clear that indirect Time-of-Flight based solution will be certainly the winning architecture. And again, okay, here, you will see maybe this year and certainly next year introduction of this kind of technology. For the short term, I mean this year, it is not revenue for ST, but we are offering a solution for 2020 and beyond and we will be a key competitor in this market. Now then other competitors like ambient licensing or, let's say, ranging sensor based on direct Time-of-Flight will continue as the RGB camera will have more and more pixel and you need to have to focus assist and this kind of stuff. So, no major change, no change compared to what we said at Capital Market Day.
"

Wednesday, July 24, 2019

Image Sensor Americas Agenda

The Image Sensors Americas conference, to be held on October 15-16, 2019 in San Jose, CA, announces its agenda. Naturally, a good part of it is image sensor presentations:

  • Towards Large-Scale, SPAD-Based ToF Imagers for Automotive, Robotic and EdgeAI Applications
    Wade Appelman | VP of Sales and Marketing of SensL Technologies
  • CMOS Image Sensors for Bio-medical, Industrial and Scientific Applications: Current Challenges and Perspectives
    Renato Turchetta | CEO of IMASENIC Advanced Imaging S.L.
  • Near Field Depth Generation From a Single Image Sensor
    Paul Gallagher | Vice-President of Strategic Marketing of Airy3D
  • Ge-on-Si ToF Imager Sensor SoC
    Neil Na | Co-Founder and Chief Science Officer of Artilux
  • Imaging Sensors and Systems for a Genomics Revolution
    Tracy Fung | Sr. Staff Engineer, Product Development CMOS Lead of Illumina
  • Far-Infrared thermal camera an effortless solution for improving ADAS detection robustness
    Emmanuel Bercier | Strategy and Automotive Market Manager of ULIS-SOFRADIR

Ams Bets Big on 3D Sensing

The Ams Q2 2019 report emphasizes the company's focus on 3D sensing solutions:

Tuesday, July 23, 2019

TechInsights' State of the Art of Smartphone Imagers Review - Part 3

TechInsights' posts "The state of the art of smartphone imagers" are based on Ray Fontaine's presentation at IISW 2019 in June. Part 3 covers "Back-Illuminated Active Si Thickness, Deep Trench Isolation (DTI)."

"DTI was first introduced to back-illuminated pixels with conventional or slightly thicker active Si, and then optimized to enable substantially thicker active Si over time. For example, DTI came to early 1.0 µm pixels with a 2.5 µm to 2.7 µm active Si thickness and later enabled active Si up to 3.9 µm thick. Studying the 0.8 µm and 0.9 µm pixel generations it is clear an active Si thickness of >3.5 µm was selected to achieve sufficient pixel performance."

2013 Review of 3D Cameras

Not much has changed since 2013, when Nova Science Publishers unveiled a book with a chapter on 3D imaging, "A Review on Commercial Solid State 3D Cameras for Machine Vision." The review covers the PMD, MESA, Raytrix, TriDiCam, Fotonic, and Sick approaches:

Monday, July 22, 2019

Verge of CFA Diversity Era?

The MDPI paper "The Effect of the Color Filter Array Layout Choice on State-of-the-Art Demosaicing" by Ana Stojkovic, Ivana Shopovska, Hiep Luong, Jan Aelterman, Ljubomir Jovanov, and Wilfried Philips from Ghent University, Belgium, comes up with an interesting statement:

"In this study, by comparing performance of two state-of-the-art generic algorithms, we evaluate the potential of modern CFA-demosaicing. We test the hypothesis that, with the increasing power of NN-based demosaicing, the influence of optimal CFA design on system performance decreases. This hypothesis is supported with the experimental results. Such a finding would herald the possibility of relaxing CFA requirements, providing more freedom in the CFA design choice and producing high-quality cameras.

From this study, we derive a conclusion about the constantly increasing reconstruction power of the modern learning based demosaicing algorithms towards adaptiveness to any CFA design without loss in the reconstruction quality (which used to be dependent on the quality of the CFA design). This conclusion leads to a finding regarding the future opportunities for camera manufacturing and image reconstruction, specifically in combining lower hardware requirements with powerful reconstruction techniques. In other words, this means that, with the modern learning-based demosaicing methods, camera manufacturers have more freedom in the choice of the CFA pattern layout, without a noticeable loss in the image quality. In that direction, the patterns can be adapted to improve other image properties and facilitate various imaging tasks, such as the Quad Bayer that was designed to improve noise reduction in low-light imaging.
"
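The evaluation loop such a study implies can be sketched as follows (our illustration with a standard RGGB Bayer layout and a deliberately crude reconstruction; the paper's demosaicers are NN-based and far stronger): sample an image through the CFA, reconstruct it, and score the result.

```python
import numpy as np

def bayer_mask(h, w):
    """Boolean sampling masks (R, G, B) for an RGGB Bayer layout."""
    r = np.zeros((h, w), bool)
    g = np.zeros((h, w), bool)
    b = np.zeros((h, w), bool)
    r[0::2, 0::2] = True
    g[0::2, 1::2] = True
    g[1::2, 0::2] = True
    b[1::2, 1::2] = True
    return r, g, b

def mosaic(img):
    """Sample an HxWx3 image into a single-channel CFA raw frame."""
    h, w, _ = img.shape
    raw = np.zeros((h, w))
    for c, m in enumerate(bayer_mask(h, w)):
        raw[m] = img[..., c][m]
    return raw

def demosaic_nn(raw):
    """Crude reconstruction: every 2x2 tile reuses its own samples."""
    rep = lambda a: np.repeat(np.repeat(a, 2, 0), 2, 1)
    r = rep(raw[0::2, 0::2])
    g = rep((raw[0::2, 1::2] + raw[1::2, 0::2]) / 2)
    b = rep(raw[1::2, 1::2])
    return np.stack([r, g, b], axis=-1)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))          # stand-in test image
rec = demosaic_nn(mosaic(img))
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(1.0 / mse)
print(f"naive reconstruction PSNR: {psnr:.1f} dB")
```

Swapping `bayer_mask` for another layout while keeping the same demosaicer is exactly the kind of comparison the paper argues becomes less consequential as reconstruction quality improves.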

SmartSens Launches Two Industrial Grade Sensors - SC2310T and SC4210T

PRNewswire: SmartSens announced two new CMOS sensors, SC2310T and SC4210T, with a unique pixel architecture that delivers superior low-light sensitivity and an HDR of 100 dB, combined with an industrial temperature range of -30°C to 85°C.

The new products are said to be leading the market with an SNR1s of 0.21 lux. The 2MP SC2310T and 4MP SC4210T SmartClarity sensors are based on 3µm BSI pixel technology and support video capture at 60 fps in HDR mode.
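For scale, the quoted 100 dB figure, read with the usual 20·log10 dynamic-range convention for image sensors, corresponds to a 100,000:1 scene contrast:

```python
import math

# 100 dB of dynamic range in the 20*log10 convention common for image
# sensors is the ratio between the brightest and darkest resolvable levels.
ratio = 10 ** (100 / 20)
print(f"{ratio:,.0f}:1")       # 100,000:1
db_back = 20 * math.log10(ratio)
```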

"Over the last nine years, we have built a rich portfolio of imaging sensors used in electronics and hardware products, including sports cameras, drones, robot cleaners, consumer automotive cameras, surveillance and smart home cameras. With the growing demand for superior quality imaging sensors outside of traditional consumer applications, SC2310T and SC4210T will extend our product line for our customers," said Chris Yiu, CMO, SmartSens. "The ability to deliver unparalleled image quality in extreme lighting conditions, and being able to operate under critical environmental temperature ranges, makes the devices an attractive choice for next-generation catch-all video applications."

SC2310T and SC4210T are currently available for sampling and are expected to enter volume production in July.

Rumor on 5 New Sony Full Frame Sensors

Sony E-Mount Rumors publishes what it calls "leaked datasheets" of 5 full-frame CMOS sensors: IMX311, IMX313, IMX409, IMX521, IMX554. The most unusual one is the IMX311 with 45°-angled pixels. Note that the resolution of ~12,000 x ~4,000 square pixels does not have the same aspect ratio as the 41mm x 30mm optical format, possibly due to the 45° angle:
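Making the aspect-ratio remark explicit (figures as quoted; the resolution numbers are approximate):

```python
# Back-of-envelope check of the rumored IMX311 figures as quoted above.
cols, rows = 12_000, 4_000
width_mm, height_mm = 41.0, 30.0

print(cols / rows)                # pixel-count aspect ratio: 3.0
print(width_mm / height_mm)       # optical-format aspect ratio: ~1.37
print(width_mm / cols * 1000)     # implied horizontal pitch, um: ~3.4
print(height_mm / rows * 1000)    # implied vertical pitch, um: ~7.5
```

The roughly 2x mismatch between the implied horizontal and vertical pitches is consistent with some non-square-grid sampling, as the 45° pixel angle suggests.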


IMX521 is a high speed sensor with quad CFA:

Depth Sensing in Automotive Applications

First part of RSIP webinar series on automotive AI talks about ways to sense depth in ADAS and autonomous driving applications:

Sunday, July 21, 2019

ToF News: Broadcom, Renesas, Opnous

The ToF market is becoming rather crowded, with many companies entering it in anticipation of fast growth.

The Broadcom AFBR-S50MV85G is an APD-pixel-based distance and motion measurement ToF sensor. It supports up to 3,000 frames per second with up to 16 illuminated pixels. The sensor is aimed at industrial applications and gesture sensing and is said to have best-in-class ambient light suppression of up to 200k lux, so use in outdoor environments should not be a problem.

Features:
  • Integrated 850 nm laser light source
  • Between 7-16 illuminated pixels
  • FoV of up to 12.4°x 6.2°
  • Very fast measurement rates of up to 3 kHz
  • Variable distance range up to 10m
  • Operation up to 200k Lux ambient light
  • Works well on all surface conditions
  • Laser Class 1 eye safe ready
  • Accuracy better than 1%
  • Drop-in compatible within the AFBR-S50 sensor platform
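As a sanity check on these figures (generic direct-ToF arithmetic, not Broadcom documentation), a pulse's round-trip time maps to distance as d = c·t/2:

```python
# Generic direct-ToF relation: a pulse travels to the target and back, so the
# measured round-trip time t corresponds to distance d = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

def round_trip_s(d_m: float) -> float:
    return 2.0 * d_m / C

# The stated 10 m maximum range corresponds to a ~67 ns echo, so a 3 kHz
# measurement rate (~333 us per measurement) leaves time to average many pulses.
print(round_trip_s(10.0))
```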


The Renesas ISL29501 (Intersil) is a ToF signal processor that works with an external emitter and detector. The sensor operates on the i-ToF in-phase/out-phase principle:
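A generic four-phase i-ToF demodulation sketch (textbook formulation under our own sign convention, not the ISL29501 register interface) shows how in-phase/quadrature correlation samples become a distance:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(q0: float, q90: float, q180: float, q270: float,
                  f_mod: float) -> float:
    """Distance from four correlation samples of the modulated return.

    With Q(theta) ~ cos(phase - theta), the return phase is
    atan2(Q90 - Q270, Q0 - Q180) and distance = c * phase / (4 * pi * f_mod).
    """
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

At 20 MHz modulation, for example, the unambiguous range is c/(2·f_mod) ≈ 7.5 m; i-ToF systems often combine several modulation frequencies to extend it.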


Shanghai, China-based Opnous offers a number of ToF sensors with different resolutions: