Yole Developpement publishes its webcast on 3D Imaging and Sensing:
Saturday, September 30, 2017
Friday, September 29, 2017
Tessera Accuses Samsung of Imaging Patent Infringement
BusinessWire: Tessera and its subsidiaries file legal proceedings today against Samsung, alleging infringement of 24 patents, including some on bonding and imaging technologies:
Invensas Bonding Technologies (formerly Ziptronix) files an action against Samsung in the U.S. District Court for the District of New Jersey, alleging infringement of six patents relating to the Company’s semiconductor bonding technologies. The patents at issue are U.S. Patent Nos. 7,553,744; 7,807,549; 7,871,898; 8,153,505; 9,391,143; and 9,431,368.
FotoNation and DigitalOptics MEMS file an action against Samsung in the U.S. District Court for the Eastern District of Texas, alleging infringement of eight patents relating to imaging technologies. The patents at issue are 8,254,674; 8,331,715; 7,860,274; 7,697,829; 7,574,016; 7,620,218; 7,916,897; and 8,908,932.
Canon on Large Pixel Design Challenges
Canon whitepaper "Advances in CMOS Image Sensors and Associated Processing" by Shin Kikuchi, Daisuke Kobayashi, Hitoshi Yasuda, Hajime Ueno, and Laurence Thorpe, first presented at the Hollywood Professional Alliance (HPA) Tech Retreat in Palm Springs on February 19, 2016, talks about the challenges of 19um large pixel design (note the strange definition of conversion gain, probably meant to be image lag):
The image sensor design sought optimization of three key attributes of the photosite:
1. Sensitivity – determined by the quantum efficiency of the photosite
2. Saturated charge quantity (sometimes termed full well capacity) – that determines dynamic range
3. Efficiency of the charge transfer (sometimes termed conversion gain) – the goal being to transfer all electrons during each reset period to ensure full sensitivity
Canon says it was able to achieve 70% QE at 500nm in a monochrome sensor.
The continuation of the whitepaper, mostly about extended DR and dual pixel, is available here.
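The three attributes above trade off against each other; a back-of-envelope Python sketch (illustrative full well and read noise values, not from the whitepaper; only the 70% QE figure is Canon's) shows how they map to collected signal and dynamic range:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    # Dynamic range in dB: ratio of saturation charge to the noise floor.
    return 20 * math.log10(full_well_e / read_noise_e)

# Hypothetical large pixel: 100,000 e- full well, 2 e- read noise.
dr_db = dynamic_range_db(100_000, 2)   # ~94 dB

# Sensitivity is set by QE: incident photons times QE gives photoelectrons.
signal_e = 10_000 * 0.70               # 10,000 photons at 70% QE -> 7,000 e-
```

A larger pixel buys full well (and hence dynamic range), but only if the transfer gate can sweep all of those electrons out each readout, which is the third attribute on Canon's list.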
Doogee Says to be 1st on the Market with 3D Face Unlock
PRNewswire: The DOOGEE MIX 2 quad-camera smartphone is said to beat Apple by delivering 3D face recognition to the market one month earlier:
"The Face ID of iPhone X gives people one more reason to pay $999 for it, but what if the DOOGEE MIX 2 can do this as well? DOOGEE announced they would apply face recognition in MIX 2 for the first time, which will be operated by the front camera.
Since there is no room for the fingerprint sensor in full display devices at the front, face recognition may be a trend of the business. However, this is the first time that an Android smartphone using face recognition to unlock the full display. Considering iPhone X will be available in November, and DOOGEE MIX 2 is coming in October, it may become the world's first launched smartphone with face recognition and full display."
Toyota Autonomous Platform 2.1 Tests Different LiDARs
Toyota announces its autonomous driving car Platform 2.1, densely packed with cameras and LiDARs from different manufacturers - I was able to count 16 of them:
"Platform 2.1 also expands TRI's portfolio of suppliers, incorporating a new high-fidelity LIDAR system provided by Luminar. This new LIDAR provides a longer sensing range, a much denser point cloud to better detect positions of three-dimensional objects, and a field of view that is the first to be dynamically configurable, which means that measurement points can be concentrated where sensing is needed most. The new LIDAR is married to the existing sensing system for 360-degree coverage. TRI expects to source additional suppliers as disruptive technology becomes available in the future."
Toyota video explains the new Platform 2.1 features:
Thursday, September 28, 2017
Sony 3rd Generation Global Shutter Sensors
Sony publishes flyers of the 2.8MP IMX421LLJ and 2MP IMX422LLJ Pregius sensors featuring a 4.5um pixel and faster frame rates:
FLIR (formerly Point Grey) video explains the main differences between the 1st, 2nd, and 3rd generations of Pregius sensors:
Sony also added the VGA IMX397CLN 2nd generation sensor to its lineup.
Basler Compares 3D Camera Technologies
Basler whitepaper "Applications for Time-of-Flight Cameras in Robotics, Logistics and Medicine" compares different 3D camera technologies:
Wednesday, September 27, 2017
IHS Markit Estimates iPhone 8+ Cameras Cost at 11.2% of BOM
IHS Markit publishes an Apple iPhone 8 Plus BOM estimation of $288.08, with the dual rear and single front cameras estimated at $32.50. There are also a proximity ToF sensor and an ambient light sensor that are part of the sensor hub, estimated at $6.65:
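The headline percentage is simple arithmetic on the quoted estimates; a quick check using the IHS Markit figures above:

```python
bom_total = 288.08    # IHS Markit total BOM estimate, USD
cameras = 32.50       # dual rear + single front camera modules
sensor_hub_3d = 6.65  # proximity ToF + ambient light sensor (sensor hub)

# Camera share of the total BOM; the small difference from the published
# 11.2% presumably comes from rounding in the released numbers.
camera_share = cameras / bom_total   # ~0.113
```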
IHS Markit says about the iPhone cameras:
"The cameras have been calibrated for AR, and the A11 Bionic chip is also optimized for AR. Slow-motion video capture at 1080p is also smoother, at 240 frames per second – double from last year.
The iPhone 8 and 8 Plus have new sensors, with lenses featuring f1.8 and f2.8 apertures – brighter than the 7 Plus telephoto – in the iPhone 8 Plus. It also has new color filters. Studio lighting includes dropping the background out completely to black.
“From our BOM analysis, we can see that Apple invested heavily in the camera capabilities of the iPhone 8 Plus due to the increase in component costs,” said Wayne Lam, principal analyst, mobile devices and networks for IHS Markit. “Based on these investments, we expect improvements not only in the optics in the dual camera module, but also in computationally intensive requirements of the portrait lighting capture feature that rely on the graphical horsepower and neural engine (AI) of the A11 Bionic chip.”
Vivo to Use Dual Pixel Camera for 3D Face Recognition
InstantFlashNews reports that Vivo, one of the largest Chinese smartphone manufacturers, is going to use a dual pixel camera for 3D face scanning in its "bi-directional Face Wake" facial recognition technology.
Vivo X series smartphone product manager, Han Boxiao, says that the company's "Bi-directional Face Wake" is "the only one with the 3D face recognition", other than Apple's Face ID. "3D human face information scanning is achieved through a single dual pixel camera. Our face recognition is bi-directional 3D recognition, after tens of thousands of tests, the planning time is probably the Spring Festival in February this year." (CN Beta)
Tuesday, September 26, 2017
Basler Stock Skyrockets
Germany-based machine vision and industrial camera maker Basler's stock market value has more than tripled since the beginning of 2017. The company's earnings reports reveal a surge in its imaging business:
10-05-17
"The strong demand is mainly due to high investments in the electronics industry in Asia along with a widely spread upswing in the market. Furthermore, bottle necks in materials and production led to increasing delivery times and these to early order placements."
09-08-17
"In a very dynamic market environment, Basler AG closed the first half-year of 2017 with new record values in incoming orders and sales. For the first six months of 2017, the VDMA (Verband Deutscher Maschinen- und Anlagenbau, German engineering association) reported the strongest growth for image processing components since 15 years. For German manufacturers of image processing components this meant an order growth by 47 % and a sales growth by 43 % - in the same period Basler's incoming orders grew by 100 % and sales by 62 %.”
Thanks to JK for the link!
Update: Here is the English version of the stock price:
Dalsa Enters Production of Industry's First Polarization Line Scan Camera
Marketwired: Teledyne Dalsa announces production of the new Polarization camera in its Piranha family. First announced in Q2 2017, the Piranha4 Polarization camera is now in full production with improved performance.
The camera uses a quadlinear CMOS sensor with nanowire micro-polarizer filters. It captures multiple native polarization state data without any interpolation. With a maximum line rate of 70 kHz, the camera outputs independent images of 0°(s), 90° (p), and 135° polarization states as well as an unfiltered channel.
“Polarization brings vision technology to the next level for many industrial applications. It detects material properties such as birefringence, stress, film, composition, and grading etc. that are not detectable using conventional imaging,” said Xing-Fei He, Senior Product Manager.
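With three analyzer orientations per image point, the linear Stokes parameters, and from them the degree and angle of linear polarization, can be recovered without interpolation. A minimal per-pixel sketch, assuming the channels are the 0°, 90°, and 135° intensities (function and variable names are mine, not Teledyne Dalsa's):

```python
import math

def linear_stokes(i0, i90, i135):
    """Linear Stokes parameters from 0/90/135-degree analyzer intensities.

    S0 = total intensity, S1 = 0-vs-90 preference, S2 = 45-vs-135 preference.
    With no 45-degree channel, use I45 + I135 = S0, hence S2 = S0 - 2*I135.
    """
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = s0 - 2 * i135
    return s0, s1, s2

def dolp_aolp(i0, i90, i135):
    """Degree and angle of linear polarization for one pixel."""
    s0, s1, s2 = linear_stokes(i0, i90, i135)
    dolp = math.hypot(s1, s2) / s0
    aolp = 0.5 * math.atan2(s2, s1)  # radians
    return dolp, aolp

# Fully polarized light at 0 degrees: by Malus' law the 135-degree
# channel sees half the signal, the 90-degree channel sees none.
dolp, aolp = dolp_aolp(100.0, 0.0, 50.0)  # dolp = 1.0, aolp = 0.0
```

Stress and birefringence inspection work by mapping DoLP and AoLP variations that are invisible in the unfiltered intensity channel.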
The company's whitepaper gives the polarization sensor details:
Recent Stacking Advances Update
Phil Garrou reviews the recent image sensor advances in his IFTLE 353, including IISW 2017 (Hiroshima) papers and Samsung and Sony news. Hybrid bonding is said to be capturing most of the market.
Monday, September 25, 2017
Zion Research Forecasts Image Sensor Market Growth
GlobeNewsWire: Zion Market Research report "Image Sensor Market by Technology (CMOS Image Sensor, CCD Image Sensor and Hybrid Image Sensor) for Aerospace, Automotive, Consumer Electronics, Healthcare, Industrial, Entertainment, Security & Surveillance and Other Applications: Global Industry Perspective, Comprehensive Analysis and Forecast, 2016-2022" unveils the company's view on the image sensor market:
"The image sensor is a sensor that senses and delivers the information. The information detected by the image sensor comprises an image.
<...>
ABB Ltd, Vishay Intertechnology, Inc, Delphi Automotive LLP, Honeywell International, Inc, Raytek Corporation, Meggitt Sensing Systems, Analog Devices Inc., Infineon Technologies AG, Motorola Solutions, Inc., Robert Bosch GmBH, Siemens AG and others are the major players in the image sensor market."
Sunday, September 24, 2017
FLIR Boson Teardown
SystemPlus publishes a reverse engineering report on the FLIR Boson low-cost LWIR camera module:
"The FLIR Boson camera core occupies only 4.9cm3 without its lens, including a 320×256 pixel microbolometer and an advanced processor. The system is made very compact and easy for integrators to handle. It includes a new chalcogenide glass for the lens and a powerful Vision Processing Unit for the first time.
The thermal camera uses 12µm pixels based on a vanadium oxide technology microbolometer, the ISC1406L, which features a 320×256 resolution and wafer-level packaging (WLP) to achieve a very compact design. The die is half the size of the one in the oldest ISC0901 model, but gives the same definition."
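The quoted pixel pitch and resolution imply a very small active array, which is what enables the compact wafer-level-packaged design; a quick check (active area only, the die itself is larger):

```python
pixel_um = 12.0      # quoted microbolometer pixel pitch
cols, rows = 320, 256

# Active-array dimensions implied by the pitch and resolution.
width_mm = cols * pixel_um / 1000    # 3.84 mm
height_mm = rows * pixel_um / 1000   # 3.072 mm
area_mm2 = width_mm * height_mm      # ~11.8 mm^2
```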
Saturday, September 23, 2017
Fly Vision vs Human Vision
BBC publishes an article "Why is it so hard to swat a fly?" comparing human vision with fly vision:
"...have a look at a clock with a ticking hand. As a human, you see the clock ticking at a particular speed. But for a turtle it would appear to be ticking at twice that speed. For most fly species, each tick would drag by about four times more slowly. In effect, the speed of time differs depending on your species.
This happens because animals see the world around them like a continuous video. But in reality, they piece together images sent from the eyes to the brain in distinct flashes a set number of times per second. Humans average 60 flashes per second, turtles 15, and flies 250."
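Reading the quoted flash rates as visual sampling rates, the relative "speed of time" is just a ratio; a quick check against the article's fly figure:

```python
human_fps = 60    # quoted flashes per second for humans
turtle_fps = 15
fly_fps = 250

# A higher sampling rate stretches each external event over more
# perceived "frames", so time appears to run more slowly.
fly_slowdown = fly_fps / human_fps     # ~4.2x, the "four times more slowly"
turtle_ratio = human_fps / turtle_fps  # turtles sample 4x less often than we do
```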
Intel Project Alloy Cancelled
SlashGear: Intel's work on the Project Alloy "Merged Reality" headset featuring a RealSense 3D camera has been stopped. In a statement to RoadToVR, Intel says:
"Intel has made the decision to wind down its Project Alloy reference design, however we will continue to invest in the development of technologies to power next-generation AR/VR experiences. This includes: Movidius for visual processing, Intel RealSense depth sensing and six degrees of freedom (6DoF) solutions, and other enabling technologies..."
TechInsights Unveils iPhone 8 Plus Camera Surprises
TechInsights was quick to unveil a few findings from its iPhone 8 Plus reverse engineering:
The dual rear camera uses a 1.22um pixel size in the 12MP wide angle sensor and a 1.0um pixel in the 12MP tele sensor. The 7MP front camera has a 1.0um pixel size.
"The dual camera module size is 21.0 mm x 10.6 mm x 6.3 mm thick. Based on our initial X-rays it appears the wide-angle camera uses optical image stabilization (OIS), while the telephoto camera does not (the same configuration as iPhone 7 Plus).
The wide-angle Sony CIS has a die size of 6.29 mm x 5.21 mm (32.8 mm2). This compares to a 32.3 mm2 die size for iPhone 7’s wide-angle CIS.
We do note a new Phase Pixel pattern, but the big news is the absence of surface artifacts corresponding to the through silicon via (TSV) arrays we’ve seen for a few years. A superficial review of the die photo would suggest it’s a regular back-illuminated (BSI) chip. However, we’ve confirmed it’s a stacked (Exmor RS) chip which means hybrid bonding is in use for the first time in an Apple camera!"
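The pixel array accounts for only part of the measured die; a rough check from the quoted numbers (the helper function is mine, and it ignores periphery and dummy pixels):

```python
def active_area_mm2(mp, pixel_um):
    """Approximate pixel-array area: pixel count times per-pixel area."""
    return mp * 1e6 * (pixel_um / 1000) ** 2

wide = active_area_mm2(12, 1.22)   # ~17.9 mm^2 for the 12MP wide sensor
tele = active_area_mm2(12, 1.0)    # ~12.0 mm^2 for the 12MP tele sensor
die_mm2 = 6.29 * 5.21              # ~32.8 mm^2, TechInsights' measured die
```

Roughly half of the wide-angle die is left for readout and logic, and with hybrid bonding the heavy processing sits on the stacked die underneath rather than beside the array.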
Friday, September 22, 2017
Cameras with Black Silicon Sensors Reach the Market
It came to my attention that a number of Japanese camera companies have started selling cameras with SiOnyx Black Silicon sensors. One of these companies is Bitran, with the CS-64NIR cooled camera based on the XQE-0920 sensor. The company publishes a presentation with application examples for the new camera, sensitive up to 1200nm.
Another company is ACH2 Technologies, selling the ACH100-NIR camera, said to be sensitive up to 1400nm:
Yet another company is Artray with two cameras: 1.3MP ARTCAM-130XQE-WOM and 0.92MP ARTCAM-092XQE-WOM.
It's very nice to see a new, radically different technology finally reaching the market.
Tractica Forecasts Rise of Enterprise AR
Tractica posts an "Augmented Reality: The Rise of Enterprise Use Cases" article on its website. A few interesting statements:
"Smart glasses that replace or complement the desktop likely face a long-haul journey. There are a number of technical issues to overcome, including FOV, weight, ergonomics and comfort, and extended AR use.
The momentum for smart AR glasses has shifted toward mixed reality (MR) headsets, which offer a much more compelling user experience, using 3D depth sensing and positional tracking to immerse the user into a holographic world. Microsoft HoloLens is the first truly capable MR headset and is seeing rapid momentum in terms of trials and pilots. There are still questions about whether or not Microsoft has oversold the capabilities of the device, and if the enterprise market can scale to make it a commercially viable product. Tractica expects that Microsoft is likely to be committed to the enterprise market at least through the end of 2018 before it readies the HoloLens for consumer launch. If the pilots do not convert into meaningful volumes, Microsoft could find itself in an awkward place like Google did with Glass, eventually pulling the plug.
Tractica estimates that the monthly active users (MAUs) for smartphone/tablet enterprise AR will be 49 million by the end of 2022. In contrast, the installed base of enterprise smart glasses users at the end of 2022 will be approximately 19 to 21 million."
Thursday, September 21, 2017
Espros Presents its Pulsed ToF Solution
Espros CEO Beat De Coi presents the first results of his company's pulsed ToF (pToF) chip at AutoSens 2017 in Brussels, Belgium. The sensor's performance includes a QE of 70% at 905nm, a sensitivity trigger level as low as 20 e- for object detection, 250MHz CCD sampling, and interpolation algorithms to reach centimeter accuracy. The sensors will operate in full sunlight without disturbance and function in all weather conditions. The presentation is available for download at the Espros site.
Beat De Coi says: «This new generation of pulsed time-of-flight sensors will show a performance that will boost autonomous driving effort. I have been working on time-of-flight technology since 30 years and I am extremely proud that we reached this level with conventional silicon.»
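The 250MHz sampling figure by itself only resolves round-trip distance to about 0.6m, which is why interpolation is needed for centimeter accuracy. A sketch using parabolic peak interpolation, a common approach for this but not necessarily the Espros algorithm:

```python
C = 299_792_458.0    # speed of light, m/s
F_SAMPLE = 250e6     # quoted CCD sampling rate, Hz

# One sample period covers c/(2*f) of one-way distance (round trip halved).
bin_m = C / (2 * F_SAMPLE)   # ~0.6 m per raw sample

def interpolated_range(samples, peak_idx):
    """Fit a parabola through the return-pulse peak and its two neighbors
    to place the pulse with sub-bin precision, then convert to meters."""
    a, b, c3 = samples[peak_idx - 1], samples[peak_idx], samples[peak_idx + 1]
    offset = 0.5 * (a - c3) / (a - 2 * b + c3)   # in [-0.5, 0.5] bins
    return (peak_idx + offset) * bin_m

# A symmetric pulse centered exactly on bin 10 interpolates to 10 bins even.
r = interpolated_range([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 4, 1, 0], 10)
```

Getting from 0.6m bins to centimeters this way demands a clean, well-sampled pulse shape, which is where the low trigger level and high QE matter.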
A few slides from the presentation explaining the new Espros chip operation:
Auger Excitation Shows APD-like Gains
A group of UCSD researchers publishes an open-access Applied Physics Letters paper "An amorphous silicon photodiode with 2 THz gain‐bandwidth product based on cycling excitation process" by Lujiang Yan, Yugang Yu, Alex Ce Zhang, David Hall, Iftikhar Ahmad Niaz, Mohammad Abu Raihan Miah, Yu-Hsin Liu, and Yu-Hwa Lo. The paper proposes an APD-magnitude gain mechanism by means of a 30nm-thick amorphous Si film deposited on top of the bulk silicon:
"APDs have relatively high excess noise, a limited gain-bandwidth product, and high operation voltage, presenting a need for alternative signal amplification mechanisms of superior properties. As an amplification mechanism, the cycling excitation process (CEP) was recently reported in a silicon p-n junction with subtle control and balance of the impurity levels and profiles. Realizing that CEP effect depends on Auger excitation involving localized states, we made the counter intuitive hypothesis that disordered materials, such as amorphous silicon, with their abundant localized states, can produce strong CEP effects with high gain and speed at low noise, despite their extremely low mobility and large number of defects. Here, we demonstrate an amorphous silicon low noise photodiode with gain-bandwidth product of over 2 THz, based on a very simple structure."
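A gain-bandwidth product caps how much gain is available at a given speed; a one-line illustration of what 2 THz buys (the 1 GHz operating point is my example, not from the paper):

```python
GBP_HZ = 2e12  # quoted gain-bandwidth product, 2 THz

def gain_at(bandwidth_hz):
    # Available multiplication gain if the GBP is the limiting factor.
    return GBP_HZ / bandwidth_hz

g = gain_at(1e9)   # a 1 GHz detector could still run at 2000x gain
```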
Wednesday, September 20, 2017
Yole on iPhone X 3D Innovations
Yole Developpement publishes its analysis of iPhone X 3D camera design and implications "Apple iPhone X: unlocking the next decade with a revolution:"
"The infrared camera, proximity ToF detector and flood illuminator seem to be treated as a single block unit. This is supplied by STMicroelectronics, along with Himax for the illuminator subsystem, and Philips Photonics and Finisar for the infrared-light vertical-cavity surface-emitting laser (VCSEL). Then, on the right hand of the speaker, the regular front-facing camera is probably supplied by Cowell, and the sensor chip by Sony. On the far right, the “dot pattern projector” is from ams subsidiary Heptagon... It combines a VCSEL, probably from Lumentum or Princeton Optronics, a wafer level lens and a diffractive optical element (DOE) able to project 30,000 dots of infrared light.
The next step forward should be full ToF array cameras. According to the roadmap Yole has published this should happen before 2020."
Luminar on Automotive LiDAR Progress
OSA publishes a digest of Luminar CTO Jason Eichenholz's talk at the 2017 Frontiers in Optics meeting. A few quotes:
"Surprisingly, however, despite this safety imperative, Eichenholz pointed out that the lidar system used (for example) in Uber’s 2017 self-driving demo has essentially the same technical specifications as the system of the winning vehicle in DARPA’s 2007 autonomous-vehicle grand challenge. “In ten years,” he said, “you have not seen a dramatic improvement in lidar systems to enable fully autonomous driving. There’s been so much progress in computation, so much in machine vision … and yet the technology for the main set of eyes for these cars hasn’t evolved.”
On the requirements side, the array of demands is sobering. They include, of course, a bevy of specific requirements: a 200-m range, to give the vehicle passenger a minimum of seven seconds of reaction time in case of an emergency; laser eye safety; the ability to capture millions of points per second and maintain a 10-fps frame rate; and the ability to handle fog and other unclear conditions.
But Eichenholz also stressed that an autonomous vehicle on the road operates in a “target-rich” environment, with hundreds of other autonomous vehicles shooting out their own laser signals. That environment, he said, creates huge challenges of background noise and interference. And he noted some of the same issues with supply chain, cost control, and zero error tolerance.
Eichenholz outlined some of the approaches and technical steps that Luminar has adopted in its path to meet those many requirements in autonomous-vehicle lidar. One step, he said, was the choice of a 1550-nm, InGaAs laser, which allows both eye safety and a good photon budget. Another was the use of an InGaAs linear avalanche photodiode detector rather than single-photon counting, and scanning the laser signal for field coverage rather than using a detector array. The latter two decisions, he said, substantially reduce problems of background noise and interference. “This is a huge part of our architecture.”
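The 200-m range and seven-second reaction-time requirements quoted above are easy to sanity-check with a back-of-envelope calculation (a sketch; the highway speed and point-rate figures below are illustrative assumptions, not numbers from the talk):

```python
# Back-of-envelope check of the lidar requirements quoted above.
# The assumed highway speed and point rate are illustrative only.
RANGE_M = 200.0      # required detection range, m
SPEED_MPS = 28.0     # ~100 km/h highway speed (assumption)
reaction_time_s = RANGE_M / SPEED_MPS
print(f"reaction time at ~100 km/h: {reaction_time_s:.1f} s")  # ~7.1 s

# Points per frame implied by "millions of points per second" at 10 fps
POINTS_PER_S = 2_000_000  # illustrative assumption
FPS = 10
points_per_frame = POINTS_PER_S // FPS
print(f"points per frame: {points_per_frame:,}")  # 200,000
```

At roughly highway speed, a 200-m range indeed buys about seven seconds, consistent with the requirement as stated.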
Wired UK publishes a video interview with Luminar CEO Austin Russell:
Tuesday, September 19, 2017
Exvision High-Speed Image Sensor-Based Gesture Control
Exvision, a spin-off from University of Tokyo's Ishikawa-Watanabe Laboratory, demos gesture control from far away, based on a high-speed image sensor (currently, a 120fps Sony IMX208):
SensL Demos 100m LiDAR Range
SensL publishes a demo video of a 100m LiDAR based on its 1 x 16 photomultiplier imager scanned over a 5 x 80 deg field of view:
OmniVision Announces Automotive Reference Design
PRNewswire: OmniVision announces an automotive reference design system (ARDS) that allows automotive imaging-system and software developers to mix and match image sensors, ISPs and long-distance serializer modules.
The imaging-system industry is anticipating significant growth in ADAS, including surround-view and rear-view camera systems. NCAP mandates all new vehicles in the U.S. to be equipped with rear-view cameras by 2018. Surround-view systems (SVS) are also expected to become an even more popular feature for the luxury-vehicle segment within the same timeframe. SVSs typically require at least four cameras to provide a 360-degree view.
OmniVision's ARDS demo kits feature the company's 1080p60 OV2775 image sensor, an optional OV495 ISP, and a serializer camera module. The OV2775 is built on a 2.8um OmniBSI-2 Deep Well pixel with a 16-bit linear output from a single exposure.
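A 16-bit linear output at 1080p60 implies a substantial raw data rate, which helps explain why the kits pair the sensor with a long-distance serializer. A rough estimate (a sketch that ignores blanking intervals and serializer link overhead):

```python
# Approximate raw pixel data rate of a 1080p60 sensor with 16-bit output.
# Ignores horizontal/vertical blanking and link protocol overhead.
WIDTH, HEIGHT, FPS, BITS_PER_PIXEL = 1920, 1080, 60, 16
bits_per_s = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"raw data rate: {bits_per_s / 1e9:.2f} Gbit/s")  # ~1.99 Gbit/s
```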
Monday, September 18, 2017
Samsung to Start Mass Production of 1000fps 3-Layer Sensor
ETNews reports that Samsung is following in Sony's footsteps by developing its own 1000fps image sensor for smartphones:
"Samsung Electronics is going to start mass-producing ‘3-layered image sensor’ in November. This image sensor is made into a layered structure by connecting a system semiconductor (logic chip) that is in charge of calculations and DRAM chip that can temporarily store data through TSV (Through Silicon Via) technology. Samsung Electronics currently ordered special equipment for mass-production and is going to start mass-producing ‘3-layered image sensor’ after doing pilot operation in next month.
SONY established a batch process system that attaches a sensor, a DRAM chip, and a logic chip in a unit of a wafer. On the other hand, it is understood that Samsung Electronics is using a method that makes 2-layered structure with a sensor and a logic chip and attaches DRAM through TC (Thermal Compression) bonding method after flipping over a wafer. From productivity and production cost, SONY has an upper hand. It seems that a reason why Samsung Electronics decided to use its way is because it wanted to avoid using other patents."
Turkish Startup Demos CMOS Night Vision
Magic Leap Valuation to Grow to $6B
Bloomberg reports that AR headset startup Magic Leap is in the process of raising a new financing round of more than $500M at a valuation close to $6B. The company has already raised more than $1.3B in previous rounds valuing it at $4.5B.
"According to people familiar with the company’s plans, the headset device will cost between $1,500 and $2,000, although that could change. Magic Leap hopes to ship its first device to a small group of users within six months, according to three people familiar with its plans."
Sunday, September 17, 2017
Haitong Securities Forecasts Smartphones with 3D Sensing Market $992.5B in 2020
InstantFlashNews quotes a number of Chinese-language sources saying that Haitong Securities analysts forecast global sales of smartphones equipped with 3D sensors to reach $992.5B in 2020. Sales of smartphones with a front structured-light camera will account for $667.8B, while sales of smartphones with a rear ToF camera will take $324.7B.
Haitong Securities estimates the iPhone X 3D structured-light component cost at ~$15, with the 3D image sensor at ~$3, the TX component at ~$7, the RX at ~$3, and the system module at about $2.
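The quoted figures are internally consistent, as a quick check confirms:

```python
# Check that the quoted segment forecasts and component costs add up.
structured_light_b = 667.8  # front structured-light phones, $B
tof_b = 324.7               # rear ToF phones, $B
total_b = round(structured_light_b + tof_b, 1)
print(f"total 3D-sensing smartphone sales: ${total_b}B")  # $992.5B

# iPhone X structured-light component cost breakdown, $ per unit
components = {"3D image sensor": 3, "TX": 7, "RX": 3, "system module": 2}
print(f"component total: ${sum(components.values())}")  # $15
```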
Image Sensors in AR/VR Devices
Citibank publishes a nice market report on Augmented and Virtual Reality dated October 2016. The report emphasizes the large image-sensing content in almost all AR/VR devices:
Saturday, September 16, 2017
Espros Keeps Improving its ToF Sensors
Espros' September 2017 Newsletter updates on the company's progress with its ToF solutions:
"A real breakthrough was achieved in the field of camera calibration. Our initial goal was to simply find the optimum procedure to calibrate a DME660 camera. The result however is a revolutionary finding, that not only includes the compensation algorithm but also a simple desktop hardware for distance calibration.
No need any more for large target screens and moving stages! Simply put your camera in a shoebox sized flat field setup and calibrate the full distance range with help of the on-chip DLL stage. Done!"
"You won't recognize our epc660 flagship QVGA imager in version 007! Improved ADC performance, 28% higher sensitivity, as well as low distance response non-uniformity (DRNU) of a few centimeters only (uncalibrated). We took 3 rounds (versions 004-006) in the fab transfer process and did not let go before we got it right."
The company also presents preliminary data on its ToFCam 635 module:
Friday, September 15, 2017
iPhone X 3D Camera Cost Estimated at 6% of BOM
GSMArena and MyFixGuide quote the Chinese site ICHunt.com estimating the Apple iPhone X 3D camera component cost at $25 out of a whole BOM of $412.75:
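The "6% of BOM" headline follows directly from the quoted figures:

```python
# Verify the 3D camera's share of the estimated iPhone X bill of materials.
camera_cost = 25.00   # 3D camera components, $
total_bom = 412.75    # whole BOM estimate, $
share_pct = 100 * camera_cost / total_bom
print(f"3D camera share of BOM: {share_pct:.0f}%")  # 6%
```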
Digitimes on iPhone X Influence on the Industry
Digitimes believes that iPhone X "new features... such as 3D sensing are likely to become new standards for next-generation smartphones launched by Android-based smartphone vendors. The demand for 3D sensor modules is likely to experience an explosive growth in 2018-2019. Major players in the Android camp, including Samsung and Huawei, certainly will jump onto the bandwagon."
Meanwhile, smaller Android phone makers have jumped on this bandwagon even faster. The Doogee Mix 2 already offers face authentication based on its front 3D stereo camera: