Almalence compares its SuperSensor with the Google Pixel 3 Super Res Zoom. Both are based on multi-frame image acquisition and processing:
Thursday, January 31, 2019
Apple Tests Sony ToF Camera for Integration into 2020 iPhone Models
Bloomberg: Apple plans to launch iPhones with a 15-foot-range 3D rear camera as soon as 2020, according to Bloomberg sources. Apple has been in talks with Sony about testing ToF sensors for the new system. The main use case for the 3D rear camera is AR applications.
GSMArena reports that the upcoming Sony Xperia XZ4 smartphone will have a 0.3MP ToF sensor in its rear camera cluster:
Wednesday, January 30, 2019
Samsung Agreed to Acquire Corephotonics for $155M
Globes: Samsung has agreed to acquire Corephotonics for $155M, according to Globes sources. Three weeks ago, Globes reported that Samsung was in advanced talks to buy the company. Corephotonics was founded in 2012 and has raised a total of $50M.
Among the company's multi-camera solutions is a mono and color camera image fusion promising 2x SNR and 1.5x resolution improvements:
Here is the Corephotonics team:
Incidentally, $155M is the same amount that Sony paid for Toshiba's image sensor business in 2015, including the fabs and 1,100 employees.
Laser Components Presents SPAD Array for LiDAR Applications
Laser Components unveils the SPAD2L192, a solid-state SPAD sensor for flash LiDAR applications. With a resolution of 192 x 2 pixels, the SPAD array offers high sensitivity and high temporal resolution. The noise is below 50 cps. The in-pixel time-to-digital converter features a temporal resolution of 312.5 ps and a full-scale value of 1.28 μs. That enables a nominal range of up to 192 m at a resolution of 4.7 cm. The distance measurement is based on the first-photon, direct ToF principle.
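As a sanity check, the quoted range and resolution figures follow directly from the direct-ToF relation distance = c·t/2 (the light travels out and back, hence the factor of two). A minimal sketch, with variable names of my own choosing:

```python
# Back-of-envelope check of the SPAD2L192 timing figures quoted above:
# a direct-ToF sensor measures the photon round trip, so distance = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance corresponding to a measured round-trip time."""
    return C * round_trip_s / 2.0

tdc_bin = 312.5e-12    # TDC temporal resolution, 312.5 ps
full_scale = 1.28e-6   # TDC full-scale value, 1.28 us

depth_resolution = tof_distance(tdc_bin)   # one TDC bin of depth
max_range = tof_distance(full_scale)       # unambiguous range

print(f"depth resolution: {depth_resolution * 100:.1f} cm")  # -> 4.7 cm
print(f"nominal range:    {max_range:.0f} m")                # -> 192 m
```

Both numbers reproduce the datasheet values, which suggests the 192 m / 4.7 cm figures are simply the TDC full scale and bin width mapped through the round-trip relation.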
Thanks to JR for the pointer!
Uncooled InGaAs Imagers
Spectronet publishes Sofradir's presentation of its uncooled InGaAs SWIR imagers:
SCD has a similar family of uncooled SWIR imagers:
Tuesday, January 29, 2019
IISW 2019 Final Call For Papers
The International Image Sensor Workshop, to be held in Snowbird, Utah, USA on June 23-27, 2019, publishes a Final Call For Papers:
"The 2019 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. Now in its 33rd year, the workshop is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is its tradition, the 2019 workshop will emphasize an open exchange of information among participants in an informal, secluded setting. The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners."
ADI Presents ToF Camera for Smartphones and Automotive Applications
Analog Devices Youtube videos present its family of ToF sensors for a multitude of applications ranging from smartphone FaceID to automotive in-cabin driver monitoring. ADI is in the process of building a website dedicated to its ToF solutions.
Monday, January 28, 2019
TowerJazz Starts Production of World's Smallest Global Shutter Pixel Sensor
GlobeNewswire: TowerJazz announced that products featuring its world's smallest global shutter (GS) pixel are ramping to production in its 300mm Uozu, Japan manufacturing facility. This achievement, based on the company's advanced 65nm technology platform and light pipe technology, offers superb GS performance with a variety of pixel sizes down to 2.5 µm, said to be the smallest pixel size in the world.
The company's light pipe technology on its 65nm node enables this half-area pixel size, as compared with pixels currently used in the market, while still offering state-of-the-art functionality and angular response. The shutter efficiency and QE are said to be outstanding even at high ray angles (low F#s) thanks to the unique light-funneling properties of the per-pixel micro light pipe.
The world's previous smallest GS pixel, also achieved by TowerJazz, was 2.8 µm in 2016. Several products using this pixel are currently in mass production.
“We are proud to announce the world’s smallest GS Pixel going into mass production, providing our customers with cutting edge imaging solutions, meeting today’s advanced market needs. This break-through achievement strongly demonstrates TowerJazz’s continued commitment to offer leading technology as well as unique value proposition, based on its profound technical knowledge,” said Assaf Lahav, TowerJazz Fellow and CIS expert.
TowerJazz publishes its pixel catalog:
Sunday, January 27, 2019
Sony Looks for Image Sensor Designers in Europe and USA
Nikkei reports that Sony intends to open more development centers in the U.S. and Europe "in an effort to make up for a scarcity of engineering talent in Japan."
Sony has a development center in the U.S. with dozens of engineers adapting designs to customer specifications. The company is contemplating opening a similar R&D center in Europe in two to three years. Sony also plans to add technical support staff in China to work with local customers.
The image sensor market is growing 9% a year to reach $19b in 2022, according to IC Insights. 80% of Sony's image sensor business depends on smartphones. The company aims to lower the smartphone exposure to 70% by 2025 by expanding into automotive and industrial applications.
In spite of controlling 50% of the image sensor market, an unnamed Sony executive says that the company "cannot secure enough talent by looking in Japan alone." Job openings in Japan are sometimes over four times the number of applicants. Design centers in the USA and Europe might solve Sony's hiring problems.
An earlier Nikkei article portrays the hiring problem as industry-wide in Japan: "There are 256 open positions for every 100 job seekers in the semiconductor industry in Japan, up from 52 per 100 in January 2014, according to staffing agency Recruit Career. That is higher than the average for all industries."
Thanks to TG and PK for the pointer!
Sony image sensor fab in Japan
Saturday, January 26, 2019
CSEM Develops Low Power Image Sensor for IoT
CSEM developed a fully autonomous portable camera that can be deployed quickly and easily via an adhesive patch or magnet, a world first. The Witness IoT camera consumes less than 1mW in active mode and is fully covered by a flexible, high-efficiency photovoltaic cell with an adhesive surface. An HDR (120dB) CMOS sensor consuming less than 700µW at 10fps for 320x320 pixels allows triggering by scene-activity detection. The camera records still images at 1fps and stores them in flash memory for later USB readout.
The Witness prototype measures 80 x 80 mm. The camera button is 30 mm in diameter and 4 mm thick. Forthcoming versions will include VGA resolution as well as embedded face recognition.
“Enabling a range of applications from unattended surveillance and camera traps to wildlife observation, Witness perfectly embodies CSEM’s technological strategy,” enthuses Alain-Serge Porret, VP Integrated and Wireless Systems at the Swiss Research and Technology Organization. “We aim to deliver autonomous, low-energy-consuming devices combining both intelligence and efficiency.”
CSEM publishes unusually high-resolution pictures of the camera showing the sensor's fine layout details:
IEEE Spectrum on Image Sensor Damage by Lasers
As PZ pointed out in the comments, IEEE Spectrum publishes its own research on the camera damage caused by the AEye LiDAR at CES. The Spectrum article has quite a few interesting links, including a pointer to an International Laser Display Association web page discussing laser damage to image sensors and referring to the open-access Fraunhofer paper "Laser-induced damage threshold of camera sensors and micro-optoelectromechanical systems" by Bastian Schwarz, Gunnar Ritt, Michael Koerber, and Bernd Eberle, published in Optical Engineering in 2017.
"The continuous development of laser systems toward more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors, such as complementary metal–oxide–semiconductors (CMOS) and charge-coupled devices. These types of electronic sensors are used in day-to-day life but also in military or civil security applications. In camera systems dedicated to specific tasks, micro-optoelectromechanical systems, such as a digital micromirror device (DMD), are part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics and it is also susceptible to laser damage. The goal of our work is to enhance the knowledge of damaging effects on such devices exposed to laser light. The experimental setup for the investigation of laser-induced damage is described in detail. As laser sources, both pulsed lasers and continuous-wave (CW)-lasers are used. The laser-induced damage threshold is determined by the single-shot method by increasing the pulse energy from pulse to pulse or in the case of CW-lasers, by increasing the laser power. Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destructive device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe aftereffects, such as persistent dead columns or rows of pixels in the sensor image."
The paper has interesting experimental data on laser damage at 532nm wavelength:
Then, the Fraunhofer researchers determine the laser energy density needed for each type of damage:
Another SPIE paper studies the CIS damage by 1064nm laser pulses: "Damage effect on CMOS detector irradiated by single-pulse laser" by Feng Guo, Rongzhen Zhu, Ang Wang, and Xiang’ai Cheng from National Univ. of Defense Technology (China).
Camera Image Quality Benchmarking
Wiley publishes a book on image quality "Camera Image Quality Benchmarking" by Jonathan Phillips (Google) and Henrik Eliasson (Eclipse Optics).
"The authors show how to quantitatively compare image quality of cameras used for consumer photography. This book helps to fill a void in the literature by detailing the types of objective and subjective metrics that are fundamental to benchmarking still and video imaging devices. Specifically, the book provides an explanation of individual image quality attributes and how they manifest themselves to camera components and explores the key photographic still and video image quality metrics. The text also includes illustrative examples of benchmarking methods so that the practitioner can design a methodology appropriate to the photographic usage in consideration."
I'd guess part of the book's content is exposed in the Eclipse Optics blog on Medium, such as the "Sharpness and Resolution" post:
Friday, January 25, 2019
Event Based Vision Resources Archive
TL reminds me that University of Zurich-ETH maintains a large collection of event-based vision resources on Github, already mentioned here a year and a half ago. Besides links to many recent papers, there is a link to a nice video explanation of event-based sensor principles by the University of Maryland:
Thanks to TL for the reminder about the archive!
Livox Announces $600 Automotive LiDAR
Spar3D: Livox starts shipping a $600 LiDAR for autonomous vehicles and other applications:
"All Livox LiDAR sensors are low cost, high performance, highly reliable, compact in size, and mass-production ready. Specs (e.g., FOV, point density, and detection range) of different models are optimized for different application scenarios. In particular, the Mid-40 covers a circular FOV of 38.4 degrees with a detection range of up to 260 meters (for objects with reflectivity around 80%). Meanwhile, the Mid-100 combines three Mid-40 units internally to form an expansive horizontal FOV of approximately 100 degrees. Compared with the Mid-40, the Horizon has a similar measuring range but features a rectangular-shaped FOV that is 80 degrees horizontal and 25 degrees vertical, which is highly suitable for autonomous driving applications. The Horizon also delivers real-time point cloud data that is three times denser than the Mid series LiDAR sensors. As for the Tele-15, it features an ultra-long measuring range of 500 meters when reflectivity is above 80%. Even with 20% reflectivity, the measuring range is still up to 250 meters. In addition, the Tele-15 has a circular FOV of 15 degrees, which is narrower than the Mid-40, but delivers a point cloud that is 17 times denser. These key features enable the Tele-15 to see objects far ahead with great detail.
One major issue LiDAR manufacturers face is the need for extremely accurate laser/receiver alignments (less than 10 μm accuracy) within a spatial dimension of about 10cm or more. For most state-of-the-art LiDAR sensors that are currently available, these alignments are done manually by skilled personnel. Livox LiDAR sensors eliminate the need for this process by using unique scanning methods and the DL-Pack solution, which enables the mass production of Livox sensors."
The company is fairly open about its LiDAR performance. The Mid-40/Mid-100 family offers:
Tele-15 has a narrow FoV and a longer range:
Horizon model has similar specs to Mid LiDARs:
Thanks to TG for the link!
Thursday, January 24, 2019
ST Reports Double Digit Growth in Image Sensing Business
SeekingAlpha: ST Q4 2018 earnings call has interesting updates on the company's image sensor business:
"...our Analog, MEMS and Sensors group, AMS, revenues totaled $988 million, an increase of 9.5% with double-digit growth in Imaging, and single-digit growth in Analog and MEMS.
...our investments in the next generation of imaging sensor technologies. These will enable us to continue our leadership in our focus technologies for personal electronics, and to address selected industrial and automotive applications in the future... we have important R&D effort to continuously improve the performance of our device, especially addressing the Time-of-Flight application and the 3D sensing.
The strategy of ST is to be a leader on, let's say, global shutter technology... We have, let's say, improved efficiency master plan in this technology to offer the best global shutter of technology to our customer. And we do believe that this technology will enable ST to continue to sustain, okay, this market. But in personal electronics, but as well in other application, like industrial and automotive in the future.
Then we have, let's say, complementary to this - let's say, emerging global shutter technologies and product, we want also to have a strong leadership on the Time-of-Flight based devices, ambient lighting, Time-of-Flight sensor, Time-of-Flight sensors for proximity sensing ranging, but also enabling virtual reality or augmented reality application. This require technology improvement."
Wednesday, January 23, 2019
Jingfang Optoelectronics Acquires Anteryon for about 45M Euros
PRNewswire: Suzhou, China-based Jingfang Optoelectronics (WLOPT), in collaboration with Beauchamp Beer B.V., acquired the Dutch company Anteryon. The Eindhoven-registered Anteryon was spun off from Philips in 2006 with the intention of mass producing wafer-level optics and mobile phone camera modules based on it. Now, 12 years later, the company delivers optical components and related services for business and consumer markets. Its core technologies comprise IP and a proprietary replication technique to produce high-end hybrid optical lenses, combined with ultra-precise glass and surface structuring, optical and mechanical coatings, and opto-mechanical and electronic assemblies, including hyperspectral imaging.
"This acquisition is an important milestone in WLOPTs long-term growth strategy and provides WLOPT with access to key technologies for the development of miniaturized optical solutions for high-volume consumer applications, such as smartphones, next-generation security and automotive applications," said Wang Wei, WLOPT's chairman. "The Anteryon technology and renowned team of experts in micro-optics, combined with our resources, will strengthen our ability to provide radically distinctive solutions in a variety of high-growth, high-volume consumer optics applications."
"We are extremely excited about the vision, commitment, drive and innovation power of our new partner. The technology and innovation level and execution power of WLOPT is unique and guarantees a market introduction of the upcoming OptiL products in line with high-volume market demands," said Gert-Jan Bloks, CEO of Anteryon.
Tuesday, January 22, 2019
Autosens Brussels 2018 Videos: Videantis, Blackmore
Autosens keeps publishing video presentations from its 2018 Brussels conference:
Samsung Announces 20MP 0.8um Pixel Sensor for Slim Smartphones
BusinessWire: Samsung introduces its smallest high-resolution image sensor, the ISOCELL Slim 3T2, said to be the industry's most compact 20MP image sensor at 1/3.4-inch. The 0.8μm-pixel ISOCELL Slim 3T2 is aimed at both front and rear cameras in mid-range smartphones. The 1/3.4-inch 3T2 fits into a tiny module, freeing up space in hole-in-display or notch-display designs.
“The ISOCELL Slim 3T2 is our smallest and most versatile 20Mp image sensor that helps mobile device manufacturers bring differentiated consumer value not only in camera performance but also in features including hardware design,” said Jinhyun Kwon, VP of System LSI sensor marketing at Samsung Electronics. “As the demand for advanced imaging capabilities in mobile devices continue to grow, we will keep pushing the limits in image sensor technologies for richer user experiences.”
When applied in rear-facing multi-camera setups for telephoto solutions, the 3T2 adopts an RGB color filter array instead of the Tetracell CFA. The small size of the image sensor also reduces the height of the tele-camera module by around 7% compared to Samsung's 1/3-inch 20MP sensor. Compared to a 13MP sensor with the same module height, the 20MP 3T2 retains 60% higher effective resolution at 10x digital zoom.
The Samsung ISOCELL Slim 3T2 is expected to be in mass production in Q1 2019.
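The digital-zoom claim can be roughly checked with a back-of-envelope model (my assumption, not Samsung's stated methodology): an N-x digital zoom crops the central 1/N of the frame in each dimension, so the pixel count landing on the cropped subject scales with the sensor's total pixel count.

```python
# Back-of-envelope model of digital zoom as a center crop: a zoom factor
# of N keeps 1/N of the frame in each dimension, i.e. 1/N^2 of the pixels.

def crop_megapixels(total_mp: float, zoom: float) -> float:
    """Megapixels remaining on the subject after an N-x center crop."""
    return total_mp / (zoom * zoom)

mp_20 = crop_megapixels(20.0, 10.0)  # 20MP sensor at 10x zoom -> 0.20 MP
mp_13 = crop_megapixels(13.0, 10.0)  # 13MP sensor at 10x zoom -> 0.13 MP
gain = mp_20 / mp_13 - 1.0           # relative pixel-count advantage

print(f"{gain:.0%} more pixels on the cropped subject")  # -> 54%
```

This simple crop model gives about 54%, in the same ballpark as the quoted 60%; the exact figure likely depends on remosaic behavior and how Samsung measures "effective resolution."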
Monday, January 21, 2019
Graphene-based SWIR G-Imager
Emberion, Graphenea, and AMO have been approved for a European Innovation Council Fast Track to Innovation (FTI) project to help bring to market the G-IMAGER, a graphene imager based on graphene-on-wafer technology. The G-Imager is a SWIR detector for applications in semiconductor inspection, sorting systems, spectroscopy, hyperspectral imaging, and surveillance. The projected market size for SWIR cameras in 2020 is around $1 billion.
The G-Imager will be based on the scientifically proven operation of a graphene channel coupled to nanocrystal light absorbers. The nanocrystals serve the function of strong light absorbers for high efficiency, whereas the graphene channel efficiently transports the generated charge to electrical contacts for detection. The benefits of G-Imager compared to other SWIR detectors, apart from the dramatic cost reduction, will be a lack of cooling requirements, low noise, large dynamic range, broad spectral range and scalable pixel size. These benefits guarantee quick market uptake of the product beyond the project duration, which is 24 months.
To bring G-Imager to the market, the project consortium will have to tackle challenges related to maturing the innovation. Namely, the production process yield for the detectors meeting a specified set of quality requirements must be guaranteed at a minimum of 85%, which is an industry standard. The technology readiness level must be raised to level 8, while scaling up the production process to meet volume requirements. Graphenea will scale up production of 200mm graphene-on-wafers to 10,000 wafers/year. A key project goal is integration into semiconductor production processes, such that G-Imager production is compatible with demanding requirements of the foundries where high-volume semiconductor device production takes place, like the foundry at AMO GmbH. AMO will develop a new “graphene device foundry service” to industrialize production of the photodetector. The SWIR detector production will be finalized at Emberion, where VGA resolution imager products will be made by depositing and patterning photosensitive materials on the graphene devices. Product finalization includes encapsulation, dicing and packaging the final imager products and integrating them into Emberion’s camera core products.
The consortium expects the project to help raise the cumulative net income of the consortium members to €60M within 4 years.
In a separate announcement, Emberion is to present a cost-competitive 512 × 1 pixel linear array sensor for visible-to-shortwave-infrared (VIS-SWIR) detection at SPIE Photonics West in San Francisco on February 5-7, 2019. The sensor provides superior and consistent responsivity with very low noise over the broad 400–1800 nm spectral range and is primarily designed for spectrometry applications. It comprises an array of 25 × 500 µm² pixels monolithically built on a tailor-made CMOS ROIC.
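As a rough illustration of how such a linear array might be used in a spectrometer, the sketch below maps a pixel index to a nominal wavelength, assuming (purely for illustration, not from Emberion's datasheet) linear dispersion of the 400–1800 nm range across all 512 pixels; a real instrument would use a measured calibration polynomial instead.

```python
def pixel_to_wavelength(pixel, n_pixels=512, lambda_min=400.0, lambda_max=1800.0):
    """Return the wavelength (nm) nominally seen by a given pixel,
    assuming linear dispersion across the array (illustration only)."""
    if not 0 <= pixel < n_pixels:
        raise ValueError("pixel index out of range")
    return lambda_min + pixel * (lambda_max - lambda_min) / (n_pixels - 1)
```

With these assumed parameters, pixel 0 maps to 400 nm and pixel 511 to 1800 nm, a dispersion of about 2.7 nm per pixel.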
Beyond the linear array product introduced, Emberion will offer VGA imagers in late 2019. These VGA sensors cover the same VIS-SWIR spectral range (400-1800 nm wavelength) and are well suited for night and machine vision as well as hyperspectral imaging.
A 2017 Nature paper explains how the company's graphene imagers work:
Oppo Unveils Triple Camera with 10x Zoom for Smartphones, Wide-zone Optical Fingerprint Sensor
PRNewswire: OPPO reveals that its forthcoming smartphone 10x lossless zoom technology meets commercial standards and is ready for mass production:
"OPPO has developed a triple-camera solution consisting of "Ultra Wide Angle + Ultra Clear Master + Telephoto". The ultra-wide-angle camera has an equivalent focal range of 15.9mm, bringing a unique capability to the wide-angle viewfinder. The primary camera guarantees photo quality, and the telephoto camera, with 159mm equivalent focal range, combined with the original "peep-up structure" to support high-magnification zoom, can ensure a high-quality long-distance shot.
To maintain image quality at all ranges, OPPO has introduced dual OIS optical image stabilization on both standard and telephoto cameras."
Oppo also continues developing its periscope-based 10x optical zoom camera design and reports that it has passed drop tests.
OPPO also releases a new wide-zone on-screen optical fingerprint recognition technology with a recognition area up to 15 times that of current mainstream optical solutions. The new technology supports two-finger simultaneous enrollment and authentication, achieving a security level 50,000 times that of a single fingerprint.
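A figure like 50,000x is consistent with simple probability arithmetic: if two independently matched fingerprints must both pass, the false-accept rates multiply. The sketch below is our own illustration of that reasoning, not OPPO's published math, and the assumed single-finger false-accept rate of 1 in 50,000 is hypothetical.

```python
# Illustrative arithmetic only (our assumption, not OPPO's published figures):
# requiring two independent fingerprint matches multiplies the false-accept
# rates, making the combined system 1/FAR_single times harder to spoof.
far_single = 1 / 50_000          # hypothetical single-finger false-accept rate
far_double = far_single ** 2     # both independent matches must pass
improvement = far_single / far_double  # factor by which security improves
```

Under this independence assumption, the improvement factor equals the reciprocal of the single-finger false-accept rate, i.e. 50,000.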
Sofradir and ULIS to Invest €150M in French Nano 2022 Program
ALA News: Sofradir and its subsidiary ULIS announce their participation in the Nano 2022 initiative, which sees the Group invest €150M ($171M) over the period 2018-2022.
This announcement follows the European Commission’s approval on December 18, 2018 of the ‘Important Project of Common European Interest’ (IPCEI), a joint project by France, Germany, Italy and the UK to give €1.75 billion (approx. $2bn) in public support for research and innovation in microelectronics.
Nano 2022 will enable ULIS to develop the next generations of IR detectors to address trends in autonomous systems for smart buildings (workspace management, energy savings), road safety and in-cabin comfort of vehicles. It also enables Sofradir to develop the very large dimension IR detectors needed for space and astronomy observations as well as compact and light sensors that can be used in portable devices and on drones. Nano 2022 contributes to the funding of the pilot lines required for developing these technologies and products.
Update: IMVEurope: Sofradir and its subsidiary Ulis are investing €150M in the development of the next generations of IR detectors. David Billon-Lanfrey, director of strategy at Sofradir, says the company could potentially improve the performance of infrared sensors by up to 30% when the project concludes in 2022.
Sofradir aims to reduce the pixel pitch on its FPAs to enable higher resolution or, alternatively, reduce cost with smaller pixels. "We are working on our pixel structure so that we maintain the performance, even though the sensor area is getting smaller," David Billon-Lanfrey says. "We maintain the sensitivity of the pixel and we improve the resolution of the detector to improve the performance. The end result is a higher imaging range."
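The link Billon-Lanfrey draws between finer pixels and longer imaging range can be sketched with elementary optics: for a fixed lens, each pixel's instantaneous field of view is IFOV = pitch / focal length, so a smaller pitch puts more pixels on a target at a given distance, or equivalently resolves the target at a greater distance. The numbers below are our own hypothetical illustration, not Sofradir figures.

```python
# Back-of-envelope sketch (hypothetical values, not Sofradir's specifications):
# distance at which a target of a given size subtends the required number of
# pixels, for a detector pitch and lens focal length.
def detection_range_m(target_size_m, focal_length_mm, pitch_um, pixels_on_target):
    """Range at which the target spans pixels_on_target pixels."""
    ifov_rad = (pitch_um * 1e-6) / (focal_length_mm * 1e-3)  # IFOV = pitch / f
    return target_size_m / (pixels_on_target * ifov_rad)
```

For example, halving the pitch at constant focal length and sensitivity doubles the range at which a target spans the same number of pixels, which is the "higher imaging range" effect described above.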
The second part of Sofradir's R&D work is to increase the operating temperature of its cooled FPAs from the current -200°C.
Ulis, Sofradir’s subsidiary, will use the funding to target higher volume markets for its microbolometers.
Sunday, January 20, 2019
TSMC Updates on CIS Process Development
On January 1st, TSMC published its 2017 Annual Report and Business Overview, which includes information on CIS process development:
- 0.13µm SPAD technology platform speeds up customer product development of LiDAR applications. Customers can use TSMC’s SPAD platform to achieve the best time-to-market, which will accelerate LiDAR’s use in automotive and security industries.
- TSMC expanded its technology for optical fingerprint sensing, from 0.18µm and 0.11µm CMOS image sensors to collimator, enabling customers to customize their optical fingerprint sensors. Fingerprint sensing is a critical authentication scheme for many electronic communications and payment systems.
- A high-performance sub-micron pixel development was completed and made ready for mass production.
- NIR QE gained a significant boost from an innovative structure and the use of new materials.
- The pitch density of wafer bond technology was pushed higher to maintain the Company's worldwide leading position.
- Q&R worked with customers to complete stacked CIS Column Level Hybrid Bond (CLHB) process/product qualification and successfully shipped to customers in 2017.
Friday, January 18, 2019
FLIR Explains Sony 3rd Generation Pregius Features
ClearViewImaging publishes a FLIR (Point Grey) presentation on image sensor technology news, explaining the features of Sony's 3rd generation Pregius sensors among other things:
Thursday, January 17, 2019
Swiss Eye Tracking Startup Raises $1.9m
Swiss startup Eyeware develops 3D eye tracking software for depth-sensing-enabled consumer devices, such as Microsoft Kinect, Intel RealSense, Orbbec Astra, etc. How does the company track gaze from low-resolution 3D images?
"Generally speaking: We require a low-resolution 3D pixel cloud of the face and eye regions and are agnostic to the underlying technology. It works as well for ToF sensors (i.e. non-RGB images). Our software uses that input to model the head pose and eye regions, providing the gaze vector in real-time."
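A toy illustration of the final step of such a pipeline, under our own simplifying assumption (not Eyeware's actual algorithm) that the gaze ray can be approximated as the unit vector from a fitted 3D eye center through the detected pupil center:

```python
import numpy as np

# Toy sketch (our assumption, not Eyeware's method): given a 3D eye-ball
# center and pupil position recovered from the depth point cloud, approximate
# the gaze direction as the unit vector from eye center through the pupil.
def gaze_vector(eye_center, pupil):
    v = np.asarray(pupil, dtype=float) - np.asarray(eye_center, dtype=float)
    return v / np.linalg.norm(v)  # normalized gaze direction
```

A production system would additionally compensate for head pose and the offset between the optical and visual axes of the eye; this sketch captures only the geometric core.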
Eyeware announces the closing of its seed financing round of 1.9M CHF ($1.9M USD). The seed round was led by High-Tech Gründerfonds (HTGF), in partnership with TRUMPF Venture GmbH, Swiss Startup Group, and Zurich Kantonalbank.
Eyeware is a spin-off of the Idiap Research Institute and EPFL, created in September 2016. Eyeware software can use automotive-grade ToF cameras to estimate driver attention for in-cabin monitoring and infotainment systems. The capital will be used by the Eyeware team to make the 3D eye tracking development kit ready for integration into consumer applications.
SiOnyx Receives $20m Award for US Army Night Vision Project
BusinessWire: SiOnyx announces a $19.9m award for the delivery of digital night vision cameras for the IVAS (Integrated Visual Augmentation System) program.
BBC on Camera Damage by LiDAR
BBC publishes its version of the story about the camera damaged by a LiDAR at CES. It turns out that the owner of the camera has quite extensive experience with LiDARs. He suggests that the problem might be specific to the AEye LiDAR:
"I have personally tested many lidar systems and taken pictures up close and [they] did not harm my camera."
A review of Optical Phased Array LiDAR
AutoSens starts to publish videos from its Brussels 2018 conference. One of the first is a presentation by Michael Watts, CEO of Analog Photonics: