Phys.org: At the 2019 American Physical Society Meeting on March 4, 2019 in Boston, John Kolinski of EPFL in Lausanne, Switzerland, will present a new imaging technique, the virtual frame technique, that he and colleagues Samuel Dillavou and Shmuel Rubinstein of Harvard University developed. It enables ordinary digital cameras to capture millions of frames per second for several seconds while maintaining high spatial resolution. He will also participate in a press conference describing the work; information for logging on to watch and ask questions remotely is included at the end of this news release.
The virtual frame technique uses a camera sensor's bit depth, the amount of information the sensor can obtain, to dramatically increase frame rate. Cracking and many other physical processes are binary: for example, material is either cracked or not cracked. Thus, only one bit, i.e. two values, is needed to image a crack. An image sensor with a bit depth of 16 bits has more than 65,000 color or grayscale values, meaning it is possible to produce thousands of virtual frames during a single exposure. Precise camera timing and a short pulse of intense light can increase frame rates even further. "In a recent study using the virtual frame technique, we obtain virtual frame rates exceeding 60 million per second using precise time-gating and a camera sensor with substantial bit-depth," Kolinski said.
Using the virtual frame technique, virtually any camera can directly image dynamic cracks as they form. It can also be used to study other fast physical processes that happen at interfaces between solids and fluids, such as the wetting that occurs when a liquid drop hits a material surface. The only requirement is that the solid be opaque, whether it is a construction material or a soft substance such as a polymer. "Essentially any material could be imaged with the virtual frame technique," Kolinski said.
The Arxiv.org paper "Virtual Frame Technique: Ultrafast Imaging with Any Camera" by Sam Dillavou, Shmuel M. Rubinstein, and John M. Kolinski reveals more details.
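As a rough illustration of the idea (my sketch, not code from the paper): if a binary process is lit so that the integrated intensity of each pixel encodes when it switched state, a single high-bit-depth exposure can be sliced into binary virtual frames by thresholding at successive gray levels. A minimal NumPy sketch, with synthetic data and a made-up frame count:

```python
import numpy as np

def virtual_frames(exposure, n_frames=100):
    """Slice one high-bit-depth exposure of a binary process into a
    stack of binary 'virtual frames' by thresholding at successive
    gray levels.

    exposure -- 2D uint16 array; integrated intensity encodes when
                each pixel switched state during the exposure
    n_frames -- number of thresholds, i.e. virtual frames (made up)
    """
    levels = np.linspace(exposure.min(), exposure.max(), n_frames)
    # Each threshold corresponds to one instant of the exposure;
    # whether "below threshold" means cracked or uncracked depends
    # on whether the switched state is darker or brighter.
    return np.stack([exposure < t for t in levels])

# Synthetic example: a front sweeping left to right during the
# exposure leaves a linear intensity ramp in the recorded image.
exposure = np.tile(np.linspace(0, 65535, 512, dtype=np.uint16), (512, 1))
frames = virtual_frames(exposure)
print(frames.shape)  # (100, 512, 512) binary frames from one image
```

The temporal resolution then comes from the light budget and bit depth rather than from the sensor's readout speed, which is why an ordinary camera suffices.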
Monday, March 04, 2019
Smartphone Market News
IFNews quotes an IHS Markit forecast that Sony is expected to supply 150M sensors with 0.8um pixels. Currently, most of the 0.8um pixel supply comes from Sony and Samsung, with Omnivision expected to be ready in Q3 2019.
IHS Markit compares different triple-camera configurations found in modern smartphones.
IFNews posts a Credit Suisse comparison of optical under-display fingerprint sensors with ultrasonic ones:
Credit Suisse writes: "We also believe the proliferation of front-facing 3D-sensing for biometric identification will remain slow in 2019-20, given higher BOM cost and lack of other uses besides unlocking the device. Nevertheless, there will still be few new smartphone models adopting front-3D-sensing in 2019, as well as potential adoption for non-smartphone applications such as cleaning robot, smart doorbell, refrigerator, etc.
For the rear-3D-sensing with ToF camera, our checks indicate several Android smartphone brands have been developing this feature and could see a few flagship models adopting this in 2019, although there is doubt regarding the cost and end-application. However, we do not expect iPhone to adopt ToF camera in 2H19 models (it’s more likely to happen in 2H20), based on our checks on equipment delivery, supplier qualification, and capacity-build plans. We also do not expect 2H19 iPhone to adopt under-display fingerprint-sensing as there has been limited activity in the supply chain."
SiOnyx vs Hamamatsu Lawsuit
It came to my attention that, some time ago, SiOnyx sued Hamamatsu for using and patenting its technology:
"Plaintiff SiOnyx, LLC alleges that it approached defendant Hamamatsu Photonics K.K. (“HPK”) concerning a potential business partnership involving the technology. The parties entered into a nondisclosure agreement and SiOnyx provided HPK with certain technical information.
SiOnyx alleges that after the approach proved unsuccessful, HPK violated the nondisclosure agreement, obtained patents on SiOnyx’s technology without naming SiOnyx personnel as inventors, and infringed other patents held by SiOnyx. HPK contends that its engineers independently developed the technology contained in its patents and practiced by its products, and that it does not infringe SiOnyx’s patents."
The court in part granted and in part denied SiOnyx claims.
Update: Here is the letter I received from the Hamamatsu lawyer:
"Via Electronic Mail to image.sensors.world@gmail.com
Mr. Vladimir Koifman
....
....
Re: SiOnyx, LLC, et al. v. Hamamatsu Photonics K.K., et al.,
Civil Action No. 1:15-cv-13488-FDS
Dear Mr. Koifman:
I represent Hamamatsu Photonics K.K. and Hamamatsu Corporation. A March 4 post on your blog “Image Sensors World” entitled, “SiOnyx vs Hamamatsu Lawsuit” was recently brought to my attention.
The last line of the post in particular (“The court in part granted and in part denied SiOnyx claims”) is factually and legally inaccurate. The case is ongoing and trial is scheduled to begin next month. To date, SiOnyx has not yet obtained any favorable final ruling on the merits of the claims set forth in its complaint. The order to which the blog post links is a summary judgment order that decided only whether certain discrete issues are allowed to proceed to trial.
As I’m sure you understand, our client is concerned that your post gives the false impression that it has already been found liable for certain of SiOnyx’s claims. We therefore request that you at least modify the relevant blog post to remove the last line, if not remove it completely.
If you have any questions, please do not hesitate to contact me.
Best regards,
John D. Simmons"
"Plaintiff SiOnyx, LLC alleges that it approached defendant Hamamatsu Photonics K.K. (“HPK”) concerning a potential business partnership involving the technology. The parties entered into a nondisclosure agreement and SiOnyx provided HPK with certain technical information.
SiOnyx alleges that after the approach proved unsuccessful, HPK violated the nondisclosure agreement, obtained patents on SiOnyx’s technology without naming SiOnyx personnel as inventors, and infringed other patents held by SiOnyx. HPK contends that its engineers independently developed the technology contained in its patents and practiced by its products, and that it does not infringe SiOnyx’s patents."
Update: Here is the letter I received from the Hamamasu lawyer:
"Via Electronic Mail to image.sensors.world@gmail.com
Mr. Vladimir Koifman
....
....
Re: SiOnyx, LLC, et al. v. Hamamatsu Photonics K.K., et al.,
Civil Action No. 1:15-cv-13488-FDS
Dear Mr. Koifman:
I represent Hamamatsu Photonics K.K. and Hamamatsu Corporation. A March 4 post on your blog “Image Sensors World” entitled, “SiOnyx vs Hamamatsu Lawsuit” was recently brought to my attention.
The last line of the post in particular (“The court in part granted and in part denied SiOnyx claims”) is factually and legally inaccurate. The case is ongoing and trial is scheduled to begin next month. To date, SiOnyx has not yet obtained any favorable final ruling on the merits of the claims set forth in its complaint. The order to which the blog post links is a summary judgment order that decided only whether certain discrete issues are allowed to proceed to trial.
As I’m sure you understand, our client is concerned that your post gives the false impression that it has already been found liable for certain of SiOnyx’s claims. We therefore request that you at least modify the relevant blog post to remove the last line, if not remove it completely.
If you have any questions, please do not hesitate to contact me.
Best regards,
John D. Simmons"
Sunday, March 03, 2019
Why Velodyne LiDARs are Expensive
Silicon Valley Business Journal publishes a few photos from the Velodyne manufacturing line showing quite complex and labor-intensive calibration procedures. The rumor is that the high-end Velodyne LiDARs require about 90 hours of calibration and alignment during and after assembly. Assuming a technician labor cost of about $50 per hour (am I correct? is this a typical number for Silicon Valley?), that sets a floor for the unit price at about $4,500.
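For reference, the arithmetic behind that floor, using my assumed numbers rather than anything from Velodyne:

```python
calibration_hours = 90  # rumored per-unit calibration and alignment time
hourly_rate = 50        # assumed Silicon Valley technician cost, $/hour
print(f"Labor-only price floor: ${calibration_hours * hourly_rate:,}")
# -> Labor-only price floor: $4,500
```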
However, it appears that the Velodyne CEO disagrees with me:
Velodyne "opened its San Jose "megafactory" in 2017 and now employs about 400 there. Founder David Hall said he plans to make his products there for the foreseeable future.
"Most of the cost of our lidars are in the parts themselves and not the labor to assemble them," he said. "San Jose has a large and available skilled labor force that, while not price competitive with anywhere in Asia, does a higher quality job than we would get by assembling the units elsewhere."
[Photo: Velodyne LiDAR puck mounted on a robotic arm for testing]
[Photo: Velodyne employees work on laser and detector alignment for VLS-128 LiDARs]
[Photo: Optical and mechanical accuracy checks for every component]
Update: Silicon Valley Business Journal publishes another article about Velodyne, "Velodyne LiDAR, the inventor: ‘We aren’t a one-trick pony’" - it can be read on a mobile phone.
Injection of Nanoantennas into Eye Extends Vision to 980nm
The Cell journal publishes the paper "Mammalian Near-Infrared Image Vision through Injectable and Self-Powered Retinal Nanoantennae" by Yuqian Ma, Jin Bao, Yuanwei Zhang, Zhanjun Li, Xiangyu Zhou, Changlin Wan, Ling Huang, Yang Zhao, Gang Han, and Tian Xue from the University of Science and Technology of China (Hefei, Anhui), the Chinese Academy of Sciences, and the University of Massachusetts.
"Mammals cannot see light over 700 nm in wavelength. This limitation is due to the physical thermodynamic properties of the photon-detecting opsins. However, the detection of naturally invisible near-infrared (NIR) light is a desirable ability. To break this limitation, we developed ocular injectable photoreceptor-binding upconversion nanoparticles (pbUCNPs). These nanoparticles anchored on retinal photoreceptors as miniature NIR light transducers to create NIR light image vision with negligible side effects. Based on single-photoreceptor recordings, electroretinograms, cortical recordings, and visual behavioral tests, we demonstrated that mice with these nanoantennae could not only perceive NIR light, but also see NIR light patterns. Excitingly, the injected mice were also able to differentiate sophisticated NIR shape patterns. Moreover, the NIR light pattern vision was ambient-daylight compatible and existed in parallel with native daylight vision. This new method will provide unmatched opportunities for a wide variety of emerging bio-integrated nanodevice designs and applications."
Unfortunately, the upconversion photon efficiency is quite low, between 1e-5 and 1e-6 depending on the 980nm source power.
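To get a feel for what an efficiency of 1e-5 means, here is a back-of-the-envelope photon budget; the 1mW of NIR power reaching the retina is my assumption for illustration, not a number from the paper:

```python
# Back-of-the-envelope photon budget for 980nm-to-visible upconversion.
h = 6.626e-34        # Planck constant, J*s
c = 3.0e8            # speed of light, m/s
wavelength = 980e-9  # NIR excitation wavelength, m
power = 1e-3         # assumed NIR power reaching the retina, W (my guess)
efficiency = 1e-5    # upconverted photons per incident NIR photon

photon_energy = h * c / wavelength    # ~2.0e-19 J per photon
nir_rate = power / photon_energy      # ~4.9e15 NIR photons/s
visible_rate = nir_rate * efficiency  # ~4.9e10 visible photons/s
print(f"{visible_rate:.2e} visible photons/s at the photoreceptors")
```

Even with five orders of magnitude lost, the surviving visible photon rate under these assumed conditions is still far above typical rod and cone detection thresholds, which is consistent with the mice perceiving NIR patterns.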
The researchers publish a video explaining their paper in plain language.
"Mammals cannot see light over 700 nm in wavelength. This limitation is due to the physical thermodynamic properties of the photon-detecting opsins. However, the detection of naturally invisible near-infrared (NIR) light is a desirable ability. To break this limitation, we developed ocular injectable photoreceptor-binding upconversion nanoparticles (pbUCNPs). These nanoparticles anchored on retinal photoreceptors as miniature NIR light transducers to create NIR light image vision with negligible side effects. Based on single-photoreceptor recordings, electroretinograms, cortical recordings, and visual behavioral tests, we demonstrated that mice with these nanoantennae could not only perceive NIR light, but also see NIR light patterns. Excitingly, the injected mice were also able to differentiate sophisticated NIR shape patterns. Moreover, the NIR light pattern vision was ambient-daylight compatible and existed in parallel with native daylight vision. This new method will provide unmatched opportunities for a wide variety of emerging bio-integrated nanodevice designs and applications."
Unfortunately, the upconversion photon efficiency is quite low, between 1e-5 and 1e-6 depending on the 980nm source power:
The researchers publish a video explaining their paper in a plain language:
Saturday, March 02, 2019
Techinsights Image Sensor Slides
Techinsights publishes "Image Sensor Subscription: Example Content" presentation with many interesting slides:
Sigma-Foveon New Sensor to be Manufactured by TSI Semiconductors
L-Rumors quotes Sigma CEO Kazuto Yamaki saying that the new full-frame Foveon sensor will be manufactured by the TSI Semiconductors foundry in Roseville, CA.
Friday, March 01, 2019
Techinsights Estimates Samsung Galaxy S10+ Cameras Cost at 13.5% of BOM
Techinsights publishes a teardown report of the just-announced Samsung flagship phone, the Galaxy S10+. The cost of its 3 rear cameras and 2 front cameras is estimated at $56.50 out of the total BOM of $420.
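The percentage in the headline is just the ratio of the two Techinsights estimates:

```python
camera_cost, total_bom = 56.5, 420.0  # Techinsights estimates, $
print(f"Camera share of BOM: {camera_cost / total_bom:.1%}")  # -> 13.5%
```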
SPAD Imagers at High Illumination
Arxiv.org publishes a 27-page University of Wisconsin-Madison paper "High Flux Passive Imaging with Single-Photon Sensors" by Atul Ingle, Andreas Velten, and Mohit Gupta.
"We propose passive free-running SPAD (PF-SPAD) imaging, an imaging modality that uses SPADs for capturing 2D intensity images with unprecedented dynamic range under ambient lighting, without any active light source. Our key observation is that the precise inter-photon timing measured by a SPAD can be used for estimating scene brightness under ambient lighting conditions, even for very bright scenes. We develop a theoretical model for PF-SPAD imaging, and derive a scene brightness estimator based on the average time of darkness between successive photons detected by a PF-SPAD pixel. Our key insight is that due to the stochastic nature of photon arrivals, this estimator does not suffer from a hard saturation limit. Coupled with high sensitivity at low flux, this enables a PF-SPAD pixel to measure a wide range of scene brightness, from very low to very high, thereby achieving extreme dynamic range. We demonstrate an improvement of over 2 orders of magnitude over conventional sensors by imaging scenes spanning a dynamic range of 10^6:1."
"We propose passive free-running SPAD (PF-SPAD) imaging, an imaging modality that uses SPADs for capturing 2D intensity images with unprecedented dynamic range under ambient lighting, without any active light source. Our key observation is that the precise inter-photon timing measured by a SPAD can be used for estimating scene brightness under ambient lighting conditions, even for very bright scenes. We develop a theoretical model for PF-SPAD imaging, and derive a scene brightness estimator based on the average time of darkness between successive photons detected by a PF-SPAD pixel. Our key insight is that due to the stochastic nature of photon arrivals, this estimator does not suffer from a hard saturation limit. Coupled with high sensitivity at low flux, this enables a PF-SPAD pixel to measure a wide range of scene brightness, from very low to very high, thereby achieving extreme dynamic range. We demonstrate an improvement of over 2 orders of magnitude over conventional sensors by imaging scenes spanning a dynamic range of 10^6:1."
SmartSens on its ISSCC 2019 Presentation
PRNewswire: SmartSens has presented a research paper, "A Stacked Global-Shutter CMOS Imager with SC-Type Hybrid-GS Pixel and Self-Knee Point Calibration Single-Frame HDR and On-Chip Binarization Algorithm for Smart Vision Applications," at ISSCC 2019 in San Francisco. SmartSens CEO Richard Xu was the first presenter in ISSCC's image sensor technology session, which attracted over 200 attendees from leading organizations in industry and academia.
Known as the "integrated circuit Olympics," ISSCC is said to be one of the most important global forums in the IC industry. "We are proud to be selected to present our paper at ISSCC. It's a tremendous honor for SmartSens as the committee recognizes our achievements in the field of CMOS image sensors," said Xu. "Behind this success is our commitment to developing cutting-edge image sensing technology and products for the emerging applications in the era of 5G, AI and machine vision."
In this paper, SmartSens unveils a new BSI global shutter sensor that has performance advantages such as high sensitivity, low noise, high shutter efficiency, HDR with improved PRNU performance and self-knee point calibration. The sensor integrates an ISP using stacked technology. The new sensor is said to be most suitable for smart vision applications, such as face identification, machine vision, 3D imaging and AI.














