Basler publishes an educational YouTube video explaining ToF camera basics:
Tuesday, May 31, 2016
Omnivision 12MP Stacked Sensor for High-End Smartphones
PRNewswire: OmniVision announces the OV12890, a new 1.55um big-pixel stacked sensor with a 12b ADC, PDAF, and HDR support for flagship smartphones. The OV12890 uses OmniVision's PureCelPlus-S pixel technology to capture full-resolution 12MP images and video at 45fps, 4K2K video at 60fps, and 1080p video at 240fps via high-speed D-PHY and C-PHY interfaces.
The OV12890 can fit into 10 x 10 mm modules with a z-height of 6 mm. The sensor is currently available for sampling and is expected to enter volume production in Q4 2016.
"As cameras for premium smartphones continue to improve, we see the resolution race slowing down and increasing emphasis placed on pixel performance and image sensor size as key to ever-higher quality mobile images and video," said James Liu, senior technical marketing manager at OmniVision. "The OV12890 is our newest big-pixel stacked die image sensor for the mobile market, and represents one of our strongest offerings for premium smartphones. The feature-rich OV12890 captures exceptional images and video in a compact package, making it a top-flight imaging solution for flagship mobile devices."
Monday, May 30, 2016
Particle and Photon Detection: Counting and Energy Measurement
The open-access Sensors journal publishes a review paper "Particle and Photon Detection: Counting and Energy Measurement" by James Janesick and John Tower, SRI-Sarnoff.
"The challenges to extend photon counting into the visible/nIR wavelengths and achieve energy measurement in the UV with specific read noise requirements are discussed. Pixel flicker and random telegraph noise sources are highlighted along with various methods used in reducing their contribution on the sensor’s read noise floor. Practical requirements for quantum efficiency, charge collection efficiency, and charge transfer efficiency that interfere with photon counting performance are discussed. Lastly we will review current efforts in reducing flicker noise head-on, in hopes to drive read noise substantially below 1 carrier rms."
"we are trying a proprietary non-imaging gate oxide process offered by Jazz Semiconductor that claims to lower MOSFET 1/f RTN noise by 4 to 5 times compared to 0.18 um processing that is used now."
"The challenges to extend photon counting into the visible/nIR wavelengths and achieve energy measurement in the UV with specific read noise requirements are discussed. Pixel flicker and random telegraph noise sources are highlighted along with various methods used in reducing their contribution on the sensor’s read noise floor. Practical requirements for quantum efficiency, charge collection efficiency, and charge transfer efficiency that interfere with photon counting performance are discussed. Lastly we will review current efforts in reducing flicker noise head-on, in hopes to drive read noise substantially below 1 carrier rms."
"we are trying a proprietary non-imaging gate oxide process offered by Jazz Semiconductor that claims to lower MOSFET 1/f RTN noise by 4 to 5 times compared to 0.18 um processing that is used now."
Sunday, May 29, 2016
Robustness of CMOS Technology and Circuitry Outside the Imaging Core : Integrity, Variability, Reliability
Albert Theuwissen announces the 4th Harvest Imaging Forum, "Robustness of CMOS Technology and Circuitry outside the Imaging Core: integrity, variability, reliability," by Harry Veendrick. The forum is to be held in December 2016 in Voorburg (the Hague), the Netherlands. The 2016 Forum is meant to present an overview of the importance of understanding all aspects that determine the robustness of CMOS integrated circuits present “around” the imaging core of a CMOS image sensor.
CEI-Europe announces a few more image sensor courses by Albert Theuwissen.
Saturday, May 28, 2016
Omnivision Announces 3rd Generation 13MP Sensor
PRNewswire: OmniVision announces the OV13855, its third-generation 13MP 1.12um pixel sensor with significant improvements in low-light performance, color crosstalk reduction and angular response when compared with previous-generation 13MP sensors. The OV13855 is built on PureCel-Plus pixel technology with PDAF and is aimed at mainstream smartphone rear-facing cameras as well as front-facing and dual cameras.
"The 13-megapixel resolution segment is becoming the minimum box specification for rear-facing cameras in mid- to low-end smartphones. Recent market research suggests that the 13-megapixel smartphone camera module market may grow at a CAGR of 16 percent through 2020," said Badri Padmanabhan, product marketing manager at OmniVision. "With its compact form factor and advanced feature set, the OV13855 provides pioneering OEMs with an excellent opportunity to integrate 13-megapixel image sensors into their high-end smartphones for front-facing or dual camera applications."
The OV13855 captures full-resolution 13MP still images at 30fps and records 4K2K video at 45fps, 1080p at 60fps, or 720p HD at 90fps. With its compact design and two-sided bond pad layout, the OV13855 can easily be integrated into 8.5 x 8.5 mm autofocus modules with z-heights of less than 5 mm for main cameras, and 7.5 x 7.5 mm fixed focus modules with z-heights of less than 4.5 mm for high-end front-facing cameras.
The OV13855 is currently available for sampling, and is expected to enter volume production in Q2 2016. The sensor is available in non-PDAF (OV13858) and monochrome (OV13355) versions to support front-facing and dual-camera applications.
"The 13-megapixel resolution segment is becoming the minimum box specification for rear-facing cameras in mid- to low-end smartphones. Recent market research suggests that the 13-megapixel smartphone camera module market may grow at a CAGR of 16 percent through 2020," said Badri Padmanabhan, product marketing manager at OmniVision. "With its compact form factor and advanced feature set, the OV13855 provides pioneering OEMs with an excellent opportunity to integrate 13-megapixel image sensors into their high-end smartphones for front-facing or dual camera applications."
The OV13855 captures full-resolution 13MP still images at 30fps and records 4K2K video at 45fps, 1080p at 60fps, or 720p HD at 90fps. With its compact design and two-sided bond pad layout, the OV13855 can easily be integrated into 8.5 x 8.5 mm autofocus modules with z-heights of less than 5 mm for main cameras, and 7.5 x 7.5 mm fixed focus modules with z-heights of less than 4.5 mm for high-end front-facing cameras.
The OV13855 is currently available for sampling, and is expected to enter volume production in Q2 2016. The sensor is available in non-PDAF (OV13858) and monochrome (OV13355) versions to support front-facing and dual-camera applications.
Friday, May 27, 2016
Invisage Announces 2K SAM Camera
BusinessWire: InVisage announces its Spark Authentication Module (SAM) NIR camera module. SAM is powered by the previously announced SparkP2 2MP NIR sensor. Sized 8.0 x 8.0 x 3.1 mm, the SAM module is custom built for authentication systems such as Microsoft Windows Hello. In addition to blocking interference from direct sunlight, SAM enables authentication at distances beyond 100cm from a tablet, laptop or phone, so that users are not constrained to a small space in front of their device. Because it operates at the 940nm wavelength, SAM also eliminates the intrusive red glow from LEDs.
SAM is said to be the only system that can deliver 2K resolution in a tiny module while consuming 50 times less system power. Existing NIR cameras operate in conjunction with high wattage LEDs to overcome low CMOS sensitivity and ambient infrared in sunlight. The resulting high power consumption and heat generated by such bright LEDs has made outdoor performance a challenge for mobile face recognition systems that operate with lighter batteries. In contrast, SAM, powered by SparkP2, leverages low-power pulsed LEDs synchronized with an extremely short global shutter exposure, allowing for accurate imaging without battery drain. At 50 times lower power consumption, overall system temperature is also up to 20 degrees cooler.
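The 50x figure follows from simple duty-cycling arithmetic: the LED fires only during the short global shutter exposure instead of running continuously. A rough sketch with purely illustrative numbers (none of these are InVisage specifications):

```python
# Average illuminator power when the LED is pulsed only during a short
# global-shutter exposure. All values are illustrative assumptions.
frame_rate_hz = 30       # assumed authentication frame rate
pulse_ms = 0.5           # assumed LED pulse / exposure duration
peak_led_power_w = 1.0   # assumed peak LED drive power

duty_cycle = (pulse_ms / 1000.0) * frame_rate_hz
avg_power_w = peak_led_power_w * duty_cycle
print(f"duty cycle: {duty_cycle:.1%}")                        # 1.5%
print(f"average LED power: {avg_power_w * 1000:.0f} mW")      # ~15 mW
print(f"saving vs. continuous drive: {1 / duty_cycle:.0f}x")  # ~67x
```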
“For authentication, mobile device makers demand compact modules that produce sharp images enabling smooth user verification with minimal false negatives or false positives, regardless of whether the user is indoors or out,” said Jess Lee, InVisage President and CEO. “It also needs to work within a reasonable range so that users can say ‘Hello’ without having to plant their eye or face just a few centimeters from the screen.”
With a photosensitive layer 10 times thinner than a typical silicon infrared sensor, the SparkP2 sensor powering SAM provides a QE of 35% at the 940nm wavelength. This greater sensitivity results in sharper images and an expanded operational radius beyond 100 centimeters, and it also enables minimal crosstalk in a thinner, 3.1-millimeter module with a 72-deg FOV. NIR images in particular suffer from blur due to high levels of crosstalk, the misdetection of light in nearby pixels. Crosstalk typically limits camera thinness by requiring a minimum distance between the lens and the photosensitive layer, but the SparkP2 lens in the SAM module can be much closer to the sensor without increasing crosstalk, preserving a higher level of sharpness.
SAM and SparkP2 are optimized for authentication systems that operate at 850nm (with a visible red glow) and 940nm (invisible with a tenfold improvement in sun irradiance rejection).
Taiwan-based DIY publishes a 940nm comparison picture:
Thursday, May 26, 2016
Microsoft Hololens Cameras
EETimes quotes Ilan Spillinger, CVP of HoloLens and silicon at Microsoft, revealing the company's AR glasses hardware details:
"The HoloLens sensor bar (above) packs four environmental cameras for tracking head movements and gestures used to control the display. A depth sensor is a Kinect scaled to a fraction of its size and power consumption. It supports a short range mode for tracking gestures within a meter and a long range mode for mapping the room. A 2MPixel high def video camera projects images the user sees."
"The HoloLens sensor bar (above) packs four environmental cameras for tracking head movements and gestures used to control the display. A depth sensor is a Kinect scaled to a fraction of its size and power consumption. It supports a short range mode for tracking gestures within a meter and a long range mode for mapping the room. A 2MPixel high def video camera projects images the user sees."
Rumor: Sony & LG Innotek to Supply Dual Camera Module for iPhone 7
Barron's quotes Nomura Securities and Citi Research saying that all the new 5.5″ iPhone 7 models coming this fall will have dual cameras. Nomura’s Chris Chang thinks Sony is now behind schedule, which means that Apple will have to go to LG Innotek for assembly:
"We think Sony may not be able to deliver its full share of dual cameras to Apple due to: (1) lower-than-expected yield, and (2) damage to its production facility from the April earthquake in Kumamoto. As a result, we think LGI will gain majority share of the initial dual camera orders from Apple...
We expect a sharp increase in camera module ASP from 2H16F as we think: 1) dual camera module is likely to command 2.5x ASP premium vs. single-cam, and 2) OIS (optical image stabilisation) will also be equipped in the new 4.7” iPhones – currently only the 5.5” model has OIS."
"We think Sony may not be able to deliver its full share of dual cameras to Apple due to: (1) lower-than-expected yield, and (2) damage to its production facility from the April earthquake in Kumamoto. As a result, we think LGI will gain majority share of the initial dual camera orders from Apple...
We expect a sharp increase in camera module ASP from 2H16F as we think: 1) dual camera module is likely to command 2.5x ASP premium vs. single-cam, and 2) OIS (optical image stabilisation) will also be equipped in the new 4.7” iPhones – currently only the 5.5” model has OIS."
Wednesday, May 25, 2016
Tuesday, May 24, 2016
DynaOptics' Free-Form Lens
Singapore-based startup DynaOptics presents its Free-Form Lens:
DynaOptics is about to start a Kickstarter campaign for its Free-Form Lens add-on for smartphones:
Sony Earthquake Impact Estimated at 115b Yen
Reuters, Sony: Sony estimates that the impact of the quake on its image sensor and digital camera operations will total 105 billion yen this business year, and that the impact on the company as a whole will be 115 billion yen. The devices division, which includes image sensors, is to book an operating loss of 40 billion yen, compared with the previous year's loss of 29.3 billion yen. Sony also says that the expected loss at its devices segment factors in a 30 billion yen charge from cancelling development of some camera modules:
"In addition, Sony decided to terminate the development and manufacturing of high-functionality camera modules for external sale, the mass production of which was being prepared at the Kumamoto Technology Center, as a result of a reconsideration of the strategy of this business from a long-term perspective. Approximately 30 billion yen in expense is expected to be incurred due to this termination."
"In addition, Sony decided to terminate the development and manufacturing of high-functionality camera modules for external sale, the mass production of which was being prepared at the Kumamoto Technology Center, as a result of a reconsideration of the strategy of this business from a long-term perspective. Approximately 30 billion yen in expense is expected to be incurred due to this termination."
Monday, May 23, 2016
More Details on SPI Color Night Vision Sensor
SPI Infrared publishes a few more details on its X27 4K color night vision sensor. A few statements from the SPI Infrared site:
"Unlike other technologies, the x27 low light color security camera always images full 390-1200Nm without having to switch camera functions, the user always gets the full broadband.
The x27 low light color sensor has extremely large pixel pitch cells for high light gathering capabilities and is very sensitive in the IR spectra region. The high 5 Million equivalent ISO system has outstanding low lux capabilities with a whopping 85,000x luminance gain.
High definition 10 megapixel sensor works in the daytime and accepts a wide array of standard off the shelf commercially available lenses. The X27 low light sensor produces 4K high definition color imagery even at 1 millilux low light levels.
The x27 low light color security camera vastly outperforms any exsisting low light color technology like CMOS, Scmos, CCD, EMCCD, EBAPS and traditional military grade intensified technologies.
Current day CMOS extreme low light color sensors reach a peak maximum quantum detection limit, an inevitable quality to these sensors are RGB or color filters that must reside on the sensor to produce a nice color image, along with other filters that enhance color image quality. These filters cut down photons and sensitivity dramatically, but must be present to produce a nice color image. The Solid-state x27 color low light night vision sensor utilizes specialty video processing on chip and on the filters, as well as advanced electronic vis-nir image enhancement algorithms that allow it to collect an incredible amount of light, and retain full sensitivity without loss of a brilliant color image, furthermore the x27’s BSTFA (Broad Spectrum Thin Film Array) high fidelity, large pixel pitch sensor architecture achieves incredible bright as day, true color imagery at real time full tv frame rates, without image lag and minimal image noise or grain.
Another drawback to traditional sensors is the infrared cut filter, this filter sits in front of the sensor and cuts out all infrared wavelengths which is needed for a good color image. By cutting out the infrared spectrum from the sensor, the camera does not pick up infrared signals which have added benefits to a good night vision image and also cuts the ability to see infrared lasers, pointers, Illuminators and designators. The removal of the infrared cut filter from traditional sensors produce a pink/red image and displays a non optimal picture. The specialized x27 sensor sees well into the infrared region as well as produces a nice true brilliant color image allowing the user to see a full broadband extended dynamic range image that includes visible to infrared wavelengths. The x27 outperforms any low light technology in existence today within the visible to SWIR spectral region.
Back side illumination of chip technology is another area that allows the chip to output higher performance in low light, the x27’s color night vision detectors bsi backside illumination is yet another aspect that makes it desirable for Imaging at never before seen extreme low light conditions. The x27 ultra extreme low light color night vision complementary metal oxide semiconductor (CMOS) integrated circuit (IC) is a vital proven technology."
Preliminary Technical Specifications:
- Sensor & Parameters: Maintenance free, no moving parts, Solid State non intensified BSTFA Extreme low light color FPA w/column amplification
- Large Format, large pixel pitch architecture w/5,000,000 equivalent ISO
- Backside Illuminated for light utilization efficiency
- Extreme low SNR – High Dynamic Range, Photoconductive & photoresponse gain
- Very high ISO w/Extremely Low read Noise
- Auto Black Level Calibration
- Auto Exposure w/excellent color fidelity
- Excellent image uniformity
- Auto hot pixel correction
- Frame Rate: 60 FPS / optional 120 FPS
- Day Night Mode: Auto Imaging/Auto Switching
- Bright light/Blooming compensation: Automatic
- Photodetector Array Size: 10 Megapixel / HD 4320 x 2432
- Temperature Range: -30C to +80C
- Wavelength: 390-1200 nm broadband Extreme High Sensitivity
- IR Response: Yes
Here is one of the recent YouTube demos showing the sensor's night vision capabilities:
Sunday, May 22, 2016
Difference Between Binning and Averaging
Albert Theuwissen publishes a blog post explaining the difference between the various ways to bin pixels and average them:
"Conclusion : charge domain binning is more efficient in increasing the signal-to-noise ratio compared to binning/averaging in the voltage domain or binning in the digital domain. The explanation of binning and averaging as well as the discussion about signal-to-noise ratio in this blog takes into account that the noise content of the pixel output signals is dominated by readout noise. The story becomes slightly different is the signals are shot-noise limited. This will be explained next time."
"Conclusion : charge domain binning is more efficient in increasing the signal-to-noise ratio compared to binning/averaging in the voltage domain or binning in the digital domain. The explanation of binning and averaging as well as the discussion about signal-to-noise ratio in this blog takes into account that the noise content of the pixel output signals is dominated by readout noise. The story becomes slightly different is the signals are shot-noise limited. This will be explained next time."
NHK on Future Image Sensor Technologies
NHK publishes a flyer ahead of its Open House event, traditionally held at the end of May:
The research on back-illuminated small-size image sensors is jointly being conducted with Shizuoka University. The pixel-parallel processing three-dimensional integrated imaging devices research is jointly being conducted with the University of Tokyo. The organic image sensors research is jointly being conducted with Kochi University of Technology.
Sharp Changes Reporting
Sharp reports the results of its fiscal 2015 year, which ended on March 31, 2016. "CCD/CMOS Imagers," one of the largest product categories in the previous reports, has now disappeared; a camera modules group is reported instead:
Friday, May 20, 2016
CNN on Image Sensor
Nuit Blanche: Rice and Cornell Universities publish a paper on integrating the first layer of a CNN onto the image sensor:
"ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks using Angle Sensitive Pixels" by Huaijin Chen, Suren Jayasuriya, Jiyue Yang, Judy Stephen, Sriram Sivaramakrishnan, Ashok Veeraraghavan, Alyosha Molnar.
Abstract: Deep learning using convolutional neural networks (CNNs) is quickly becoming the state-of-the-art for challenging computer vision applications. However, deep learning's power consumption and bandwidth requirements currently limit its application in embedded and mobile systems with tight energy budgets. In this paper, we explore the energy savings of optically computing the first layer of CNNs. To do so, we utilize bio-inspired Angle Sensitive Pixels (ASPs), custom CMOS diffractive image sensors which act similar to Gabor filter banks in the V1 layer of the human visual cortex. ASPs replace both image sensing and the first layer of a conventional CNN by directly performing optical edge filtering, saving sensing energy, data bandwidth, and CNN FLOPS to compute. Our experimental results (both on synthetic data and a hardware prototype) for a variety of vision tasks such as digit recognition, object recognition, and face identification demonstrate 97% reduction in image sensor power consumption and 90% reduction in data bandwidth from sensor to CPU, while achieving similar performance compared to traditional deep learning pipelines.
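A toy sketch of the core idea, replacing a learned first convolution with a fixed Gabor-like filter bank (in ASP Vision the sensor optics compute these responses for free; the code below is purely a software illustration, not the authors' implementation):

```python
# Toy illustration of ASP Vision's premise: the first CNN layer is a fixed
# bank of Gabor-like edge filters rather than learned weights. In the paper
# the Angle Sensitive Pixel optics produce these responses optically.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=7, theta=0.0, wavelength=4.0, sigma=2.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_rot / wavelength)

def fixed_first_layer(image, n_orientations=4):
    """Edge feature maps from a fixed filter bank, fed to later CNN layers."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return np.stack([convolve2d(image, gabor_kernel(theta=t), mode="same")
                     for t in thetas])

features = fixed_first_layer(np.random.rand(28, 28))
print(features.shape)  # (4, 28, 28): oriented edge maps replace conv layer 1
```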
"ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks using Angle Sensitive Pixels" by Huaijin Chen, Suren Jayasuriya, Jiyue Yang, Judy Stephen, Sriram Sivaramakrishnan, Ashok Veeraraghavan, Alyosha Molnar.
Abstract: Deep learning using convolutional neural networks (CNNs) is quickly becoming the state-of-the-art for challenging computer vision applications. However, deep learning's power consumption and bandwidth requirements currently limit its application in embedded and mobile systems with tight energy budgets. In this paper, we explore the energy savings of optically computing the first layer of CNNs. To do so, we utilize bio-inspired Angle Sensitive Pixels (ASPs), custom CMOS diffractive image sensors which act similar to Gabor filter banks in the V1 layer of the human visual cortex. ASPs replace both image sensing and the first layer of a conventional CNN by directly performing optical edge filtering, saving sensing energy, data bandwidth, and CNN FLOPS to compute. Our experimental results (both on synthetic data and a hardware prototype) for a variety of vision tasks such as digit recognition, object recognition, and face identification demonstrate 97% reduction in image sensor power consumption and 90% reduction in data bandwidth from sensor to CPU, while achieving similar performance compared to traditional deep learning pipelines.
Thursday, May 19, 2016
e2v Reports Strong Image Sensor Sales
Optics.org: e2v reports 17% sales growth for its imaging division in its annual report for the year ending March 31, 2016. Company CEO Steve Blair declared himself “delighted” with the overall results. At £103.5M, the imaging division’s sales were up from £88.7M in the prior year, growing at a much faster rate than e2v’s other divisions.
CEO Steve Blair and chairman Neil Johnson noted a sharp improvement in profit margins for the imaging division. Recent changes have included the summer 2014 acquisition of Anafocus, and the sale of e2v’s thermal imaging unit.
A year ago e2v said that it had ten “problem contracts” in space imaging, of which four have now been completed and five are due for delivery within 12 months. “We have a strong position in Europe, particularly in CCD sensors, and our offering remains attractive to customers due to its long proven performance in flight,” said the e2v executives.
The increased use of sensors for industrial automation has brought on board some new customers. “We are well positioned to take advantage of the five year plan in China for automation to support the quality drive to 'made in China',” added Blair and Johnson.
Looking to the future, CEO Blair sounded a note of caution regarding the broader macroeconomic environment, but told investors that he still expected solid growth from the imaging division.
Autonomous Driving Vision Challenges
As mentioned in comments, there is a nice video lecture by Mobileye algorithm group leader Uri Rokni on challenges in vision algorithms for autonomous driving:
Wednesday, May 18, 2016
Tessera FotoNation Partners with Kyocera on Automotive Camera Technology
BusinessWire: FotoNation, a wholly owned subsidiary of Tessera, partners with Kyocera to develop intelligent vision solutions for automotive applications. As part of the partnership and using FotoNation technology as a foundation, the two companies will jointly develop advanced computer vision solutions for the automotive market.
“FotoNation is focused on delivering complex computational imaging solutions for automotive applications, and together we will develop technologies that will transform the future of driving,” said Norio Okuda, Manager, Kyocera.
“Increasing interest from the automotive industry for vision systems to enhance vehicle safety represents an opportunity for significant growth for FotoNation, driven mainly by adoption of our advanced imaging systems by tier-one automotive suppliers and OEMs,” stated Sumat Mehra, SVP of marketing and business development at FotoNation. “Kyocera has a strong reputation as a leading technology innovator, and we are pleased to be working with them as a valued technology partner to bring these cutting-edge vision solutions to market.”
Anitoa Demos Chemiluminescence Sensor based on Ultra Low-Light CIS Technology
Yahoo: Anitoa, a Menlo Park, CA startup, demonstrates its low-cost, portable chemiluminescence reader and has applied it successfully to chemiluminescence immunoassay (CLIA). This CMOS-based solution is capable of detecting as little as a few ng/mL of analyte, such as protein macromolecules, in a sample. Applications of this technology range from clinical diagnostics to food safety and environmental monitoring.
A key component of Anitoa's portable chemiluminescence reader is its proprietary ULS24 CMOS bio-optical sensor chip with extreme low-light sensitivity. The ULS24 also eliminates the need for the multi-site scanning mechanism commonly used by most chemiluminescence readers. With a 5mm x 5mm footprint and only 30mW of power consumption, Anitoa's ULS24 is especially well suited for point-of-care diagnostics applications.
Released in September 2014, Anitoa's ULS24 ultra-low-light CMOS bio-optical sensor is said to be the first and only commercially available CMOS sensor with the sensitivity needed to substitute for the bulky and expensive photomultiplier tubes (PMTs) and cooled CCDs in a wide range of medical and scientific instruments. The ultra-low-light sensitivity (3e-6 lux) of Anitoa's CMOS sensor is crucial for achieving good SNR in imaging molecular interactions based on fluorescent or chemiluminescence signaling principles.
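To put the quoted 3e-6 lux in perspective, a rough photon-budget conversion (assuming monochromatic 555 nm light, where 1 lux corresponds to 1/683 W/m², plus an assumed pixel size and exposure; none of these are Anitoa specifications):

```python
# Rough photon budget at 3e-6 lux. Assumes 555 nm monochromatic light
# (1 lux = 1/683 W/m^2 there) and illustrative pixel/exposure values.
h, c = 6.626e-34, 3.0e8
photon_energy_j = h * c / 555e-9        # ~3.6e-19 J per photon

lux = 3e-6
irradiance_w_m2 = lux / 683.0
photon_flux_m2_s = irradiance_w_m2 / photon_energy_j

pixel_pitch_m = 10e-6   # assumed large bio-imaging pixel
exposure_s = 1.0        # assumed long chemiluminescence exposure

photons = photon_flux_m2_s * pixel_pitch_m**2 * exposure_s
print(f"~{photons:.1f} photons per pixel per second")  # on the order of one
```

Roughly one photon per pixel per second, which illustrates why this regime has traditionally required PMTs or cooled CCDs.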
"We are very pleased to see the test results coming back from our lab and our partners' labs showing the effectiveness of CMOS bio-optical sensors in chemiluminescence imaging. This not only validates the CMOS bio-optical sensor's ultra-low-light sensitivity, but also opens up the possibility for a new generation of low cost and portable molecular testing platforms", said Anitoa CEO Zhimin Ding.
ARM Buys Apical
ARM has acquired Apical, one of the UK’s fastest-growing technology companies, whose imaging software is used in more than 1.5 billion smartphones and approximately 300 million other consumer and industrial devices, including IP cameras, digital still cameras and tablets.
The acquisition, closed for a cash consideration of $350 million, supports ARM’s growth strategy by enabling new imaging products for next-generation vehicles, security systems, robotics, mobile and any consumer, smart building, industrial or retail application where intelligent image processing is needed. Apical was founded in 2002 and employs approximately 100 people, mainly at its R&D center in Loughborough, UK.
“Computer vision is in the early stages of development and the world of devices powered by this exciting technology can only grow from here,” said Simon Segars, CEO, ARM. “Apical is at the forefront of embedded computer vision technology, building on its leadership in imaging products that already enable intelligent devices to deliver amazing new user experiences. The ARM partnership is solving the technical challenges of next generation products such as driverless cars and sophisticated security systems. These solutions rely on the creation of dedicated image computing solutions and Apical’s technologies will play a crucial role in their delivery.”
“Apical has led the way with new imaging technologies based on extensive research into human vision and visual processing,” said Michael Tusch, CEO and founder, Apical. “The products developed by Apical already enable cameras to understand their environment and to act on the most relevant information by employing intelligent processing. These technologies will advance as part of ARM, driving value for its partners as they push deeper into markets where visual computing will deliver a transformation in device capabilities and the way humans interact with machines.”
Thanks to JC for the link!
Mobileye & ST Partner on Self-Driving Processor
GlobeNewsWire: Mobileye and STMicroelectronics announce that they are co-developing the next (5th) generation of Mobileye's SoC, the EyeQ5, to act as the central computer performing sensor fusion for Fully Autonomous Driving (FAD) vehicles starting in 2020.
To meet power consumption and performance targets, the EyeQ5 will be designed in an advanced FinFET technology node at 10nm or below and will feature eight multithreaded CPU cores coupled with eighteen cores of Mobileye's next-generation vision processors. These enhancements will increase performance 8x over the current 4th-generation EyeQ4. The EyeQ5 will produce more than 12 Tera operations per second while keeping power consumption below 5W, maintaining passive cooling at this extraordinary level of performance.
"EyeQ5 is designed to serve as the central processor for future fully-autonomous driving for both the sheer computing density, which can handle around 20 high-resolution sensors and for increased functional safety," said Prof. Amnon Shashua, cofounder, CTO and Chairman of Mobileye. "The EyeQ5 continues the legacy Mobileye began in 2004 with EyeQ1, in which we leveraged our deep understanding of computer vision processing to develop highly optimized architectures to support extremely intensive computations at power levels below 5W to allow passive cooling in an automotive environment."
"Each generation of the EyeQ technology has proven its value to drivers and ST has proven its value to Mobileye as a manufacturing, design, and R&D partner since beginning our cooperation on the EyeQ1," said Marco Monti, EVP and GM of Automotive and Discrete Group, STM. "With our joint commitment to the 5th-generation of the industry's leading Advanced Driver Assistance System (ADAS) technology, ST will continue to provide a safer, more convenient smart driving experience."
EyeQ5's proprietary accelerator cores are optimized for a wide variety of computer-vision, signal-processing, and machine-learning tasks, including deep neural networks. Autonomous driving requires fusion processing of dozens of sensors, including high-resolution cameras, radars, and LiDARs. The sensor-fusion process has to simultaneously grab and process all the sensors' data. For this purpose, the EyeQ5's dedicated IOs support at least 40Gbps data bandwidth.
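A rough aggregate-bandwidth estimate shows why tens of Gbps of I/O are needed for a ~20-sensor suite (the sensor mix and formats below are illustrative assumptions, not Mobileye's configuration):

```python
# Back-of-the-envelope aggregate bandwidth for ~20 fused sensors.
# Counts, resolutions, rates and bit depths are illustrative assumptions.
sensors = [
    # (count, megapixels, frames/s, bits per pixel)
    (12, 8.0, 30, 12),   # assumed high-resolution cameras, RAW12
    (4,  0.1, 50, 32),   # assumed radar streams (coarse stand-in)
    (4,  0.6, 20, 48),   # assumed lidar returns (coarse stand-in)
]
total_gbps = sum(n * mp * 1e6 * fps * bits / 1e9
                 for n, mp, fps, bits in sensors)
print(f"aggregate raw bandwidth: {total_gbps:.0f} Gbps")  # ~36 Gbps
```

Around 36 Gbps raw, before protocol overhead, which is consistent with the quoted 40 Gbps I/O budget.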
Engineering samples of the EyeQ5 are expected to be available by the first half of 2018. First development hardware with the full suite of applications and the SDK is expected by the second half of 2018.
Monday, May 16, 2016
Lattice Announces Programmable Interface Bridge for Mobile Image Sensors
BusinessWire: Lattice announces the industry’s first CrossLink programmable bridging device that supports leading protocols for mobile image sensors and displays. Systems with embedded cameras and displays often do not have the right type or number of interfaces, which can be resolved using a bridge.
The CrossLink device’s features include:
- World’s fastest MIPI D-PHY bridging device that delivers up to 4K UHD resolution at 12 Gbps bandwidth.
- Supports popular mobile, camera, display and legacy interfaces such as MIPI D-PHY, MIPI CSI-2, MIPI DSI, MIPI DPI, CMOS, SubLVDS, LVDS and more.
- Industry’s smallest package size with a 6 mm² option.
- Lowest power programmable bridging solution in active mode.
- Built-in sleep mode.
“The latest wave of image capture and display technology, including drones and VR, is creating real industry excitement. Combining these new technologies with a global base of 3.7 billion smartphones and tablets that’s set to rise more than 30 percent by 2020, all equates to a wide variety of interfaces that must be integrated to ensure compatibility,” said Carl Hibbert, associate director of entertainment content and delivery, Futuresource Consulting. “The ability to manage these interfaces through a low cost, low power and small footprint bridging solution is essential.”
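As a sanity check on the headline 12 Gbps figure in the feature list above, the raw bandwidth of an uncompressed 4K UHD stream (frame rate and bit depth assumed for illustration):

```python
# Raw bandwidth of an uncompressed 4K UHD stream vs. CrossLink's quoted
# 12 Gbps. Frame rate and bit depth are assumed for illustration.
width, height = 3840, 2160
fps, bits_per_pixel = 60, 24   # assumed RGB888 display stream

raw_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"4K UHD @ {fps} fps, {bits_per_pixel} bpp: {raw_gbps:.1f} Gbps")
# ~11.9 Gbps raw, right at the 12 Gbps ceiling; a real link also needs
# margin for blanking and protocol overhead.
```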
Friday, May 13, 2016
Himax CMOS Sensor Business Bottomed Out
Himax Q1 2016 earnings report updates on the company's image sensor business:
"It is also worth highlighting that our CMOS image sensor product line bottomed out in the first quarter, rebounding from its trough in 2015.
Looking into the second quarter, there will be mass production of several design wins for notebooks and increased shipments for multimedia applications. In recent press releases and the last earnings call, we briefly introduced our new smart sensor product lines targeting new applications across smartphones, tablets, AR/VR devices, IoT and artificial intelligence. These include the ultra-low-power QVGA CMOS image sensor and the Diffractive Optical Element (“DOE”) integrated WLO laser diode collimator to be paired with a Near Infrared (NIR) sensor.
We believe the former is by far the lowest power CIS in the industry with similar resolution. It can be applied in a constant state of operation, enabling “always on”, contextually aware, computer vision capabilities.
Regarding DOE integrated WLO laser diode collimator with NIR sensor, we believe this is the most effective total solution for 3D sensing and detection in the smallest form factor. This breakthrough allows 3D image sensing feature to be easily integrated into next-generation consumer electronics. Currently, we are making good progress and have seen encouraging and increasing customer responses. We will report the developments in this new territory in due course."
"It is also worth highlighting that our CMOS image sensor product line bottomed out in the first quarter, rebounding from its trough in 2015.
Looking into the second quarter, there will be mass production of several design wins for notebooks and increased shipments for multimedia applications. In recent press releases and the last earnings call, we briefly introduced our new smart sensor product lines targeting new applications across smartphones, tablets, AR/VR devices, IoT and artificial intelligence. These include the ultra-low-power QVGA CMOS image sensor and the Diffractive Optical Element (“DOE”) integrated WLO laser diode collimator to be paired with a Near Infrared (NIR) sensor.
We believe the former is by far the lowest power CIS in the industry with similar resolution. It can be applied in a constant state of operation, enabling “always on”, contextually aware, computer vision capabilities.
Regarding DOE integrated WLO laser diode collimator with NIR sensor, we believe this is the most effective total solution for 3D sensing and detection in the smallest form factor. This breakthrough allows 3D image sensing feature to be easily integrated into next-generation consumer electronics. Currently, we are making good progress and have seen encouraging and increasing customer responses. We will report the developments in this new territory in due course."
Nidec Copal Announces Small 16MP Camera Module
Nikkei: Nidec Copal claims it has developed the world's smallest, thinnest and most lightweight 16MP camera module for mobile devices, measuring 8.5 x 8.5 x 4.2 mm and weighing about 0.57g. The module is based on a 16MP 1/3.1-inch CMOS sensor with 1.0μm pixels, and has AF and an F1.9 aperture. Nidec Copal plans to start volume production in the fall of 2016.
Pixart Sales Down
Pixart's Q1 2016 report shows the company's sales down 5.8% YoY and net income down 69.9% YoY.
From CCDinosaurs to APS Century
Aphesa publishes its presentation from the Caeleste Workshop, a part of SPIE Photonics Europe held in Brussels in April 2016. A few slides from the presentation:
Sony Kumamoto TEC Status, 4th Update
Sony publishes the 4th update on its recovery from Kumamoto fab damage:
- Operations at Sony Kumamoto Technology Center (located in Kikuchi Gun, Kumamoto Prefecture), which is the primary manufacturing site of image sensors for digital cameras and security cameras, had been suspended due to the impact of the earthquakes.
- However, as of May 9, 2016, testing operations, which are one of the back-end processes carried out on the upper layer of the building, have resumed and other back-end processes, such as assembly, are also expected to restart sequentially beginning May 17, 2016.
- Wafer processing operations located on the lower layer of the building are expected to restart sequentially beginning May 21, 2016.
- Although there was a delay in the supply of components to Sony from certain third-party suppliers that also have manufacturing facilities in the Kumamoto region, inventory adjustments have been made and a timeframe for regaining supply levels is now in place, so no material impact is anticipated on Sony's business operations.
- The impact on Sony's consolidated results due to the effect of the earthquakes, including from opportunity losses, as well as expenses for recovery and reinforcement work, continues to be evaluated.
Thursday, May 12, 2016
Imec Publishes Combined CCD-CMOS TDI Imager
Imec publishes a flyer for Argus, a BSI TDI embedded-CCD imager prototype featuring >90% QE and monolithic integration of a CCD module on CMOS, aimed at machine vision, remote sensing, life sciences, and scientific applications:
Wednesday, May 11, 2016
Imec Presents High-QE BSI Global Shutter Imager
Imec publishes preliminary data for its dual-shutter Mantis image sensor utilizing a high-QE BSI process:
Soft Reset Noise Analysis
The open-access Sensors journal publishes the paper "Analysis of Subthreshold Current Reset Noise in Image Sensors" by Nobukazu Teranishi. The analysis also includes a tapered reset with a feedback amplifier:
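For orientation, the classical results that this kind of analysis refines: an ideal hard reset leaves full kTC noise on a sense node of capacitance C, while an ideal soft reset, with the reset transistor in subthreshold and current flowing only one way, halves the noise power. In standard textbook form (the paper's subthreshold-current treatment is considerably more detailed):

```latex
% Classical reset-noise results for a sense node of capacitance C;
% k is Boltzmann's constant, T the absolute temperature.
\overline{v_n^2} = \frac{kT}{C}   \quad \text{(hard reset)}
\qquad
\overline{v_n^2} = \frac{kT}{2C}  \quad \text{(ideal soft reset)}
```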
Chronocam Aims at the Automotive Market
Nikkei reports that Bosch-backed event-driven sensor developer Chronocam has already been engaged by major automakers based in Europe and the US, initially for ADAS, and its technology is expected to debut in the market in 2019. The company says that its technology makes it easy to match stereo images taken by two cameras to obtain range information at high speed and with low power consumption:
Chronocam stereo camera prototype for automotive applications
Tuesday, May 10, 2016
Vision Positioning in DJI Phantom 4
DJI publishes a YouTube video showing the Phantom 4 capabilities provided by its cameras and vision processing system:
Continental Integrates ToF Camera into Steering Wheel
PRNewswire: Continental unveils a project that adds a gesture detection zone on the steering wheel. This is made possible by a ToF sensor integrated into the instrument cluster. Using this approach, the solution minimizes driver distraction and further advances the development of a holistic human-machine interface.
- By swiping up and down, the driver scrolls through the board computer menu.
- To select submenus of an app or to find a favorite song, the driver swipes horizontally on the gesture panel.
- Double-tapping a thumb on the gesture panel starts the music playing.
- With a little wave of the hand, the driver can accept an incoming call; a wave with the other hand rejects it.
Monday, May 09, 2016
Analog Electronics for Radiation Detection Book
CRC Press publishes "Analog Electronics for Radiation Detection" book edited by Renato Turchetta.
Some of the interesting chapters on image sensing:
- Analog Electronics for HVCMOS Sensors, by Ivan Peric
- Analog Electronics for Radiation Detection, by Juan A. Leñero-Bardallo and Ángel Rodríguez-Vázquez
- Low-Noise Detectors through Incremental Sigma-Delta ADCs, by Adi Xhakoni and Georges Gielen
- Time-to-Digital Conversion, by Yasuo Arai
- Designing Photon-Counting, Wide-Spectrum Optical Radiation Detectors in CMOS-Compatible Technologies, by Edoardo Charbon and Chockalingam Veerappan
- CMOS Image Sensors for Radiation Detection, by Nicola Guerrini
Rambus LSS Videos
Rambus publishes a couple of YouTube videos on its Lensless Smart Sensors (LSS). The first one is about applications, while the second one demos thermal LSS operation:
CCD vs CMOS Infographic
CEI-Europe posts a nice infographic comparing CCD with CMOS sensors. As it is too large to post here in full, here are just a few cuts:
Thanks to AL for the link!