Analog Devices introduces the ADSP-BF608 and ADSP-BF609 Blackfin DSPs featuring a high-performance video analytics accelerator, the Pipelined Vision Processor (PVP). The PVP comprises a set of configurable processing blocks designed to accelerate up to five concurrent image algorithms, enabling a very high level of analytics performance. These processors are ideal for applications such as automotive advanced driver assistance systems (ADAS), industrial machine vision, and security/surveillance systems.
The BF609's PVP supports HD resolution, while the BF608's PVP is limited to VGA.
Saturday, March 31, 2012
Friday, March 30, 2012
Omnivision's Oslo Center Grows
LinkedIn records show that Omnivision's center in Oslo, Norway has grown to 9 employees, all of whom came from Aptina's Oslo design center.
Thursday, March 29, 2012
Caeleste Whitepaper on X-Ray Photon Counting
Caeleste published a white paper on "Photon counting and color X-ray imaging in standard CIS technology".
The 100x100um color X-ray pixel has quite complex functionality:
The counters are charge-based analog ones, and the pixel layout is quite compact with an 80% metal-limited fill factor:
Wednesday, March 28, 2012
Aptina Announces 3 New Security Sensors, Janus Reference Design
Business Wire: Aptina announces the MT9M031 and MT9M021 image sensors, integrating its smallest, high-performance global shutter technology into a 1/3-inch optical format HD device. The 3.75um global shutter pixel is said to have exceptional low-light performance and none of the artifacts typically associated with rolling shutter pixels.
"With over a decade of research on optimizing and shrinking global shutter pixels we are proud to unveil our latest advances in high performance global shutter technology," said David Zimpfer, GM of Aptina’s Automotive Industrial Business Unit. "By shrinking the global shutter pixel to 3.75-microns we are able to provide high-speed motion capture capability in stunning HD resolution in the standard 1/3-inch optical format."
The 1.2MP sensors can operate at 45fps at the full 1280x960 pixel resolution or at 60fps at 720p HD resolution (reduced FOV). Power consumption is 270mW in 720p60 mode. The dynamic range is 83.5dB - quite high for a global shutter sensor. The responsivity at 550nm is 8.5 V/lux-sec.
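As a quick back-of-envelope check (my arithmetic, not a figure from Aptina's announcement), dynamic range in dB converts to photographic stops by dividing by 20*log10(2), about 6.02dB per stop, so 83.5dB corresponds to roughly 13.9 stops:

```python
import math

def db_to_stops(dr_db: float) -> float:
    """Convert a dynamic range in dB to photographic stops (factors of 2)."""
    return dr_db / (20 * math.log10(2))  # one stop is about 6.02 dB

print(round(db_to_stops(83.5), 1))  # 83.5 dB is roughly 13.9 stops
```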
The only difference between the MT9M031 and MT9M021 sensors appears to be the package type. Both sensors are currently sampling, with full production start expected in Q2 2012.
Business Wire: Another Aptina sensor announced today is the 1/3-inch 1.2MP AR0130CS, made with a conventional rolling shutter. The sensor features extended IR performance in the 850-900nm range; its QE at 830nm is 26.80%. The responsivity at 550nm is 5.5V/lux-sec. SNRmax is 44dB.
Other than the pixel parameters and the rolling shutter, the AR0130CS appears to be identical to its newly announced global shutter counterparts: the same 83.5dB DR, the same 270mW power in 720p60 mode, and the same 45fps at the full 1280x960 resolution.
"The AR0130CS provides the surveillance market with a path to upgrade legacy CCTV cameras to high resolution 600-1000 TV line CCTV, or move directly to an HD IP camera solution," says David Zimpfer.
Business Wire: GEO Semiconductor announces the availability of its new security camera reference design jointly created by GEO and Aptina. Code-named Janus, the reference design features GEO’s AnyView technology.
Janus is a complete 1080p60 camera system that uses Aptina’s AR0331 HDR sensor and Image Co-Processor ICP-HD with GEO’s sxW2 IC. Janus enables designers to implement advanced capabilities and features collectively referred to as AnyView:
- Elimination of multiple cameras by the selection and display of multiple (1-8) views of any size from the fish eye input view while performing real-time full HD De-warping with independent dynamic pan tilt and zoom capabilities in each of the windows;
- “Zero Pixel Loss” panoramic views of the fish eye input at full HD (1080p60) resolution;
- Ability to auto calibrate and correct for lens distortions;
- Stitching together of multiple sensor image streams, to provide ultra-wide panoramic views;
- Reduction of system cost by use of inexpensive optics and plastic lenses; and
- Maximization of image capture area through the use of elliptical and custom lenses.
Tuesday, March 27, 2012
Sensors and Beer
Joachim Linkemann from Basler AG, Germany gave a nice presentation comparing pixels to glasses of beer:
While we are on the subject of Basler, the company recently published a whitepaper titled "Are More Pixels Better?" on security cameras.
Micron Italy Presentation on Small Pixels
Gianluca Testa from Micron's Italian fab presented "Moving to next generation sub-micron CMOS image sensor devices" at the Sapienza Università di Roma Micro and Nano-Electronics workshop on Sept. 29-30, 2011.
Himax Expands Wafer-Level Optics Capacity
PR Newswire: Himax has placed a repeat order for EV Group's (EVG) IQ Aligner UV nanoimprint lithography (UV-NIL) system. The IQ Aligner will be used to support Himax's capacity increase in the production of wafer-level cameras for mobile phones, notebooks and other consumer electronic devices, as well as to support the increasingly stringent manufacturing requirements for wafer-level cameras demanded by Himax's customer base. The IQ Aligner will be shipped and installed at Himax's manufacturing facility in Tainan, Taiwan.
"This repeat order from Himax further extends our market and technology leadership in lens molding, with nearly every major wafer-level optics manufacturer having adopted our suite of solutions," stated Paul Lindner, executive technology director, EV Group.
"This adds to our already advanced manufacturing capabilities for CMOS image sensors, and provides us with a key competitive edge by enabling us to offer a complete manufacturing solution to the mobile handset market", said HC Chen, fab director at Himax.
SK Hynix to Invest More in Image Sensors
Hynix received an investment from South Korea's SK Group, changed its name to SK Hynix, and declared its "plans to further strengthen its mobile business such as ... CMOS Image Sensor following the new IT trend in view of the application shift from PC-based to the mobile-centered".
Monday, March 26, 2012
QIS Presentation Slides On-Line
As Eric Fossum mentioned in the comments, his presentation from the Image Sensors 2012 Conference is available on-line here. The presentation talks about possible paradigm shifts in image sensors, including the QIS idea:
Image Sensor Tutorials
Blake Jacquot published a series of image sensor video tutorials at Youtube: Image Sensors Introduction and Image Sensor Noise Sources. Here is a small subset of them:
Noise of single pixel 1 (Old version):
Noise Single Pixel 2 (Old version):
Noise in an array of pixels:
Image Sensor 2012 Conference Report, Part 4
Albert Theuwissen concludes his excellent reports from the Image Sensor 2012 Conference in London, UK. The last part covers Sony's presentation on its 1.12um pixel generation and a presentation on the Quanta Image Sensor by Eric Fossum.
Lattice Extends Sony, Panasonic Sensors Support
MarketWire: Lattice announces that it has released a bridge design to interface the Sony IMX036/IMX136 image sensors to parallel-input ISPs.
The Lattice MachXO2-1200 FPGA interfaces directly to the subLVDS I/Os of the Sony IMX136, and no external discrete components are required. The image sensor bridge application can support full HD 1080p resolution at 60fps with a 12-bit ISP interface. The design code in the MachXO2 device can also be modified easily to accommodate support for the full 1080p120 capability of the Sony IMX136 for customers who need this functionality.
MarketWire: Some time ago Lattice announced Panasonic MN34041 1080p support in its HDR-60 Video Camera Development Kit. The sensor is fully supported with a 60 fps color ISP pipeline implemented on a LatticeECP3 FPGA within the Lattice HDR-60 Kit.
The integrated ISP IP pipeline provides end-to-end ISP support from sensor to displayable image, and incorporates sensor interfacing, defective pixel correction and 2D noise reduction. Other features include a high-quality de-Bayer, color correction matrix, fast auto exposure, auto white balance, gamma correction, and overlay for both characters and graphics. Lattice HDMI PHY IP enables output to HDMI/DVI monitors.
Sony sub-LVDS-to-parallel sensor interface bridging
Sony UK Comments on Nokia 41MP Camera, Nokia Responds
TechRadar published Sony UK's Paul Genge comments on Nokia 808 camera-phone announcement:
"It's quite clear it's a development announcement more than a retailable proposition, the technology is not new, it's only what our cameras have done for about a year now."
Sony cameras use "pixel digital zoom", which groups pixels together for increased sensitivity.
"In that respect, it's not especially stand out, but within the mobile sector, yes it is, so I can understand why it's drawn an awful lot of attention," Genge said. "But, it is still only a technological announcement, it's not a plausible retail solution yet."
Nokia camera group leader Damian Dinning responds in comments:
"1. I am delighted to say (as per previous the information we disclosed during the 808s announcement) the Nokia 808 PureView IS a product that will be available during Q2 of this year.
2. The algorithms we needed to develop to provide the incredible detail the 808 PureView captures and creates in just 5mpix easy to share images were developed by Nokia and are the basis of Nokia proprietary technology.
3. We know of no other camera that uses a high resolution [41mpix] sensor in the unique ways we do to provide the following benefits:
i) 5mpix images which contain far higher levels of detail than cameras with far higher [than 5mpix] resolution sensors.
ii) despite the high levels of detail, file sizes are far smaller (because the pixels are purer) and therefore faster and easier to upload straight from the device. Which of course our devices have had the capability to do for many years.
iii) LOSSLESS zoom in full HD video and stills. There is NO upscaling used in ANY way in the 808 PureView. Unlike many digital cameras which rely on upscaling for digital zoom. Whilst some digital zoom implementations simply crop the sensor to provide a feeling of zoom. In our case when we are cropping (unless at full zoom) we have an abundance of pixels. We put those pixels to extremely effective use by oversampling the data from those pixels.
iv) One of the most important benefits of Nokia's proprietary pixel oversampling is that it retains the information you want (the detail), whilst filtering out most of the information you dont (noise). This is most noticeable in low light. Pixel oversampling is NOT the same as pixel binning. Others may be using binning but Nokia is not in the case of pixel oversampling. We are also NOT interpolating to create pixels that represent completely false information. As said we only oversample information originally captured by our super high resolution sensor and optics. The level of oversampling is as high as 16:1 in the case of full HD video. No other device I know of has such capability.
v) Using this method of zoom not only provides high image quality in a compact device but it also provides a silent zoom as well as allows the maximum aperture to be used even at full zoom."
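For readers curious about why oversampling reduces noise: under a simple shot-noise model (a generic statistics argument, not Nokia's actual algorithm), averaging N independent noisy pixel samples into one output pixel improves SNR by roughly sqrt(N), so 16:1 oversampling buys about a 4x (12dB) noise advantage. A minimal simulation sketch:

```python
import random
import statistics

random.seed(0)
signal = 100.0       # mean pixel value
noise_sigma = 10.0   # per-sample noise standard deviation

# Output pixels formed from a single sample each:
single = [random.gauss(signal, noise_sigma) for _ in range(100_000)]

# Output pixels formed by averaging 16 independent samples each:
oversampled = [
    statistics.fmean(random.gauss(signal, noise_sigma) for _ in range(16))
    for _ in range(10_000)
]

print(round(statistics.stdev(single), 1))       # close to 10.0
print(round(statistics.stdev(oversampled), 2))  # close to 10/sqrt(16) = 2.5
```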
Saturday, March 24, 2012
DxOMark Prizes Nikon D800 Sensor
DxOMark, possibly the biggest sensor database in the world, proclaimed the Nikon D800 sensor the best it has ever analysed. The 4.7um-pixel, 36.8MP full-frame sensor has shown 14.4 stops of DR, color reproduction comparable with medium-format sensors, and earned a score of 95 - the highest ever in the database.
Compared with Canon's older-generation 6.4um-pixel 5D Mark II, the 4.7um-pixel D800 shows notably higher SNR:
Friday, March 23, 2012
Image Sensors 2012 Report, Part 3
Albert Theuwissen published the third part of his report on IS2012 presentations, covering NHK, Rutherford Labs and ESO.
Pixpolar Presents MIG Pixel Simulations and Measurements
Pixpolar presented its MIG pixel simulation and measurement results. In this Youtube video the y-axis is the measured voltage and the x-axis is time:
Thursday, March 22, 2012
Image Sensors 2012 Report, Day 2
Albert Theuwissen continues reporting from the London, UK Image Sensors 2012 Conference. The report covers CEA-LETI, Aptina and Sony[-Ericsson] presentations.
Research and Markets Updates its Global and Chinese CMOS Camera Module Industry Report
Research and Markets' updated Global and Chinese CMOS Camera Module Industry Report, 2011-2012 is composed of four parts: image sensors, lenses, camera modules and VCMs.
The image sensor carries the highest price among the four parts, with its cost roughly accounting for 30-50% of the entire CMOS camera module.
In the lens domain, all manufacturers face squeezed profits. This is a labor-intensive industry, and veteran employees are in shorter supply than ever, which is especially obvious at the production bases located in mainland China. On the one hand, labor costs are increasing; on the other hand, upstream raw material prices rose in 2011. Meanwhile, lenses below 8MP saw prices drop because of tough competition.
In 2011, the revenue of medium- and small-scale lens manufacturers fell to varying extents, with their profits plummeting. This was not the case for Taiwan-based Largan Precision and Genius Electronics Optical, both of which saw soaring revenue. The two firms contracted all the lens business for Apple's camera modules: Largan Precision dominated the high-end market, while Genius Electronics Optical occupied the middle- and low-end market. Genius Electronics Optical, whose revenue grew by 136% in 2011, is the leading iPad camera module lens provider and the only supplier of iPhone VGA camera module lenses. Nonetheless, the gross margins of the two declined.
In the camera module field, benefiting from the “Apple effect”, the business of LG Innotek, the major camera module provider for Apple, grew by more than 100% in 2011, compared to less than 10% for the entire industry. The three camera module suppliers approved by Apple are LG Innotek, Sharp, and Primax.
Sharp, also a major supplier to Nokia, is Apple's second supplier, while Taiwan-based Primax Electronics focuses on low-end products. Furthermore, Flextronics' Vistapoint unit once served as an Apple supplier, but rising wages in mainland China forced it to sell its Zhuhai plant and scale back the business in March 2012. Vistapoint has been excluded from the supplier list published by Apple in 2012.
Cambridge Mechatronics Presents Smart Metal OIS Lens Barrel Shift (SOS)
Cambridge Mechatronics Ltd (CML) announced that it has made working prototypes of its latest Optical Image Stabilisation (OIS) and Continuous Autofocus (CAF) lens actuator design.
CML's first OIS-related announcement unveiled its Smart metal OIS camera module Tilt, or SOT, architecture. That approach, which the company and its partners have developed to the point where it is scheduled for mass production later this year, maintains the standard 8.5mm square camera footprint and is optimised for camera performance and time-to-market.
The most recent prototypes are based on an architecture called Smart metal OIS lens barrel Shift, or SOS, which also maintains the 8.5mm square footprint but is optimised for low camera z-height and cost. CML sees SOS entering mass production in late 2013.
The benefits of this new Barrel Shift OIS camera are:
- OIS functionality adds nothing to AF camera footprint – remains at 8.5mm smartphone standard. Supports M6.5 lens and 1/3.2” image sensor
- OIS functionality adds nothing to camera z-height. The new Barrel Shift OIS actuator is 2.5mm in height, which easily supports a 4mm overall camera height; camera z-height depends only on the lens choice
- The actuator structure is simplified and camera integration process is straightforward. This will result in a lower cost OIS camera
- All the above means that CML is targeting this OIS camera for mainstream smartphones
Both architectures facilitate devices that provide high quality rapid CAF allowing for point and click image capture at 13 MPixels and above. Smart metal technology also consumes significantly less power than the VCM most often found moving lenses in today's smartphone AF cameras.
CML believes that its two architectures will co-exist. SOT will always provide the best OIS performance across the whole image, as much as 4 optical stops of handshake suppression even in the corners. However it will add 0.3mm of z-height to the camera. Alternatively, SOS will add nothing to the overall z-height of the camera. This means that with the latest wide field of view lenses a camera height of 4.0mm can be achieved, almost 2.0mm lower than current smartphone cameras. At this dimension the camera will no longer be dictating the thickness of the handset. As SOS is mechanically simpler than SOT, the manufacturing cost of the camera will be lower.
CML built the SOS prototypes using parts injection moulded by one of its manufacturing licensees, Actuator Solutions GmbH (ASG). CML is currently optimizing the micro-electronic control of the SOS actuators and building fully functional SOS camera systems. ASG and Seiko Instruments Inc (SII) (another publicly announced manufacturing licensee of CML) are working with multiple major camera module makers, including Foxconn, to deliver SOT and rapid CAF cameras into mass production before the end of 2012.
VCM OIS 12 MPixel (left) vs. SOS 13 MPixel (right)
Thanks to DW for sending me the link!
Sony Announces IPELA - New HD + HDR ISP for Security Applications
Sony announces the "IPELA ENGINE", said to be the industry’s first to achieve 130dB WDR in full HD quality at 30fps. "IPELA Engine" is Sony's general term for its integrated signal processing system for high picture quality, which combines the company's unique signal processing and video analytics technologies. The “IPELA Engine” is composed mainly of the four components below:
1. View-DR:
This is Sony's name for locally tuning contrast and adaptively correcting tone for light and dark areas by combining images taken at varying shutter speeds within a single frame.
2. High Frame Rate:
Full HD (1920 x 1080) video is possible at 60fps.
3. DEPA Advanced:
Detection of moving objects, humans, and objects blocking the view, among other functions, has been enhanced through an alarm detection function based on image processing.
4. XDNR (Excellent Dynamic Noise Reduction):
The detection and removal of noise within a single frame is combined with the reduction of noise from differential signals in consecutive frames.
The "IPELA ENGINE" will be equipped into new security camera products from Fall 2012.
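To illustrate the general idea behind combining spatial and temporal noise reduction (a textbook scheme sketched under my own assumptions, not Sony's actual XDNR implementation): denoise each frame spatially, then blend it with the previous output, skipping the blend where a pixel changed a lot so that moving objects are not smeared:

```python
def denoise(frames, alpha=0.5, motion_threshold=20):
    """Spatial 3-tap averaging per row, then motion-adaptive temporal blending.

    frames: list of frames; each frame is a list of rows of pixel values.
    """
    prev = None
    out = []
    for frame in frames:
        # Spatial stage: simple 3-tap horizontal average inside each row.
        spatial = [[(row[max(x - 1, 0)] + row[x] + row[min(x + 1, len(row) - 1)]) / 3
                    for x in range(len(row))] for row in frame]
        if prev is None:
            result = spatial
        else:
            # Temporal stage: blend with previous output only where the
            # pixel is static; large differences are treated as motion.
            result = [[sp if abs(sp - pr) > motion_threshold   # moving: keep current
                       else alpha * sp + (1 - alpha) * pr      # static: blend
                       for sp, pr in zip(srow, prow)]
                      for srow, prow in zip(spatial, prev)]
        prev = result
        out.append(result)
    return out
```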
Yet Another Fast Sensor Application
The MIT Camera Culture group published a paper proposing a way to see around corners. The MIT 3D range camera is said to be able to look around a corner using diffusely reflected light, achieving sub-millimetre depth precision and centimetre lateral precision. A Youtube video shows its principle:
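To put the sub-millimetre claim in perspective (my back-of-envelope arithmetic, not a figure from the paper): light travels roughly 0.3mm per picosecond, so resolving millimetre-scale differences in optical path length requires timing on the order of a few picoseconds:

```python
C = 299_792_458.0  # speed of light in m/s

def path_length(time_s: float) -> float:
    """Optical path length covered by light in time_s seconds."""
    return C * time_s

# Timing resolution needed to resolve 1 mm of optical path:
ps_per_mm = 1e-3 / C * 1e12
print(round(ps_per_mm, 2))  # about 3.34 ps per mm of path
```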
Wednesday, March 21, 2012
Fujifilm Redesigns Sensors Due to Excessive Blooming
Imaging Resource reports that Fujifilm intends to redesign its latest sensors due to excessive blooming. The sensors are used in the Fujifilm X10 and Fujifilm X-S1 cameras. Fujifilm USA said that the redesigned sensor should arrive in late May. A nice picture below shows 33-pixel-sized blooming orbs appearing in some conditions:
Thanks to KP for sending me the link!
Image Sensors 2012 Conference Report, Day 1
Albert Theuwissen reports from Intertech-Pira Image Sensors 2012 conference being held these days in London, UK.
Nobukazu Teranishi of Panasonic talked about "Dark Current and White Blemishes" in sensors. After covering different gettering techniques (internal, external and proximity gettering), Nobukazu-san presented a new dark current generation model for pinned photodiodes. His conclusion was that the best dark current is achieved by using channel-stop diffusions in place of LOCOS and STI isolation.
e2v Increases Sensors Production at TowerJazz
PR Newswire: e2v and TowerJazz announce the increased production of e2v's Ruby low light CMOS image sensors and the ELiiXA+ high speed multi-line scan camera using TowerJazz's CIS technology.
Officially introduced in Q4 2011, the ELiiXA+ camera range and the Ruby image sensor family are demonstrating the successful relationship between e2v and TowerJazz. The new product families join several of e2v's products already running in volume production at TowerJazz's Fab 2, including sensors for industrial, medical, scientific and space applications. With a strong relationship of over six years, e2v has been progressively increasing production at TowerJazz to match demand for these advanced sensor solutions.
According to Yole Développement's image sensor market research, the machine vision market is expected to reach $88M by 2015 with a CAGR of 23%.
Globes' market source said: "The increase in production by this customer will boost Tower's revenue by $10 million a year within two years."
Thanks to JG for sending me the news!
e-con Launches Stereo Camera Reference Design with TI OMAP and Aptina WVGA
PRWeb: e-con Systems, an embedded design services company specializing in the development of advanced camera solutions, announces what it says is the world's first stereo vision camera reference design based on TI's OMAP/DM37x family of processors and Aptina's 1/3-inch global shutter monochrome WVGA MT9V024 image sensors. The Capella reference design is aimed at machine vision, robotics, 3D object recognition, and other applications.
Youtube demo tells more about the reference design:
Monday, March 19, 2012
Samsung Call to Universities
The Global Research Outreach Program is Samsung Electronics' annual call for the best novel ideas. In image sensors, six areas of interest are identified:
1. High Resolution Computational Imaging based on Camera 2.0
Scope
We are interested in high resolution computational imaging with the following properties:
- Methods to refocus and estimate depth from computational (plenoptic) camera and Camera 2.0 platform
- Methods to get high resolution output image and/or depth from plenoptic camera.
- Methods to operate computational camera in low power-consumption
- When implementing a plenoptic camera as a very thin camera module, what problems are expected?
- There may be a trade-off between output resolution and refocusing performance. How can we increase output resolution while keeping refocusing performance?
- Can super-resolution techniques increase output resolution? Are there any artifacts or resource limitations from super-resolution?
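The refocusing asked about in the first bullet is usually approached by the textbook shift-and-add operation over the plenoptic camera's sub-aperture views. A minimal sketch of that operation follows; the dict-of-views input format and function name are my own assumptions, not part of the Samsung call:

```python
import numpy as np

def refocus(subaperture_imgs, slope):
    """Shift-and-add refocusing: each sub-aperture view, keyed by its
    angular offset (u, v), is shifted proportionally to that offset and
    all views are averaged; `slope` selects the in-focus depth plane."""
    acc, n = None, 0
    for (u, v), img in subaperture_imgs.items():
        img = np.asarray(img, dtype=np.float64)
        shifted = np.roll(np.roll(img, int(round(slope * u)), axis=0),
                          int(round(slope * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
        n += 1
    return acc / n
```

Sweeping `slope` over a range of values produces a focal stack from a single captured light field, which is the core promise of the plenoptic design.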
Subject 2: Development of High Performance in Sub-1um Pixel and Simulation Environment
Scope
Challenges that significantly advance the state-of-the-art in pixel technologies include:
- New microlens structure to reduce diffraction and enhance light gathering efficiency in the submicron pixel sensor
- New methods to enhance light absorption power in the submicron pixel sensor
- New methods include new material as well as new optical structure
- Novel concepts (e.g. surface plasmon, multiple electron generation, etc.) are also welcomed.
- New color filter material and structure to improve SNR with good color accuracy
- Simulation method to increase speed and accuracy
- How can we control the diffraction of the lens between pixels to reduce the crosstalk in the submicron pixel sensor?
- How can the light gathering power of microlens be improved in the sub-micron pixels?
- How to reduce the loss of optical power in sub-micron pixels?
- Additional structures to enhance the absorption power in the submicron pixel sensor.
- What is the ideal color filter spectrum to improve SNR?
- What is the best simulation method to improve speed and accuracy?
Subject 3: Energy Efficient Column Parallel Two Step ADCs for High Speed Imaging
Scope
We are interested in a two step ADC regarding the embodiment of low power & high speed CIS:
- Optimization of CIS readout architecture to overcome the two step ADC’s weakness
- ADC type to improve the productivity as well as size, speed, and power
- Structure innovation for energy-efficient two-step ADC having ultra low noise
- How can we overcome all the obstacles, especially the trade-off between power, speed, and noise, when designing a next-generation CIS ADC?
- How can we secure the uniformity and productivity as well as the IP's performance?
- How can we match the size competitiveness of the original single-slope ADC as well as its power efficiency?
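The two-step idea referred to in this subject splits the conversion into a coarse phase that resolves the MSBs and a fine phase that resolves the residue. A minimal behavioral model follows; it is an illustration only, and the bit widths are arbitrary assumptions, not Samsung's design:

```python
def two_step_adc(vin, vref=1.0, coarse_bits=4, fine_bits=6):
    """Idealized two-step ADC model: a coarse conversion resolves the
    MSBs, then a fine conversion resolves the residue into the LSBs."""
    coarse_lsb = vref / (1 << coarse_bits)
    coarse = min(int(vin / coarse_lsb), (1 << coarse_bits) - 1)
    residue = vin - coarse * coarse_lsb  # within 0 .. coarse_lsb
    fine_lsb = coarse_lsb / (1 << fine_bits)
    fine = min(int(residue / fine_lsb), (1 << fine_bits) - 1)
    return (coarse << fine_bits) | fine  # full-resolution output code

# A 4+6 bit two-step conversion needs on the order of 2**4 + 2**6 ramp
# steps instead of 2**10 for a single-slope ADC, which is where the
# speed/power advantage (and the calibration burden) comes from.
```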
Subject 4: Resolution Enhancement of Image from Low Resolution Image
Scope
Challenges that significantly advance the state-of-the-art in image upscaling technologies include:
- Image upscaling based on self-similarity of an input image.
- Reducing artifacts and improving the naturalness of the upscaled image
- Reducing required line memory and computational complexity of upscaling algorithm
- How to reduce artificial and unnatural representations when upscaling a complex, textured input image?
- How to define similarity among similar patterns? If we have similarity measure, how to use this similarity to stitch and upscale high resolution image?
- How to reduce computational redundancy in cascaded processing of self-similarity based upscaling?
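The similarity measure asked about in the questions above can be as simple as a sum-of-squared-differences patch search within the image itself, which is the core primitive of self-similarity-based upscaling. A toy sketch follows; the function name and the SSD metric are illustrative assumptions:

```python
import numpy as np

def best_self_match(img, y, x, size=3):
    """For the patch at (y, x), find the most similar *other* patch in
    the same image by sum-of-squared-differences (SSD)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    ref = img[y:y + size, x:x + size]
    best, best_ssd = None, np.inf
    for yy in range(h - size + 1):
        for xx in range(w - size + 1):
            if (yy, xx) == (y, x):
                continue  # skip the query patch itself
            cand = img[yy:yy + size, xx:xx + size]
            ssd = np.sum((ref - cand) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (yy, xx), ssd
    return best, best_ssd
```

An exhaustive search like this is O(image size) per patch, which is exactly why the call asks about reducing computational redundancy and line-memory requirements.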
Subject 5: Smart Image Sensor
Scope
- Advanced smart functional imaging technologies, especially in the field of health-care, natural user interface, virtual reality, etc.
- Pixel, circuit, image signal processing, optics, module and any other system level architectures covering the above mentioned area.
- Unprecedented pixel, circuit, and system level core technologies, such as three dimensional imaging, cognitive imaging, imaging in non-visible wavelength range, infrared-to-visible converting imaging, single transistor CMOS imaging, etc.
- Image signal processing algorithm which accounts for effectiveness in smart functionalities, such as smart pattern and motion recognition, etc.
- Methodology of analysis and characterization in pixel and system level for advanced smart imaging devices.
- Why do the proposed new smart functionalities bear technological impact and possibly open new consumer electronics markets with regard to image sensors?
- How can the proposition be realized with new architectures?
- How can the proposition be realized in practice? For instance, is the proposition achievable with current CMOS technologies? Would the electrical power consumption and operational speed be acceptable?
- Why does the proposed methodology of analysis and characterization bear academic and technological importance and effectiveness?
Subject 6: Si Photonic Biosensor for Healthcare
Scope
- Smart biosensor technologies, especially in the area of disease detection such as cancer, virus, glucose and DNA sequencing for health-care
- Biosensing element and bio-processing, circuit and system level architectures covering the above mentioned area.
- Biomarker discovery for lung cancer diagnosis
- Photonics integrated circuits for biosensor, such as micro-optical spectrometer, WDM devices and optical ring resonator, etc.
- Circuit architectures, such as resonant wavelength sensing readout with low noise, etc.
- Methodology of analysis and characterization in bio-processing and system level for advanced smart biosensor.
- Measurement of the shift in resonant wavelength
- Miniaturized photonic components and biosensor element
- Compatibility with standard CMOS process
- Bio data processing algorithm which accounts for effectiveness in detection and DNA sequencing.
- Why should the new functionalities of the proposition be realized with a new architecture?
- How effectively can the proposition be realized in practice?
Interesting to note that 2012 is the first year program having a large image sensor section. The previous years programs had no CIS content.
Saturday, March 17, 2012
Samsung SVP on Sensor Development Process
Imaging Resource published an interview with Samsung Imaging Business execs. A few interesting quotes:
Byungdeok Nam, SVP, R&D Team, Digital Imaging Business says: "It usually takes about a year and a half to two years to develop sensors, and we have what are called test vehicles, where on a wafer we can try different samples of sensors with different technologies. Of all these different sensors, we see which is most strong, appropriate or optimal for us, and then we concentrate our development of that technology with that sensor. So in the beginning, we would have many different samples of sensors, and we would then do the evaluation, and decide on one sensor, and then do the development on that sensor."
Byungdeok Nam responds to the smartphones vs. digital cameras question: "Well, basically, the OS for cameras and the OS for smartphones are different. Right now, phones have more processing power and they have more memory, so semiconductor companies are providing products that are needed by the smartphone companies, but I think that the same goes for cameras. I guess that in a year or two, cameras can have the same processing power or memory as smartphones."
NHK Interview on HiVision Progress
Image Sensors 2012 interviews Hiroshi Shimamoto, who will present NHK's 8K/120fps sensor at the upcoming conference. A few quotes:
"Super Hi-Vision (SHV) is a future broadcast system that will give viewers a great sensation of reality. SHV consists of an extremely high-resolution (16 times of HDTV) imagery system and 22.2 channel super surround multi-channel sound system.
We are now proposing to extend its frame frequency from 60 Hz to 120 Hz to improve the motion picture quality, and to have a wide-gamut colorimetry for better color reproduction. We call this new SHV system "full-spec SHV"."
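As a back-of-envelope check of what "full-spec SHV" implies for the imaging chain, the raw data rate works out to roughly 143 Gbit/s. The 12-bit depth and 3-color sampling below are illustrative assumptions, not NHK's published spec:

```python
# Back-of-envelope raw data rate for "full-spec" Super Hi-Vision:
# 8K resolution (7680 x 4320, i.e. 16x the 1920 x 1080 pixel count of
# HDTV) at the proposed 120 Hz frame rate.
width, height, fps = 7680, 4320, 120
bits_per_sample, colors = 12, 3  # illustrative assumptions

pixels_per_frame = width * height
assert pixels_per_frame == 16 * 1920 * 1080  # "16 times of HDTV"

raw_bps = pixels_per_frame * fps * bits_per_sample * colors
print(raw_bps / 1e9)  # ~143 Gbit/s of raw, uncompressed video
```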
Friday, March 16, 2012
Chipworks Finds Omnivision Sensors Inside the New iPad
Chipworks was quick to reveal the Omnivision OV5650 5MP BSI sensor inside the main camera of the new iPad. This is the same 1.75um pixel sensor used in the iPhone 4:
The secondary VGA camera uses Omnivision's OV297AA 3um pixel sensor. The same sensor is used in iPod Nano and iPad 2:
Thanks to EK for sending me the news! Looks like Needham was wrong predicting Samsung sensors in iPad in January.
TI Licenses Apical's HDR ISP Cores
EETimes: Apical announced a licensing agreement with TI in which TI will use Apical’s iridix ISP IP cores in future products.
iridix acts as a central component in high dynamic range imaging and also helps address several imaging design challenges for converged mobile imaging devices.
The iridix image processing IP cores will be integrated into TI products targeting digital imaging and display applications. Apical also licensed its ISP IP to Samsung in 2009, Hynix in 2010, HiSense in 2011, and HiSilicon in 2012.
Thursday, March 15, 2012
Microsoft Research Presented Depth-Mapping Webcam
Microsoft Research held a TechFest event on March 6, 2012, where it presented a new 3D-mapping webcam, shown in this Youtube video (a higher resolution version directly from the Microsoft site is here):
Microsoft says: "This project presents next-generation webcam hardware and software prototypes. The new prototype webcam has an extremely wider view angle than traditional webcams and can capture stereo movie and high-accuracy depth images simultaneously."
Update: The zoomed face of Microsoft's 3D Webcam from the slide above (click to enlarge):
Another zoomed version under slightly different angle and better resolution:
Wednesday, March 14, 2012
IMS: Apple Needs to Embrace Vision-Based User Interface Technology
IMS Research believes Apple will need to embrace embedded vision-based technologies in its next product releases in order for the company to maintain its competitive edge.
Competitors such as Samsung and Microsoft have steadily begun integrating these technologies in recent releases and several more have products slated for debut in the next year, as competitive differentiators to employ against Apple. These technologies will also become commonplace in the years to come.
Apple’s competitors are also more aggressively deploying camera-based gesture recognition applications. Microsoft has already shown its commitment to gesture control with the Xbox 360 and upcoming Windows 8 platforms, along with gesture-friendly common interfaces across devices. Windows 8-based laptops and tablets incorporating gesture control with either standard or enhanced front-facing cameras are debuting this year. Android-based smartphones and tablets incorporating gesture control will debut in volume in late 2012. In the home video arena, where Apple has significant aspirations, Samsung is only the first of several major consumer electronics companies to debut camera-based gesture recognition this year in its Smart TVs. Vision-based applications are thus expected to be a competitive differentiator going forward.
Mixel and Graphin Demo M-PHY MIPI Products
Business Wire: Mixel and Graphin announce what they call the world’s first end-to-end video transmission over a MIPI M-PHY link. In 2010, the two companies established a strategic partnership to address the emerging M-PHY and to produce a “Golden M-PHY” IC to be used in Graphin’s evaluation system. As a result of that collaboration, Mixel achieved first-silicon success with its M-PHY test chip supporting all use cases, and was the first and only IP provider to demonstrate that capability in the MIPI face-to-face meeting in Copenhagen in June 2011. The companies will now be demonstrating end-to-end video transmission using the Mixel chip in the MIPI Alliance face-to-face meeting in Seoul, Korea on March 13th.
Mixel’s M-PHY IP supports both TYPE I and TYPE II operation, A and B data rates, and all current and future MIPI M-PHY use-cases, such as DigRF v4, UniProSM 1.4, CSI-3, LLI, and JEDEC’s UFS. The MXL-MIPI-M-PHY-HSG2 supports High-Speed (HS) Gear1 (G1), Gear2 (G2), as well as Low-Speed Gear 0 (LS-G0) through LS-G7. The IP supports 1.0 version of the M-PHY specifications, and has been silicon proven for over a year now.
Update: Youtube video of the demo:
Tuesday, March 13, 2012
TowerJazz Announced 0.11um Pixel Platform, Easy Migration from 0.18um Process
PR Newswire: TowerJazz announces its TS11IS hybrid CIS process, a combination of 0.11um and 0.16um platforms. The TS11IS combines TowerJazz's 0.16um CMOS for periphery circuits with its 0.11um pushed design rules for the pixel array. The process is targeted at applications in high end photography, machine vision, 3D imaging, and security sensors.
The new platform, based on Tower's 0.16um CMOS shrink process, will allow easy re-use of existing customers' 0.18um circuit IP which will save them from investing in resources to redesign existing blocks, and increase the probability for first time success. The TS11IS offers improved pixel performance, smaller pixel pitch, higher resolution, improved sensitivity, and improved angular response. It allows up to 50% reduction of pixel size, mainly for high-end global shutter pixels.
The platform includes a new local interconnect layer to allow denser metallization routing in pixels while maintaining good QE. It also includes tighter design rules for all metal layers and implant layers as well as provides a "Bathtub" option for lower stack height, improving the sensors' angular response.
"By allowing significantly smaller pixels, higher resolution and enhanced pixel performance, our new platform ideally serves our customers' needs for the professional CIS markets, allowing them to create new business opportunities, expand the span of applications accessible for their designs, and enlarge their market share," said Jonathan Gendler, Director of CIS Marketing. "We have received enthusiastic feedback from all of our customers on the opportunity to keep working with our established process environment and reuse their design block IP, while being able to shrink the pixel array and die size. This new platform not only improves the cost model of their products, but at the same time enhances device performance."
The new hybrid CIS process platform will be offered for prototyping for select customers in Q3 2012, and for production towards the end of 2012. The new process and other advances will be showcased at the Image Sensors (IS) conference in London on March 20-22, 2012.
According to Yole Développement, high-end CMOS image sensor sales are forecast to reach ~$2B in 2015 with a CAGR of 13%.
Color Shading Discussion
Albert Theuwissen discusses color shading measurements in his latest post in the "How to measure..." series: "Even if the shading component is small, it can result in (minor) changes in spectral response across the sensor. These types of errors can have a severe effect on colour shading in a colour sensor and can make colour reconstruction pretty complicated. So it is absolutely worthwhile to check out the shading under light conditions."
Monday, March 12, 2012
SPIE HDR Course
There is an HDR course planned for SPIE Defense, Security+Sensing Conference in Baltimore, MD on April 23-27, 2012. "High Dynamic Range Imaging: Sensors and Architectures" 4-hour course by Arnaud Darmont, Aphesa "describes various sensor and pixel architectures to achieve high dynamic range imaging as well as software approaches to make high dynamic range images out of lower dynamic range sensors or image sets".
Sunday, March 11, 2012
Primesense Lays Off 50 out of its 190 Employees
Globes: PrimeSense is firing 50 of its 190 employees. The company is holding hearings for employees today, ahead of sending them pink slips later this week. PrimeSense is cutting its workforce in all departments: marketing, R&D, and operations.
Update: Hebrew-language newspaper Calcalist adds a few details about Primesense's status (Google translation here). A few key points from the article:
- Primesense was profitable in 2011, but it is not clear if it will turn a profit in 2012
- Kinect sales are heading down, albeit no numbers were quoted
- Primesense has secured design wins for two more generations of Kinect, up to 2014. After that point Microsoft's plans are unknown.
- So far Primesense sold 20 million 3D cameras for Kinect
Friday, March 09, 2012
Fraunhofer Lateral Drift PD Used for ToF Imaging
Laser Focus World: the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS; Duisburg, Germany) lateral-drift-field photodiode (LDPD) achieves complete charge transfer from the pixel into the readout node in just 30ns - quite an achievement for a 40 sq.um pixel. The researchers used the LDPD to create a 128 × 96 pixel ToF sensor, and a human arm was easily imaged in 3D using the sensor within a standard camera setup in conjunction with a 905 nm pulsed source (pulse duration of 30 ns) operated at 10 kHz. Responsivity of the LDPD was 230 μV/W/m2 and dynamic range was about 60 dB. The sensor is made in a 0.35um process. The pixel fill factor is 38%.
Update: Fraunhofer's Annual Report gives more information about the LDPD pixel and ToF sensor:
"The photodiode is divided in two main parts: a pinned surface one and a part which resembles a buried CCD cell, as it can be observed in Fig. 1. The pixels and the entire sensor have been fabricated in the 2P4M automotive certified 0.35 μm CMOS technology at the Fraunhofer IMS with the addition of an extra surface-pinned n-well yielding a non-uniform lateral doping profile, as shown in Fig. 1 (upper picture). The doping concentration gradient of the extra n-well was chosen in such a way that it induces an intrinsic lateral drift field parallel to the Si-surface in the direction of the pixel readout node (x-axis in Fig. 1) as well as from the periphery of the n-well in the direction of the n-well centre (y-axis in Fig. 1).
The potential distribution within this intrinsic lateral drift-field photodiode (LDPD) n-well resembles a hopper leading the photogenerated charge directly to the assigned readout nodes. It remains fully depleted during operation, sandwiched between the substrate and a grounded p+ pinning layer on top of it (see Fig. 1). In this manner, the almost noiseless reset and readout operations of the photodetector are enabled.
A buried collection-gate (CG) is fabricated at the one end of the n-well, which remains biased at a certain voltage VCG. It induces an additional electrostatic potential maximum in the system and enables the proper and symmetrical distribution of the signal charge among the readout nodes. Each of the four transfer-gates (TX) plays two main roles:
1) it serves to create a potential barrier in the well to prevent the collected charge to be transferred into any of the three “floating” diffusions (FD) aimed at pixel readout or the so called “draining” diffusion (DD) permanently biased at a reset potential
2) to facilitate the transport of the photocharge into a desired FD or the DD."
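The depth equation behind such a pulsed ToF sensor is simple: the 905 nm pulse travels to the target and back, so the measured round-trip time maps to half the optical path. A quick sketch of the arithmetic (function name is illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_s):
    """Depth from a pulsed time-of-flight measurement: the light pulse
    travels to the target and back, so depth is half the path length."""
    return C * round_trip_s / 2.0

# A 30 ns round trip (matching the 30 ns pulse duration used with the
# LDPD sensor) corresponds to about 4.5 m of depth:
print(tof_depth(30e-9))  # ≈ 4.497 m
```

This is also why the 30 ns complete charge transfer matters: the transfer-gate timing windows must resolve fractions of the pulse width to achieve sub-range depth precision.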
Thursday, March 08, 2012
Homemade 3D Camera
The HackEngineer site presents a do-it-yourself structured light 3D camera. A TI DLP Pico Projector is used to project the light pattern, observed by a cheap VGA camera on the receive side:
The resulting depth map pictures look very nice:
Via Vision Systems Design.
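Depth recovery in a projector-camera setup like this follows the same triangulation as a stereo pair: a pattern feature observed at a disparity d from its expected position maps to depth z = f·B/d. A sketch of the principle (not HackEngineer's actual code; the function name and numbers are illustrative):

```python
def structured_light_depth(baseline_m, focal_px, disparity_px):
    """Projector-camera triangulation: a pattern feature shifted by
    `disparity_px` pixels from its expected position maps to depth
    z = f * B / d, exactly as in a stereo camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 10 cm projector-camera baseline, 600 px focal
# length, 30 px observed disparity:
print(structured_light_depth(0.10, 600.0, 30.0))  # ≈ 2.0 m
```

The projector replaces the second camera of a stereo rig, which sidesteps the correspondence problem on textureless surfaces: the projected pattern supplies the texture.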
Wednesday, March 07, 2012
The Story Behind the Nokia 41MP Camera
The official Nokia Conversations blog published a post by Damian Dinning on the journey to the 41MP camera phone. Some quotes:
"the innovation and news is NOT the number of pixels but rather HOW those pixels are used."
"For some of our team, it’s taken over five years to bring this to the market..."
"After developing several optical zoom modules, we were still seeing significant performance trade-offs caused by optical zoom: performance in low light; image sharpness at both ends of the zoom range; audible noise problems; slow zooming speed and lost focus when zooming during video. We became convinced this could never be the great experience we once hoped. You’d need to accept a bigger, more expensive device with poor f no., a small and noisy image sensor and lower optical resolution just to be able to zoom."
"We had often debated that, for the vast majority, 5-megapixels completely fulfils their real world needs, but the market for many years has been pixels, pixels, pixels. It’s hard to block that out. Our friends at Carl Zeiss believed the same."
"This is, without doubt, our most complex imaging project to date."
Tuesday, March 06, 2012
Baird: Omnivision to Supply Sensors for iPad 3 and iPad mini
Reuters: Baird expects Omnivision to supply a 5MP sensor for the rear camera and a 1MP sensor for the front camera of the iPad 3. Baird also said Omnivision may supply sensors for an upcoming iPad mini.
The brokerage expects Sony to remain the rear-camera supplier for the iPhone 5, but believes OmniVision could be a potential second supplier.
Omnivision, ST, Toshiba, Samsung, Sony Modules Reverse Engineered
Nantes, France-based SystemPlus Consulting published a number of camera module reverse engineering reports.
Nokia 2330 camera phone reverse engineering revealed that it uses two suppliers for its VGA camera module: ST with Heptagon wafer-scale optics and Toshiba with Anteryon wafer optics. Both ST and Toshiba sensors have 2.2um pixels and use 0.18um process.
Omnivision's OVM7692 VGA CameraCube report flyer shows its 1.75um pixel layout at the poly and diffusion level, revealing quite a nice fill factor and no SEL transistor:
Another Omnivision 1.75um pixel is used in the 5MP OV5650 taken from the iPhone 4. That one uses a 0.11um BSI process.
The iPhone 4S camera has also been reverse engineered and Sony's 8MP BSI IMX145 sensor identified, made in a 90nm process. For both the iPhone back and front cameras, Tong Hsing (former ImPac) was identified as the ceramic packaging supplier and LG Innotek as the module vendor.
The Samsung Galaxy S II 8MP camera uses Samsung's S5K3H2Y sensor based on 1.4um BSI pixels in a 90nm process.
The Sony Ericsson S006 camera-phone module features a Sony-made 16MP array of 1.12um BSI pixels. The module uses piezoelectric AF and has quite densely packed optics:
sCMOS in 2 min
Photonics Online published a Youtube video of the CIS1021 sCMOS sensor presentation by Fairchild Imaging - BAE Systems:
Aptina Announces CSP Version of Native 1080p Sensor
Aptina announces the cost-effective 2.2um pixel AR0330CS sensor providing 3.2MP still image capture and HD video for consumer pocket digital video (DV), camcorder, web, sports action, video conferencing, and other camera applications.
The only difference from the AR0330 sensor announced a year ago appears to be a 6.28mm x 6.65mm CSP package and a slower frame rate of 30fps in 1080p mode, whereas last year's sensor had a ceramic package and was capable of 60fps at 1080p resolution.
The AR0330CS is currently sampling with mass production expected in Q2 CY2012.
Sunday, March 04, 2012
Casio Depth Sensing
Casio depth sensing patent application US20120050713 describes an idea quite similar to Microsoft et al.'s double-helix proposal. Unlike Microsoft, Casio uses two projectors 121 and 122 with diffractive patterns 131 and 132:
The projectors' light beams generate a distance-dependent rotating pattern similar to Microsoft's:
The angle between the dots gives depth information to the sensor 140.
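The angle-to-depth decoding can be illustrated with a toy model. The linear angle-vs-distance calibration and all numeric constants below are my own assumptions for the sketch, not values from the Casio filing:

```python
import math

# Toy model of the rotating-dot range cue (assumed linear mapping, not
# from the Casio application): the projected dot pair rotates with the
# target distance, so the measured angle maps back to depth.
ANGLE_AT_NEAR = 0.0         # radians at z_near (assumed calibration)
ANGLE_AT_FAR = math.pi / 2  # quarter turn across the range (assumed)
Z_NEAR, Z_FAR = 0.5, 3.0    # working range in metres (assumed)

def angle_between_dots(x1, y1, x2, y2) -> float:
    """Orientation of the line joining the two detected dots."""
    return math.atan2(y2 - y1, x2 - x1)

def depth_from_angle(theta: float) -> float:
    """Invert the assumed linear angle-vs-distance calibration."""
    frac = (theta - ANGLE_AT_NEAR) / (ANGLE_AT_FAR - ANGLE_AT_NEAR)
    return Z_NEAR + frac * (Z_FAR - Z_NEAR)

theta = angle_between_dots(100, 100, 110, 110)  # 45 degrees
print(round(depth_from_angle(theta), 3))        # 1.75, midpoint of range
```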
Omnivision Stresses Sensors, Apple Splits Them
Omnivision's patent application US20120038014 proposes a stress film to passivate the backside surface:
"For a BSI CMOS image sensor, dark currents may be a particular problem. A typical BSI CMOS image sensor has dark current levels that are over 100 times greater than that of a front side illuminated sensor.
A BSI image sensor's backside surface stress may affect its dark current level. The present application discloses utilizing structures and methods to adjust the stress on a CMOS image sensor's backside silicon surface, thereby reducing the dark current effect by facilitating the movement of photo generated charge carriers away from the backside surface.
Stress on a backside silicon surface may be adjusted by forming a stress loaded layer on the surface. A stress loaded layer may include materials such as metal, organic compounds, inorganic compounds, or otherwise. For example, the stress loaded layer may include a silicon oxide (SiO2) film, a silicon nitride (SiNx) film, a silicon oxynitride (SiOxNy) film, or a combination thereof."
Apple patent application US20120044328 proposes to split the image sensor into three: one luminance sensor and two chrominance ones:
"Typically, the luminance portion of a color image may have a greater influence on the overall image resolution than the chrominance portion. This effect can be at least partially attributed to the structure of the human eye, which includes a higher density of rods for sensing luminance than cones for sensing color.
While an image sensing device that emphasizes luminance over chrominance generally does not perceptibly compromise the resolution of the produced image, color information can be lost if the luminance and chrominance sensors are connected to separate optical lens trains, and a “blind” region of the luminance sensor is offset from the “blind” region of the chrominance sensor. One example of such a blind region can occur due to a foreground object occluding a background object. Further, the same foreground object may create the blind region for both the chrominance and luminance sensors, or the chrominance blind region created by one object may not completely overlap the luminance blind region created by a second object. In such situations, color information may be lost for the “blind” regions of the chrominance sensor, thereby compromising the resolution of the composite color image."
So Apple proposes to use two chrominance sensors and the following processing flow:
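The recombination step can be sketched as follows. The per-pixel selection logic is my own assumed illustration of the occlusion argument quoted above, not Apple's actual processing flow: full-resolution luma comes from the Y sensor, and each pixel's chroma is taken from whichever of the two chrominance sensors is not "blind" at that location.

```python
import numpy as np

# Sketch of the split-sensor recombination (assumed logic, not Apple's
# flow): luma from the Y sensor plus per-pixel chroma from whichever
# of the two chroma sensors is not occluded at that pixel.
def combine(y, chroma_a, chroma_b, blind_a):
    """y: HxW luma; chroma_a/b: HxWx2 (Cb, Cr); blind_a: HxW bool mask
    marking pixels where chroma sensor A is occluded."""
    chroma = np.where(blind_a[..., None], chroma_b, chroma_a)
    return np.concatenate([y[..., None], chroma], axis=-1)  # HxWx3 YCbCr

y = np.full((2, 2), 128)
a = np.zeros((2, 2, 2))          # sensor A chroma
b = np.ones((2, 2, 2)) * 7       # sensor B chroma
blind = np.array([[False, True], [False, False]])
out = combine(y, a, b, blind)
print(out[0, 1])  # chroma filled from sensor B at the occluded pixel
```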
Friday, March 02, 2012
Tessera DOC Gets into Mass Manufacturing Business
Business Wire, PR Newswire: Tessera subsidiary, DigitalOptics Corporation (DOC), acquires "certain assets" of Vista Point Technologies, a Tier One qualified camera module manufacturing business, from Flextronics.
DOC will pay approximately $23M in cash for "certain assets" of Flextronics's camera module business located in Zhuhai, China. The transaction, which is expected to close in Q3 2012, if not sooner, includes existing customer contracts and a lease to an approximately 135,000-square-foot facility. The transaction also includes an intellectual property assignment and license agreement, and a transition services agreement. DOC intends to offer employment to a portion of the existing work force of the Flextronics camera module business in Zhuhai, China. DOC anticipates that the business will have a capacity to manufacture approximately 50M camera module units per year.
Flextronics will retain a portion of Vista Point Technologies assets, but repurpose them and focus engineering talent toward "strengthening its ability to deliver manufacturing services".
"The Zhuhai Camera Module Business will allow us to drive rapid market introduction of DOC's next-generation technology in a manner that complements our existing collaborations with camera module makers. We believe our approach is the best way to address the requirements of Tier One OEM manufacturers, which require that camera modules be delivered through dual sourcing from high-volume manufacturing facilities," said Robert A. Young, Tessera CEO.
"This transaction is a critical step in our strategy of transforming DOC from an optical and image enhancement software and components business into a Tier One qualified, vertically integrated supplier of next-generation camera modules to the $9-billion market for mobile cameras," Young continued. "In parallel, we continue to have active discussions with multiple Tier One OEM manufacturers of mobile phones regarding our MEMS autofocus product, and remain on track to obtain a design win in the first half of 2012 and to begin high-volume manufacturing in the fourth quarter of 2012," said Young.
"These assets will enable DigitalOptics Corporation to significantly increase sales of the imaging technologies we've acquired and developed over the past five years. Our strategy is to combine our breakthrough autofocus solutions with our other proprietary technologies so that DOC will become a leading supplier of integrated camera modules in the mobile phone market," said Bob Roohparvar, president of DigitalOptics Corporation.
DOC has been developing its capacity to oversee the high-volume manufacturing operations required by mobile phone makers. DOC's steps in the past year have included hiring more than a dozen executives and managers who have experience in engineering scale-up as well as in manufacturing at similar facilities.
Thanks to SF for sending me the news!
Nemotek Presents Most Advanced VGA Wafer-Level Camera
Sensors Magazine: Morocco-based Nemotek Technologie debuts what it says is the world's first two-element wafer-level camera, the Exiguus H12-A2. The Exiguus H12-A2 features high resolution and less than 0.5% overall distortion, all in a 1/10-inch form factor.
The Exiguus H12-A2 is reflowable, and offers sophisticated camera functions, such as auto exposure control, auto white balance, black level calibration, noise reduction, flicker detection and avoidance, color correction and saturation and lens shading correction.
"Today we unveil the first camera that successfully incorporates a two element wafer-lens and is technically more complex while providing better resolution than any current wafer-level offering on the market to date," said Hatim Limati, VP of sales and marketing for Nemotek Technologie. "The Exiguus H12-A2 produces extraordinarily clear, sharp pictures which make it the perfect choice for a wide range of applications. With this new achievement, we are able to further showcase our position as the industry's leader in innovation and design."
In addition, Nemotek marks its debut in the High End VGA market with another new camera based on a 720p High End sensor.
Samples of Nemotek's Exiguus H12-A2 and its High End VGA camera are currently available.
Thursday, March 01, 2012
IC Insights: DSC Market Stable, Camera Phones Growing
IC Insights' report "IC Market Drivers" gives an outlook of DSC vs Camera Phone markets:
DSC unit shipments increased by a compound average growth rate (CAGR) of slightly more than 37% in the 2000-2005 period, but slowed to 9% per year between 2005 and 2010, and are projected to rise by merely 2.1% annually from 2010 through 2015.
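The quoted growth rates are compound annual figures, which can be checked with the standard CAGR formula. The shipment numbers below are purely illustrative, not IC Insights' data:

```python
def cagr(start_units: float, end_units: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    takes start_units to end_units over the given number of years."""
    return (end_units / start_units) ** (1.0 / years) - 1.0

# Illustrative check (made-up baseline, not from the report): any
# starting volume grown at 9% per year for 5 years recovers the
# quoted 9% CAGR when inverted.
rate = cagr(120e6, 120e6 * 1.09**5, 5)
print(f"{rate:.1%}")  # 9.0%
```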
Eric Fossum Talks About Sensor Technology Trends
The Image Sensors 2012 Conference published an interview with Eric Fossum answering questions on sensor technology. Some quotes:
Q: What new disruptive technologies do you see on the horizon?
A: Well of course my own pet project - Quanta Image Sensing (QIS)- could become a major disruption. I don't expect lightning to strike twice but as I like to say, you can't win the lottery if you don't buy a ticket. Computational imaging is getting interesting but it might be a few years before Moore's Law catches up to the aspirations of computational imaging and enables its full potential. I think computational imaging combined with the QIS could become a major paradigm shift but it is still early in that game. I think use of non-silicon materials could be disruptive if any of them work out. But, silicon is an amazing material and manufacturing and noise issues with non-silicon materials are non-trivial. Meanwhile, the rate of continuous improvement is so large that emerging technologies have to mature rapidly to have enough compelling advantage that they can grab a toehold in the marketplace once they get there. To that end, even a few years of continuous improvement can look disruptive to the user community.