CVG: Xbox 360 hardware VP, Ilan Spillinger said: "PrimeSense has delivered an important component to the technology, helping us deliver revolutionary controller-free entertainment experiences in the living room".
Update: Slashgear publishes an extended article on Primesense technology, including the official PR.
Update #2: Official PR appears on Business Wire site.
Primesense published a brief system spec:
Wednesday, March 31, 2010
Kodak Officially Announces W-RGB CCDs for HD Video
An official PR on Kodak site announces that it is deploying the TRUESENSE W-RGB Color Filter on the 1080p/60fps format KODAK KAI-02150 CCD, which is said to enable "the capture of color images with the sensitivity typically associated with monochrome cameras... users can realize a 2x to 4x increase in light sensitivity (from one to two photographic stops) compared to a standard Bayer color sensor". Kodak also sent me a pdf file with high-resolution comparison pictures that support this claim (click on pictures to expand):
Kodak is also expanding its HD image sensor portfolio with addition of a new 1/2-inch 720p/130fps format CCD based on 5.5um pixels - also available with the TRUESENSE Color Filter Pattern. In addition, both the 720p and 1080p image sensors will be available in a smaller, CLCC package configuration to enable development of more compact camera designs.
To assist manufacturers in modifying their camera software to work with this new color filter pattern, Kodak has developed a set of tools that document the new image path used by sensors that deploy this technology. Providing both an overview as well as a practical implementation of the software algorithms used, these tools can be used not only to modify an existing image path to work with the new color filter pattern data, but also to compare the results of that implementation to a reference image path provided by Kodak.
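Kodak's reference image path is not public here, but the general idea of processing a panchromatic (W) color filter pattern can be illustrated with a minimal sketch: use the dense, high-sensitivity W samples as a full-resolution luminance channel and the sparser R/G/B samples as a low-resolution color estimate, then recombine. The pattern handling and filter sizes below are illustrative assumptions, not Kodak's algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def reconstruct_rgbw(raw, pattern):
    """raw: 2-D mosaic of intensities; pattern: same-shape array of 'R','G','B','W'."""
    def interp(mask, size):
        # average the available samples of one channel over a small window
        return uniform_filter(raw * mask, size) / np.maximum(uniform_filter(mask, size), 1e-6)

    w_mask = (pattern == 'W').astype(float)
    luma = interp(w_mask, 3)                       # full-resolution detail from the W channel
    rgb = np.stack([interp((pattern == c).astype(float), 7) for c in 'RGB'], axis=-1)
    chroma = rgb / np.maximum(rgb.sum(axis=-1, keepdims=True), 1e-6)  # color ratios only
    return luma[..., None] * chroma * 3.0          # re-apply W detail to the color ratios
```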
Engineering Grade devices of the KAI-02150 with the KODAK TRUESENSE Color Filter Pattern are available today, and Standard Grade devices are scheduled to be available in Q2, 2010. The KAI-01150 will be available in monochrome, Bayer Color, and KODAK TRUESENSE Color Filter Pattern configurations. Engineering Grade devices of the KAI-01150 are scheduled to be available beginning in Q4, 2010, with Standard Grade devices scheduled to be available Q1, 2011.
Tuesday, March 30, 2010
Imatest IS Edition to Work with Toshiba Sensors
PR Newswire: Until now, Imatest Image Sensor (IS) Edition has only been interfaced with sensors from Aptina and OmniVision. Now Toshiba and Imatest have teamed up to enhance Imatest's IS Edition for configuring Toshiba image sensors.
Nemotek Expands its WLC and WLO Production Base
PR Newswire: Morocco-based Tessera's licensee Nemotek has placed a repeat order for EV Group's bonding and UV nanoimprint lithography (UV-NIL) systems – the EVG520IS and IQ Aligner. EVG's IQ Aligner is said to be the only industry-proven, high-volume manufacturing solution for wafer lens molding and stacking available today.
Monday, March 29, 2010
Kodak Announces W-RGB CCDs (Again?)
Kodak Plugged-In blog has a post about advantages of CCDs with W-RGB filters. The post also mentions that two such CCDs were announced today: 1080p KAI-02150 and 720p KAI-01150 (no formal PR, as of now). At least one of them was already demoed about half a year ago.
Saturday, March 27, 2010
More (of the same) about Invisage
IEEE Spectrum published a general article on the potential of quantum dots for IR sensing, written by Invisage CTO Ted Sargent. Somehow the article does not mention Invisage. In addition, Spectrum published a video report from the DEMO event explaining the Invisage concept and showing its 1.1um pixel demo, including the demo board:
Another Youtube video shows Jess Lee demonstrating a wafer with sensors and a sputtering machine to make the sensing layer:
Update: Optics.org published an article on Invisage titled "Timer set for silicon sensor switchover".
Friday, March 26, 2010
Omnivision Diversifies
PR Newswire: Omnivision acquired Aurora Systems, a privately-held company in California. Aurora is a supplier of LCoS (Liquid Crystal on Silicon) devices for use in mobile projection applications and high definition home theater projection systems.
OmniVision acquired all outstanding shares of Aurora for approximately $5.0M. Of this amount, $0.5M has been placed in escrow for a period of up to one year for purposes of compensating OmniVision for certain specified damages that it may incur.
"We are very excited by the growing popularity of image projection systems in consumer devices. With the acquisition of Aurora, we expect to capitalize on this trend in the emerging video-centric consumer market, expand our product portfolio and offer even more innovative and comprehensive imaging solutions to our customers," said Shaw Hong, president and chief executive officer of OmniVision.
Update: Incidentally, in May 2009 Micron too acquired a maker of LCoS displays, DisplayTech.
Thursday, March 25, 2010
Invisage Shows 1.1um B&W Pixel
As someone mentioned in the comments to the Invisage patents post, there is a 6+ minute Invisage video from the DEMO event showing its QuantumFilm sensor working, albeit in B&W only. In the video Jess Lee says that the pixel size is 1.1um. Here it is:
NocturnalVision Start-Up Presents at Image Sensors Europe Conference
CNET publishes notes from the Image Sensors Europe conference being held in London these days. One of the articles talks about a Swedish start-up called NocturnalVision, which is developing spatio-temporal filtering of the video stream to improve low-light performance. Henrik Malm, a professor at Lund University in Sweden and co-founder of the start-up, presented his technique. In a nutshell, it analyzes what is going on across each frame of an image (the spatial component) and what is going on from one frame to the next (the temporal component) to intelligently direct the noise reduction process.
This is the same technique that New Scientist talked about two months ago. So far Toyota has spent 6M SEK supporting the research and has rights to the technology for automotive applications, Malm said, while NocturnalVision has the rights elsewhere. The company is seeking venture capital and is also in talks with Sony Ericsson, Malm said.
Because of the multi-frame nature of the noise reduction algorithm, there is a delay of about six frames before the image processing can kick in. Also, the algorithm is quite resource-hungry: the company uses an Nvidia GeForce 8800 GTX for parallel processing and is still limited to 6fps for a VGA image, but Malm expects to improve this to 30fps with new hardware.
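NocturnalVision's actual algorithm is not published in the CNET piece, but the spatio-temporal idea can be sketched in a few lines: accumulate several frames where the scene appears static (the temporal part) and fall back to spatial smoothing where it does not. The six-frame window, motion threshold and filter size below are illustrative assumptions only:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def denoise_stream(frames, n_temporal=6, motion_thresh=10.0, spatial_size=3):
    """frames: iterable of 2-D float arrays; yields denoised frames one by one."""
    history = []
    for frame in frames:
        history.append(frame)
        history = history[-n_temporal:]        # ~6-frame delay before the full effect kicks in
        stack = np.stack(history)
        # temporal weight: down-weight past frames that differ strongly from the current one
        w = np.exp(-((stack - frame) / motion_thresh) ** 2)
        temporal = (w * stack).sum(0) / w.sum(0)
        spatial = uniform_filter(temporal, spatial_size)
        # fall back to spatial smoothing where there is little temporal support (motion)
        alpha = np.clip(w.sum(0) / n_temporal, 0.0, 1.0)
        yield alpha * temporal + (1 - alpha) * spatial
```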
Another CNET note from the conference talks about efforts to standardize camera phone image quality and DxO Labs' work in that direction.
PTC and RTS, Tutorial Goes On
Albert Theuwissen continues his excellent series of PTC tutorials, now adding RTS noise. This becomes the biggest image sensor tutorial available for free on the net!
Wednesday, March 24, 2010
Tessera Licenses FotoNation FaceTracker Solution to Samsung
Business Wire: Tessera announced that Samsung Electronics has licensed Tessera’s FotoNation FaceTracker, which provides face detection and tracking capabilities. Samsung Electronics also licensed Tessera’s FotoNation SmileCheck and BlinkCheck extension modules, which indicate when faces are smiling or blinking.
Tessera recorded an initial fee from Samsung under this license agreement in the fourth quarter of 2009. The timing between signing the license agreement and Tessera’s receipt of initial royalty payments under it is approximately 12-15 months.
Altera and Apical Demo FPGA-based WDR ISP for Aptina MT9M033
Business Wire: Altera and Apical announced HD WDR ISP solution for video-surveillance cameras. Altera is demonstrating the solution with Aptina MT9M033 WDR sensor at the International Security Conference (ISC) West Expo on March 24-26 in Las Vegas.
The sensor processing design implemented in the FPGA is provided by Apical. The IP from Apical includes the full ISP, which performs auto-exposure, auto-gain, auto-white balancing, and 2D or 3D noise reduction, and also incorporates Apical's iridix local tone mapping engine. The Cyclone III and Cyclone IV families of FPGAs perform all of these functions at high clock rates, with efficient logic utilization and low power consumption.
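Apical's iridix core itself is proprietary, but the basic principle of local tone mapping can be illustrated with a minimal sketch: each pixel is compressed relative to a blurred estimate of its local brightness, so shadows are lifted without clipping highlights. The Gaussian scale and strength parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_tone_map(wdr, sigma=25.0, strength=0.7):
    """wdr: linear WDR image normalized to [0, 1]."""
    local = gaussian_filter(wdr, sigma) + 1e-6   # smooth estimate of local brightness
    gain = local ** (-strength)                  # stronger lift where the neighborhood is dark
    return np.clip(wdr * gain, 0.0, 1.0)
```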
A nice video showing the WDR platform at work is here.
Market Research on Camera Module Industry, 2009-2010
ReportLinker announces "Global and China CMOS Camera Module Industry Report, 2009-2010" from Research In China portal. Some interesting quotes:
"FUJINON ... enjoys over 50% shares in 3-megapixel (or above) mobile phone camera market."
"Both of Largan and Asia Optical are supported by Japanese manufacturers in technology. Particularly, Japan's Sekon, Olympus, Ricoh, Nikon and Sony have given technical guidance to Asia Optical or established joint ventures with them. Largan is closely related to smart camera manufacturers; the three smart phone giants APPLE, HTC and RIM are Largan’s loyal clients."
"Before the end of 2007, Toshiba entrusted most of camera module business to other manufacturers. After 2008, it has always completed its camera module business in the semiconductor plant in Iwate.
Sharp is a big producer of CCD camera module, with rich experience and monthly output of about 9.8 million units."
"FUJINON ... enjoys over 50% shares in 3-megapixel (or above) mobile phone camera market."
"Both of Largan and Asia Optical are supported by Japanese manufacturers in technology. Particularly, Japan's Sekon, Olympus, Ricoh, Nikon and Sony have given technical guidance to Asia Optical or established joint ventures with them. Largan is closely related to smart camera manufacturers; the three smart phone giants APPLE, HTC and RIM are Largan’s loyal clients."
"Before the end of 2007, Toshiba entrusted most of camera module business to other manufacturers. After 2008, it has always completed its camera module business in the semiconductor plant in Iwate.
Sharp is a big producer of CCD camera module, with rich experience and monthly output of about 9.8 million units."
Pixelplus Reports Q4 2009 Results
PR Newswire: Pixelplus revenue for the Q4 2009 was 6.0 billion Korean won (US$5.1 million), compared to 3.6 billion Korean won (US$3.1 million) in the third quarter of fiscal 2009, and 4.5 billion Korean won (US$3.9 million) in the fourth quarter of fiscal 2008.
Net income in the fourth quarter of fiscal 2009 was 2.0 billion Korean won (US$1.7 million), compared to a net income of 0.17 billion Korean won (US$0.15 million), in the third quarter of fiscal 2009, and a net loss of 5.8 billion Korean won (US$5.0 million), in the fourth quarter of fiscal 2008.
The company sold roughly 4.0 million image sensors in the fourth quarter of 2009, which represented an increase of about 1.5 million from its sale of around 2.5 million units in the third quarter of 2009. Separately, the company furnished approximately 0.1 million image sensors arising from its supply of services to a leading Japanese module maker in the fourth quarter of 2009, which represented a decrease of about 0.2 million units from its supply of around 0.3 million units in the third quarter of 2009.
Gross margin for the fourth quarter of fiscal 2009 was 44.5%, compared to 32.2% in the third quarter of fiscal 2009. The company's increase in gross margin was mainly due to its sale of high margin-driven products and excess inventory accrued in the fourth quarter of fiscal 2008.
Himax Offers Wafer Level Optics
As Himax mentioned in its 2009 earnings release, it has started offering wafer level optics products, similar to Heptagon, Anteryon and Tessera. The company's WLO page lists VGA and 2MP wafer level lenses as the standard offerings.
With Himax's claim that its WLO is "well-received by a number of the world's first-tier CMOS image sensor and camera module makers", it looks like the competition in wafer-scale optics is intensifying.
Tuesday, March 23, 2010
Altasens and Apical Announce HD Video WDR Sensor
PR Newswire: AltaSens and Apical, a provider of advanced hardware IP cores and software libraries for WDR imaging, announce the development of a 1/3-inch 1080p60 HD WDR sensor.
AltaSens' A3372E3-4T WDR sensor uses patent-pending dual exposures in a single frame to create more than 100dB of wide dynamic range for 1080p60 HD imaging. Each exposure is independently adjustable to the light levels in the scene. Single-frame dual exposures eliminate the need for a dedicated frame buffer in the camera and provide video devoid of motion artifacts. The PR also pitches uncompromised low-light performance of the sensor.
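The PR does not describe how the two exposures are combined; a minimal sketch of a generic dual-exposure merge, assuming the short exposure is a known factor shorter than the long one, could look like this:

```python
import numpy as np

def merge_dual_exposure(long_exp, short_exp, ratio=16.0, sat_level=0.9):
    """long_exp, short_exp: linear images in [0, 1]; ratio: long/short exposure time."""
    long_exp = np.asarray(long_exp, float)
    short_exp = np.asarray(short_exp, float) * ratio   # bring to a common radiometric scale
    # blend toward the short exposure where the long one approaches saturation
    w = np.clip((long_exp - sat_level) / (1.0 - sat_level), 0.0, 1.0)
    return (1 - w) * long_exp + w * short_exp
```

With, say, a hypothetical 16:1 exposure ratio such a merge adds roughly 24 dB on top of the sensor's native dynamic range.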
AltaSens will be demonstrating live WDR 1080p60 HD video streams during the 2010 ISC West Security Show in Las Vegas on March 24-26.
Invisage Technology in Patents
With Invisage technology making so much noise recently, I tried to understand what they have really achieved by looking at the patent applications. So far I was able to find only two of them published.
US2009152664, published only at the European PO site (but not at the USPTO site), is an impressive document: it has 218 pages, 99 of them with figures. The same or a similar patent, filed as PCT WO/2008/131313 (nice number), has an impressive list of 25 inventors, including Keith Fife, and appears to be a combination of a few US provisional applications. Another application is US20100044676, of a more modest size of 46 pages, with many performance graphs.
The two applications have a lot of data and can possibly shed some light on the sensor operation and performance. Here is what I was able to grasp in the limited time I had:
First, the pixel schematic seems to be a regular 3T structure:
Invisage would need Keith Fife's Smalcamera magic to reduce kTC noise by feedback. Assuming the pixel layout in the application is drawn in a TSMC 0.11um process with a 0.11um contact size, the pixel size looks to be close to 1um, a quite competitive number:
Pixel sharing is also mentioned in the application:
The next figure is supposed to show that the Invisage team knows how to control the photoconductor time constant. As a byproduct, the photoconductive gain should also change with the time constant, but this is not shown in the picture:
The carrier lifetime depends on illumination, suggesting there is some non-linearity in photoconductive gain:
It appears that a QE in the vicinity of 60% was achieved for PbS-based quantum dot sensors:
For PbSe quantum dot devices the QE reaches a quite respectable 70%:
The applications also talk about CuGaSe2, CuInSe2 and Cu(GaIn)Se2 quantum dot materials. Multi-layered Foveon-like sensors are also described in great detail - this can be a real advantage if Invisage is able to make such sensors:
All in all, my impression is that Invisage did great research and design work to make use of its quantum dot photoconductor idea. The performance numbers given in the patents look quite promising and competitive.
Having said all this, I still have some reservations about the suitability of the photoconductor principle for consumer photography applications. The problem is that, in comparison with photodiodes, photoconductors have an additional and potentially significant noise source: the recombination process. Photodiodes are simple devices in that respect: once the photocarriers are generated, they are pulled apart by the electric field, and with this the detection process ends. Photoconductors, on the other hand, are far more complex: the photogenerated carriers just start a long and complex process that ends with their eventual recombination. Recombination, being a random process, adds its own component to the shot noise of the sensor.
Now, assuming the recombination noise is about the same as the photogeneration shot noise, photoconductors have an innate disadvantage in the SNR10 figure: for the same QE, color crosstalk and very low dark noise, photoconductors need roughly twice the light to reach SNR10. For that reason alone I doubt photoconductive devices can compete with photodiode ones - not in consumer imaging, anyway.
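To make the factor of two explicit, under the stated assumption that the recombination noise variance equals the photogeneration shot noise variance (the two add in quadrature):

```python
import math

def snr_photodiode(n_electrons):        # shot noise only
    return n_electrons / math.sqrt(n_electrons)       # = sqrt(N)

def snr_photoconductor(n_electrons):    # shot noise + equal recombination noise
    return n_electrons / math.sqrt(2 * n_electrons)   # = sqrt(N / 2)

print(snr_photodiode(100))      # 10.0 -> a photodiode reaches SNR=10 at 100 e-
print(snr_photoconductor(200))  # 10.0 -> a photoconductor needs 200 e-, i.e. twice the light
```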
Tessera Licenses its EDoF to AzureWave
Business Wire: Tessera announced that it has licensed its OptiML EDoF technology to Taiwan-based camera module maker AzureWave. Tessera recorded an initial fee from AzureWave under this license agreement in Q4 2009. The timing between signing an OptiML license agreement and Tessera’s receipt of initial royalty payments under the agreement is approximately 12-15 months.
Monday, March 22, 2010
InVisage Demos QuantumFilm Sensor
EETimes, Venture Beat, WSJ, NY Times, CNET, Invisage PR: Venture-backed start-up Invisage announced QuantumFilm, promising to deliver 4x higher performance, 2x higher dynamic range and professional camera features not yet found in mobile image sensors:
QuantumFilm covers 100% of each pixel. The material is applied in liquid form to the top of a spinning disk, then it is annealed, or baked. It is a lot like adding a layer of photoresist to a chip wafer, and it uses the same equipment. The wafers are standard 110nm wafers produced by TSMC.
The first samples of QuantumFilm camera chips will be available in the fourth quarter, and products using them will likely launch next year. QuantumFilm is based on the research of Invisage CTO Ted Sargent, a professor of electrical and computer engineering at the University of Toronto. He worked on the technology for several years at the University of Toronto, then secured the rights to it and founded InVisage Technologies in October 2006.
Tetsuo Omori, a TSR analyst, estimates that the image sensor companies spend about $1 billion for each new generation of sensor technology, and each time they get a single-digit percentage increase in performance. A four-fold improvement is unheard of, and so Omori thinks QuantumFilm will change the competitive landscape in the image sensor market, which had $5 billion in revenue in 2009.
Ken Salsman, the director of new technologies at Aptina, conceded that silicon-based sensors had proved tough to advance. But he said that Aptina had managed to improve its technology through some novel techniques, and that InVisage might be “in for a very rude surprise.”
InVisage has 30 employees and has raised more than $30M from RockPort Capital, Charles River Ventures, InterWest Partners and OnPoint Technologies. Its technology is protected by 21 patents and patents pending. Invisage CEO, Jess Lee, used to be Omnivision's VP of the Mainstream Business Unit for four years.
Saturday, March 20, 2010
All BSI Symposium Presentations are On-Line Now
Now all 7 presentations from the IISW 2009 Symposium on BSI are on-line. The last one published was by Bedabrata Pain.
Thanks to Eric Fossum for the update on this!
Thursday, March 18, 2010
OmniVision to Increase Contract Orders
Taiwan Economic News: Omnivision CIS modules are in heavy demand. The company has placed its orders for the 0.18um, 0.13um, 0.11um and 65nm processes with TSMC, and is going to increase contract orders for downstream processing with Taiwanese suppliers starting the second quarter of this year. Taiwanese semiconductor testing and packaging companies, such as VisEra, Xintec, Siliconware and King Yuan, are likely to benefit from OmniVision.
Wednesday, March 17, 2010
CCD Nobel Prize - Commentary and Color by Dana Seccombe
Dana Seccombe has an interesting comment on CCD Nobel Prize post. I'm re-posting it on the front page, thanks Dana!
Nobel Prize - Commentary and Color
As it happened, I worked as a cooperative student from MIT at Bell Laboratories in 1968 (Holmdel), and 1969 and 1970 (Murray Hill). My roommate and fraternity brother, Thomas W. Liu, worked in Andy Bobeck’s group, which worked on magnetic bubbles—thought to be a replacement for plated wire memories and core memories. I occasionally attended presentations in the Murray Hill auditorium discussing progress and status. These informal meetings were typically attended by 15 to 40 people. Bobeck and his group were an outgoing and gregarious group, and enthusiastic about the prospects for magnetic bubbles—which could store data in a non-volatile way and could also perform some logic (bubbles which interacted at selected points could perform NAND and NOR functions, for example).
I also attended a similar BTL internal meeting, presenting Boyle and Smith’s recent invention of the CCD. This meeting was the first discussion of CCD’s outside the immediate management structure of Boyle and Smith, and was announcing the discovery of the phenomenon/structure of CCD’s. Boyle and Smith did, indeed, say in the meeting that they had been threatened by Bobeck’s discoveries, and weren’t about to let the magnetic group discoveries go unchallenged. Boyle and Smith went on to say that their CCD approach offered three unique advantages:
- Memory applications
- Imaging applications
- Data processing and logic applications
At the time, the technologies used 5 micron geometries (though most people used the alternative measure, .2 mils, to designate line sizes). It was unclear what the relative future of either technology would be, and both proponents wanted to appear to be developing the superior technology. The initial impetus for both technologies, though, was clearly memory applications—and bubbles appeared, momentarily, superior for that application because it was both nonvolatile and radiation hardened. However, it was recognized and emphasized in Boyle and Smith’s talk that CCD’s had the additional potential of image acquisition and complex analog and digital data processing.
By chance, in the summer of 1971, I worked at RCA David Sarnoff Research Laboratories, in Princeton, New Jersey. I worked for Paul Weimer, an inventor of the Silicon Vidicon, and a highly regarded individual at RCA. His lab was solely devoted to successor technologies to the silicon vidicon—including CdSe sensors that were evaporatively deposited in complex arrays by 500 line shadow masks—and CCD and bucket brigade sensors (mostly silicon). As I arrived at that lab, they had already developed CCD line and array image sensors. In Weimer’s group, I worked for Mike Kovac who almost single handedly fabricated masks, processed wafers, did circuit design, and tested the products. There was another group (not reporting to Weimer) also working on CCD’s, with two key contributors, Walter Kosonocky and Jim Carnes. At the end of the summer, Carnes departed for Fairchild where he was to have a responsible position on CCD’s. It is clear to me that Weimer and Kovac definitely thought CCD’s were for imaging. We even made a camera using an area sensor that was the size of a cigarette pack, whose output was fed to a “Z axis modulated ‘scope” to display the output. I still have some of the output photos we made.
My point in bringing this history up is to point out that there were many others who were aware of the potential of CCD’s as image sensors, and who had made quite a bit of progress too. The invention of the frame buffer/image transfer section for CCD imagers was an important, but not foundational, part of the technology. It is important for many sensors—particularly where there is no other form of shutter—but there are plenty of CCD sensors that don’t use the frame buffer.
In the early days, perhaps the biggest limiter was smear due to some of the charge being “left behind”, charging and discharging traps. Early sensors would transfer perhaps 99% of their charge per transfer. Today sensors transfer in excess of 99.99% of their charge.
RCA Sarnoff was, however, in financial trouble because of their unsuccessful entry into the computer business, and RCA was beginning to lay off people. As I recall, Weimer’s people were given lower priority, and work on the CCD suffered greatly.
The usefulness of CCD’s for imaging took over 30 years of advances of literally 100’s of people. To be truly useful, CCD’s required at least the following major advances:
- Silicon gate processing technology (to reduce the “gap” between electrodes that was present in metal gate processes)
- Radically scaled geometries, including gate oxide thickness (to increase charge), and electrode geometries (to get large enough arrays)
- Buried channels to reduce surface state induced lag in transfer, and ion implantation to make those channels
- Consumer demand for still and motion imaging—enabled by a host of still other technologies including: the advanced microprocessor, low cost disk and RAM, related software, design tools, etc.
CCD development was similar. Many things were incrementally developed.
In my case, work on CCD’s affected my later career at hp, where I eventually ran hp’s Inkjet Supplies Business Unit. Since my goal was to sell more ink, I was interested in developing applications that use more ink. Photography uses perhaps 150% coverage of ink on a page (some each of cyan, magenta, yellow, and black), whereas text is typically 5% dense. I therefore subsidized hp laboratories in developing digital cameras and applications. Later Canon and Epson, with similar printing interests, also invested in digital cameras. Other camera manufacturers could see that cameras would soon all go digital. Hence, the availability of high resolution digital printing drove a huge interest in digital image capture—and helped to drive investment in CCD’s for consumer applications.
As part of the invention tapestry, while at Bell Laboratories, I did my MIT masters thesis on characterizing upconverting phosphors. The goal was to develop a blue light source from phosphors irradiated by high intensity infra-red light. Rare earth oxides (typically yttrium) were doped with other rare earth oxides (for example, erbium or thulium, together with another oxide, typically ytterbium). Infrared light pumped the ytterbium to an intermediate level, where energy was transferred non-radiatively to an adjacent thulium or erbium ion. That ion was subsequently pumped a second or third time, to higher levels, ultimately resulting in a radiative decay in the visible (blue or green), though with very low efficiency (in the case of blue, about .1%). Years later, though, these same phosphors are being used to produce white light from UV and visible stimulus. While completely impractical in 1970, improved IR and UV pump sources in conjunction with phosphors in 2010 are making it possible to generate color balanced white solid state lights.
A similar scenario happened with regard to flash memory. Once regarded as an impractical backwater (slower and more expensive than DRAM), flash memory has, 40 years after its initial development, become a paradigm shifting technology. However, it took developments such as radical scaling, series cell memories, multilevel storage, and an explosion in huge, portable applications to make a difference.
Magnetics eventually lost out to electronics, even though it was non-volatile, because it couldn’t scale gracefully. As geometries decreased, the energy density in the bubble wall would have to increase above what is available with realizable materials. Also, generating the required in plane rotating magnetic field was power hungry and bulky. However, today “spintronics” –magnetic effects on an atomic level—may be the basis for the resurgence of magnetism.
Who should get the Nobel Prize? It is up to the commission—and let’s recognize that it won’t ever be without controversy. Some (Nobel Peace Prize) winners are reviled as murderers in some countries, while hailed in others. Some scientists received the prize for a single piece of insight. Others worked doggedly for years plowing untilled ground, ultimately developing accurate and groundbreaking, though controversial, theories, and then going for additional years ignored or scorned before ultimately being recognized (e.g., Einstein, Curie).
In the case of CCD’s, much of the insight for how a CCD works was very well understood in 1970. The use of capacitors “in deep depletion” to test doping and trapping, and to detect light, had been standard procedure for years to characterize semiconductors. In fact, in my first summer at Murray Hill, under Dan Rode (who reported to John Copeland, the inventor of the Copeland inverse profiler—which used this effect to profile doping densities in semiconductors, including GaAs), I, as a student, developed a test setup using hp plotters to do C-V plots. It was well known that charge under the capacitors could be moved (that is how MOS transistors work)—but Boyle and Smith had a flash of insight that a series combination of such transistors could result in a useful, if transient, memory device. That insight wouldn’t have developed if it weren’t for the advent of the bubble memory—something that ultimately had no direct impact.
The fact that it took at least 30 years for CCD’s to become a major factor in the electronic world attests to the fact that many individuals made many important contributions to remove limitations, and provide an environment where CCD’s could contribute. Many of those people have some legitimate claim to a part of the success of CCD’s, though perhaps not rising to the level of the, probably overemphasized, Nobel Prize.
See also the Boyle and Smith video from 1978—admitting that it was a “flash of genius” type discovery: http://www2.alcatel-lucent.com/blog/2009/10/2009-nobel-prize-in-physics-boyle-and-smith-present-the-ccd-in-this-1978-video/.
The video also maintains, as I remember, that from the beginning Boyle and Smith envisioned that CCD's could be used for imaging and digital and analog data processing (typically through the use of tapped delay lines).
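A quick numeric illustration of the charge transfer efficiency point Dana makes above, for a hypothetical 1000-transfer register:

```python
cte_early, cte_today, n_transfers = 0.99, 0.9999, 1000
print(cte_early ** n_transfers)   # ~4e-5: virtually all of the charge is smeared away
print(cte_today ** n_transfers)   # ~0.90: about 90% of the charge arrives intact
```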
Tuesday, March 16, 2010
Ambarella Announces A5s IP Camera Platform
Business Wire: Ambarella will demonstrate the Linux-based A5s IP Camera Reference Platform at invitation-only events during the International Security Conference & Expo (ISC West 2010) to be held in Las Vegas, on March 23-25.
Here is the feature list of the platform:
- Multi-streaming HD 1080p30 H.264 encode plus simultaneous VGAp30 H.264 for thumbview and 5 MPixel (MP) JPEG for high-resolution
- ISP with 3D Motion Compensated Temporal Filtering (MCTF) and per-pixel Local Exposure Correction (LEC)
- ISP support of up to 32 MP resolution, 8- to 14-bit pixel processing, and 240 MP per-second capture, equivalent to 8 MP at 30 frames per second, enabling Digital Pan/Tilt/Zoom and oversampling to reduce aliasing
- 528 MHz ARM1136J-S CPU with AES/3DES crypto engine
- Rich set of peripherals: 16-bit LPDDR2/DDR2/DDR3, NAND, SLVDS/parallel sensor i/f, BT.656 in/out, analog SD/HD out, HDMI, Ethernet MAC, USB 2.0, SSI/SPI, I2S audio, IDC, UART, RTC, WDT, GPIO, iris/AF motor stepper, and SDIO for SD Card and WiFi/3G
- Low-power 45nm CMOS technology and SoC integration. A5s consumes less than 1 Watt (including DDR) at full HD 1080p30, allowing an IP camera design to use as little as 2 Watts.
- Wide selection of sensors supported, including Aptina MT9M033 (1.3 MP WDR), Aptina MT9J003 (10 MP), OmniVision OV2715 (2 MP), and Sony IMX036 (3 MP).
Monday, March 15, 2010
Security Sensor Market Overview
asmag.com published what appears to be a set of quotes about the image sensor market for security and surveillance applications. A few of the quotes are below:
The surveillance market experienced the recession firsthand. "Image sensor revenue across all surveillance cameras will decline from more than US$700 million in 2008 to $435 million in 2013," In-Stat said in a prepared statement.
Pixim estimated 1/3-inch sensors account for 90 percent of all cameras shipped, making it the de facto sensor format. Pixim estimates 1/2-inch image sensors have less than 1-percent share in the security market.
Pixelplus Roadshow Presentations
Pixelplus Roadshow presentations from October 2009 and March 2009 (save the file with a .pptx extension) shed some light on the company's status, partners and future plans. This is the first time I have seen Pixelplus openly talk about UMC being its foundry, Samsung being one of its customers and Sharp being a royalty-paying development partner.
Thursday, March 11, 2010
Pixel Defects and PTC
Albert Theuwissen continues his great series of PTC articles. The latest article talks about pixel defects and how they distort the FPN vs. time curve.
Kodak CCD Inside Pentax Medium Format DSLR
Kodak announced that its 40MP KAF-40000 CCD is used in the newly launched Pentax 645D medium format DSLR. The 40MP CCD is based on 6um pixels and is said to have a 1.7 times larger area than 35mm DSLR sensors.
LensVector Features in San Jose Business Journal
San Jose Business Journal publishes an article about LensVector plans. LensVector is already in production with 30,000 square feet of manufacturing space in Mountain View and Sunnyvale, said CEO Derek Proudian, who co-founded the company with CTO Tigran Galstian, a professor at Laval University in Quebec City. LensVector’s Silicon Valley facilities are designed to scale to 5 million units per month, or 60 million per year. Proudian said the demand could be 10 or 20 times that, and the company is considering a second production line. Expansion in Silicon Valley is one of several options it is considering.
The company hopes to go into mass production in the second half of this year. Though no customer announcements have been made, Samsung is working with LensVector.
Wednesday, March 10, 2010
IISW 2009 BSI Symposium Presentations On-Line
6 out of the 7 presentations are now posted for the IISW 2009 Symposium on BSI. It is great for the image sensor community that the presenters and their organizations agreed to publicly post the presentations.
Thanks to Eric Fossum for making it possible!
Friday, March 05, 2010
PMD 3D USB-Camera Demo
PMD Technologies published a YouTube demo of its USB-powered 3D ToF camera:
Update: There is another USB-powered ToF camera from Mesa Imaging/CSEM. The camera size is just 4x4x3 cm. To achieve low power consumption it uses a higher modulation frequency of 80MHz. The camera prototype provides a depth resolution of 7mm (std. dev.) at 1m and 3mm (std. dev.) at shorter distances.
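For reference, the standard continuous-wave ToF relations (not Mesa/CSEM-specific code) show how the modulation frequency sets both the depth scale and the unambiguous range; at 80MHz the unambiguous range is about 1.9m, consistent with the roughly 1m working distances quoted above:

```python
import math

C = 299_792_458.0                      # speed of light, m/s

def depth_from_phase(phase_rad, f_mod_hz):
    # the measured phase shift of the modulated light maps linearly to distance
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    # beyond this distance the phase wraps around and the depth becomes ambiguous
    return C / (2 * f_mod_hz)

print(unambiguous_range(80e6))         # ~1.87 m at 80 MHz modulation
```

For a given photon count the depth uncertainty scales inversely with the modulation frequency, which is presumably why the higher 80MHz modulation helps keep the illumination power low.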
Vanguard Begins Manufacturing Sensors for Omnivision
Digitimes quotes Chinese-language Commercial Times reporting that Vanguard International Semiconductor (VIS) has begun shipping 110nm-made image sensors to OmniVision. The sensors are manufactured at VIS' 8-inch Fab 2.
TSMC owns a good stake in VIS, so such a technology transfer seems quite natural to me.
Thursday, March 04, 2010
Omnivision Predicted to Gain Market Share
Barron's: Baird analyst Tristan Gerra contends Omnivision has new design wins at two leading North America-based smartphone makers, while also winning sole-supplier status at a leading PC maker. “We expect OmniVision will gain significant share this year against its main competitor, primarily with higher-resolution sensors along with ramps into dual-sensor devices,” he writes.
ClairPixel Begins Mass Production at Dongbu
Business Wire: Dongbu HiTek announced that it has commenced mass-production of 300-kpixel WDR sensors at the 130nm node for ClairPixel. The sensor is intended for automobile "Black Box" applications.
According to iSuppli, ASP for sensors serving specialized automotive, medical device and security applications is currently four times greater than those serving mobile phones and digital cameras.
Optical Technology To Dominate Large LCD Touchscreens
Digitimes reports that 20-inch and above screens such as LCD monitors, all-in-one PCs and LCD TVs are expected to mainly adopt optical touchscreen technology for the next 1-2 years. Capacitive touch technology has low yields and high prices if adopted into 15-inch or larger screens, according to Digitimes' sources.
Varioptic's Image Stabilizing Lens
I-Micronews, Optics.org publish more details about Varioptic's A316S image stabilizing lens. The system uses less than 50 mW of power during a shot and can compensate for a camera shake of ±0.6 deg, according to Varioptic.
The shake compensation angle appears to be about half of what is needed for long exposure and video OIS, but it should be enough for short exposure zoomed-in shots. The lens is said to be targeted at camera-phone and consumer-grade camcorder markets.
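To put the ±0.6 deg figure in perspective, here is a rough, hedged calculation of how far that angular range moves the image on the sensor; the 4 mm focal length and 1.75 um pixel pitch are assumed values for a typical camera-phone module, not Varioptic or customer data.

```python
import math

# Rough illustration of what +/-0.6 deg of shake compensation means on the sensor.
# The 4 mm focal length and 1.75 um pixel pitch are assumed example values.

focal_length_mm = 4.0
pixel_pitch_um = 1.75
tilt_deg = 0.6

# An angular shake of 'tilt_deg' shifts the image by roughly f * tan(theta).
shift_mm = focal_length_mm * math.tan(math.radians(tilt_deg))
shift_px = shift_mm * 1000.0 / pixel_pitch_um

print(f"Image shift: {shift_mm * 1000:.1f} um, or about {shift_px:.0f} pixels")
# ~42 um, i.e. roughly 24 pixels of correction range at this focal length.
```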
Omnivision OV9715 Shipping to Multiple Tier-One Automotive Suppliers
PR Newswire: Omnivision announced that it has begun mass production of its OV9715 CMOS image sensor for use in 360-degree view and other standalone or multi-camera automotive vision systems. The 1MP OV9715 is a fully AEC-Q100 qualified sensor for advanced driver assistance systems. It is currently shipping to multiple tier-1 automotive suppliers.
The OV9715 can be used in multi-camera automotive vision systems that use extreme wide-angle (>160 degree) lenses where distortion correction and image stitching are required. The 1/4-inch sensor's sensitivity is 3300 mV/lux-sec. The sensor delivers 720p video at 30fps and VGA video at 60fps.
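As a rough illustration of why such wide lenses need dewarping before views can be stitched, the sketch below compares an equidistant fisheye projection with an ideal rectilinear one; the equidistant model and the focal length are illustrative assumptions, not OV9715 or lens-supplier specifications.

```python
import math

# Compare where an equidistant fisheye (r = f * theta) and an ideal
# rectilinear lens (r = f * tan(theta)) place the same field angle.
# The lens model and the 1.3 mm focal length are assumptions for illustration.

f_mm = 1.3

def fisheye_r(theta_deg: float) -> float:
    return f_mm * math.radians(theta_deg)

def rectilinear_r(theta_deg: float) -> float:
    return f_mm * math.tan(math.radians(theta_deg))

for theta in (10, 30, 60, 80):
    print(f"{theta:2d} deg: fisheye {fisheye_r(theta):.2f} mm, "
          f"rectilinear {rectilinear_r(theta):.2f} mm")

# Beyond ~60 deg half-angle the two projections diverge strongly (and the
# rectilinear radius blows up toward 90 deg), which is why a >160 deg lens
# needs explicit remapping/dewarping before image stitching.
```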
While we are on the subject of Omnivision: as the WSJ reports, a magnitude 6.4 earthquake shook southern Taiwan early on Thursday, affecting the TSMC fab in the Tainan Science Industrial Park that is working on the new 1.1um OmniBSI-2 pixel. TSMC said wafer production in Tainan has been disrupted.
Update: Electronics Weekly reports that TSMC lost 40K wafers in the quake.
Wednesday, March 03, 2010
Aptina Announces its HDR Sensor Nomination
Business Wire: Aptina officially reacts to the EDN Innovation of the Year Award nomination: “We are honored that the MT9M033 was selected as a finalist for Innovation of the Year in the Sensor category,” said CEO David Orton. The PR also gives a voting link for the company's product.
Update: Aptina has created a Vote Now! page with links to the sensor's papers, sample pictures and other information.
Monday, March 01, 2010
BAE Develops 1.8 Gigapixel Camera
Defense News: It appears there is a need for Giga-pixel resolution sensors in security applications:
"Today there are two kinds of surveillance sensors in use by the U.S. military: high resolution sensors that offer only a narrow field of view, and low resolution sensors that offer a wide view.
That creates a problem, as described in this DARPA scenario: A UAV (unmanned aerial vehicle) operator with a high resolution sensor watches as two suspects enter a building. But when they leave, they walk away in different directions.
Which one does the operator follow? His narrow-view sensor can't follow both, and a wide-view sensor isn't sharp-eyed enough to see either.
The ARGUS-IS - the Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System - can spot and track "65-plus" targets simultaneously from altitudes higher than 20,000 feet, according to the sensor's inventor, BAE Systems. DARPA awarded BAE an $18.5M contract in late 2007 to build the ARGUS-IS.
Built around a 1.8-Gigapixel digital camera, ARGUS-IS has sharp-enough resolution to identify and track individual people from four miles up in the sky. It's housed in a 15-foot-long pod that's designed to be attached to the underside of a large UAV. During the February test flight, it was attached to the belly of a Black Hawk helicopter.
The camera is made up of 368 5MP video chips mounted in four separate cameras. The images from each camera then are merged into a single large, high-definition image."
"Today there are two kinds of surveillance sensors in use by the U.S. military: high resolution sensors that offer only a narrow field of view, and low resolution sensors that offer a wide view.
That creates a problem, as described in this DARPA scenario: A UAV (unmanned aerial vehicle) operator with a high resolution sensor watches as two suspects enter a building. But when they leave, they walk away in different directions.
Which one does the operator follow? His narrow-view sensor can't follow both, and a wide-view sensor isn't sharp-eyed enough to see either.
The ARGUS-IS - the Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System - can spot and track "65-plus" targets simultaneously from altitudes higher than 20,000 feet, according to the sensor's inventor, BAE Systems. DARPA awarded BAE an $18.5M contract in late 2007 to build the ARGUS-IS.
Built around a 1.8-Gigapixel digital camera, ARGUS-IS has sharp-enough resolution to identify and track individual people from four miles up in the sky. It's housed in a 15-foot-long pod that's designed to be attached to the underside of a large UAV. During the February test flight, it was attached to the belly of a Black Hawk helicopter.
The camera is made up of 368 5MP video chips mounted in four separate cameras. The images from each camera then are merged into a single large, high-definition image."
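A quick sanity check on the quoted figures: 368 sensors at 5 MP each gives roughly 1,840 MP, which matches the stated 1.8-Gigapixel total.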
Omnivision Quarterly Earnings Call Transcript
Seeking Alpha finally published Omnivision's Q3 fiscal 2010 earnings call transcript. Some interesting quotes:
Shaw Hong, CEO:
"OmniBSI-2 delivers the world’s first 1.1-micron back side illumination pixel and is the first OmniVision pixel built on a 300-mm copper castings.
...
OmniBSI-2 technology is not limited to only smaller pixel designs, in fact, it can also be applied to larger pixel designs to achieve performance advantages and exceed both current BSI and FSI image sensors of similar size.
...
Today, our customers use [CameraCube] predominantly for the secondary camera application [ph] in mobile handsets, however, going forward, we anticipate that CameraCube devices will be used as the primary camera in mobile handsets."
Bruce Weyer, VP Marketing:
"Actual performance measurement of OmniBSI-2 product demonstrated improvement from the 20% to 75% across key quality metrics for similar size first-generation BSI pixels including quantum efficiency cross talk and alignment to the metrics.
By comparison, the new 1.1-micron OmniBSI-2 pixel not only outperforms the current 1.75-micron FSI architecture, but also equals the performance of the industry leading 1.4-micron BSI pixel that is currently in mass production.
...
our market leadership in the video centered notebook webcam, automotive, and security markets where we hold over 60%, 50% and 50% respective market shares as measured by industry analysts technosystems research."
Anson Chan, CFO:
"R&D expenses in the third quarter totalled $20.4 million, representing an 8% increase from the $18.9 million we recorded by our second fiscal quarter. The primary reason for the increase was our release of additional mass designs to TSMC, which increased our NRE costs, and NRE cost is a key component of our total R&D expenses."
Yair Reiner – Oppenheimer:
"...question on the ramp up of BSI, as that takes place next year, what are you anticipating in terms of gross margin head wins kind of when you initially try to improve yields and once you are through those initial issues, do you expect BSI parts to carry a higher gross margin than you are having at your traditional products?"
Anson Chan:
"Upon initial production, you will have some unfavorable yields and it takes us typically about six months to resolve those issues before it starts to produce reasonable profit..."
Yair Reiner – Oppenheimer:
"Okay, so in other words extrapolate that gross margins are unlikely to expand until the ramps are fully behind us?"
Anson Chan:
"We definitely not believe there would be any unusual effects on our gross margin for our fourth fiscal quarter 2010."
Yair Reiner – Oppenheimer:
"Can you comment on whether a BSI chip relative to kind of a more traditional chip with the same resolution would carry a higher or an equivalent ASP?"
Bruce Weyer:
"The BSI technology has a lot of premiums for the market relative to the value it brings. It brings a lot better image quality and therefore actually gets us into a bit of a different class of designing products as well. So in that respect, yes, it typically would carry a higher average selling price. The technology also has more advanced process technology involved with it, so it also carries a little bit higher cost basis as well. So, that is where Anson was alluding towards the fact that in the long term you do not anticipate a broad differentiation relative to normal earnings curves."
IISW 2009 BSI Symposium Papers On-Line
IISW 2009 BSI Symposium papers are starting to appear on-line. So far only two of the seven papers are available for download:
Fully Depleted, BSI CCDs on High-Resistivity Silicon
Steve Holland, Lawrence Berkeley National Laboratory, University of California, Berkeley (CA)
Mass Production of BSI Image Sensors : Performance Results
Howard Rhodes, OmniVision, Sunnyvale (CA)