
Thursday, July 02, 2009

Sony High Full Well CCD and Omnivision BSI Papers Review

A nice Image Sensor blog from Japan reviews papers from IISW 2009.

Sony's paper is titled "High-saturation output 1.55-um-square pixel IT-CCD with metal wiring line structure in a pixel". From what I was able to understand from the Google translation, Sony used a low-temperature process to minimize dopant diffusion, get a more abrupt junction, and increase the photodiode capacitance in small pixels.

Omnivision's paper is titled "The Mass Production of BSI CMOS Image Sensors: Performance Results". As far as I can understand, the Google translation gives the following performance numbers for the pixels:

  • 1.4um and 1.75um pixels BSI mass produced using bulk P-epi/P-sub
  • 2-shared pixel design
  • 110nm (FEOL) & 90nm (BEOL) process (1.4um pixel is 90nm/90nm)
  • QE (R / Gb / Gr / B): 1.4um = 43.8/53.5/53.6/51.6%, 1.75um = 53.0/60.1/60.2/60.4%
  • Full Well: 1.4um = 4,500 e, 1.75um = 6,500 e
  • SNR10: 1.4 um = 110Lux, 1.75um = 60Lux
  • Both pixels have a read noise of 1-2e
  • No Image Lag
  • Dark current is 22~27 e/s @ 50C (80 e/s @ 60C)
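
As a quick sanity check on the dark current figures (my arithmetic, not from the paper): taking roughly 25 e/s at 50C and 80 e/s at 60C implies a doubling temperature of about 6C, which is in the usual range for silicon sensors. A minimal sketch:

    import math

    # Dark current doubling temperature implied by the quoted figures,
    # using ~25 e/s @ 50C (midpoint of 22~27) and 80 e/s @ 60C.
    i50, i60, delta_t = 25.0, 80.0, 10.0
    t_double = delta_t * math.log(2) / math.log(i60 / i50)
    print(f"dark current doubling temperature ~ {t_double:.1f} C")  # ~6.0 C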

Honestly, I was disappointed to see relatively low QE despite all the added process complexity. I doubt that the switch to BSI is justifiable with these QE numbers. Obviously, QE is not the whole story; one needs to look at color crosstalk too. I'd guess the crosstalk is rather low, otherwise an SNR10 of 110 Lux would not be achieved.
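
To get a feel for whether an SNR10 of 110 Lux is self-consistent with the quoted QE, read noise and dark current, here is a very rough back-of-envelope sketch. All of the optical assumptions are mine, not from the paper: a 1/30 s exposure, an f/2.8 lens, 18% scene reflectance, 80% optics transmission, and a crude monochromatic conversion of ~4e15 photons/s/m2 per lux. Real SNR10 is also measured after demosaicing and color correction, which this ignores.

    import math

    # Very rough SNR estimate for the quoted 1.4um pixel (green QE ~0.53,
    # read noise ~1.5e, dark current ~25 e/s at 50C). Optical assumptions
    # are illustrative only, not from the paper.
    def snr_at_scene_lux(lux, pixel_um=1.4, qe=0.53, read_e=1.5,
                         dark_e_s=25.0, t_exp=1/30, f_num=2.8,
                         reflectance=0.18, t_optics=0.8):
        # Image-plane illuminance from the classic camera equation:
        # E_img ~= E_scene * R * T / (4 * N^2)
        e_img = lux * reflectance * t_optics / (4 * f_num**2)
        photons_per_lux = 4e15               # photons/s/m^2 per lux, crude
        area = (pixel_um * 1e-6)**2          # pixel area in m^2
        signal = e_img * photons_per_lux * area * qe * t_exp   # electrons
        noise = math.sqrt(signal + dark_e_s * t_exp + read_e**2)
        return signal / noise

    for lux in (60, 110, 200):
        print(f"{lux:4d} lux -> SNR ~ {snr_at_scene_lux(lux):.1f}")

With these assumptions the 1.4um pixel comes out around SNR ~ 8 at 110 Lux, so the quoted figure is at least plausible; the difference between such a shot-noise-limited estimate and measured SNR10 is largely the crosstalk and color-correction penalty mentioned above.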

19 comments:

  1. It is strange that the red QE is still the lowest, similar to the FSI process. You would think that the blue QE would now be the lowest and the red QE the highest, unless they pushed the peak doping of their PD very deep from the front Si surface, in which case it would be difficult to hook it up to the transfer gate.

  2. It is not strange; the red QE is strongly dependent on Si depth, since red light is absorbed less than blue light. Thus, if the Si is too thin in a BSI device, light can pass through the back-end interface and be lost on the other side.
    For blue light, the photodiode depletion zone sits on the opposite side from the incoming light compared to the FSI structure, so you can expect to capture more photo-generated electrons.
    Moreover, QE is strongly linked to the optical stack transmission.
    FH

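To put rough numbers on the thickness argument (my illustration, not part of the comment above): using approximate penetration depths in silicon of ~0.4um at 450nm, ~1.5um at 550nm and ~3.3um at 650nm, a single-pass Beer-Lambert estimate shows how much red is left unabsorbed in a thin layer:

    import math

    # Fraction of light absorbed in a silicon layer of thickness d (um),
    # single-pass Beer-Lambert, ignoring reflections and collection losses.
    # Penetration depths are rough textbook values.
    pen_depth_um = {"blue 450nm": 0.4, "green 550nm": 1.5, "red 650nm": 3.3}

    for d_um in (2.0, 3.0, 5.0):             # plausible BSI silicon thicknesses
        summary = ", ".join(f"{c}: {1 - math.exp(-d_um / L):.0%}"
                            for c, L in pen_depth_um.items())
        print(f"{d_um:.0f} um Si -> {summary}")

At ~3um of silicon essentially all of the blue but only ~60% of the red is absorbed, which is the asymmetry described above; actual QE also depends on stack transmission, reflections and carrier collection.
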
  3. I had always understood thinning/backside illumination to be a compromise: getting a good blue response without ruining the red response.

    The FSI parts often have poor red response because it is cheaper to have shallow depletion regions than deep ones (process doping profiles, operating voltages and epi thickness). As I understand it, NIR-to-red photons penetrate deeply enough to interact in field-free regions, causing diffusion MTF issues in cheap sensors. The typical cheap camera module will have an NIR cut filter to prevent the MTF issue from fouling up the final image.

  4. @ "it is cheaper to have shallow depletion regions than deep ones (process doping profiles, operating voltages and epi thickness)."

    Actually, it's much harder to develop a deep photodiode with no image lag. As for the "cheaper" claim, wafer price is pretty much independent of epi thickness and doping profiles. A deeper photodiode might or might not need more masking steps, but that is another matter. And the operating voltage is the same 2.5-3V for both shallow and deep photodiode pixels.

  5. There is no cost difference between shallow and deep depletion regions. This is hogwash. We use thin epi to reduce crosstalk.
    -EF

  6. And the thin epi is why the QE is low... but it helps the MTF for small pixels.

  7. Getting back to Omnivision's BSI performance: whatever silicon thickness it uses, it has failed to exceed the best of the FSI sensors, whatever epi thickness they use. Indeed, Omnivision announced its 1.4um BSI pixel based on a 0.11um process more than a year ago:

    http://image-sensors-world.blogspot.com/2008/05/omnivision-demos-14um-bsi-sensor.html

    After a year of improvements and a switch from a 0.11um to a 90nm process, the SNR10 performance still hardly matches the best of the FSI breed. Add to this the high dark current restricting the module thermal design. Add to this the unimpressive full well. I had hoped for more from Omnivision. I doubt this kind of performance can propel Omnivision's sales, unless they undercut others on price.

  8. How much more do you figure it costs to do BSI (say, on an 8" wafer basis)? For argument's sake, let's assume that they can get the yields to be on par with FSI.

  9. When comparing costs at the 1.4um pixel size, the best FSI competitors use a 65nm process on 12" wafers. I do not think it would be easy competition for Omnivision, especially since there is no performance advantage in BSI.

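To quantify just the wafer-size part of that comparison (ignoring yield and the very different per-wafer cost of each process): a 12" wafer offers roughly 2.25x the area of an 8" wafer, so the gross die count scales accordingly. The die area below is a made-up placeholder:

    import math

    # Gross dies per wafer, ignoring edge loss and scribe lanes.
    def gross_dies(wafer_diameter_mm, die_area_mm2):
        return math.pi * (wafer_diameter_mm / 2)**2 / die_area_mm2

    die_area = 25.0   # hypothetical sensor die area in mm^2, for illustration
    ratio = gross_dies(300, die_area) / gross_dies(200, die_area)
    print(f"12-inch vs 8-inch gross die count: ~{ratio:.2f}x")   # ~2.25x
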
  10. I know OV has 12" color filter and packaging capacity. Management also said they were going 12" at TSMC. As far as line sizes go, isn't smaller more expensive?

  11. As far as I know, BSI sensors are currently made on 8" wafers at TSMC.

  12. fyi, the EF above is not me. I will post from this account if I make a comment.

  13. At IISW I took an informal survey of the 140+ participants. At the 1.1um pixel size, it seemed no one thought that FSI would be used; it would be BSI. This pretty much represents the opinion of all the big players.

    BSI has advantages in CRA, crosstalk and, almost everything else being equal, QE. Indirect advantages are in wiring and layout flexibility. The fab cost is about 20% more (and dropping) at the current SOA, according to TSMC's rough estimate. "Priceless" when it comes to winning next gen sockets in quality products.

  14. At IISW the question was asked one-sidedly, and it is surprising that you came to a conclusion from such a question: "How many think FSI will be used for 1.1um pixels?" So no one raises a hand. Did you just expect that everyone would participate in the poll? At least you should have followed up with the alternative question: "How many think BSI will be used for 1.1um pixels?" One would expect a follow-up question before drawing such a sweeping conclusion. Your data is just plain wrong. If you base something on human input, you have to know how to get better input.

  15. The 1.1um pixel is another story. What I'm saying is that a 1.4um pixel in a 65nm FSI process can be made better than, or at least no worse than, OmniBSI.

  16. Dear Anonymous,

    I wonder what is "just plain wrong" with the data? I think it is the conclusion you are concerned about. Certainly a professional pollster would come at the question in many ways to eliminate such possible confusion.

    Still, I believe the conclusion is correct based both on the poll and informal conversations. But, you are welcome to draw your own conclusion.
    Why not put your name on your post and state your own conclusion if you are so confident?

    In any case, I thought the information I shared was appropriate and possibly informative to those that were not there.

  17. ""Priceless" when it comes to winning next gen sockets in quality products."

    It sounds like BSI is necessary, at some point, for further progress in CIS. Might as well get an early start.

    Do you think that BSI provides any advantages in implementing high resolution wafer level optics? Will it have any advantage in implementing Wave Front Coding? Or is WFC a pipe dream?

  18. I am not familiar enough with the issues in wafer level optics or WFC to answer those questions. I would guess that the improved CRA and the reduced optical stack thickness have to help for wafer level optics.

    WFC... doing a lot of arithmetic on limited-SNR signals is always problematic, just like the situation with the CCM. Arithmetic steps can reduce SNR.

    This sort of SNR issue was a key limiting factor when arithmetically correcting the focus problem in the first Hubble Space Telescope camera. I think it might be a problem for low-light consumer applications as well. To the extent that BSI improves SNR, it also helps WFC. Perhaps there are other factors, though. I just don't know.

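To illustrate the CCM point above (the matrix values are invented, not from any particular sensor): when crosstalk is high the color correction matrix needs large off-diagonal terms, and with independent, equal-variance noise per channel its per-row noise gain is the root-sum-square of the coefficients:

    import math

    # Per-channel noise gain of a color correction matrix, assuming
    # independent, equal-variance noise in the input channels.
    # The matrix is made up for illustration; each row sums to 1.
    ccm = [[ 1.8, -0.6, -0.2],
           [-0.4,  1.9, -0.5],
           [-0.1, -0.7,  1.8]]

    for name, row in zip("RGB", ccm):
        gain = math.sqrt(sum(c * c for c in row))
        print(f"{name}: noise gain ~ {gain:.2f}")   # all well above 1

A noise gain above 1 means the SNR after color correction is lower than before it, which is why strong crosstalk, and aggressive WFC deconvolution for the same reason, eats into low-light SNR.
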
  19. I'm thinking that Omnivision figured out that the computations with WFC introduced enough noise to degrade the image, and they therefore had to restart to get the best possible starting image to make WFC/TF work.
    The way I see it, some kind of wafer level optics and EDoF will be necessary to get a 3MP+ module into a phone. If the goal is to mount the module on the surface of the main board, the z-height is probably restricted to 3mm regardless of resolution. Is there any technology in the neighborhood that can get a 3mm height on a 5MP module with a good result?


All comments are moderated to avoid spam and personal attacks.