Friday, March 28, 2014

Pixpolar Patent Case Study

Aalto University published a presentation by Artto Aurola, Pixpolar founder, on patent considerations and lessons from his startup company's history:


Regarding the Kodak image sensor-related patents mentioned in the presentation, most of them were acquired by Omnivision.

55 comments:

  1. Artto, you should spend more time making your invention understandable. To be honest, I consider myself an expert, but I didn't catch your idea at all. Could you please make a more conceptual presentation of your invention, instead of either falling into endless math formulas or using overly commercial wording? Thanks!

    ReplyDelete
  2. That Omnivision post states that they bought $65M worth of Kodak patents. Intellectual Ventures bought $525M worth.
    http://www.forbes.com/sites/ericsavitz/2012/12/19/kodak-agrees-to-sell-patents-for-525m-to-group-led-by-rpx-intellectual-ventures/

    ReplyDelete
    Replies
    1. Omnivision acquired image sensor-related patents. Kodak used to have many more patents in areas related to photographic film, chemical processing, camera and optics design, medical imaging, and printing, to name a few. I guess all of these were bought for the $525M.

      Delete
    2. I believe that the patents US5,625,210; US6,160,281; US6,107,655; and US6,657,665 were not amongst the patents that Omnivision bought, but were instead part of the $525M deal, since these patents are essential CMOS image sensor patents and thus the source of a considerable revenue stream from a sizeable CMOS image sensor market. I also believe that the most likely owner of these patents is Intellectual Ventures or RPX (or both), since if the license agreements of these patents had some suspicious elements, the big cell phone manufacturers would probably not want to be involved.

      Delete
    3. You are incorrect. According to the USPTO database, these patents are assigned to Omnivision.

      Delete
    4. Eric, you are right - Omnivision is indeed the latest assignee of these patents. Taking into account the essential nature and the fundamental importance of these patents to CMOS image sensors, the price seems really small. It is, however, very difficult to say whether the deal was good or bad, since the pricing of the patents depends, at the end of the day, on the content of Kodak's license agreements, which are not publicly available.

      Delete
    5. Among the bigger players, the strategic value of key patents is deterrence from lawsuits from competitors. It is a sort of mutual-assured-destruction (MAD) détente, in the absence of an actual cross-license covenant. Omnivision did not really have any key IP until they purchased the Kodak IP, which includes BSI and stacking technology. (think: Ziptronix, for example, or Sony for stacked ICs). I would speculate that the pricing reflected the value of the Kodak IP to Omnivision for strategic purposes.
      Practicing entities, like Omnivision, usually don't try to enforce their patents against competitors unless they feel really squeezed (e.g. Apple v. Samsung) or when their business is in sunset. Lately it seems that they try to sell their IP to NPEs to monetize its value. NPEs have little to lose in lawsuits. But let's not have that discussion again.
      Anyway, the value of something depends on supply and demand, I have not heard any mutterings of regret from other big companies that they let the Kodak IP get away from them. The consideration paid by Omnivision was the value of the Kodak IP to Omnivision, the best deal Kodak thought they could get at that point. Like any other purchase, only time will tell if it was a good deal, a fair deal, or a poor deal.

      Delete
    7. Eric, once again I totally disagree with you about the worth of patents US5,625,210; US6,160,281; US6,107,655; and US6,657,665, since they represent the Holy Grail and the Holy Trinity for the present CMOS image sensor business, which is approaching a $10B annual turnover. It is just INSANE that there was only one bidder, paying just $65M for them - or else there is a more sinister explanation.

      If there had been questionable conditions in the license agreements of these patents, it would have been a problem for Kodak when the bankruptcy procedure started to seem inevitable. By ‘selling’ the patents on 31.3.2011, one would have avoided uncomfortable questions related to the license conditions popping up during the bankruptcy procedure, which started on 19.1.2012.

      It even seems that these FUNDAMENTALLY valuable patents were hidden amongst a big pile of much less worthy patents. Anyhow, the selling price of these patents is ridiculous and dubious, since the patents could have been worth billions of dollars (without any foul play). I guess the judge in the bankruptcy procedure would have been very keen to know the reasons behind the sale of the patents, had he only known their worth.

      Delete
    7. Who is this? Anyway let me say that the "good" patents were not hidden, and of course highlighted by the Kodak people (image sensor engineers) charged by Kodak with making the sale. It is silly to think they were worth billions. Even Kodak, on its most optimistic day, put the value at significantly less than "billions." But we can just agree to disagree.

      Delete
  3. Artto, if you read this, and if it's something you can even comment on, I'm curious how much approximately it costs in total to get a patent. Is it cheaper or more expensive in Europe compared to the U.S.?

    ReplyDelete
  4. Basically, patent protection should be considered from two sides: the production side and the consumer side. On the production side, you can patent in countries where there are producers who could use your technology to compete with you. On the consumer side, your competitors can sell products using your technology. Theoretically speaking, you can protect only one side, and this saves you a lot of money. For example, in Korea and Taiwan there are a lot of producers, but their own market is small. So if your technology is protected in their export markets, such as the US, EC, China, and India, then you get efficient protection.
    In the past, the main markets for imaging products were the US and EC, so a US patent alone gave you efficient protection, since the majority of makers were located in the US and the main EC producers, such as Philips, had to sell into the US market. But today things have changed a lot.
    For example, Agilent's US patent on the optical mouse could stop PixelArt from copying and selling mouse sensors on the Chinese market. Today it's not easy to get a granted patent in China.

    -yang ni

    ReplyDelete
  5. I think the focus in this presentation on Kodak is misleading. Kodak was not a big player in CCDs compared to Sony, NEC, Matsushita, Sharp and Toshiba even though the group at Kodak was technically quite good. To say Kodak lost a foothold in digital imaging due to licensing is not correct at all, as far as I can tell. It was more of a fumble by upper management. The Kodak image sensor portfolio only contained a few strong patents - CMOS shared readout is the main one that comes to mind. In my opinion the CIS PPD patent was weak and probably indefensible due to leaving JPL inventors off the patent. (Dumb mistake by Kodak). And then, how could Kodak make the first PPD CMOS APS and completely lose a position in the market, even when teamed with Motorola? It is a sad story of mismanagement.
    I also cannot explain why OVTI paid $65M for the Kodak patents. I consulted for two companies on that portfolio and could not put the valuation higher than about $20M, compared to Kodak asking more than 10x that. Most of the patents are not basic or they are about to expire. Some more recent IP was interesting, however, and perhaps that is what OVTI was interested in. (I think I am still bound by NDA not to discuss this in any detail.) Mostly, I think OVTI was flush with cash and Kodak had a hard time coming down in price. Anyway, in the end we can say the patents were worth exactly what was paid for them. It is the best measure of valuation.
    I think Pixpolar's idea is fine, generally speaking, but not sure it is compelling. Generally compared to the IP and technology of Sony and Samsung (and TSMC and ST and Aptina) it would be a tough sell. Even the sale of the QIS fundamental patent to Rambus was a challenge. US 8,648,287 (priority date of 2005, issued 2014 - almost 9 years!) I sympathize with Artto.

    ReplyDelete
    Replies
    1. Hi Eric,

      I generally disagree with your opinion about the importance of Kodak's patents. The pinned photo-diode patent US5,625,210 (valid until 13.4.2015) is of fundamental importance for present CMOS image sensors, enabling low noise (a combination of low dark noise and CDS readout). The patents US6,160,281 (valid until 28.2.2017), US6,107,655 (valid until 15.8.2017), and US6,657,665 (valid until 31.12.2018) are also of fundamental importance, since they enable transistor sharing between pixels so that the number of transistors per pixel incorporating the pinned photo-diode could be reduced from 4 to 1.75 or even less.

      All the present CMOS image sensor manufacturers utilize features described in these key patents. Thus one can say that Kodak possessed the key technology in the field of digital imaging and one may wonder why Kodak licensed it away instead of now dominating the field.

      The reason may have been poor management. I believe, however, that the most important reason was that Kodak lacked the freedom to operate due to your Caltech patent US5,471,515 and the patents claiming priority on it (US7,369,166, still valid until 12.7.2016 due to extension). These patents describe intra-pixel charge transfer and the utilization of a CMOS process for image sensor manufacturing. There are a couple of possible scenarios that may have taken place:
      -Photobit (acquired by Micron, which later spun off Aptina) may have had an exclusive license, at least for a certain period of time
      -A deadlock with Caltech, e.g. due to disagreement over the origin of the pinned photo-diode patent (taking into account your argument about the origin of the pinned photo-diode) or over license fees

      In 2006, when we came out of the closet, the Kodak technology was still not in use and the image sensor manufacturers were instead using the old and poor three-transistor CMOS image sensor technology. We contacted several CMOS image sensor manufacturers as well as Kodak (contacting Kodak at that time was probably the dumbest possible thing to do). In the initial meetings we agreed with the majority of the companies to have follow-up meetings with their technical teams. However, after a short while all the companies except Kodak cut off their contact with us, and after about a year or so all the CMOS image sensor manufacturers started using Kodak’s technology, i.e., the companies had acquired the necessary licenses on the aforesaid Kodak and Caltech patents.

      The logical explanation for this is that Kodak knew they had lost their unique position in the field due to Pixpolar’s image sensor technology, since it enabled better image quality (non-destructive CDS readout, no interface-generated dark noise, and no interface-generated 1/f noise) and a smaller pixel size (at minimum only 1 transistor per pixel).

      The advantage of Kodak’s tech was, however, that the time to market was considerably shorter than with Pixpolar’s tech (1 year versus up to 3 years) due to the back-side illuminated nature of Pixpolar’s technology. Thus Kodak actually had very strong bargaining power: if a CMOS image sensor manufacturer had chosen our tech, they would have had to stay out of the market for a long time after the first image sensors utilizing Kodak’s tech came to market.

      I assume that Kodak used its strong position in the licensing negotiations to ban the image sensor manufacturers from having any contact with Pixpolar, or from filing patents on Pixpolar’s technology during the license period. This would probably be in conflict with anti-trust laws, but there is naturally very little that we can do about the situation.

      So for the afore-described reasons I believe that Pixpolar was the reason why Kodak let go of its largest asset in the digital imaging domain. Thus when Kodak’s film business went down they no longer had a foothold in the digital domain, and therefore one can say that Pixpolar indirectly caused the fall of Kodak.

      Delete
    2. "I believe that Pixpolar was the reason why Kodak let go of their largest asset in the digital imaging domain" and "one can say that Pixpolar indirectly caused the fall of Kodak"

      Wow. Well, I am sorry to say but I believe this is just a fantasy....

      Delete
    3. Instead of ungrounded opinions, I and probably many others would appreciate it if you could give some well-grounded counterarguments on the matter.

      Delete
    4. OK, Artto, have it your way.
      1. Your timeline is completely wrong. Around 2000/2001 Toshiba introduced the PPD into the CMOS APS in mass production. This was soon followed by Micron and ST. Kodak had already transferred the PPD process to Motorola, and Moto was testing the waters. I was there during all of this. Hardly ungrounded opinion.
      2. Due to the tech transfer from JPL to Kodak circa 1994/1995, Kodak had a license to the Caltech patent portfolio. They were not blocked at all, and in fact we were all hoping they would successfully commercialize this technology. A couple of years later, Kodak, Motorola and Photobit joined in a 3-way development effort to commercialize the technology. This was announced publicly in EE Times in June 1997, 9 years before Pixpolar "decloaked". This was before Toshiba or any other big player had entered the market with a PPD device.
      3. Well before 2006, Kodak had fumbled their lead. By this time every big player was in the marketplace. Kodak was now a niche player. I don't know what their volume was, but I am sure it was not highly ranked. One of Kodak's problems was using Moto as a foundry (due to George Fisher moving from Moto to Kodak as CEO). Moto was expensive and slow, and there was no apparent upper management support for this foundry sort of arrangement at Moto. It was just Fisher legacy. From Moto's own perspective, they tried to launch their own line but it was not successful.

      Your conclusion is totally specious and a fantasy. Aside from a rough timing coincidence, there is no cause and effect relationship. I lived through most of this period, at least up to 2003, heavily involved in these relationships and developments. My opinions are hardly ungrounded. I was there.

      Delete
    5. Eric, you were certainly there, and many thanks for the information - I believe it is well appreciated by many others too. My point is, however, that until 2006 the combination of pinned photo-diode and transistor sharing was not present in the mobile phone market. Without transistor sharing it takes four transistors per pinned photo-diode pixel (three if you trade the selection transistor against a smaller dynamic range). Thus, when faced with Pixpolar's tech enabling at best only one transistor per pixel, the pinned photo-diode tech needed the boost from transistor sharing to get down to 1.75 transistors per pixel in order to be competitive against Pixpolar's tech.
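      The transistor-count arithmetic here is easy to sanity-check (a quick sketch assuming the standard scheme where each pixel keeps its own transfer gate while the reset, source-follower, and select transistors are shared by a group; the function name is purely illustrative):

```python
# Transistors per pixel in a pinned photo-diode (4T) CMOS pixel with N-way
# sharing: each pixel keeps its own transfer gate (TX), while the reset (RST),
# source follower (SF), and row select (SEL) transistors are shared by N pixels.
def transistors_per_pixel(n_shared):
    return 1 + 3 / n_shared  # 1 private TX + 3 shared transistors split N ways

print(transistors_per_pixel(1))  # 4.0   - no sharing
print(transistors_per_pixel(4))  # 1.75  - classic 4-way sharing
print(transistors_per_pixel(8))  # 1.375 - "or even less"
```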

      Delete
    6. I am sorry but you are misinformed. Toshiba used sharing almost from the start in 2001, followed by Micron and others. Years before 2006.

      Delete
    7. I am confident that in 2006 transistor sharing was not used in the CMOS image sensors of mobile phones, but only one or two years later. I got this information, for example, from Nokia, which at the time was by far the biggest customer for CMOS image sensors.

      Delete
    8. This article seems to be saying that Panasonic, Canon, and Sony each had PPD shared pixel processes in 2003.
      http://www.eetimes.com/document.asp?doc_id=1203368

      Delete
    9. This is about scientific papers at ISSCC 2004 – of course the image sensor manufacturers will analyze, research, and patent on their competitors' technologies, as has been the case e.g. with CCDs, BCMD, the pinned photodiode, and transistor sharing (Pixpolar being the exception, though). In this way they gain a foothold in potential cross-licensing negotiations.

      The beauty of transistor sharing is that it can be done at the mask level - changes to the process are not mandatory. It could be that an image sensor manufacturer had manufactured image sensors featuring transistor sharing in areas outside patent coverage, or sold such image sensors to niche markets in areas under patent coverage. My point is, however, that in 2006 transistor sharing was not utilized in mobile phones in areas under the cover of the transistor sharing patents. The probable reason for this is that the image sensor manufacturers did not have the necessary Kodak license for transistor sharing.

      If you disagree please point out to me a mobile phone model that was sold in US in 2006 and that was equipped with a camera utilizing an image sensor featuring pinned photo-diode and transistor sharing and that was not from Kodak.

      Delete
    10. Mobile phones, maybe not before 2006, but Canon was shipping EOS cameras with pinned photo-diodes and transistor sharing back in 2005.

      Delete
    11. As you probably know, patents are licensed separately for different business areas, like Digital Still Cameras (DSC), Digital Single Lens Reflex (DSLR) cameras, and mobile phones. The fact that in 2005 Canon had used the combination of pinned photo-diode and transistor sharing in DSC or DSLR cameras does not indicate that Kodak had also licensed transistor sharing for mobile phones. Besides, Canon wasn't a player in mobile phones.

      Delete
  6. If anyone is interested, this is an interesting article on the fate of Kodak's patent portfolio. http://spectrum.ieee.org/at-work/innovation/the-lowballing-of-kodaks-patent-portfolio

    ReplyDelete
  7. "In 2006, when we came out of the closet, the Kodak technology was still not in use and the image sensor manufacturers were instead using the old and poor three-transistor CMOS image sensor technology."

    I know for sure that the module design company I was working with was first using/designing with shared readout in 2002 going into 2003: first 2.5T, closely followed by 1.75T.

    ReplyDelete
    Replies
    1. I should have been more precise - in 2006 we were talking to the big image sensor manufacturers supplying image sensors for mobile phones, and at that time the shared pixel architecture was not yet utilized in mobile phones.

      Delete
    2. I think what happened there was that companies started using the shared readout structure for everything - including for mobile phones in 2003 - regardless of the IP position, otherwise they were going to miss the mobile boat. They then had to deal with the legal mess years later (e.g. JPL/Caltech court case) and I guess now that out of court settlements have been made to make the whole thing go away.

      Delete
    3. Hi, the JPL/Caltech case was about intra-pixel charge transfer and about the use of CMOS process for image sensor manufacturing. However, the transistor sharing in pinned photo-diode pixels was not yet utilized in 2006 - I know this for sure.

      Delete
  8. The first shared pixel shared RST and SEL between the active pixels in consecutive rows; this was a Kodak patent. I used this since the same structure had been used in AMLCDs a long time ago.

    ReplyDelete
    Replies
    1. I assume that you are talking about Active-Matrix Liquid-Crystal Displays and it may very well be that transistor sharing has been used in the field. I am, however, talking about pinned photo-diode pixels utilizing transistor sharing in CMOS color image sensors that are aimed for mobile phone based digital photography.

      Delete
  9. Forgive my frankness, but I don't think Kodak ever considered any threat from Pixpolar. We've seen tons of "breakthrough" ideas almost every year, but if you do not have a working demo sensor (not a single pixel), nobody will take it seriously, not to mention a company like Kodak in 2006.

    ReplyDelete
    Replies
    1. OK, but you can take all this as just a coincidence:
      -2006 Kodak holds the key patents (pinned photo-diode & transistor sharing) in the field of CMOS image sensors
      -2006 we started discussions with Kodak and big CMOS image sensor suppliers for mobile phones. We agreed on technical level meetings with most of them.
      -Soon all the CMOS image sensor manufacturers decided to back-off from discussions with us. The only technical level discussion we had was with Robert Guidash and his team from Kodak.
      -After a year or so all the big image sensor manufacturers supplying for mobile phones started using pinned photo-diode pixels with transistor sharing.

      Delete
    2. I think you are quite unaware of a lot of things that were going on in the imaging world at the time. In 2006 there were lots of sensors using pinned diodes in mobile (nearly all?). Kodak had been struggling and failing in the digital world for years before that.

      I don't think Pixpolar had a significant effect on any part of the mobile industry or Kodak. As the presentation says, Pixpolar lacked industry experts.

      Don't confuse correlation for causation.

      Delete
    3. As I mentioned already above, my point is that in 2006 transistor sharing was not utilized in mobile phones in areas under the cover of the transistor sharing patents. The probable reason for this is that the image sensor manufacturers did not have the necessary Kodak license for transistor sharing.

      If you disagree please point out to me a mobile phone model that was sold in US in 2006 and that was equipped with a camera utilizing an image sensor featuring pinned photo-diode AND transistor sharing and that was not from Kodak.

      Delete
    4. “We've seen tons of "breakthrough" ideas almost every year”.

      In image sensors there are four basic known ways to read out integrated charge:

      1) External gate read-out (channel affected by charge present on an external gate). This is utilized in present CCDs and CMOS image sensors, and was first invented around the end of the sixties.
      2) Internal gate read-out (channel affected by charge located inside the semiconductor; the charge inside the semiconductor and the charge in the channel are of opposite type). It was first mentioned in the seventies in conjunction with CCDs, and later used in active pixel sensors such as BCMD and DEPFET sensors.
      3) Base read-out in bipolar junction transistors (charge on the base affects the emitter current or potential). As far as I know it stems from the eighties.
      4) Modified Internal Gate (MIG) read-out (channel or base affected by charge located inside the semiconductor; the charge inside the semiconductor and the charge in the channel are of the same type). Invented in 2004.

      Pixpolar’s MIG read-out is the fourth fundamental way to read out integrated charge in image sensors, invented around 30 years after the previous one. In addition, Pixpolar’s image sensor read-out technology provides the best Signal to Noise Ratio (SNR) and thus superior low-light image quality. So it is not just one of tons of “breakthrough” ideas appearing almost every year. The reasons behind the best SNR are:

      1) Lowest dark noise (no interface generated dark noise)
      2) Lowest read noise (no interface generated 1/f noise)
      3) Non-Destructive Correlated Double Sampling (NDCDS; no accumulation of read noise)
      4) No amplification noise
      5) Very high quantum efficiency (fully depleted back-side illuminated pixel with 100 % fill-factor)
      6) No blooming, no smear, very low cross-talk (fully depleted, inherent vertical anti-blooming structure)
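      The noise terms above combine in quadrature in the usual image-sensor SNR model (a generic textbook sketch with made-up electron counts, not Pixpolar measurements):

```python
import math

# Generic sensor SNR: signal electrons over the quadrature sum of photon shot
# noise (sqrt(signal)), dark-current shot noise (sqrt(dark)), and read noise.
def snr_db(signal_e, dark_e, read_noise_e):
    total_noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / total_noise)

# Low-light example: cutting dark and read noise (points 1-3 above) is what
# lifts the SNR when the signal itself is small.
print(round(snr_db(100, dark_e=20, read_noise_e=5), 1))  # 18.4 dB
print(round(snr_db(100, dark_e=1, read_noise_e=1), 1))   # 19.9 dB
```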

      In addition, Pixpolar’s MIG technology offers an even bigger SNR advantage in image sensors based on semiconductor materials other than silicon, due to the following facts:
      -the interface quality is considerably poorer in semiconductor materials other than silicon
      -Pixpolar’s MIG technology is not affected by interface quality, unlike other image sensor technologies
      Thus in the future the night vision sensors attached to your Oculus or Google Glasses will be MIG-technology-based Silicon Germanium CMOS image sensors.

      Delete
  10. Has no one talked about the CMD (Charge Modulation Device)? It looks quite similar. Am I correct?
    -yang ni

    ReplyDelete
    Replies
    1. Yang Ni, you are a trouble maker! CMD and BCMD were also very interesting single-transistor active pixel sensors and predecessors of the subject device. I am sure Artto can explain the detailed differences.

      Delete
    2. Hi Yang, in this link http://www.youtube.com/watch?v=6748wInVd7E&feature=youtu.be there is a video describing the differences between Pixpolar's Modified Internal Gate (MIG) technology and Internal Gate (IG) technology which is e.g. also known as CMD, BCMD, and DEPFET. You can also find the video on our blog at www.pixpolar.com/blog.

      Delete
  11. I spent three years valuing Kodak patents-in-suit and their many portfolio groups, this one included, for institutional investors, manufacturers looking to circumvent, and three of the largest OEMs on the planet who have interests in capture IP. Much of this portfolio is based on the 3T pixel. Just before Kodak went Chapter 11, the portfolio OVTI got was valued at about $1.2-$1.4B. Kodak was trying to get $2B with no takers. The three largest suitors who were looking at this hired me to provide a timetable for when Kodak would run out of cash. The idea was then to get it amongst themselves as defensive patents only. Kodak put the patents up for sale, but no one would touch them, since the buyers most likely would not have owned them outright once the court reviewed the sale post facto. So this particular portfolio was grossly devalued, Kodak then went belly up, and the group of patents was once again put on the market, this time managed by the court, and OVTI picked it up. OVTI did not get the core group of patents that are more relevant to enforcement; Kodak kept them. The portfolio OVTI owns has defensive value, but it is not part of the core patents Kodak still owns, which are based on the image capture process, Bayer and 4T pixels, and some BSI.

    ReplyDelete
    Replies
    1. I never heard numbers this high in my discussions with my clients. From a valuation point of view, the patents were not enabling for the existing big players, and the main concern was that they would fall into the hands of an NPE. The big guys were already playing just fine in the field, and had enough of their own patents to countersue if need be. OVTI did everybody a favor by keeping key sensor technology patents out of the hands of the NPEs. Maybe if a new deep-pocketed company wanted to get into image sensors (e.g. a Google), then the Kodak portfolio would be enabling for that new line of business. That is the only way I see the valuation being as high as I recall, much less as high as you say. I also heard Kodak was intransigent in negotiation, at least initially, even at the lower numbers I recall. I don't think Kodak held back any key patents in image sensor technology - they all seemed to be there in the portfolio I looked at, and the "key" patents cited above are all now assigned to OVTI.
      I only saw a small slice of the patent sale, and it sounds like you saw a much bigger picture. Still, I can't reconcile the dollar figures or content you write about with my recollection.
      Someday an insider story would make quite an interesting case study.

      Delete
    2. I would greatly appreciate answers to these questions.

      1) The key 4T patents are US5,625,210; US6,160,281; US6,107,655; and US6,657,665, which according to the USPTO database are owned by OVT. You, however, state that Kodak kept these patents (and perhaps sold them to the super-consortium?). Could you please explain this contradiction?

      2) You state that Kodak sold patents to OVT after it went belly up. According to information in an earlier post, the patents were sold to OVT on 31.3.2011, but the Chapter 11 procedure started only on 19.1.2012. There also seems to be a contradiction regarding this matter - could you please explain?

      3) I assume that when you did your job you had access to material that Kodak handed to you, i.e., there was no court order that mandated Kodak to provide you with the entire material related to all patents and their corresponding license agreements - is this correct?

      Delete
  12. Eric, it was not just George Fisher and Moto that started the fall of Kodak's sensor business. Kodak's demise was due to the fact that they had a bloated upper and middle management that was entrenched in the legacy film business, milking the cash cow until their pensions kicked in. This all-powerful group hated the technologists within Kodak. It was an internal all-out war at Kodak during these times. Willy Shih, VP of Digital at the time you are describing, had a corporate mandate to do everything possible to slow digital adoption and development down so that a smoother transition could occur. It worked with DSCs until camera phones came along, and Kodak knew then that the convergence was out of their control. I provided definitive proof of this at the time through a blog. The problem was exactly as you describe: other competitors moved ahead of Kodak, and ODMs and developers were no longer too concerned about circumventing Kodak IP.

    ReplyDelete
    Replies
    1. Absolutely, the Moto part was only one of many problems. Kodak is a classic case of a company holding on way too long to one business paradigm and stifling internal innovation. I will say that the image sensor group at Kodak seemed quite open to CMOS image sensors, with Tom Lee being the thought leader there. George Fisher, as a new CEO, was also quite supportive of the shift to digital imaging. The layers in-between that were the problem.

      Delete
  13. Many thanks to everybody for the thorough discussion – it helped me, at least, to sharpen my view on the Kodak topic. Now that the discussion has cooled down, I have prepared a synthesis on the matter. I know that many of you disagree with it, but I will stand behind my view anyhow. First the facts:

    -Kodak held the key patents in the CMOS image sensor business. The key patents are, as aptly described in a previous post, “The Holy Grail” US5,625,210 for the pinned photo-diode and “The Holy Trinity” US6,160,281, US6,107,655, and US6,657,665 for transistor sharing. The importance of “The Holy Grail” is that it enables low noise. The importance of “The Holy Trinity” is that one can reduce the minimum number of transistors required per pixel from three to 1.75 or even less, thus considerably improving the resolution.

    -The technology described in the “Holy Grail” and “Holy Trinity” patents was already used in 2005 in Canon’s EOS cameras but in 2006 it was not yet utilized in mobile phones that were sold in areas protected by these patents.

    -Pixpolar’s Modified Internal Gate (MIG) pixel technology enables at best only one transistor per pixel and better low-light image quality than Kodak’s tech. In 2006 the first MIG patent became public, after which Pixpolar started discussions with Kodak and the big CMOS image sensor suppliers for mobile phones. We agreed on technical-level meetings with most of them, but soon after, all the CMOS image sensor manufacturers decided to back off from discussions with us. The only technical-level discussion we had was with Robert Guidash and his team from Kodak.

    -After a year or so, all the big CMOS image sensor manufacturers supplying sensors for mobile phones started using pinned photo-diode pixels with transistor sharing.

    -Since 2006 the big CMOS image sensor manufacturers have neither been willing to engage in negotiations with Pixpolar nor made publications or filed patents on Pixpolar’s technology.

    I think it can be stated with certainty that Kodak had licensed transistor sharing, i.e. “The Holy Trinity”, for Canon’s Digital Still Cameras (DSC) and Digital Single Lens Reflex (DSLR) cameras. The reason is that Kodak was at that time such a strong player that Canon could not have got away with infringing these patents. It is also important to note that Canon was not a player in mobile phones. Another important point is that, according to a previous email: “Willy Shih, VP of Digital at the time you are describing, had a corporate mandate to do everything possible to slow digital adoption and development down so that a smoother transition could occur. It worked with DSCs until camera phones came along and Kodak knew then that the convergence was out of their control.”

    A third important point is that when we talked with a Kodak business development person, he said that special emphasis was put in business development on machines to be placed e.g. in shopping malls, so that people could come with their DSC or DSLR memory cards, choose at the machine the images they want to have developed, go shopping, and pick up the developed photos from the machine afterwards. From this and the second point it is obvious to conclude that DSCs and DSLRs were not actually Kodak’s true enemies; the mobile phone cameras were, due to mobile phones’ ability to share photos instantly. Thus it is no wonder if Kodak licensed its “Crown Jewels”, i.e. the “Holy Grail” and “Holy Trinity”, only for DSCs and DSLRs, since people were much more likely to develop photos taken with DSCs or DSLRs than with mobile phones. By keeping the image quality of DSCs and DSLRs superior to that of mobile phones through selective licensing of the “Holy Grail and Trinity” patents, people would stick longer to their DSCs and DSLRs, enabling a “smoother transition” and the ability to “milk the cash cow” for a longer period of time.

    TO BE CONTINUED…

    ReplyDelete
  14. …CONTINUATION

    A fourth important point is that Kodak had a large revenue stream coming from three-transistor (3T) and four-transistor (4T) CMOS image sensor pixel patents (the 4T comprises the “Holy Grail” and “Holy Trinity” patents). Had the big CMOS image sensor manufacturers switched to Pixpolar’s MIG pixel technology, Kodak would have lost this revenue stream.

    A fifth important point is that if Pixpolar’s image sensor technology had not been a threat to Kodak, then Kodak should have received a considerable revenue stream from licensing the “Holy Grail” and “Holy Trinity” patents for mobile phones, since mobile phones are by far the biggest market for CMOS image sensors. It is easy to calculate ballpark numbers for the potential licensing revenue of the aforesaid patents. Compared to the old 3T pixel, the combination of “Holy Grail” and “Holy Trinity” enables roughly 70 % higher resolution and much better low light image quality, which would have been important selling points and an enhancer of the “sex appeal” of mobile phones (in 2006 resolution was truly the king). Compared to the “Holy Grail” alone, the combination of “Holy Grail” and “Holy Trinity” enables roughly 130 % higher resolution. Thus it would have been rather easy to collect in both cases at least 10 % of the CMOS image sensor revenues. This would have made in 2006 an annual revenue stream of roughly 500 M$ and now roughly 1 B$. Summing over all the years, one would have got several billion dollars of cumulative revenues from the “Holy Trinity”. Thus, if Pixpolar’s tech had not been a threat, the price of 65 M$ for the “Holy Grail” and “Holy Trinity” patents would have been scandalously low. On the other hand, if Pixpolar’s technology were truly a threat, then it would even have made sense to offer very low license fees for these patents, which could explain the terribly low price of 65 M$.
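    The figures above can be reproduced with back-of-envelope arithmetic. Note that the market sizes below are my own assumptions, back-calculated from the 500 M$ / 1 B$ figures and the 10 % share stated in the text:

```python
# Resolution gain: at fixed pixel area, achievable resolution scales roughly
# inversely with the number of transistors per pixel.
res_gain_vs_3t = 3 / 1.75 - 1   # vs. 3T pixel  -> ~0.71, i.e. "roughly 70 %"
res_gain_vs_4t = 4 / 1.75 - 1   # vs. 4T pixel  -> ~1.29, i.e. "roughly 130 %"

# Licensing revenue ballpark (ASSUMED market sizes, USD per year):
royalty_share = 0.10            # the "at least 10 %" share from the text
market_2006 = 5e9               # assumed CMOS image sensor market in 2006
market_now = 10e9               # assumed market at time of writing
rev_2006 = royalty_share * market_2006   # ~500 M$ per year
rev_now = royalty_share * market_now     # ~1 B$ per year
```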

    The fact that Pixpolar has been isolated stems probably from restrictions in the license agreements of at least the “Holy Grail” and “Holy Trinity” patents, restrictions which probably do not comply well with anti-trust laws. This would explain why the “Holy Grail” and “Holy Trinity” patents were sold before the Chapter 11 bankruptcy procedure started, since if such restrictions had existed and the court had noticed them, it could have led to large punitive compensations and at worst even to imprisonment. This could also be another explanation for the low price and for the fact that there was just one bidder, since with such “toxic” license conditions the selling of the aforesaid patents would have been a problematic undertaking.

    The fact that the CMOS image sensor manufacturers chose Kodak instead of Pixpolar is probably due to a couple of reasons. First, the time to market with Kodak’s technology was much shorter than with Pixpolar’s (1 year vs. up to 3 years), and thus by selecting Pixpolar’s tech a CMOS image sensor manufacturer would have been out of the market for a long period of time (up to 2 years) after the competitors had started shipping sensors equipped with Kodak’s tech. Kodak thus actually had a strong negotiating position. On the other hand, the big CMOS image sensor manufacturers would have lost their existing patent protection, since vast piles of patents on present CMOS image sensor tech would have become more or less useless, which would certainly not have pleased the R&D departments (i.e. the infamous not-invented-here effect). Thus, if the big image sensor manufacturers had chosen Pixpolar’s tech, they could have faced competition from totally new players. This is also emphasized by the fact that MIG pixel technology is easier to adapt to standard CMOS logic processes than the existing CMOS image sensor technology.

    TO BE CONTINUED…

    ReplyDelete
  15. Based on the afore said, I would suggest that the big CMOS image sensor manufacturers quit the hide-and-seek game and start joint licensing/cross-licensing negotiations with Pixpolar. This would bear several advantages:
    -the licensing fee would be reasonable since there is not yet a greedy VC on board of Pixpolar
    -the risk of punitive compensations would be avoided in case a licensing agreement is made
    -consumers could be provided with what they really want, namely, with considerably better low light image quality
    -the infrastructure comprising Back-Side Illumination (BSI), high frame rate, High Dynamic Range (HDR), and Optical Image Stabilization (OIS) is already there
    -if desired, the negotiations can be held in complete secrecy ensured by proper NDAs
    -do you really think that you are better off in negotiations if Pixpolar’s patents end up in the hands of one of your big clients or of a big patent troll?

    END

    ReplyDelete
  16. Not so fast Artto.
    The pinned PD is well known from CCDs, so the patent on its application to CMOS has not been challenged in a court. I am not sure it would stand. At least it would be a lot weaker than you think. The second issue is the MIG. It is very similar to the BCMD. Same thing, it remains to be seen whether it would stand in court. Finally, the light sensitivity (conversion factor) is much lower in MIG than in a standard FD despite your claims. I know this, I have been working on that. Perhaps the failure was on a technical basis and not on patents. This is just an excuse.

    ReplyDelete
    Replies
    1. First of all, the patent examiner of Kodak’s patent on the utilization of the pinned photo-diode in CMOS image sensor pixels (“The Holy Grail” patent) knew for sure that the pinned photo-diode was already used in CCDs, since this was clearly mentioned in the patent application. Nevertheless, the transistor sharing patents (“The Holy Trinity”) are strong patents, since there is for sure no prior art standing in the way. Thus, even if the pinned photo-diode patent were not to hold in court, Kodak’s transistor sharing patents (“The Holy Trinity”) would anyhow hold. Since the transistor sharing patents are of fundamental importance to present CMOS image sensors, there is no way around the fact that Kodak has had a key role in the CMOS image sensor business.
      Secondly, MIG image sensors are very different from BCMDs (Texas Instruments) since the principle of operation is totally different (in MIG-based Field Effect Transistors the signal charge and the charge in the channel are of the same type, whereas in the BCMD these charges are of opposite types; besides, bipolar MIG pixels are possible whereas bipolar BCMD pixels are not), and this difference is clearly explained in the MIG patents. On the other hand, plenty of image sensor manufacturers (e.g. Toshiba, Sony, Sharp, ST-Microelectronics) have received patents on the BCMD (aka Internal Gate, IG) principle. Thus, claiming that Pixpolar’s MIG patents would not stand in court due to BCMD patents is about as relevant as claiming that CMOS image sensor patents in general would not hold in court due to CCD patents.
      Thirdly, the MIG sensors have a decent conversion factor (aka conversion gain) for reasons I will explain next. The wisest way to read out the MIG sensors is to use current-mode read-out, which is much faster and enables much lower capacitive cross-talk than source-follower read-out (aka voltage-mode read-out) without sacrificing the conversion gain. The following comparison can be made between the MIG sensor and the DEPFET sensor, which obeys the IG / BCMD principle of operation. With current-mode read-out, 3 electron read noise has been achieved in DEPFET sensors despite surface channel operation and off-chip Correlated Double Sampling (CDS) read-out. The downside of the surface channel operation is that a considerable portion of the read noise stems from interface-generated 1/f noise – one could roughly estimate that the interface-generated 1/f noise contributes up to 2 electrons of the total 3 electron read noise.

      Our present MIG pixel design has been verified with 3D simulations according to an existing CMOS process, and it comprises a truly deep buried channel (a deep buried channel is not really feasible in IG sensors). The benefit of the deep buried channel is that interface-generated 1/f noise can be more or less completely avoided. Due to this fact, due to the lack of interface-generated dark noise, and due to the fact that on-chip CDS read-out will be utilized in the MIG sensor, the MIG sensor enables much lower noise than the DEPFET sensor. On the other hand, the current-mode conversion gain of the DEPFET sensor having the 3 electron read noise is between 300 and 400 pA/e, whereas the current-mode conversion gain of the MIG sensors is, according to 3D simulations, around 1 nA/e, i.e. roughly three times higher than in the DEPFET sensor. Based on these facts, one can deduce that the read noise in our current MIG sensor design is well below three electrons, meaning that there is ample conversion gain.
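      The scaling argument above can be sketched numerically. Note that splitting the 3 e- total noise in quadrature, and assuming the amplifier-limited part scales inversely with conversion gain, are my own modeling assumptions, not figures from the comment itself:

```python
import math

# DEPFET figures quoted in the thread:
total_noise_depfet = 3.0   # e- rms, measured with current-mode read-out
one_over_f_part = 2.0      # e- rms, estimated interface 1/f contribution

# ASSUMPTION: the two components add in quadrature, so the non-1/f part is:
white_part = math.sqrt(total_noise_depfet**2 - one_over_f_part**2)  # ~2.24 e-

gain_depfet = 0.35   # nA/e-, midpoint of the quoted 300-400 pA/e- range
gain_mig = 1.0       # nA/e-, from the 3D simulations

# ASSUMPTION: with the 1/f part suppressed by the deep buried channel, the
# remaining amplifier-limited noise scales down with the gain ratio:
noise_mig_est = white_part * gain_depfet / gain_mig   # ~0.8 e- rms, i.e. well below 3 e-
```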
      In case you are willing to engage in negotiations we would be more than happy to present our pixel design to you. Feel free to contact us any time.

      Delete
  17. The structure of the MIG pixel looks as though it might make it tricky to obtain a high full well capacity, particularly at smaller pixel pitches. What kind of value for this would you expect for pixels from 1.12µm-2µm?

    ReplyDelete
    Replies
    1. START

      The short answer to your question is yes – it is possible to provide a decent Full Well Capacity (FWC) even in small MIG pixels – but FWC is actually not relevant to the most advanced MIG pixel designs (a detailed explanation is provided later in this text). This is naturally unlike present CCD and CMOS Image Sensors (CIS), wherein the size of the FWC is important from the Dynamic Range (DR) and Signal to Noise Ratio (SNR) point of view. It is fair to say that in today’s smallest pixels the FWC is already rather limited, and thus some means other than FWC should be provided in order to improve the high end of the DR scale and the maximum SNR.

      One way to increase the maximum SNR, and SNR in general, is to utilize also information from neighboring pixels in smooth image areas or in image areas comprising smooth stripe-like contrast/color variations. This is, however, not possible in image areas having a lot of contrast and color variation in all directions. Unfortunately, the information of neighboring pixels cannot be used for improving the high end of the DR scale (because of pixel saturation).

      One way to considerably improve the DR is to utilize logarithmic read-out, i.e., to exploit the non-linear signal levels in the saturation regime. In CCDs and CIS the problem with this approach is, however, that it is relatively difficult to apply compared to standard operation in the linear read-out regime, and thus it is typically not exploited. The benefit of the MIG sensors is that logarithmic read-out could be utilized more easily, since it is inherently present (due to the built-in anti-blooming mechanism) and since MIG sensors utilize current-mode read-out (small logarithmic signals can be determined more reliably since there is no capacitive cross-talk). It is difficult to say how the SNR behaves in the logarithmic read-out region, since it depends on how the logarithmic read-out is realized, but it is hereby assumed that it would not improve the maximum SNR of the pixel.
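      A generic lin-log response model illustrates how logarithmic read-out extends the high end of the DR scale; the specific functional form and full-well value below are illustrative assumptions, not Pixpolar's actual transfer curve:

```python
import math

fwc = 5000.0  # electrons, assumed full well of a small pixel

def response(flux):
    """Lin-log pixel response: linear below the full well, logarithmic above it."""
    if flux <= fwc:
        return flux                               # linear regime
    return fwc * (1 + math.log(flux / fwc))       # saturation regime, log compression

# A 100x overexposure still maps to a distinct, finite output level
# instead of clipping at the full well.
```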

      Another effective way to increase the DR considerably while still operating in the linear read-out regime is to apply at least two exposure times per single image/frame. By adjusting a short exposure time according to the bright pixels, so that the bright pixels can be read out before saturation is reached, and by providing at least one additional pixel read-out corresponding to a long exposure time, it is possible to considerably enhance the DR, but it is not possible to increase the maximum SNR. The problem with this multiple exposure time approach is that the SNR is not a continuously increasing function of the photon flux absorbed in the pixel. Instead, at locations corresponding to a change of exposure time the SNR has abrupt discontinuities where it suddenly decreases considerably. Such differences in the SNR levels at different DR regimes are in general the easier to detect, and the more annoying to the human eye, the smaller the FWC. This phenomenon can be mitigated with the aid of image processing algorithms (e.g. by intentionally adding noise), but it should be appreciated that it is a non-trivial task to deform the SNR profile so that it pleases the eye without degrading the SNR at smaller flux levels too much.
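      The SNR discontinuity can be seen in a shot-noise-only model of a two-exposure scheme; the full well and exposure ratio below are assumed example values:

```python
import math

fwc = 5000     # electrons, assumed small-pixel full well
ratio = 16     # long exposure is 16x the short one (assumed)

def snr(flux):
    """Shot-noise-limited SNR vs. photon flux (in electrons per long exposure)."""
    if flux <= fwc:
        return math.sqrt(flux)          # long-exposure (linear) regime
    return math.sqrt(flux / ratio)      # short-exposure regime: 16x fewer electrons

# Just below the switch point: SNR ~ sqrt(5000) ~ 70.7
# Just above it:              SNR ~ sqrt(5000/16) ~ 17.7 -> an abrupt 4x drop,
# which is exactly the discontinuity described above.
```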

      Another problem related to the multiple exposure time method is that if there are very dark areas in the image, the preset long exposure time would need to be so long that blur due to subject or camera movements could easily result. Yet another disadvantage of standard single-chip CIS is that the more exposure times are used, the slower the read-out of the entire image/frame, resulting in additional motion artefacts in all lighting conditions. One should also note that in CCDs it is very difficult to utilize more than 2 exposure times per image/frame.

      TO BE CONTINUED…

      Delete
  18. …CONTINUATION

    Based on the afore said, it can be deduced that with the multiple exposure time method one can achieve a decent DR even with a poor FWC, but the problems are the poor maximum SNR and the shape of the SNR curve. There is actually a simple way to avoid both of these problems, namely, to read out the pixels at a constant read-out rate corresponding to the shortest exposure time utilized in the multiple exposure time method, so that the read-out is performed just before the brightest relevant pixels (not pixels corresponding to the sun or to lamps, which could be imaged with an initial read-out corresponding to a very small exposure time) reach saturation. In this manner the same value for the high end of the DR scale is achieved as in the multiple exposure time method. However, by adding together all the read-out results corresponding to a single pixel, a much higher maximum SNR can be reached than with the multiple exposure time method and, above all, with a completely natural SNR versus absorbed photon flux curve! In other words, in this high read-out rate method the problems of the multiple exposure time method related to the unnatural SNR versus absorbed photon flux curve are completely avoided, and the most intriguing point is that the maximum SNR is independent of the FWC (provided that a fast enough read-out rate can be utilized)! Yet another benefit of the high read-out rate method is that the exposure time of each pixel can be set afterwards independently, according to camera and subject movements, i.e., image blur can be avoided!
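    A shot-noise-only sketch shows why summing the reads makes the maximum SNR independent of the per-read FWC (the numbers are assumed examples; read noise is deliberately ignored here and is addressed further down the thread):

```python
import math

fwc = 5000       # electrons per short exposure (assumed small-pixel full well)
n_reads = 16     # read-outs per frame at the high read-out rate (assumed)

# Each read collects just under the full well; summing the reads off-sensor
# gives a total signal far above what the pixel could ever hold at once:
signal = n_reads * fwc                       # 80,000 e- total
snr_summed = signal / math.sqrt(signal)      # shot-noise SNR of the summed frame

# snr_summed equals sqrt(80,000) ~ 283, i.e. the same as a single pixel with an
# 80,000 e- full well: the maximum SNR is set by the total collected charge,
# not by the per-read FWC.
```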

    It is important to note that the high read-out rate mode of operation cannot be established in CCDs. It is also very difficult to achieve with standard single-chip CIS due to the large number of sequentially read-out pixels (typically one column). This number can be reduced to 1/2 column, 1/3 column, etc., but the smaller it is, the more difficult it is to realize. However, with 3D integrated CIS (separate CIS and read-out chips stacked together) it is possible to achieve high read-out rate operation, since the number of pixels read out sequentially can be made much smaller than the number of pixels in one column (at best, all pixels could be read out simultaneously). An interesting point is also that there are already 3D integrated CIS on the market (Sony Exmor RS CIS).

    It can be deduced from the afore said that in the high read-out rate method the maximum SNR as well as the upper limit of the DR scale are not limited by the FWC provided that a fast enough read-out rate is available, that a completely natural SNR versus photon flux curve is obtained, and that the exposure time of each pixel can be set afterwards in order to avoid image blur. In 3D integrated CIS the only aspects in which the high read-out rate method is inferior to the multiple exposure time method are that the SNR at small signal levels as well as the low limit of the DR scale (and thus the DR) are poorer, which stems from the high CIS read-out rate. The poor DR and the poor low light image quality naturally greatly impair the applicability of the high read-out rate method in 3D integrated CIS.

    TO BE CONTINUED…

    ReplyDelete
  19. …CONTINUATION

    The reason why in CIS a high read-out rate decreases the SNR at small signal levels and worsens the lower limit of the DR scale stems from Destructive Correlated Double Sampling Read-out (DCDSR; utilized by CIS and CCDs). The problem with DCDSR is that the higher the read-out rate, the higher the overall noise due to the accumulation of read noise, i.e., the smallest detectable photon flux depends on the frame rate. The benefit of MIG image sensors featuring Non-Destructive Correlated Double Sampling Read-out (NDCDSR) is that the signal can be read out accurately without destroying it, and thus reset can be performed only after read-outs in which the signal exceeds a certain threshold level. Thus there is no accumulation of read noise (or, to be precise, the accumulation of read noise is greatly reduced and does not affect the overall noise at any signal level), and thus the read-out rate does not affect the overall noise.
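    One plausible way to model the difference (the per-read noise and read count are assumed values; the quadrature summation and averaging models are my own illustration of the mechanism, not figures from the thread):

```python
import math

sigma_read = 2.0   # e- rms read noise per CDS read (assumed)
n_reads = 16       # reads per frame at the high read-out rate (assumed)

# DCDSR (destructive): every read empties the pixel, so a frame is the SUM of
# n reads and the read noise accumulates in quadrature:
noise_dcdsr = math.sqrt(n_reads) * sigma_read     # 8 e- rms -> floor grows with rate

# NDCDSR (non-destructive): the charge stays in the pixel, so a single read of
# the accumulated signal suffices and the floor stays at one read noise;
# averaging the n reads can in principle push it even lower:
noise_ndcdsr = sigma_read                          # 2 e- rms, independent of rate
noise_ndcdsr_avg = sigma_read / math.sqrt(n_reads) # 0.5 e- rms with averaging
```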

    Consequently, in MIG sensors featuring NDCDSR the FWC has no effect on SNR or DR if a high enough read-out rate can be utilized, i.e., if a 3D integrated MIG sensor featuring NDCDSR is utilized. The additional benefits are naturally that a completely natural SNR versus absorbed photon flux curve is obtained and that the exposure time of each pixel can be set afterwards in order to avoid image blur. Thus, if an image sensor manufacturer were to transfer to manufacturing 3D integrated NDCDSR MIG image sensors, the FWC would be relevant only from the read-out rate point of view.

    However, if an image sensor manufacturer were to transfer to manufacturing DCDSR MIG sensors or single-chip NDCDSR MIG sensors, the size of the FWC would naturally be relevant, since the high read-out rate mode of operation could not be utilized. It is important to note at this point that MIG sensors featuring NDCDSR may exhibit three different types of FWC, namely NDCDS FWC, DCDS FWC and Four Transistor (4T) FWC, ordered from smallest to largest in size. MIG sensors featuring DCDSR may exhibit two different types of FWC, namely DCDS FWC and 4T FWC. In NDCDSR MIG sensors one could make a first read-out according to the NDCDS FWC regime and, if the read-out result indicates that the NDCDS FWC regime has been exceeded, perform another read-out according to the 4T FWC regime. In DCDS MIG sensors one could first make the first part of the CDS read-out according to both the DCDS FWC and 4T FWC regimes, then reset the pixel, and then make the second part of the CDS read-out according to both the DCDS FWC and 4T FWC regimes – one would select the read-out result according to the DCDS FWC regime if the signal is below a certain threshold and according to the 4T FWC regime if the signal is above this threshold. Another option would be to decide, after the first part of the CDS read-out according to the DCDS FWC regime is performed, whether to use DCDS MIG or DCDS 4T CIS read-out.
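    The threshold-based regime selection described above can be sketched as follows; the function name, threshold choice and full-well value are hypothetical illustrations, not Pixpolar's actual design:

```python
DCDS_FWC = 5000   # e-, assumed full well of the DCDS regime

def select_result(signal_dcds, signal_4t, threshold=0.9 * DCDS_FWC):
    """Pick the high-gain DCDS-regime read-out for small signals, and fall back
    to the 4T-regime read-out once the DCDS full well is being approached."""
    return signal_dcds if signal_dcds < threshold else signal_4t
```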

    The NDCDS FWC refers to the maximum amount of charge that can be read out in an NDCDS manner, and it does not need to be large since it is only relevant at small signal levels. The DCDS FWC is larger than the NDCDS FWC and corresponds in NDCDS MIG sensors to the case where charge can no longer be transferred successfully in a non-destructive manner into and out of the MIG node. In the DCDS FWC regime the signal charge is first read out while the charge is in the MIG, next the pixel is reset, and finally the signal corresponding to the empty MIG is read out. In the NDCDS FWC and DCDS FWC read-out domains it is important that the signal charges are located in the MIG, which is situated close to the channel of the read-out transistor(s), in order to provide a high charge-to-current conversion gain.

    TO BE CONTINUED…

    ReplyDelete
  20. …CONTINUATION

    The 4T FWC corresponds to the situation in which the signal charges start to spread out into the bulk of the MIG pixel comprising through-silicon trenches. The through-silicon trenches improve the FWC since they provide additional capacitance as well as blooming protection besides the MIG sensor’s inherent vertical anti-blooming mechanism. The charges corresponding to the 4T FWC regime are read out in the standard 4T CIS way by connecting the drain of the reset structure/transistor (acting now as a Floating Diffusion, FD, in a 4T configuration) to the gate of a read-out transistor, which is beneficially shared between neighboring pixels (and/or situated on a separate read-out chip). The read-out procedure in the 4T FWC regime would be the following: first reset the drain of the reset structure/transistor, then disconnect it from the reset voltage, next take a first CDS read-out, then pulse the gate of the MIG pixel’s reset structure/transistor so that the signal charges enter the reset structure’s/transistor’s drain, then return the gate of the reset structure/transistor to its original potential (i.e. ground or preferably a negative potential), and finally take the second CDS read-out. It would be beneficial in the 4T FWC regime for the p-type barrier layer to be non-depleted (which can be facilitated e.g. by applying smaller/suitable potentials to some nodes of transistors belonging to the MIG pixel), but it is not necessary.

    If reference is made to Samsung’s ISOCELL pixel, it is evident based on Pixpolar’s patents and public patent applications that the ISOCELL pixel already resembles a MIG pixel very much (signal charges are stored beneath transistors, a p-type barrier layer, a similar reset structure reaching through the p-type barrier layer, and through-silicon trenches), the differences being naturally that the doping concentrations are not (yet) similar (they are, however, very easy to modify according to the MIG operation principle) and that NDCDSR is not (yet) utilized in the ISOCELL pixel. From the observation that the FWC of the ISOCELL pixel is considerably larger than the FWC of a standard 4T CIS pixel, one can estimate that the 4T FWC of the MIG pixel is roughly equal to the FWC of a standard 4T CIS pixel.

    Besides the different FWCs of the MIG pixel, one should appreciate that MIG pixels also enable logarithmic read-out, which considerably improves the DR at the higher end of the DR scale. According to the afore described facts it is clear that, if desired, MIG pixels also enable a decent FWC, for example if 3D integration were not available. It is also fair to say that Sony and Samsung are already well prepared for a future transition from the 4T CIS pixel architecture to the MIG pixel architecture.

    END

    ReplyDelete
  21. It should further be noted that in MIG sensors featuring DCDS read-out the signal charge can also be read out in a non-destructive non-CDS manner, albeit much less accurately than in the DCDS read-out. This enables non-destructive monitoring of the accumulation of signal charge in the MIG pixel, so that the MIG pixel can be read out in a DCDS manner when the amount of signal charge is large enough. Thus a high maximum SNR and a high DR can be obtained also with MIG pixels featuring DCDS read-out and a small FWC, because the effect of the accumulation of read noise can be mitigated. The problem is, however, that in low light, when the accumulation of signal charge is slow, the images are easily spoiled by image blur due to the lack of accurate CDS read-out data that would facilitate the correction of image blur.

    In cameras featuring Optical Image Stabilization (OIS) and 3D integrated DCDS MIG image sensors it would be possible to remove image blur caused by camera movements by reading out the image sensor in a DCDS manner always before the integrated roll displacement of any pixel becomes too excessive, which is actually possible due to accurate angular velocity sensors and the fast image/frame read-out speed of a 3D integrated image sensor. Image blur caused by subject movements cannot, however, be corrected in this manner (one would need, beside a fast image processor, an additional camera for the detection of subject movements, or one would need to continuously read out in a DCDS manner the pixels corresponding to subjects or any moving objects – both of these approaches being rather impractical). Thus a camera equipped with a 3D integrated DCDS MIG sensor utilizing, beside DCDS, also non-destructive non-CDS read-out would be more or less limited in low light to imaging motionless sceneries.

    ReplyDelete
  22. Yet another important point to note is that in 3D integrated DCDS MIG sensors featuring OIS it is actually also possible to remove subject-induced image blur, even if non-destructive non-CDS read-out is utilized, in case a forced DCDS read-out of the whole matrix is performed at certain preset intervals. In this manner the CDS read-out data necessary for the correction of subject (and camera roll movement) induced image blur can be obtained. Therefore, with 3D integrated DCDS MIG sensors having a small FWC and utilizing non-destructive non-CDS read-outs in between selected or forced DCDS read-outs, it is possible to remove image blur and to acquire a similar or higher maximum SNR and high limit of the DR scale than with traditional sensors featuring DCDS read-out and having a large FWC. In both of these sensors one would obtain the same value for the low limit of the DR scale (assuming that the read noise and dark noise are the same), which would, however, be quite a bit poorer than in a MIG sensor equipped with NDCDS read-out (for further information on the matter please check Pixpolar’s white paper at http://www.pixpolar.com/wp-content/uploads/2013/09/PIXPOLAR_WHITEPAPER_20130929.pdf).

    ReplyDelete
