Comments on Image Sensors World: Pixpolar Patent Case Study (blog by Vladimir Koifman)

Artto Aurola (2014-04-22 09:47):

Yet another important point to note is that in 3D-integrated DCDS MIG sensors featuring OIS it is also possible to remove subject-induced image blur, even when non-destructive non-CDS read-out is used, provided that a forced DCDS read-out of the whole matrix is performed at preset intervals. In this manner the CDS read-out data needed to correct blur induced by subject movement (and by camera roll) can be obtained. Therefore, 3D-integrated DCDS MIG sensors with a small FWC, using non-destructive non-CDS read-outs in between selected or forced DCDS read-outs, can remove image blur and achieve a maximum SNR and an upper DR limit similar to or higher than those of traditional sensors featuring DCDS read-out and a large FWC.
In both of these sensors one would obtain the same value for the lower limit of the DR scale (assuming equal read noise and dark noise), which would, however, be considerably poorer than in a MIG sensor equipped with NDCDS read-out (for further information see Pixpolar's white paper at http://www.pixpolar.com/wp-content/uploads/2013/09/PIXPOLAR_WHITEPAPER_20130929.pdf).

Artto Aurola (2014-04-20 10:59):

It should further be noted that in MIG sensors featuring DCDS read-out the signal charge can also be read out in a non-destructive non-CDS manner, albeit much less accurately than with DCDS. This enables non-destructive monitoring of signal-charge accumulation in the MIG pixel, so that the pixel can be read out in DCDS manner once the amount of signal charge is large enough. Thus a high maximum SNR and a high DR can be obtained even with MIG pixels featuring DCDS read-out and a small FWC, because the accumulation of read noise is mitigated. The problem, however, is that in low light, when signal charge accumulates slowly, images are easily spoiled by blur, since the accurate CDS read-out data that would allow the blur to be corrected is missing.

In cameras featuring Optical Image Stabilization (OIS) and 3D-integrated DCDS MIG image sensors it would be possible to remove blur caused by camera movements by reading out the sensor in DCDS manner whenever the integrated roll displacement of any pixel is about to become excessive; this is feasible thanks to accurate angular-velocity sensors and to the fast image/frame read-out speed of a 3D-integrated image sensor.
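The "monitor non-destructively, read out accurately only when the signal is big enough" scheme can be sketched in a few lines of Python. This is a toy model, not Pixpolar's implementation: the pixel model, the threshold, and the noise figures are invented for illustration.

```python
import random

READ_NOISE_NDR = 2.0   # e- rms, assumed noise of one coarse non-destructive non-CDS read
READ_NOISE_DCDS = 1.0  # e- rms, assumed noise of one accurate DCDS read
THRESHOLD = 50.0       # e-, trigger the accurate read once the signal is "big enough"

def monitor_and_read(photon_rate_e_per_frame, max_frames=100):
    """Integrate charge; poll it non-destructively each frame, and perform the
    accurate DCDS read (which resets the pixel) only when the coarse estimate
    passes THRESHOLD, or when the monitoring window runs out."""
    charge = 0.0
    for frame in range(max_frames):
        charge += photon_rate_e_per_frame                        # integration continues undisturbed
        estimate = charge + random.gauss(0, READ_NOISE_NDR)      # coarse, non-destructive poll
        if estimate >= THRESHOLD:
            return frame + 1, charge + random.gauss(0, READ_NOISE_DCDS)
    # forced DCDS read at the end of the monitoring window
    return max_frames, charge + random.gauss(0, READ_NOISE_DCDS)

frames, signal = monitor_and_read(photon_rate_e_per_frame=5.0)
print(f"DCDS read-out triggered after {frames} frames, signal = {signal:.1f} e-")
```

The key property is that only the final accurate read contributes its noise to the result; the coarse polls merely decide *when* to take it.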
The image blur caused by subject movements cannot, however, be corrected in this manner (one would need, beside a fast image processor, an additional camera for detecting subject movements, or one would need to continuously read out in DCDS manner the pixels covering moving subjects or objects; both approaches are rather impractical). Thus a camera equipped with a 3D-integrated DCDS MIG sensor using non-destructive non-CDS read-out beside DCDS would in low light be more or less limited to imaging motionless scenes.
Artto Aurola (2014-04-15 10:54):

…CONTINUATION

The 4T FWC corresponds to the situation in which the signal charges start to spread out into the bulk of a MIG pixel comprising through-silicon trenches. The trenches improve the FWC since they provide additional capacitance as well as blooming protection beside the MIG sensor's inherent vertical anti-blooming mechanism. Charges in the 4T FWC regime are read out in the standard 4T CIS way, by connecting the drain of the reset structure/transistor (now acting as the Floating Diffusion, FD, of a 4T configuration) to the gate of a read-out transistor, which is beneficially shared between neighboring pixels (and/or situated on a separate read-out chip). The read-out procedure in the 4T FWC regime would be the following: first reset the drain of the reset structure/transistor; disconnect it from the reset voltage; take the first CDS sample; pulse the gate of the MIG pixel's reset structure/transistor so that the signal charges enter the drain; return the gate to its original potential (ground, or preferably a negative potential); and finally take the second CDS sample. In the 4T FWC regime it would be beneficial, though not necessary, for the p-type barrier layer to be non-depleted (which can be arranged e.g. by applying smaller/suitable potentials to some nodes of the transistors belonging to the MIG pixel).

Regarding Samsung's ISOCELL pixel, it is evident from Pixpolar's patents and public patent applications that the ISOCELL pixel already resembles a MIG pixel very closely (signal charges stored beneath transistors, a p-type barrier layer, a similar reset structure reaching through the p-type barrier layer, and through-silicon trenches); the differences are naturally that the doping concentrations are not (yet) similar (they would, however, be very easy to modify according to the MIG operation principle) and that NDCDSR is not (yet) utilized in the ISOCELL pixel. Observing that the FWC of the ISOCELL pixel is considerably larger than that of a standard 4T CIS pixel, one can estimate that the 4T FWC of the MIG pixel is roughly equal to the FWC of a standard 4T CIS pixel.

Beside the different FWCs of the MIG pixel, one should appreciate that MIG pixels also enable logarithmic read-out, which considerably improves the DR at the higher end of the scale. According to the facts described above it is clear that, if desired, MIG pixels can provide a decent FWC, for example if 3D integration were not available. It is also fair to say that Sony and Samsung are already well prepared for a future transition from the 4T CIS pixel architecture to the MIG pixel architecture.

END
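The 4T-regime sequence described above (sample the reset level first, transfer the charge, sample again, subtract) is a standard correlated double sampling difference. A schematic Python sketch, with invented noise numbers and a deliberately simplified pixel model, shows why the order of operations matters: the kTC reset offset is frozen on the FD before both samples and therefore cancels.

```python
import random

KTC_NOISE = 3.0     # e- rms reset (kTC) noise frozen on the FD, assumed value
SAMPLE_NOISE = 1.0  # e- rms per sample, assumed value

def cds_read(signal_charge_e):
    """Schematic 4T-style CDS: the reset level is sampled before charge transfer,
    so the common kTC offset cancels in the difference."""
    reset_offset = random.gauss(0, KTC_NOISE)                      # frozen when FD is disconnected
    sample_reset = reset_offset + random.gauss(0, SAMPLE_NOISE)    # first CDS sample
    # pulse the reset-structure gate: signal charge enters the FD
    sample_signal = reset_offset + signal_charge_e + random.gauss(0, SAMPLE_NOISE)
    return sample_signal - sample_reset                            # kTC offset cancels

print(cds_read(100.0))
```

The residual error is only the two sampling-noise terms in quadrature, not the much larger kTC term.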
Artto Aurola (2014-04-15 10:52):

…CONTINUATION

The reason why in CIS a high read-out rate decreases the SNR at small signal levels and worsens the lower limit of the DR scale stems from Destructive Correlated Double Sampling Read-out (DCDSR, as used in CIS and CCDs). The problem with DCDSR is that the higher the read-out rate, the higher the overall noise due to accumulation of read noise; in other words, the smallest detectable photon flux depends on the frame rate. The benefit of MIG image sensors featuring Non-Destructive Correlated Double Sampling Read-out (NDCDSR) is that the signal can be read out accurately without destroying it, so reset can be performed only after read-outs in which the signal exceeds a certain threshold level. Thus there is no accumulation of read noise (or, to be precise, the accumulation of read noise is greatly reduced and does not affect the overall noise at any signal level), and the read-out rate therefore does not affect the overall noise.

Consequently, in MIG sensors featuring NDCDSR the FWC has no effect on SNR or DR as long as a high enough read-out rate can be used, i.e., when a 3D-integrated MIG sensor featuring NDCDSR is utilized. The additional benefits are naturally that a completely natural SNR versus absorbed-photon-flux curve is obtained and that the exposure time of each pixel can be set afterwards in order to avoid image blur. Thus, if an image sensor manufacturer were to move to manufacturing 3D-integrated NDCDSR MIG image sensors, the FWC would be relevant only from the read-out-rate point of view.

However, if a manufacturer were to move to DCDSR MIG sensors or to single-chip NDCDSR MIG sensors, the size of the FWC would naturally be relevant, since the high-read-out-rate mode of operation could not be utilized.
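The read-noise-accumulation argument above can be made concrete with a back-of-the-envelope calculation (the 1 e- per-read noise is an illustrative number): when N destructive reads must be summed to reconstruct the total signal, their uncorrelated read noise adds in quadrature and grows as sqrt(N), whereas with non-destructive read-out effectively only the final accurate read contributes.

```python
import math

read_noise = 1.0   # e- rms per read, illustrative value

for n_reads in (1, 10, 100, 1000):
    # DCDSR: each read destroys the charge, so N partial reads must be summed;
    # their uncorrelated read noise adds in quadrature -> sqrt(N) growth
    noise_dcds = math.sqrt(n_reads) * read_noise
    # NDCDSR: the charge survives every read, so one accurate read suffices
    noise_ndcds = read_noise
    print(f"N={n_reads:5d}: summed-DCDSR read noise = {noise_dcds:6.2f} e-, "
          f"NDCDSR read noise = {noise_ndcds:.2f} e-")
```

At 1000 reads the destructive scheme carries roughly 31.6 e- of accumulated read noise, which is exactly why the smallest detectable flux in DCDSR depends on the frame rate.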
It is at this point important to note that MIG sensors featuring NDCDSR may exhibit three different types of FWC, namely NDCDS FWC, DCDS FWC, and Four Transistor (4T) FWC, ordered here from smallest to largest. MIG sensors featuring DCDSR may exhibit two types of FWC, namely DCDS FWC and 4T FWC. In NDCDSR MIG sensors one could make a first read-out according to the NDCDS FWC regime and, if the result indicates that the NDCDS FWC has been exceeded, another read-out according to the 4T FWC regime. In DCDS MIG sensors one could first make the first part of the CDS read-out according to both the DCDS FWC and 4T FWC regimes, then reset the pixel, and then make the second part of the CDS read-out according to both regimes; one would select the DCDS-FWC-regime result if the signal is below a certain threshold and the 4T-FWC-regime result if it is above. Another option would be to decide, after the first part of the DCDS-FWC-regime CDS read-out has been performed, whether to use DCDS MIG or DCDS 4T CIS read-out.

The NDCDS FWC refers to the maximum amount of charge that can be read out in NDCDS manner; it does not need to be large, since it is only relevant at small signal levels. The DCDS FWC is larger than the NDCDS FWC and corresponds, in NDCDS MIG sensors, to the point where charge can no longer be transferred successfully in a non-destructive manner into and out of the MIG node. In the DCDS FWC regime the signal charge is first read out while the charge is in the MIG, then the pixel is reset, and finally the signal corresponding to the empty MIG is read out.
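The regime-selection logic described above is a simple threshold decision; a minimal sketch follows, with made-up capacity and threshold values (the real switching point would depend on the actual pixel design).

```python
NDCDS_FWC = 2_000         # e-, hypothetical capacity of the non-destructive regime
REGIME_THRESHOLD = 1_800  # e-, switch with some margin below the NDCDS FWC

def select_readout(first_ndcds_estimate_e):
    """Pick the read-out regime from a first NDCDS-regime read: if the small
    non-destructive well has (nearly) overflowed, fall back to a 4T-regime read."""
    if first_ndcds_estimate_e < REGIME_THRESHOLD:
        return "NDCDS"   # the accurate non-destructive result is valid as-is
    return "4T"          # NDCDS FWC exceeded: take a 4T-regime (destructive) read

print(select_readout(500))
print(select_readout(5000))
```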
In the NDCDS FWC and DCDS FWC read-out domains it is important that the signal charges are located in the MIG, which is situated close to the channel of the read-out transistor(s), in order to provide a high charge-to-current conversion gain.

TO BE CONTINUED…
Artto Aurola (2014-04-15 10:50):

…CONTINUATION

Based on the aforesaid it can be deduced that the multiple-exposure-time method can achieve a decent DR even with a poor FWC, but the problems are a poor maximum SNR and the shape of the SNR curve. There is actually a simple way to avoid both problems: read out the pixels at a constant rate corresponding to the shortest exposure time used in the multiple-exposure-time method, so that each read-out is performed just before the brightest relevant pixels reach saturation (excluding pixels corresponding to the sun or to lamps, which could be imaged with an initial read-out corresponding to a very small exposure time). In this manner the same value for the high end of the DR scale is achieved as with the multiple-exposure-time method. However, by adding together all the read-out results corresponding to a single pixel, a much higher maximum SNR can be reached than with the multiple-exposure-time method, and above all with a completely natural SNR versus absorbed-photon-flux curve! In other words, this high-read-out-rate method completely avoids the unnatural SNR curve of the multiple-exposure-time method, and, most intriguingly, the maximum SNR is independent of the FWC (provided a fast enough read-out rate can be utilized)! Yet another benefit is that the exposure time of each pixel can be set afterwards and independently, according to camera and subject movements, i.e., image blur can be avoided!

It is important to note that the high-read-out-rate mode of operation cannot be established in CCDs. It is also very difficult to achieve with standard single-chip CIS due to the large number of sequentially read-out pixels (typically one column). This number can be reduced to 1/2 column, 1/3 column, etc., but the smaller it is, the more difficult it is to realize. With 3D-integrated CIS (separate CIS and read-out chips stacked together), however, high-read-out-rate operation is possible, since the number of pixels read out sequentially can be made much smaller than the number of pixels in one column (at best, all pixels could be read out simultaneously). An interesting point is that 3D-integrated CIS are already on the market (Sony Exmor RS).

It follows from the above that in the high-read-out-rate method the maximum SNR and the upper limit of the DR scale are not limited by the FWC (provided a fast enough read-out rate is available), that a completely natural SNR versus photon-flux curve is obtained, and that the exposure time of each pixel can be set afterwards in order to avoid image blur. In 3D-integrated CIS the only respects in which the high-read-out-rate method is inferior to the multiple-exposure-time method are the SNR at small signal levels and the lower limit of the DR scale (and thus the DR), which are poorer because of the high CIS read-out rate. The poor DR and poor low-light image quality naturally greatly impair the applicability of the high-read-out-rate method in 3D-integrated CIS.

TO BE CONTINUED…
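A toy simulation illustrates the high-read-out-rate claim (the flux, FWC, and noise numbers are invented): reading each interval just before saturation and summing all reads recovers a total signal many times the FWC, so the maximum SNR is set by the total collected charge rather than by the well capacity, while the accumulated per-read noise is what degrades the low end, as noted in the text for destructive read-out.

```python
import math, random

random.seed(1)
FWC = 1_000        # e-, deliberately small full well
flux = 900         # e- per read interval (just below saturation each time)
n_reads = 50       # constant-rate reads over the exposure
read_noise = 1.0   # e- rms per read

total = 0.0
for _ in range(n_reads):
    collected = min(random.gauss(flux, math.sqrt(flux)), FWC)  # shot noise, clipped at FWC
    total += collected + random.gauss(0, read_noise)           # each read adds read noise

expected = n_reads * flux                                       # 45,000 e- >> FWC
snr = expected / math.sqrt(expected + n_reads * read_noise**2)  # shot + accumulated read noise
print(f"summed signal = {total:.0f} e- (FWC only {FWC}), SNR = {snr:.0f}")
```

With non-destructive read-out the `n_reads * read_noise**2` term would essentially disappear, which is the low-light advantage argued for the NDCDSR MIG sensor.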
Artto Aurola (2014-04-15 10:47):

START

Yes is the short answer to your question of whether it is possible to provide a decent Full Well Capacity (FWC) even in small MIG pixels, but the question is actually not relevant to the most advanced MIG pixel designs (a detailed explanation follows later in this text). This is naturally unlike present CCD and CMOS Image Sensors (CIS), in which the size of the FWC is important from the Dynamic Range (DR) and Signal to Noise Ratio (SNR) point of view. It is fair to say that in today's smallest pixels the FWC is already rather limited, and thus means other than the FWC are needed to improve the upper limit of the DR scale and the maximum SNR.

One way to increase the maximum SNR, and the SNR in general, is to also use information from neighboring pixels in smooth image areas or in areas with smooth stripe-like contrast/color variations. This is, however, not possible in image areas with a lot of contrast and color variation in all directions. Unfortunately, the information from neighboring pixels cannot be used to improve the upper limit of the DR scale (because of pixel saturation).

One way to considerably improve the DR is to utilize logarithmic read-out, i.e., to exploit the non-linear signal levels in the saturation regime. In CCDs and CIS the problem with this approach is that it is relatively difficult to apply compared to standard operation in the linear read-out regime, and thus it is typically not exploited. The benefit of MIG sensors is that logarithmic read-out could be utilized more easily, since it is inherently present (due to the built-in anti-blooming mechanism) and since MIG sensors use current-mode read-out (small logarithmic signals can be determined more reliably because there is no capacitive cross-talk).
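To see why a log region helps the upper end of the DR scale, a worked example with invented numbers: the linear-regime DR is commonly taken as 20·log10(FWC / noise floor), and a logarithmic region that compresses, say, a further 20x of flux on top of saturation adds 20·log10(20) ≈ 26 dB of headroom.

```python
import math

fwc = 5_000        # e-, illustrative small-pixel full well
noise_floor = 2.0  # e- rms combined read + dark noise, illustrative
log_ratio = 20     # assumed extra flux range compressed by the log region

dr_linear_db = 20 * math.log10(fwc / noise_floor)  # linear-regime dynamic range
log_headroom_db = 20 * math.log10(log_ratio)       # extension from the log region
print(f"linear DR = {dr_linear_db:.0f} dB, "
      f"with log region = {dr_linear_db + log_headroom_db:.0f} dB")
```

Note that the extension is at the bright end only; the noise floor, and hence the low end of the scale, is untouched.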
It is difficult to say how the SNR behaves in the logarithmic read-out region, since that depends on how the logarithmic read-out is realized, but it is assumed here that it would not improve the maximum SNR of the pixel.

Another effective way to considerably increase the DR, while still operating in the linear read-out regime, is to apply at least two exposure times per single image/frame. By adjusting a short exposure time according to the bright pixels, so that they can be read out before saturation, and by providing at least one additional read-out corresponding to a long exposure time, it is possible to considerably enhance the DR; it is not, however, possible to increase the maximum SNR. The real problem with this multiple-exposure-time approach is that the SNR is not a continuously increasing function of the photon flux absorbed in the pixel. Instead, at the flux levels where the exposure time changes, the SNR has abrupt discontinuities at which it suddenly decreases considerably. Such differences in SNR levels across DR regimes are, in general, the easier to detect, and the more annoying to the human eye, the smaller the FWC. The phenomenon can be mitigated with image-processing algorithms (e.g. by intentionally adding noise), but it should be appreciated that it is a non-trivial task to deform the SNR profile so that it pleases the eye without degrading the SNR at smaller flux levels too much.

Another problem with the multiple-exposure-time method is that, if the image contains very dark areas, the preset long exposure time would need to be so long that blur due to subject or camera movements could easily result. Yet another disadvantage, in standard single-chip CIS, is that the more exposure times are used, the slower the read-out of the entire image/frame becomes, resulting in additional motion artefacts in all lighting conditions.
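The SNR discontinuity can be quantified with a shot-noise-limited toy model (FWC and exposure ratio are invented numbers): when the flux crosses the point where the long exposure saturates and the 16x-shorter exposure takes over, the collected charge drops by the exposure ratio, so the SNR drops abruptly by 10·log10(16) ≈ 12 dB.

```python
import math

FWC = 5_000   # e-, illustrative full well
ratio = 16    # short exposure is 1/16 of the long one

def snr_db(flux):
    """Shot-noise-limited SNR with two exposure times: the long exposure is
    used until it saturates, after which the shorter exposure takes over."""
    signal = flux if flux <= FWC else flux / ratio       # charge collected by the chosen exposure
    return 20 * math.log10(signal / math.sqrt(signal))   # SNR = sqrt(signal) at the shot limit

for flux in (FWC, FWC + 1):   # one electron of extra flux crosses the switch point
    print(f"flux {flux} e- -> SNR {snr_db(flux):.1f} dB")
```

The smaller the FWC, the lower the SNR on both sides of the step, which is why the step is easier to see in small pixels, as argued above.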
One should also note that in CCDs it is very difficult to utilize more than two exposure times per image/frame.

TO BE CONTINUED…

Anonymous (2014-04-09 15:35):

The structure of the MIG pixel looks as though it might make it tricky to obtain a high full well capacity, particularly at smaller pixel pitches. What kind of value would you expect for pixels from 1.12 µm to 2 µm?

Artto Aurola (2014-04-08 14:20):

First of all, the examiner of Kodak's patent on the utilization of the pinned photo-diode in CMOS image sensor pixels ("The Holy Grail" patent) knew for sure that the pinned photo-diode was already used in CCDs, since this was clearly mentioned in the patent application. The transistor-sharing patents ("The Holy Trinity") are in any case strong patents, since there is certainly no prior art standing in their way. Thus even if the pinned photo-diode patents were not to hold in court, Kodak's transistor-sharing patents ("The Holy Trinity") would. Since the transistor-sharing patents are of fundamental importance to present CMOS image sensors, there is no way around the fact that Kodak has had a key role in the CMOS image sensor business.

Secondly, MIG image sensors are very different from BCMDs (Texas Instruments), since the principle of operation is totally different (in MIG-based Field Effect Transistors the signal charge and the charge in the channel are of the same type, whereas in the BCMD these charges are of opposite types; moreover, bipolar MIG pixels are possible whereas bipolar BCMD pixels are not), and this difference is clearly explained in the MIG patents.
On the other hand, plenty of image sensor manufacturers (e.g. Toshiba, Sony, Sharp, STMicroelectronics) have received patents on the BCMD (aka Internal Gate, IG) principle. Thus the claim that Pixpolar's MIG patents would not stand in court because of BCMD patents is about as relevant as claiming that CMOS image sensor patents in general would not hold in court because of CCD patents.

Thirdly, MIG sensors have a decent conversion factor (aka conversion gain), for reasons I will explain next. The wisest way to read out MIG sensors is current-mode read-out, which is much faster and suffers much less capacitive cross-talk than source-follower (voltage-mode) read-out, without sacrificing conversion gain. The following comparison can be made between the MIG sensor and the DEPFET sensor, which obeys the IG/BCMD principle of operation. With current-mode read-out, DEPFET sensors have achieved 3-electron read noise despite surface-channel operation and off-chip Correlated Double Sampling (CDS) read-out. The downside of surface-channel operation is that a considerable portion of the read noise stems from interface-generated 1/f noise; one could roughly estimate that it contributes up to 2 electrons of the total 3-electron read noise. Our present MIG pixel design has been verified with 3D simulations against an existing CMOS process, and it comprises a truly deep buried channel (a deep buried channel is not really feasible in IG sensors). The benefit of the deep buried channel is that interface-generated 1/f noise can be more or less completely avoided. Due to this, due to the lack of interface-generated dark noise, and due to the fact that on-chip CDS read-out will be utilized, the MIG sensor enables much lower noise than the DEPFET sensor.
On the other hand, the current-mode conversion gain of the DEPFET sensor with the 3-electron read noise is between 300 and 400 pA/e-, whereas the current-mode conversion gain of the MIG sensor is, according to 3D simulations, around 1 nA/e-, i.e., roughly three times higher. Based on these facts one can deduce that the read noise of our current MIG sensor design is well below 3 electrons, meaning that there is ample conversion gain.

In case you are willing to engage in negotiations, we would be more than happy to present our pixel design to you. Feel free to contact us any time.
Anonymous (2014-04-05 03:33):

Not so fast, Artto.
The pinned PD is well known from CCDs, so the patent on its application to CMOS has not been challenged in a court. I am not sure it would stand; at the least it would be a lot weaker than you think. The second issue is the MIG. It is very similar to the BCMD. Same thing: it remains to be seen whether it would stand in court. Finally, the light sensitivity (conversion factor) is much lower in the MIG than in a standard FD, despite your claims. I know this; I have been working on that. Perhaps the failure was on a technical basis and not on patents. This is just an excuse.

Artto Aurola (2014-04-04 07:55):

Based on the aforesaid I would suggest that the big CMOS image sensor manufacturers quit the hide-and-seek game and start joint licensing/cross-licensing negotiations with Pixpolar. This would bear several advantages:
- the licensing fee would be reasonable, since there is not yet a greedy VC on board at Pixpolar
- the risk of punitive compensations would be avoided if a licensing agreement is made
- consumers could be provided with what they really want, namely considerably better low-light image quality
- the infrastructure comprising Back-Side Illumination (BSI), high frame rate, High Dynamic Range (HDR), and Optical Image Stabilization (OIS) is already there
- if desired, the negotiations can be held in complete secrecy, ensured by proper NDAs
- do you really think that you are better off in negotiations if Pixpolar's patents end up in the hands of one of your big clients or of a big patent troll?

END
Artto Aurola (2014-04-04 07:53):

…CONTINUATION

A fourth important point is that Kodak had a large revenue stream coming from three-transistor (3T) and four-transistor (4T) CMOS image sensor pixel patents (the 4T comprises the "Holy Grail" and "Holy Trinity" patents). Had the big CMOS image sensor manufacturers changed to Pixpolar's MIG pixel technology, Kodak would have lost this revenue stream.

A fifth important point is that if Pixpolar's image sensor technology had not been a threat to Kodak, then Kodak should have received a considerable revenue stream from licensing the "Holy Grail" and "Holy Trinity" patents to mobile phones, since mobile phones are by far the biggest market for CMOS image sensors. It is easy to calculate ballpark numbers for the potential licensing revenue of these patents. Compared to the old 3T pixel, the combination of "Holy Grail" and "Holy Trinity" enables roughly 70 % higher resolution and much better low-light image quality, which would have been important selling points and an enhancer of the "sex appeal" of mobile phones (in 2006 resolution was truly king). Compared to the "Holy Grail" alone, the combination enables roughly 130 % higher resolution. Thus it would have been rather easy to claim in both cases at least 10 % of the CMOS image sensor business as revenue. This would have made an annual revenue stream of roughly 500 M$ in 2006 and roughly 1 B$ now. Summing over all the years, one would have got several billion dollars of cumulative revenue from the "Holy Trinity". Thus, if Pixpolar's tech had not been a threat, the price of 65 M$ for the "Holy Grail" and "Holy Trinity" patents would have been simply a felony.
On the other hand, if Pixpolar's technology truly was a threat, then it would even have made sense to offer very low license fees for these patents, which could explain the terribly low price of 65 M$.

The fact that Pixpolar has been isolated probably stems from restrictions in the license agreements of at least the "Holy Grail" and "Holy Trinity" patents, which probably do not comply well with anti-trust laws. This would explain why those patents were sold before the Chapter 11 bankruptcy procedure started: if such restrictions had existed and the court had noticed them, it could have led to large punitive compensations and at worst even to imprisonment. This could also be another explanation for the low price and for the fact that there was just one bidder, since selling patents encumbered with such "toxic" license conditions would have been a problematic undertaking.

The fact that the CMOS image sensor manufacturers chose Kodak instead of Pixpolar is probably due to a couple of reasons. First, the time to market with Kodak's technology was much shorter than with Pixpolar's (1 year vs. up to 3 years), and thus by selecting Pixpolar's tech a manufacturer would have been out of the market for a long period (up to 2 years) after its competitors had started shipping sensors equipped with Kodak's tech. Kodak therefore had a strong negotiating position. Secondly, the big CMOS image sensor manufacturers would have lost their existing patent protection, since vast piles of patents on present CMOS image sensor tech would have become more or less useless, which would certainly not have pleased the R&D departments (the infamous not-invented-here effect). Thus, if the big image sensor manufacturers had chosen Pixpolar's tech, they could have faced competition from totally new players.
This is also emphasized by the fact that MIG pixel technology is easier to adapt to standard CMOS logic processes than the existing CMOS image sensor technology.

TO BE CONTINUED…

Artto Aurola (2014-04-04 07:51):

Many thanks to everybody for the thorough discussion; it helped at least me to sharpen my view on the Kodak topic. Now that the discussion has cooled down, I have prepared a synthesis on the matter. I know that many of you disagree with it, but I will stand behind my view anyhow. First the facts:

- Kodak held the key patents in the CMOS image sensor business. These are, as aptly described in a previous post, "The Holy Grail", US5,625,210, for the pinned photo-diode, and "The Holy Trinity", US6,160,281, US6,107,655, and US6,657,665, for transistor sharing. The importance of "The Holy Grail" is that it enables low noise. The importance of "The Holy Trinity" is that it reduces the minimum number of transistors required per pixel from three to 1.75 or even less, thus considerably improving the resolution.

- The technology described in the "Holy Grail" and "Holy Trinity" patents was already used in 2005 in Canon's EOS cameras, but in 2006 it was not yet utilized in mobile phones sold in areas protected by these patents.

- Pixpolar's Modified Internal Gate (MIG) pixel technology enables, at best, only one transistor per pixel and better low-light image quality than Kodak's tech. In 2006 the first MIG patent became public, after which Pixpolar started discussions with Kodak and the big CMOS image sensor suppliers for mobile phones. We agreed on technical-level meetings with most of them, but soon afterwards all the CMOS image sensor manufacturers decided to back off from discussions with us.
The only technical level discussion we had was with Robert Guidash and his team from Kodak.<br /><br />-After a year or so all the big CMOS image sensor manufacturers supplying for mobile phones started using pinned photo-diode pixels with transistor sharing.<br /><br />-Since 2006 the big CMOS image sensor manufacturers have neither been willing to engage in negotiations with Pixpolar nor making publications nor filing patents on Pixpolar’s technology.<br /><br />I think that it can be stated with certainty that Kodak had licensed transistor sharing, i.e. “The Holy Trinity”, for Canon’s Digital Still Cameras (DSC) and Digital Single Reflex Lens (DSRL) cameras. The reason for this is that Kodak was at that time such a strong player that Canon could not have got away with breaking these patents. It is also important to note that Canon was not a player in mobile phones. Another important point is that according to a previous email: “Willy Shih, VP of Digital at the time you are describing, had a corporate mandate to do everything possible to slow digital adoption and development down so that a smoother transition could occur. It worked with DSCs until camera phones came along and Kodak knew then that the convergence was out of their control.”<br /><br />A third important point is that when we discussed with a Kodak’s business development person he said that special emphasis was put in business development for establishing machines to be placed e.g. in shopping malls so that people could come with their DSC or DSLR memory cards, choose the images they want to develop at the machine, go shopping, and after shopping is done pick up the developed photos from the machine. According to this and the second point it is obvious to conclude that the DSCs or DSLRs were actually not Kodak’s true enemies but instead the mobile phone cameras due to mobile phones’ instant ability to share photos. Thus it is not a wonder if Kodak had licensed its “Crown Jewels”, i.e. 
the “Holy Grail” and “Holy Trinity”, only to DSCs and DSLRs, since people were much more likely to develop photos taken with DSCs or DSLRs than with mobile phones. By keeping the image quality of DSCs and DSLRs superior to that of mobile phones through selective licensing of the “Holy Grail” and “Holy Trinity” patents, people would stick to their DSCs and DSLRs longer, thus enabling a “smoother transition” and the ability to “milk the cash cow” for a longer period of time.<br /><br />TO BE CONTINUED…Artto Aurolanoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-17532273115281925032014-04-01T21:37:01.469+03:002014-04-01T21:37:01.469+03:00As you probably know patents are licensed separate...As you probably know, patents are licensed separately to different business areas such as Digital Still Cameras (DSC), Digital Single Lens Reflex (DSLR) cameras, and mobile phones. The fact that in 2005 Canon had used the combination of pinned photo-diode and transistor sharing in DSC or DSLR cameras does not indicate that Kodak had also licensed transistor sharing to mobile phones. Besides, Canon wasn't a player in mobile phones.Artto Aurolanoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-28298323255492925672014-04-01T21:20:45.045+03:002014-04-01T21:20:45.045+03:00“We've seen tons of "breakingthrough"...“We've seen tons of "breakthrough" ideas almost every year”.<br /><br />In image sensors there are four basic known ways to read out integrated charge:<br /><br />1) External gate read-out (channel affected by charge present on an external gate). This is utilized in present CCDs and CMOS image sensors and was first invented toward the end of the 1960s.<br />2) Internal gate read-out (channel affected by charge located inside the semiconductor; the charge inside the semiconductor and in the channel are of the opposite type). It was first mentioned in the 1970s in conjunction with CCDs. 
Later it was used in active pixel sensors such as BCMD and DEPFET sensors.<br />3) Base read-out in bipolar junction transistors (charge on the base affects the emitter current or potential). As far as I know this stems from the 1980s.<br />4) Modified Internal Gate (MIG) read-out (channel or base affected by charge located inside the semiconductor; the charge inside the semiconductor and in the channel are of the same type). Invented in 2004.<br /><br />Pixpolar’s MIG read-out is the fourth fundamental way to read out integrated charge in image sensors, invented around 30 years after the previous one. In addition, Pixpolar’s image sensor read-out technology provides the best Signal-to-Noise Ratio (SNR) and thus superior low light image quality. So it is not just one of tons of “breakthrough” ideas appearing almost every year. The reasons behind the best SNR are:<br /><br />1) Lowest dark noise (no interface generated dark noise)<br />2) Lowest read noise (no interface generated 1/f noise)<br />3) Non-Destructive Correlated Double Sampling (NDCDS; no accumulation of read noise)<br />4) No amplification noise<br />5) Very high quantum efficiency (fully depleted back-side illuminated pixel with 100 % fill-factor)<br />6) No blooming, no smear, very low cross-talk (fully depleted, inherent vertical anti-blooming structure)<br /><br />In addition, Pixpolar’s MIG technology offers even better SNR in image sensors based on semiconductor materials other than silicon due to the following facts:<br />-the interface quality is considerably poorer in semiconductor materials other than silicon<br />-unlike other image sensor technologies, Pixpolar’s MIG technology is not affected by the interface quality<br />Thus in the future the night vision sensors attached to your Oculus or Google Glasses will be MIG technology based Silicon-Germanium CMOS image sensors.<br />Artto 
Aurolanoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-35657837098432552282014-04-01T19:30:13.451+03:002014-04-01T19:30:13.451+03:00Mobile phones maybe not before 2006, but Canon was...Mobile phones maybe not before 2006, but Canon was shipping EOS cameras with pinned photo-diodes and transistor sharing back in 2005.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-52662019277048114192014-04-01T16:12:08.153+03:002014-04-01T16:12:08.153+03:00I would appreciate a lot if you could give answers...I would appreciate it a lot if you could answer these questions.<br /><br />1) The key 4T patents are US5,625,210; US6,160,281; US6,107,655; and US6,657,665, which according to the USPTO database are owned by OVT. You are, however, suggesting that Kodak kept these patents (and perhaps sold them to the super-consortium?). Could you please explain this contradiction?<br /><br />2) You are suggesting that Kodak sold the patents to OVT after they went belly up. According to information in an earlier post the patents were sold to OVT on 31.3.2011, but the Chapter 11 procedure started only on 19.1.2012. There also seems to be a contradiction regarding this matter - could you please explain?<br /><br />3) I assume that when you did your job you had access to material that Kodak handed out to you, i.e., there was no court order that mandated Kodak to provide you with the entire material related to all patents and their corresponding license agreements - is this correct?Artto Aurolanoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-20551062055865385152014-04-01T14:18:56.644+03:002014-04-01T14:18:56.644+03:00As I mentioned already above, my point is that in ...As I mentioned already above, my point is that in 2006 transistor sharing was not utilized in mobile phones in areas covered by the transistor sharing patents. 
The probable reason for this is that the image sensor manufacturers did not have the necessary Kodak license for transistor sharing.<br /><br />If you disagree, please point me to a mobile phone model that was sold in the US in 2006 and that was equipped with a camera utilizing an image sensor featuring a pinned photo-diode AND transistor sharing and that was not from Kodak.<br /><br />Artto Aurolanoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-37427622080045998082014-04-01T14:12:38.401+03:002014-04-01T14:12:38.401+03:00This about scientific papers at ISSCC 2004 – of co...Regarding the scientific papers at ISSCC 2004 – of course the image sensor manufacturers will analyze, research, and patent on their competitors’ technologies, as has been the case e.g. with CCDs, BCMD, the pinned photodiode, and transistor sharing (Pixpolar being the exception though). In this way they gain a foothold in potential cross-licensing negotiations.<br /><br />The beauty of transistor sharing is that it can be done at the mask level - changes to the process are not mandatory. It could have been that an image sensor manufacturer had manufactured image sensors featuring transistor sharing in areas outside the patent cover or sold such image sensors to niche markets in areas under patent cover. My point is, however, that in 2006 transistor sharing was not utilized in mobile phones in areas covered by the transistor sharing patents. 
The probable reason for this is that the image sensor manufacturers did not have the necessary Kodak license for transistor sharing.<br /><br />If you disagree, please point me to a mobile phone model that was sold in the US in 2006 and that was equipped with a camera utilizing an image sensor featuring a pinned photo-diode and transistor sharing and that was not from Kodak.<br /><br />Artto Aurolanoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-60867050657554457082014-04-01T13:19:27.779+03:002014-04-01T13:19:27.779+03:00Absolutely, the Moto part was only one of many pro...Absolutely, the Moto part was only one of many problems. Kodak is a classic case of a company holding on way too long to one business paradigm and stifling internal innovation. I will say that the image sensor group at Kodak seemed quite open to CMOS image sensors, with Tom Lee being the thought leader there. George Fisher, as a new CEO, was also quite supportive of the shift to digital imaging. The layers in between were the problem. Eric R Fossumhttps://www.blogger.com/profile/09740612324630105312noreply@blogger.comtag:blogger.com,1999:blog-19092890.post-57938524246742832702014-04-01T13:08:06.649+03:002014-04-01T13:08:06.649+03:00I never heard numbers this high in my discussions ...I never heard numbers this high in my discussions with my clients. From a valuation point of view, the patents were not enabling for the existing big players, and the main concern was that they would fall into the hands of an NPE. The big guys were already playing just fine in the field, and had enough of their own patents to countersue if need be. OVTI did everybody a favor by keeping key sensor technology patents out of the hands of the NPEs. If a new deep-pocketed company wanted to get into image sensors (e.g. a Google), then maybe the Kodak portfolio would be enabling to that new line of business. That is the only way I see the valuation being as high as I recall, much less as high as you say. 
I also heard Kodak was intransigent in negotiation, at least initially, even at the lower numbers I recall. I don't think Kodak held back any key patents in image sensor technology - they all seemed to be there in the portfolio I looked at, and the "key" patents cited above are all now assigned to OVTI. <br />I only saw a small slice of the patent sale, and it sounds like you saw a much bigger picture. Still, I can't reconcile the $ or content you write about with my recollection.<br />Someday an insider story would be quite interesting as a case study.Eric R Fossumhttps://www.blogger.com/profile/09740612324630105312noreply@blogger.comtag:blogger.com,1999:blog-19092890.post-26900665147946731212014-04-01T08:54:49.226+03:002014-04-01T08:54:49.226+03:00This article seems to be saying that Panasonic, Ca...This article seems to be saying that Panasonic, Canon, and Sony each had PPD shared pixel processes in 2003.<br />http://www.eetimes.com/document.asp?doc_id=1203368Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-8166061142519025612014-04-01T03:49:02.799+03:002014-04-01T03:49:02.799+03:00Eric, It was not just George Fisher and moto that ...Eric, it was not just George Fisher and Moto that started the fall within Kodak's sensor business. Kodak's demise was due to the fact that they had a bloated upper and middle management that was entrenched in the legacy film business, milking the cash cow until their pensions kicked in. This all-powerful group hated the technologists within Kodak. It was an all-out internal war at Kodak during those times. Willy Shih, VP of Digital at the time you are describing, had a corporate mandate to do everything possible to slow digital adoption and development down so that a smoother transition could occur. It worked with DSCs until camera phones came along and Kodak knew then that the convergence was out of their control. I provided definitive proof of this at the time through a blog. 
The problem was exactly as you describe: other competitors moved ahead of Kodak, and ODMs and developers were no longer too concerned about circumventing Kodak IP.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-73552884063342138622014-04-01T03:27:36.439+03:002014-04-01T03:27:36.439+03:00I spent three years valuing Kodak patents-in-suit ...I spent three years valuing Kodak patents-in-suit and their many portfolio groups, this one included, for institutional investors, manufacturers looking to circumvent, and three of the largest OEMs on the planet, who have interests in capture IP. Much of this portfolio is based on the 3T pixel. Just before Kodak went Chapter 11, the portfolio OVTI got was valued at about $1.2-$1.4B. Kodak was trying to get $2B with no takers. The three largest suitors who were looking at this hired me to provide a timetable for when Kodak would run out of cash. The idea was then to get it amongst themselves as defensive patents only. Kodak put them up for sale, but no one would touch it due to the fact that they most likely would not own them outright once the court deemed the sale post facto. So this particular portfolio was grossly devalued, Kodak then went belly up, and the group of patents was once again put on the market, this time managed by the court, and OVTI picked it up. OVTI did not get the core group of patents that are more relevant to enforcement. Kodak kept them. The portfolio OVTI owns has defensive value but is not part of the core patents Kodak still owns that are based on the image capture process, the Bayer pattern, the 4T pixel, and some BSI.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-88264109205115793652014-04-01T02:36:13.509+03:002014-04-01T02:36:13.509+03:00I think you are quite unaware of a lot of things t...I think you are quite unaware of a lot of things that were going on in the imaging world at the time. In 2006 there were lots of sensors using pinned diodes in mobile (nearly all?). 
Kodak had been struggling and failing in the digital world for years before that. <br /><br />I don't think Pixpolar had a significant effect on any part of the mobile industry or Kodak. As the presentation says, Pixpolar lacked industry experts.<br /><br />Don't confuse correlation with causation.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-19092890.post-47608538455478555142014-03-31T23:51:35.832+03:002014-03-31T23:51:35.832+03:00Hi Yang, in this link http://www.youtube.com/watch...Hi Yang, at this link http://www.youtube.com/watch?v=6748wInVd7E&feature=youtu.be there is a video describing the differences between Pixpolar's Modified Internal Gate (MIG) technology and Internal Gate (IG) technology, which is also known e.g. as CMD, BCMD, and DEPFET. You can also find the video on our blog at www.pixpolar.com/blog.Artto Aurolanoreply@blogger.com
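The NDCDS claim that recurs throughout this thread (no accumulation of read noise when the same charge packet is sampled non-destructively) can be illustrated with a rough back-of-the-envelope calculation. This is only a sketch: the electron counts and noise figures below are assumed illustrative values, not Pixpolar data, and it models just the standard quadrature sum of shot noise, dark noise, and read noise, with the effective read noise falling as the square root of the number of non-destructive samples averaged.

```python
import math

def snr(signal_e, read_noise_e, dark_e=0.0):
    """SNR in electron units: shot noise, dark noise, and read noise
    are assumed independent and added in quadrature."""
    total_noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return signal_e / total_noise

signal = 20.0      # electrons collected in a low-light exposure (assumed)
read_noise = 3.0   # electrons RMS for a single read (assumed)
n_reads = 16       # non-destructive samples of the SAME charge packet

# Single destructive read: the full read noise applies once.
snr_single = snr(signal, read_noise)

# Averaging n_reads non-destructive samples reduces the effective read
# noise by sqrt(n_reads); the shot noise of the signal is unchanged.
snr_ndcds = snr(signal, read_noise / math.sqrt(n_reads))

print(f"single read: SNR = {snr_single:.2f}")
print(f"{n_reads} ND reads: SNR = {snr_ndcds:.2f}")
```

At larger signals the shot noise dominates and the two cases converge, which is consistent with the thread's point that the benefit shows up mainly in low light and at the low end of the dynamic range.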