Wednesday, January 13, 2016

InVisage at CES

InVisage publishes media responses to its demos at CES 2016:


15 comments:

  1. The image from the InVisage sensor looks noisy in this photo. Is it caused by additional noise related to the GS pixel?

  2. It is probably caused by high dark current and DSNU, as has always been suspected for InVisage designs. I wonder how the images would look if you shot outside on a hot day. I also wonder about lag and color rendering.

  3. I saw the InVisage short film and I wasn't particularly impressed by the video quality. Samsung cameras (S6, Note 5, etc.) deliver better-quality video to the naked eye.
    I am happy that Sony has jumped on the bandwagon and will be helping them produce sensors. But I want to see some genuine tests backing their claims that the sensor is superlatively better than present BSI sensors made by Sony (and Samsung), as well as their claims of global shutter and extremely high dynamic range (since Sony sensors already have HDR in video that adds a stop or two of dynamic range).

    Replies
    1. The problem with the InVisage video was the lens and the color correction.. other than that, it showed more quality than "Samsung" cameras.. sorry, that's a fact!

    2. Global shutter is definitely real; there is no reason to suspect it isn't. Higher full well and better saturation characteristics are probably no lie either (single-shot HDR is much better for moving video). Noise is the issue here, and it is highly unlikely to be even close to modern Sony BSI. That would most likely be a death knell for phone use, because so many shots are taken in poor lighting. Much like Foveon, this will shine in good light only, and there may even be a niche market for it, but without phone sales to bankroll development it is unlikely to go mainstream. Still, exciting times.

    3. I also wondered about the noise. People who saw the device told me that their first thought after looking at the noise characteristic was that InVisage has most likely implemented no noise reduction, or only a very poor one.

      I own the S6 and its image screams noise reduction.. and it still has that Japanese video-y look (they created the "video" look). The sensor from InVisage, on the other hand, showed some "cinematic" quality (analog quality).

  4. "The problem with the InVisage video was the lens and the color correction.. other than that, it showed more quality than "Samsung" cameras.. sorry, that's a fact!"
    I saw the making-of video (the 'behind the scenes' video), and they used a much larger lens on the sensor (with an adapter-like contraption). The softness in the video was not due to the lens; it must have been due to an early prototype of the sensor. The coloring and grading were unimpressive, I would agree on that. But I wasn't impressed with most aspects of the image in general, either.

    “Global shutter is definitely real; there is no reason to suspect it isn't. Higher full well and better saturation characteristics are probably no lie either (single-shot HDR is much better for moving video). Noise is the issue here, and it is highly unlikely to be even close to modern Sony BSI. That would most likely be a death knell for phone use, because so many shots are taken in poor lighting. Much like Foveon, this will shine in good light only, and there may even be a niche market for it, but without phone sales to bankroll development it is unlikely to go mainstream. Still, exciting times.”

    The Moto X and a host of other top-end phones offer HDR even in video. The Nexus 5X and 6P have a sensor capable of 300 fps readout (it does 240 fps in the 6P, thanks to the Snapdragon 810). I am guessing that with that readout speed, rolling shutter will be minuscule. The pixel size on that sensor (Sony IMX377) is 1.55 microns, which is also much bigger than the pixels on the quantum dot sensor.
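    A back-of-the-envelope sketch of why readout speed shrinks rolling shutter: the top and bottom rows of a frame are read one full frame-readout time apart, so the worst-case skew is roughly the number of rows times the per-row readout time. The 3000-row frame height and the frame rates below are illustrative assumptions, not measured figures for the IMX377.

```python
# Rolling-shutter skew estimate: rows are read sequentially, so the
# worst-case top-to-bottom skew is rows * (per-row readout time).
# All figures below are illustrative assumptions, not measured values
# for any particular sensor.

def frame_skew_ms(rows: int, row_time_us: float) -> float:
    """Worst-case top-to-bottom readout skew for one frame, in ms."""
    return rows * row_time_us / 1000.0

ROWS = 3000                               # assumed frame height in rows

# A sensor sustaining 300 fps must read a row every 1/(300 * ROWS) s;
# a 30 fps readout takes ten times as long per row.
row_time_300fps = 1e6 / (300 * ROWS)      # microseconds per row
row_time_30fps = 1e6 / (30 * ROWS)

print(f"300 fps readout: {frame_skew_ms(ROWS, row_time_300fps):.1f} ms skew")
print(f" 30 fps readout: {frame_skew_ms(ROWS, row_time_30fps):.1f} ms skew")
```

    Under these assumptions a 300 fps-capable readout reads the whole frame in about 3.3 ms instead of 33 ms, a tenfold reduction in skew on moving subjects.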

    "I own the S6 and its image screams noise reduction.. and it still has that Japanese video-y look (they created the "video" look). The sensor from InVisage, on the other hand, showed some "cinematic" quality (analog quality)."
    I own two Samsung S6s and the image quality is very impressive. In fact, with an app called Cinema 4k (set to 'Flat Profile' and with the bitrate pushed to 200 Mbps), I would say the image is better than most DSLRs that shoot video (not mirrorless, and not 4K). I have compared the dynamic range, and I would say the Samsung S6 captures about 7 stops of dynamic range in video under normal settings, and about 8-9 in HDR mode (where it uses dual exposure, stitching two exposure values together for each frame). With the Cinema 4k app, the Flat Profile Intensity set to High, and Flat Profile set to Dark + Light, I am guessing the dynamic range is more in the range of 9-10 stops, which is the same as most DSLRs. Also, since the bitrate is 200 Mbps at its highest (4 times the usual), there is very little macroblocking, which people sometimes mistake for noise.
    I am planning to shoot a short film with it. I have been a little too busy of late, so maybe in a few weeks. I will post the video with a YouTube link online, and you can see the image quality for yourselves.

    I personally wish that everything InVisage is saying is true, since sensor technology has been terribly stagnant for the last couple of decades, with silicon itself being the biggest problem. Sensors like the Fuji X-Trans have improved image quality and sharpness by moving away from the regular Bayer pattern array, instead of finding a replacement for silicon. So there is an urgency to develop on this front.
    I am guessing that any sensor that does HDR in video can be programmed to expose at two noticeably different levels, and the exposures can be stitched together to create an HDR image that is at least 2-3 stops better (if not more) than a non-HDR image (the same goes for video).
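    The dual-exposure stitching described above can be sketched minimally: take a long and a short exposure of the same scene and, for each pixel, keep the long-exposure value unless it is clipped, in which case substitute the short-exposure value scaled up by the exposure ratio. The pixel values, clip threshold, and 8x exposure ratio below are illustrative assumptions, not any vendor's actual pipeline.

```python
# Minimal dual-exposure HDR merge (assumed 10-bit linear sensor data).
# Each pixel takes the long exposure unless it is saturated, in which
# case the short exposure is scaled up by the exposure ratio to
# recover the clipped highlights.

FULL_SCALE = 1023          # 10-bit sensor output (assumed)
CLIP = 1000                # values above this are treated as saturated

def fuse(long_px: int, short_px: int, ratio: float) -> float:
    """Merge one pixel from a long and a short exposure into a linear
    HDR value, expressed in units of the long exposure."""
    if long_px < CLIP:
        return float(long_px)            # long exposure is still valid
    return float(short_px) * ratio       # recover clipped highlights

long_row  = [120, 512, 1023, 1023]       # clipped highlights at the end
short_row = [15, 64, 300, 900]           # same scene at 1/8 the exposure
hdr_row = [fuse(l, s, ratio=8.0) for l, s in zip(long_row, short_row)]
print(hdr_row)
```

    An 8x exposure ratio extends the highlight range by three stops, which matches the 2-3 stop gain guessed at above; real pipelines also blend around the clip point to avoid visible seams.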

    Replies
    1. "since sensor technology has been terribly stagnant for the last couple of decades" Really? So you consider sensors today to be similar to those available in, say, the 1990 time frame? Also, isn't the goal to achieve what the human eye sees, or more? Silicon QE is not bad for this. It is everything that happens after the electrons are generated that is the problem, and most of that is the existing infrastructure for coding, transmitting, and displaying visual data.

  5. I didn't mean to suggest that no development has taken place, only that silicon is not the best solution.


    "Also, isn't the goal to achieve what the human eye sees or more?"
    A host of sensors, starting with the A7S, show that sensors see a lot more than the human eye: low light, dynamic range (the human eye sees far less than 10 stops at any one moment, though it can adjust quickly to brighter or darker lighting, which is why people assume human eyes have so much dynamic range), and more detail in almost everything else.

    Like I posted above, merely by rearranging the RGB Bayer pattern layout and speeding up sensor readout, so much improvement is noticeable.

  6. To Gossip: you said "silicon is not the best solution". Apparently you have a better solution in mind? I am curious to hear about your solution that is better than silicon. To my opinion, silicon is the best option we have at this moment to fabricate affordable devices for the consumer world. Of course, silicon has its own limitations, but neither do we have a good alternative to silicon that allows us to combine an image sensor and its electronics on the same chip.
    On the other hand, do we have to copy the human eye in all aspects? I am not convinced. The human eye also has its shortcomings (e.g. temperature range, frames per second, radiation hardness), but in some aspects the human eye outperforms any other "solution"; think about power consumption.

  7. I must first say that it is quite difficult to post on this site from a hand-held mobile device (a phablet, in my case). I hope this can be resolved, and that sign-in happens before one starts a comment, so that the back and forth does not cause comments to be deleted or posted incomplete (as has happened with mine). Thanks.

    To Albert:
    "To my opinion, silicon is the best option we have at this moment to fabricate affordable devices for the consumer world."

    "at this moment" and "affordable devices for the consumer world".
    "To my opinion" ("to my knowledge", perhaps, is what you meant). Well, neither of us (you and I) is a scientist or creator of semiconductors, and neither of us has really tested a host of other semiconductors for the job, either.
    Having said that, right now silicon is the only semiconductor used in all photographic sensors, from very low-priced ones to extremely high-priced ones and everything in between.
    The improvement in sensor quality, including signal-to-noise ratio, readout speed, dynamic range, megapixel count, and size reduction, has been exponential in the last few years.
    Also, the quality of photos has improved tremendously as well, with far better processors and, in the case of mobile devices, with SoCs that have photo-specific features for improving mobile image quality. From the looks of things, we have come quite a long way.
    My saying "silicon is not the best solution" doesn't in any way mean that silicon is bad or inappropriate, but that we haven't really made any serious effort to find genuine alternatives to silicon.

    Quantum dots sound like a very good alternative, but I have read that quantum dot sensors will have much shorter shelf lives than silicon sensors.
    Another thing that people keep saying is that mobile phone sensors are nowhere near DSLRs or even high-end point-and-shoot cameras.

    This is mostly true, since point-and-shoots have much better glass, and their sensor sizes keep increasing.

    But we are seeing smartphone sensors getting much bigger and readout speeds increasing drastically. Combine this with some amazing (plastic or super-thin) lenses, better HDR (thanks to better processors), and more RAW features, and we could start seeing the lines between smartphones and lower-end point-and-shoots begin to blur, very soon.

    Replies
    1. Speaking for myself, I have built CCDs in silicon, GaAs, InGaAs, and all sorts of other III-V combinations. There are also many papers on the defect density and fabrication issues associated with other materials that I have not experienced first-hand. Silicon, hands down, is far, far superior to the other materials. Dr. Theuwissen, a well-known detector scientist and engineer, knows this too, as do many other technologists in the image sensor community. In this particular realm, dear Anonymous, I have to say you are apparently clueless, relatively speaking.

    2. Pretty sure Albert knows what he is talking about as well. ;)

      Come on, Anonymous...

  8. Dear Eric, I guess I jumped the gun. I am a filmmaker, among many other things. I am working with a few camera companies right now, in the very early stages. I could post some information after a few months (right now, there may be confidentiality issues). I have read many papers on silicon vs. the competition, and I realise that most of the alternatives don't seem feasible for a variety of reasons.

    No hard feelings. We're just debating here.

    Replies
    1. It's OK. This is a forum frequented by hard-core image sensor technologists, so you just need to be careful when making loose comments. User input is always welcome.


All comments are moderated to avoid spam.