So they’re using a classifier to identify sails and water, and then use prior knowledge to reconstruct the spectra of sails and water, in order to use those spectra to classify water as water and sail as sail? But they knew that to start with! This makes no sense as a classifier, since they need a classifier to even get started. And, of course, brick-wall filters yield great SNR, but you don’t get any color without already knowing what you’re looking at. But if you know what you’re looking at, why do you even care? In this line of thought the Macbeth chart demo is grossly misleading: the only way the colors could be reconstructed is that they trained their system on Macbeth charts, so they know which color patch is where. Hence, this isn’t image capturing but computer graphics generation. The only realistic use case I could imagine is if the classifier works fine without actual color/spectrum information and the user demands some low-quality picture in RGB. But why wouldn’t they just accept monochrome?
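To make the circularity concrete, here is a toy sketch of the pipeline as I read it (the class labels and stored colors are invented placeholders, nothing from their paper): if "reconstruction" is classify-then-look-up, the output color is whatever was put in the table, not a measurement.

```python
import numpy as np

# Toy illustration of the circularity: if "reconstruction" is just
# classify-then-look-up, the output color carries no new measurement.
# The labels and stored values below are invented placeholders.
KNOWN_COLORS = {
    "sail":  np.array([0.9, 0.9, 0.9]),  # stored "white sail" RGB
    "water": np.array([0.1, 0.3, 0.6]),  # stored "sea water" RGB
}

def reconstruct_color(pixel_class: str) -> np.ndarray:
    # No spectral measurement is consulted here at all; the answer
    # is whatever prior was put into the table.
    return KNOWN_COLORS[pixel_class]

print(reconstruct_color("water"))  # -> the prior, by construction
```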
A major hurdle for all non-Bayer sensing is the back-end image processing. If your sensor isn't Bayer, you incur large penalties in processing time, algorithm complexity, and power, and the advantages of a new technology have to overcome them all. E.g. stacked-pixel sensors (Foveon), Pelican, RGBC/RGBW/RWWB, multispectral, convolutional imaging, wavefront coding, all plenoptic imaging, etc. The costs may seem tolerable for still images, where processing time is less important, but video and live preview can be serious deal-breakers.
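For reference, here is what the entrenched Bayer back-end amounts to at its simplest: a minimal bilinear demosaic sketch (a toy, not a production ISP; the kernels are the textbook ones for an RGGB mosaic).

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
    h, w = raw.shape
    # Sampling masks for the RGGB pattern: R at (even, even),
    # B at (odd, odd), G everywhere else.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Textbook bilinear interpolation kernels.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    # Fill in each channel from its sparse samples with one small convolution.
    g = convolve(raw * g_mask, k_g, mode="mirror")
    r = convolve(raw * r_mask, k_rb, mode="mirror")
    b = convolve(raw * b_mask, k_rb, mode="mirror")
    return np.stack([r, g, b], axis=-1)

raw = np.random.rand(8, 8)    # stand-in for raw sensor data
rgb = demosaic_bilinear(raw)  # shape (8, 8, 3)
```

The point is that this entire step is a pair of fixed 3x3 convolutions; a novel CFA or computational-imaging scheme has to replace it with its own reconstruction, often a much heavier inverse problem, and every downstream ISP block tuned for Bayer has to be revalidated.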
When converting non-RGB to RGB, you need additional gain to ensure color accuracy, which increases noise. So the final SNR may even be worse, although the sensitivity is better.
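To put a number on that: for independent pixel noise, each output channel's noise scales with the root-sum-square of the corresponding row of the color conversion matrix. A minimal sketch, with a made-up matrix standing in for a real non-RGB-to-sRGB CCM:

```python
import numpy as np

# Hypothetical color correction matrix (CCM) mapping a non-RGB capture
# to sRGB. The values are placeholders chosen only to show the effect of
# large off-diagonal terms; they are not any vendor's real CCM.
ccm = np.array([
    [ 1.8, -0.5, -0.3],
    [-0.4,  1.6, -0.2],
    [-0.3, -0.6,  1.9],
])

# For independent, equal-variance noise on the input channels,
# the output noise in channel i scales with the L2 norm of row i.
noise_gain = np.sqrt((ccm ** 2).sum(axis=1))
print("per-channel noise amplification:", noise_gain)

# A sensor already in the target space (identity CCM) has gain 1.0,
# so any row longer than unit norm trades SNR for color accuracy.
print("identity baseline:", np.sqrt((np.eye(3) ** 2).sum(axis=1)))
```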
I hope Voyage 81 comes out better than Planet 82...
Research conducted by Brian Keelan at Aptina showed no degradation of color reproduction from RYYB.
Is that a published work? If yes, could you please share the link here. Thanks.