Occipital HW Leader Evan Fletcher has done interesting work searching for an optimal CFA pattern among arbitrarily large patterns using an unconstrained machine-learning search. The result might surprise companies proposing better CFA mosaics, such as Fujifilm:
"After training for ~24 hours, the learned color filter array looks quite familiar:
The network seems to have learned something very similar to a RGGB Bayer pattern, complete with 2×2 repetition and the pentile arrangement of the green pixels! This was quite surprising, especially given that there is no spatial constraint on arrangement or repetition in this network design whatsoever.
This optimization appears to have independently confirmed the venerable Bayer pattern as a good choice for a mosaic function – it was fascinating to see a familiar pattern arise from an unconstrained optimization."
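To make the setup concrete, here is a minimal NumPy sketch (not the author's actual network) of how an "unconstrained" CFA can be parameterized: each pixel of a small tile gets free logits over R, G, B, and a softmax turns them into filter responses that mosaic an RGB scene into a single-channel raw image. The tile size, the softmax parameterization, and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learnable CFA: unconstrained logits per pixel of an HxW
# tile, one logit per color channel. Nothing forces Bayer-like structure.
H, W = 4, 4
logits = rng.normal(size=(H, W, 3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

cfa = softmax(logits)  # (H, W, 3); each pixel's weights sum to 1

def mosaic(img, cfa):
    """Tile the CFA over an RGB image; each sensor pixel records a
    weighted sum of the scene's R, G, B at that location."""
    h, w, _ = img.shape
    tiled = np.tile(cfa, (h // cfa.shape[0], w // cfa.shape[1], 1))
    return (img * tiled).sum(axis=-1)

img = rng.uniform(size=(8, 8, 3))   # toy RGB scene
raw = mosaic(img, cfa)
print(raw.shape)  # (8, 8): single-channel "sensor" output
```

In a real training loop the logits would be optimized jointly with a demosaicking network; the surprise in the post is that such free parameters converge toward a Bayer-like arrangement anyway.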
Unfortunately, this result might be meaningless, because the training image set most probably comes from Bayer-pattern image sensors.
Maybe. The author mentions this too, as one of the factors he needs to double-check in the future:
"As the training images were captured using cameras that almost certainly use a Bayer pattern themselves, there is a concern that Bayer artifacts (i.e. reduced resolution in red/blue vs green) may be what is driving the Bayer pattern to arise in the learned color filter array, despite efforts to hide or reduce these Bayer artifacts in the training set.
I may try training with fully synthetic rendered data, which would never involve a Bayer pattern, in an effort to eliminate this possibility."
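The concern is easy to state numerically: in one 2×2 period of an RGGB Bayer mosaic, green is sampled twice as densely as red or blue, so demosaicked training images carry more genuine detail in green. A trivial sketch (layout hard-coded for illustration):

```python
import numpy as np

# One 2x2 period of the RGGB Bayer mosaic.
bayer = np.array([["R", "G"],
                  ["G", "B"]])

# Samples per channel per period: green gets twice the density.
counts = {c: int((bayer == c).sum()) for c in "RGB"}
print(counts)  # {'R': 1, 'G': 2, 'B': 1}
```

If the learned CFA is rewarded for matching images whose green channel is inherently sharper, a green-heavy 2×2 pattern could emerge from the data rather than from any intrinsic optimality, which is exactly what synthetic rendered training data would rule out.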
There is another result suggesting that a more complex CFA might be better for color sensing in noisy environments: http://inf.ufrgs.br/~bhenz/projects/joint_cfa_demosaicing/
The result is a 4×4 CFA with about 8 distinct colors.
They should use a Foveon sensor.
You mean Foveon as a source for training and test pictures, right?
Or capture a scene using a monochrome sensor/camera with several colour filters? That would be the best reference, right? Is there a pixel-size or resolution dependency with respect to crosstalk?
As an owner of a Sigma DP2m with a Foveon sensor, I can say that its output color is heavily regularized and denoised, so it is a very bad source of training data for such algorithms.
In my humble opinion, you would need true hyperspectral data, simulate the spectral bandwidth of the filters, and figure out which filters are best. There was an Israeli startup that claimed it could build a sort of hyperspectral camera using only 4 or 5 filters (they figured this out using principal component analysis). They also checked which set of filters would lead to the optimal color response. I think the company was called hc.vision.
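The PCA idea mentioned above can be sketched as follows. With fully synthetic data (the spectra here are random mixtures of a few Gaussian bumps; a real study would use measured reflectance spectra and real filter transmission curves), the cumulative explained variance of the principal components tells you how few broadband filters could capture most of the spectral information:

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 700, 31)  # 400-700 nm in 10 nm steps

def bump(center, width):
    """Smooth Gaussian spectral component (synthetic stand-in)."""
    return np.exp(-((wavelengths - center) / width) ** 2)

# 200 synthetic spectra built from 4 underlying components.
basis = np.stack([bump(c, 40.0) for c in (450, 520, 580, 650)])
spectra = rng.uniform(size=(200, 4)) @ basis      # (200, 31)

# PCA via SVD on mean-centered data.
X = spectra - spectra.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print(np.cumsum(explained)[:5])
```

Since these toy spectra live in a 4-dimensional subspace by construction, four components recover essentially all the variance; with real-world spectra the same curve would indicate how many filters a reduced "hyperspectral" camera actually needs.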