Pixart and National Chiao Tung University, Taiwan, have published an open-access paper "Fast Imaging in the Dark by using Convolutional Network" by Mian Jhong Chiu, Guo-Zhen Wang, and Jen-Hui Chuang, presented at the 2019 IEEE International Symposium on Circuits and Systems (ISCAS):
"While fast imaging in low-light condition is crucial for surveillance and robot applications, it is still a formidable challenge to resolve the seemingly inevitable high noise level and low photon count issues. A variety of image enhancement methods such as de-blurring and de-noising have been proposed in the past. However, limitations can still be found in these methods under extreme low-light condition. To overcome such difficulty, a learning-based image enhancement approach is proposed in this paper. In order to support the development of learning-based methodology, we collected a new low-lighting dataset (less than 0.1 lux) of raw short-exposure (6.67 ms) images, as well as the corresponding long-exposure reference images. Based on such dataset, we develop a light-weight convolutional network structure which is involved with fewer parameters and has lower computation cost compared with a regular-size network. The presented work is expected to make possible the implementation of more advanced edge devices, and their applications."
I think advancements in denoising have been remarkable over the last few years. In this work, compare Fig. 1 image (c) to the denoised image (d). To me the denoised image (if you enlarge it) looks like a painting, so I suppose it might be an improvement for certain applications and perhaps a negative for others.
This is a denoising algorithm and it cannot be generalized to other applications. I think another algorithm for low-light enhancement, introduced in 2018, gives better results.
The algorithm is called Learning to See in the Dark. It can be found here:
https://github.com/cchen156/Learning-to-See-in-the-Dark
The two main problems of See in the Dark are complexity and generalization. Complexity can be addressed by using a smaller U-Net. For the generalization problem, it is possible to collect a large dataset from different image sensors and train the network on it (or generate a synthetic dataset under different light conditions).
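To make the "smaller U-Net" idea concrete, here is a minimal sketch in PyTorch of a reduced U-Net: two encoder/decoder levels instead of the four or five in full-size variants, and a narrow channel budget. The class name `SmallUNet`, the channel widths, and the 4-channel packed-raw input / 12-channel output convention are illustrative assumptions, not the architecture from either paper.

```python
import torch
import torch.nn as nn

class SmallUNet(nn.Module):
    """Illustrative lightweight U-Net: 2 levels, 16 base channels."""

    def __init__(self, in_ch=4, out_ch=12, base=16):
        super().__init__()
        self.enc1 = self._block(in_ch, base)      # e.g. packed Bayer raw input (assumption)
        self.enc2 = self._block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = self._block(base * 2, base)   # concatenated skip connection
        self.head = nn.Conv2d(base, out_ch, 1)

    @staticmethod
    def _block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

net = SmallUNet()
n_params = sum(p.numel() for p in net.parameters())
y = net(torch.zeros(1, 4, 64, 64))
print(n_params, tuple(y.shape))
```

At 16 base channels this sketch has on the order of tens of thousands of parameters, versus millions for a full-size U-Net, which is the kind of trade-off an edge device would need; accuracy would of course have to be re-validated at this size.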
I believe many of those AI algorithms are creative guesswork in order to resemble something, not necessarily the truth, but something. A fuzzy zone between reality and fiction. I believe there must be users that don't mind a high degree of fiction, and others that don't want it at all, or only to a lesser extent. AI should be optional for the end user.
Those algorithms seem to perform some kind of assisted image synthesis.