Sunday, May 19, 2019

The paper "Zoom To Learn, Learn To Zoom" by Xuaner Cecilia Zhang, Qifeng Chen, Ren Ng, and Vladlen Koltun (UC Berkeley, HKUST, and Intel Labs) claims a significant improvement over earlier digital zoom algorithms:

"This paper shows that when applying machine learning to digital zoom for photography, it is beneficial to use real, RAW sensor data for training. Existing learning-based super-resolution methods do not use real sensor data, instead operating on RGB images. In practice, these approaches result in loss of detail and accuracy in their digitally zoomed output when zooming in on distant image regions. We also show that synthesizing sensor data by resampling high-resolution RGB images is an oversimplified approximation of real sensor data and noise, resulting in worse image quality. The key barrier to using real sensor data for training is that ground truth high-resolution imagery is missing. We show how to obtain the ground-truth data with optically zoomed images and contribute a dataset, SR-RAW, for real-world computational zoom. We use SR-RAW to train a deep network with a novel contextual bilateral loss (CoBi) that delivers critical robustness to mild misalignment in input-output image pairs. The trained network achieves state-of-the-art performance in 4X and 8X computational zoom."
