Samsung introduced its 17LPV process at the Samsung Foundry Forum 2021. The process combines a 28nm BEOL with FinFET transistors. One of the main applications of the new process is logic dies for high-resolution, high-speed, HDR CIS:
Out of interest, what would an image from a "beyond human eye" mobile camera be used for?
One can digitally "zoom in" with such a sensor... BUT
i) how often would this be used?
ii) wouldn't external noise sources become the limiting factor? and
iii) how often (i.e., most of the time) would the image be downsampled anyway?
Roger N. Clark used 0.3 arcmin resolution and a 120°×120° FOV to calculate the human eye's resolution as 576 Mpix. I couldn't find any decent specifications, but I am pretty sure that modern telephoto camera modules in smartphones already reach angular resolutions better than 1 arcmin. Super-wide modules with 120° FOV are also state of the art. So, using just two camera modules, the instantaneous human-eye performance can be achieved today; you just need to combine the images from both modules. As I haven't seen anyone doing this, it doesn't seem to be really valuable or in demand.
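To make the arithmetic explicit, here is a quick sanity check of Clark's figure; the telephoto module specs in the second half are assumed for illustration, not taken from any datasheet:

```python
# Back-of-envelope check of Roger N. Clark's 576 Mpix estimate:
# 0.3 arcmin per pixel over a 120 x 120 degree field of view.
ARCMIN_PER_DEG = 60

fov_deg = 120
acuity_arcmin = 0.3  # eye's resolution at the fovea

pixels_per_side = fov_deg * ARCMIN_PER_DEG / acuity_arcmin  # = 24,000
total_pixels = pixels_per_side ** 2
print(f"{total_pixels / 1e6:.0f} Mpix")  # -> 576 Mpix

# Assumed, illustrative telephoto module: a 12 Mpix sensor (~4000 px wide)
# behind a lens with a 23 degree horizontal FOV.
tele_h_pixels = 4000
tele_fov_deg = 23
tele_res_arcmin = tele_fov_deg * ARCMIN_PER_DEG / tele_h_pixels
print(f"{tele_res_arcmin:.2f} arcmin/pixel")  # ~0.35 arcmin, better than 1
```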
Disregarding practical limitations like the diffraction limit, capturing a full super-wide FOV at the angular resolution of a telephoto image would allow you to look around in the image later on. Nowadays this can be done by stitching a panorama. Once again I can't give you citable numbers, but my gut feeling is that panorama images make up less than 0.1% of all captured images, probably because they take more effort to take AND to view. A 600 Mpix sensor would make panorama acquisition easy, but afterwards there is still a pinch/zoom/swipe hell waiting for you.
But let's face it: even if it doesn't make sense, it will sell. Simply because more megapixels = better.
JH
The assertion that the eye is equivalent to a 576 Mpixel sensor ignores the fact that the eye is a foveal sensor, which has its highest resolution only at the center of the field of view. The eye's resolution drops rapidly as you move away from that central macular region. Our peripheral vision is used very differently: it alerts the brain that something needs more attention, and we then look in that direction to get a higher-resolution view.
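As a rough illustration of how much the foveal falloff matters, the sketch below integrates pixel density over the field of view using a simple assumed acuity model, relative acuity = 1/(1 + e/E2) with E2 = 2.5°; the model, the constant, and the circular field approximation are illustrative choices, not measured values:

```python
import numpy as np

# Rough sketch: effective pixel count of a foveated sensor.
# Assumed acuity model: relative acuity = 1 / (1 + e / E2), where e is
# eccentricity in degrees and E2 = 2.5 deg (illustrative constant).
E2 = 2.5
foveal_px_per_deg = 60 / 0.3  # 200 px/deg at the fovea (Clark's 0.3 arcmin)

def acuity(e):
    return 1.0 / (1.0 + e / E2)

# Pixel density (px/deg^2) at eccentricity e, integrated in polar
# coordinates over a circular field of radius 60 deg (approximating
# the 120 x 120 degree FOV).
e = np.linspace(0.0, 60.0, 100_000)
density = (foveal_px_per_deg * acuity(e)) ** 2
effective_px = np.sum(density * 2 * np.pi * e) * (e[1] - e[0])

print(f"uniform 0.3 arcmin: 576 Mpix; foveated: ~{effective_px / 1e6:.1f} Mpix")
```

Under this toy model the eye's "instantaneous" pixel count lands in the low single-digit megapixels, roughly two orders of magnitude below 576 Mpix.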
Additionally, in the eye there is massive data reduction already "in the sensor". The eye/brain "camera" has more in common with an event-driven camera than with a classical camera based on a frame-based sensor.
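For readers unfamiliar with the comparison: an event-driven sensor reports per-pixel brightness changes instead of full frames. A minimal sketch of that idea, simulating events from two conventional frames (the contrast threshold is an arbitrary assumption):

```python
import numpy as np

# Minimal sketch of the event-camera idea: instead of reading out full
# frames, each pixel fires an "event" only when its log-intensity
# changes by more than a contrast threshold C (value chosen arbitrarily).
C = 0.2

def events_between(frame0, frame1, eps=1e-3):
    """Return (y, x, polarity) for pixels whose log-brightness changed."""
    dlog = np.log(frame1 + eps) - np.log(frame0 + eps)
    ys, xs = np.nonzero(np.abs(dlog) > C)
    return ys, xs, np.sign(dlog[ys, xs]).astype(int)

rng = np.random.default_rng(0)
f0 = rng.random((480, 640)).astype(np.float32)
f1 = f0.copy()
f1[200:220, 300:340] *= 2.0  # a small brightening patch in the scene

ys, xs, pol = events_between(f0, f1)
# Only the changed region produces data: the "in-sensor" data reduction.
print(f"{len(ys)} events vs {f0.size} pixels per full frame")
```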
TSMC falls behind Samsung. She is developing N28, but... no customers... means no ability to develop by herself...
TSMC is a her? You never stop learning... I guess TSMC isn't complaining about a lack of customers... nor is Samsung. The comment is a bit pointless.