Imaging Resource: the new Pentax K-3 II DSLR features Ricoh's pixel shift resolution enhancement, first presented at CP+ a month ago, which removes the need for Bayer demosaicing.
The Pentax IS actuator shifts the sensor by 1 pixel in 4 directions. "The result is an image which has full color information at every pixel location -- and thus improved resolution and a greater resistance to false color artifacts -- but only a relatively modest increase in file size...
As an added bonus, images shot in the Pixel Shift Resolution mode should also have a cleaner, tighter noise pattern. The reason for this is twofold. First, since multiple exposures are involved, noise can be averaged out across those exposures. Secondly, in a Bayer-filtered sensor, two out of three colors at each pixel location must be interpolated (read: guessed) from the values of surrounding pixels. When that happens, noise from adjacent pixels is likewise spread across their neighbors, resulting in a less film-like and blotchier, more objectionable noise pattern. With full color information at each pixel, a Pixel Shift Resolution shot's high ISO grain pattern is finer, and we're guessing easier to clean up post-capture, too."
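To make the sampling argument concrete, here is a minimal sketch (not Pentax's actual pipeline) showing why four one-pixel shifts of an RGGB Bayer mosaic let every pixel location see all three color channels, so no interpolation is needed:

```python
import numpy as np

# 2x2 RGGB Bayer tile; the full mosaic repeats this pattern.
cfa = np.array([["R", "G"],
                ["G", "B"]])

# The four one-pixel sensor moves: original, right, down, diagonal.
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Colors sampled at one fixed scene location (row 0, col 0) across the
# four shifted exposures -- the tile wraps, so modulo 2 indexes it.
sampled = {cfa[(0 + dy) % 2, (0 + dx) % 2] for dy, dx in shifts}
print(sampled)  # all three channels are covered
```

Since every location receives one R, one B, and two G samples, no value has to be guessed from neighbors.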
A Pentax Youtube video calls this mode "Pixel Shift Resolution". It is somewhat similar to a feature that appeared in the previous Pentax DSLR model, but that one used a high-frequency 500Hz vibration, whereas the new mode is said to provide higher resolution for static objects:
The pixel shift resolution mode and the anti-aliasing filter simulator are two separate and different modes.
The anti-aliasing filter simulator was introduced with the Pentax K-3 in October 2013. This feature works within one single exposure. Its purpose is to reduce color moire patterns with simple blurring. It can be turned on and off, contrary to traditional anti-aliasing filters, which are physical blurring filters on top of the sensor.
The brand new successor K-3 II that was launched yesterday features both modes, but they can't be used simultaneously. The new pixel shift resolution mode uses four exposures with the sensor moved one pixel between exposures. The purpose is to get full color information in every pixel of the merged file output.
Yes, you are right. Pentax has published a new video a couple of hours ago showing the resolution enhancement mode. I've updated the Youtube links in the post.
I guess this approach would have problems if there are moving objects in the image while the four captures happen.
Yes, as shown in the E-M5 II samples.
Would seem to have GREAT possibilities for astrophotography, but (alas) it is not usable in "B" shutter speed mode (so the maximum shutter speed is 30 seconds). However, if I'm reading this right, that would really be 30 seconds at each pixel position (making it an effective 2-minute exposure, as far as photons are concerned). Having the camera set to record in RAW means getting a 120MB RAW file out of each 30-second exposure on a K-3 II. I have not tried this yet, but combined with STACKING SOFTWARE, like DeepSkyStacker, this could be a huge advance.
Well, Hasselblad has used a shifted sensor for a long time in their Multi-Shot models.
http://www.hasselblad.com/medium-format/h5d-multi-shot
You are right, shifting the sensor is not new (patent of Reimar Lenz). But if we were only interested in technologies that are new, this blog would have been MUCH shorter. On the other hand, it is also very stimulating to see older principles implemented in new products, so why not?!
The pixel shift resolution shown in the picture is incorrect.
It must be a 1/2 pixel shift and not 1 pixel, otherwise there will be no resolution increase.
That's not correct. Remember, R and B have only 1/4 of the area resolution that the Mp number indicates in the Bayer image, and G has 1/2 the area resolution. Shifting the sensor as described increases the area resolution of R and B by 4 times. G gets double area resolution with double sampling.
The practical result is lower noise and higher DR (4-exposure averaging, and less proneness to burning out channels because every exposure is 75% shorter than a comparable single exposure), higher resolution in resolution tests, and elimination of color/monochrome moiré in non-moving scenes. Moving objects will get severe moiré artifacts.
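The noise-averaging part of this claim is easy to check numerically. A minimal sketch (synthetic data, Gaussian noise only, not a full sensor model): averaging four exposures should cut the noise standard deviation by roughly sqrt(4) = 2.

```python
import numpy as np

# Four synthetic exposures of a flat gray patch, each with
# independent Gaussian noise of standard deviation 4 DN.
rng = np.random.default_rng(0)
signal = 100.0
exposures = signal + rng.normal(0.0, 4.0, size=(4, 100_000))

# Merging by averaging the four exposures:
merged = exposures.mean(axis=0)
print(exposures[0].std())  # ~4.0, single-exposure noise
print(merged.std())        # ~2.0, halved by averaging four frames
```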
I remember Apple has patented this method
I think they were using the image stabilisation system (which was optical) to do the shifts, though.
Yes and no. You will increase resolution but not pixel count (unless 6-shot).
In single-shot mode, each pixel is an estimate (demosaic), as it only has one out of four channels. This induces blur. In 4-shot mode, each pixel gets "true" values.
The level of detail is increased.
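A minimal one-dimensional sketch of why the demosaiced estimate blurs detail (toy example, not a real demosaic algorithm): red is only sampled at every other position in a Bayer mosaic, so the missing values must be interpolated, and at a sharp edge the interpolated value is plainly wrong, while a pixel-shifted capture would measure the true value directly.

```python
import numpy as np

# True red values along one row: a hard black-to-white edge.
true_red = np.array([0, 0, 0, 0, 255, 255, 255, 255], dtype=float)

# Bayer-like sampling: red is only captured at every other column.
cols = np.arange(0, 8, 2)
sampled = true_red[::2]

# Linear interpolation stands in for the demosaic estimate.
interp = np.interp(np.arange(8), cols, sampled)
print(interp[3])  # estimate is 127.5, but the true value is 0: edge blurred
```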
What about doing multiple exposures and stitching them together exactly as they happen, using the motion itself? It might need something a bit more complicated than a Bayer pattern to pick up the exact movement, through correlation perhaps?