Monday, April 11, 2016

Lytro Cinema Camera Features 755MP, up to 300fps Sensor

BusinessWire: Lytro introduces "Lytro Cinema, the world’s first Light Field solution for film and television."

“Lytro Cinema defies traditional physics of on-set capture, allowing filmmakers to capture shots that have been impossible up until now,” said Jon Karafin, Head of Light Field Video at Lytro. “Because of the rich data set and depth information, we’re able to virtualize creative camera controls, meaning that decisions that have traditionally been made on set, like focus position and depth of field, can now be made computationally. We’re on the cutting edge of what’s possible in film production.”

With Lytro Cinema, every frame of a live action scene becomes a 3D model: every pixel has color and directional and depth properties. The camera features:
  • The highest resolution video sensor ever designed, 755 RAW megapixels at up to 300 FPS
  • Up to 16 stops of dynamic range and wide color gamut
  • Integrated high resolution active scanning
Lytro Cinema will be available for production in Q3 2016 to exclusive partners on a subscription basis.


  1. Is there really a data bus that can do 1/2 TB/sec?

    1. It is not clear that they claim this thing can do 755Mpix and 300fps at the same time. You may be looking at 300fps only at aggressive decimation. So the 500GB/s data bus (12× 400Gbps multi-fiber optical links?) likely does not come into play.

      Still, a 755Mpix FPA would be very interesting in a lot of areas beyond the silly plenoptics (aerial imaging?). But these are hard numbers to achieve. Assume an aspect ratio of about 2:1 (between 16:9 and 2.4:1) and you end up with roughly 40k x 19k col/row counts. Even at about 2-3um pitch (16 stops?) we are looking at huge FPAs, well beyond even medium format imagers.

      Reading out 19k rows at 300fps requires row cycle times on the order of 175ns - I don't know if one could manage even 10x slower than that on rows this long.

      Looking forward to seeing more details on this thing.
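
      The back-of-envelope arithmetic in this thread can be checked with a short script. The 16-bit sample size, ~2:1 aspect ratio and 2.5um pitch below are illustrative assumptions, not published Lytro specs:

```python
# Sanity-check the numbers discussed above.
# Assumed (not from Lytro): 16-bit RAW samples, ~2:1 aspect, 2.5 um pitch.
PIXELS = 755e6        # 755 Mpix sensor
FPS = 300             # claimed maximum frame rate
BYTES_PER_PIXEL = 2   # assumed 16-bit RAW

# Raw data rate if full resolution and max frame rate happen together
rate_gbps = PIXELS * FPS * BYTES_PER_PIXEL / 1e9
print(f"raw data rate: {rate_gbps:.0f} GB/s")        # ~453 GB/s, i.e. ~1/2 TB/s

# Column/row counts for a ~2:1 aspect ratio
rows = (PIXELS / 2) ** 0.5
cols = 2 * rows
print(f"grid: ~{cols/1e3:.0f}k x {rows/1e3:.0f}k")   # ~39k x ~19k

# Die size at the assumed 2.5 um pitch (medium format is ~54 x 40 mm)
PITCH_UM = 2.5
print(f"die: {cols*PITCH_UM/1e3:.0f} mm x {rows*PITCH_UM/1e3:.0f} mm")

# Row cycle time needed to read every row at 300 fps
row_time_ns = 1e9 / (FPS * rows)
print(f"row time: {row_time_ns:.0f} ns")             # ~172 ns
```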

  2. Maybe this is a computed resolution based on light field characteristics and post-production processing, not a "real" resolution.

    1. Considering the final image for professional cinema should be essentially artifact-free, the effective resolution will be about 200-500x lower than the sensor resolution. This is more pronounced for lenses of shallow depth of field and higher Z depth resolution.

      In this particular case it could be something like 3Mpix final resolution from a 755Mpix sensor, yet I would say most DPs will be far from happy comparing Lytro results to, e.g., Alexa at even HD, not to mention 5K production.

      We have yet to see how well these shots can be integrated with the existing production pipelines and UHD or 5K+ workflows. So far I remain skeptical, as the amount of data to process and store is tremendous, yet it yields only mediocre-resolution intermediate footage.

      Yes, it could definitely simplify production, the question is at what price.
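
      The resolution and storage tradeoff described above can be put in numbers; the 200-500x factor is the commenter's estimate, not a measured figure, and 16-bit RAW at 24fps is an assumption:

```python
# Effective resolution after plenoptic reconstruction, using the
# 200-500x reduction factor suggested above (an estimate, not a spec).
SENSOR_PIX = 755e6
for factor in (200, 500):
    print(f"{factor}x -> {SENSOR_PIX / factor / 1e6:.1f} Mpix effective")
# 200x -> 3.8 Mpix; 500x -> 1.5 Mpix: around HD (2.1 Mpix), far from 5K

# Storage burden at a normal 24 fps shooting rate, assuming 16-bit RAW
bytes_per_min = SENSOR_PIX * 2 * 24 * 60
print(f"~{bytes_per_min / 1e12:.1f} TB per minute of footage")  # ~2.2 TB/min
```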

  3. One advantage of this approach, if it works, is that it supports post-production computation of both axial magnification and depth of focus independently. This allows simulation of any combination of image size and lens focal length. I suppose it could also solve the problems caused by special lenses that have significant field curvature.

    Now, instead of just offering the "film look" Lytro can also offer the "CGI look".
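
    The computational refocus the comments describe is commonly sketched as shift-and-add over sub-aperture views. This is the generic light-field technique, not Lytro's actual (unpublished) pipeline, and the (U, V, H, W) array layout below is a hypothetical format:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocus by shift-and-add over sub-aperture views.

    lightfield: array of shape (U, V, H, W) -- a grid of sub-aperture
    images (hypothetical layout, assumed for illustration).
    alpha: focal-plane parameter; each view is shifted in proportion
    to its (u, v) offset from the center of the synthetic aperture.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = (u - (U - 1) / 2) * alpha
            dv = (v - (V - 1) / 2) * alpha
            # Integer-pixel shift for simplicity; a real pipeline would
            # interpolate sub-pixel shifts.
            shifted = np.roll(lightfield[u, v],
                              (round(du), round(dv)), axis=(0, 1))
            out += shifted
    return out / (U * V)
```

    Sweeping alpha moves the synthetic focal plane through the scene, which is what "focus decided computationally in post" means in practice; depth of field can likewise be varied by averaging over a smaller or larger subset of the (u, v) views.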

