
Wednesday, February 18, 2009

ST Combines TSV with EDoF

ST introduced 1/4-inch optical format, 3MP raw Bayer sensors with integrated EDoF (extended depth of field). The VD6853 and VD6803 sensors are said to enable camera modules as small as 6.5 x 6.5mm. EDoF extends the sensors' depth of field from 15cm to infinity. The sensors are available in ST's TSV (through-silicon via) wafer-level package, which enables the production of both standard and wafer-level camera modules.

The 1.75um-pixel VD6853 and VD6803 differ in their interface: the VD6803 has a legacy 10-bit parallel interface, while the VD6853 has a CCP2 interface. Both deliver 20fps at full resolution and are built in ST's 90nm CMOS process.

Engineering samples and demo kits are available now, with volume production scheduled for Q3 2009. Unit pricing is below $5, depending on package type and shipment date.

6 comments:

  1. Who is the EDoF vendor?

  2. ST uses the "Software Lens(TM)" wording in its data brief. Dblur also uses the "Software Lens(TM)" term. Does that look like an answer?

  3. dBlur looks like more than software. A special lens is involved. The method itself reminds me of CDM's Wavefront Coding.

    -----------------------------------------------

    A brief explanation of EDOF
    The technology behind EDOF (extended depth of focus) is not intuitively obvious to most engineers unless they have been involved in physical optics. So, perhaps some explanation is in order. This one follows a discussion with Arie Shapira, director of sales engineering at EDOF start-up Dblur Technologies Ltd.

    In a conventional optical system, the lens focuses the light wavefront arriving from each point in the scene onto a single corresponding point in the image—in the case of digital cameras, on the surface of the image-sensor array. If the light from a point in the scene is smeared over a diffuse area instead of contained within a point, the lens is out of focus, and the image is blurred. This blur is not random. The lens transforms a point source into a blur through a mathematical transform called a point-spread function, Shapira explains. You can think of it as the 2-D impulse response of the lens.

    Because the blur is a known transform, it can be reversible. Under some conditions, digital-signal processing can reconstruct a sharp image from the superposition of all the blurred images from all the points in the scene. It is a small matter of an inverse 2-D transform, which you can implement with a 2-D convolution. This method works if the sensor is fairly linear and the point-spread function is well-behaved. Anyone who has used Photoshop's unsharp mask to fix up digital-camera images is aware of the idea. (A short numerical sketch of this blur-and-deconvolve idea follows the comments below.)

    Now for the fun part: You can design a lens that is intentionally out of focus, so that it creates a blurred image from a point source all the time and—this part is critical—so that the point-spread function is nearly independent of how far the point source is from the lens. So, if the points in the scene are on the surface of a business card 10 cm away or if the points are on the side of a building 100m away, you get the same kind of pattern in the image.

    When the reconstruction algorithm transforms those individual blurs back into nice, sharp points, the inverse transform is still independent of the distance between subject and lens. The business card close to the lens is in focus, and so is the building. You have extended the depth of focus far beyond that of a conventional lens. You can use the technology in other ways, as well: to control known aberrations in the lens, for instance, or to increase the optical system's tolerance for mechanical inaccuracies.

  4. ST does use DxO, but not exclusively. Mr Image Sensor is correct.

  5. The VD6853 and VD6803 sensors from STMicro integrate the EDoF solution offered by DBlur, i.e. Software Lens.

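Below is a minimal numerical sketch of the blur-and-deconvolve idea described in the quoted explanation in comment 3, using NumPy. The Gaussian PSF, the Wiener regularization constant, and the helper names are illustrative assumptions made for this sketch; they are not ST's or Dblur's actual optics or reconstruction algorithm.

```python
# Sketch of the EDoF idea from the comment above:
# blur = 2-D convolution of the scene with a point-spread function (PSF);
# restoration = deconvolution with an (approximate) inverse transform.
# The Gaussian PSF and the Wiener constant below are illustrative choices,
# not ST's or Dblur's actual optics or algorithm.
import numpy as np

def gaussian_psf(shape, sigma):
    """Normalized Gaussian PSF centered in an array of the given shape."""
    h, w = shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    yy, xx = np.meshgrid(y, x, indexing="ij")
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def blur(scene, psf):
    """Forward model: 2-D convolution, done as multiplication in the Fourier domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Approximate inverse transform (Wiener filter); k regularizes near-zero frequencies."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

if __name__ == "__main__":
    # A toy "scene": two point sources, standing in for a near and a far object.
    scene = np.zeros((64, 64))
    scene[20, 20] = 1.0   # e.g. the business card 10 cm away
    scene[44, 44] = 1.0   # e.g. the building 100 m away

    # Key EDoF assumption: the lens is designed so both points see (nearly)
    # the same PSF regardless of distance, so one inverse filter restores both.
    psf = gaussian_psf(scene.shape, sigma=2.0)

    blurred = blur(scene, psf)
    restored = wiener_deconvolve(blurred, psf)

    print("peak at (20,20): blurred %.3f -> restored %.3f" % (blurred[20, 20], restored[20, 20]))
    print("peak at (44,44): blurred %.3f -> restored %.3f" % (blurred[44, 44], restored[44, 44]))
```

A Wiener filter stands in for the plain inverse transform here so that frequencies where the PSF response is near zero are not amplified into noise; the point relevant to EDoF is that a single filter restores both the "near" and the "far" point because the PSF is assumed to be distance-independent.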

All comments are moderated to avoid spam and personal attacks.