Wired: MIT's latest gesture recognition system uses an array of optical sensors arranged just behind a grid of liquid crystals. The sensors can capture a sharp image of a finger pressed against the screen, but as the finger moves away, the image blurs.
By slightly displacing the layer of optical sensors relative to the liquid-crystal array, the researchers can modulate the light reaching the sensors and use it to capture depth information, among other things.
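To see why a small displacement encodes depth, here is a back-of-the-envelope sketch in Python. The geometry is just similar triangles: a point seen through two openings in the mask lands at sensor positions whose offset shrinks as the point moves away. All names and numbers below are illustrative assumptions, not specs from the MIT system.

```python
# Minimal sketch of the depth-from-displacement idea: a mask opening at the
# liquid-crystal layer projects the scene onto sensors sitting a small
# distance behind it. A point imaged through two openings lands at sensor
# positions whose relative offset encodes its depth. The function name and
# the example numbers are hypothetical, chosen only for illustration.

def depth_from_offset(baseline_mm: float, gap_mm: float, offset_mm: float) -> float:
    """Depth of a point in front of the screen, by similar triangles.

    baseline_mm: spacing between two mask openings
    gap_mm:      separation between the liquid-crystal layer and the sensors
    offset_mm:   measured shift between the point's two sensor images
    """
    return baseline_mm * gap_mm / offset_mm

# Example: openings 5 mm apart, sensors 2.5 mm behind the mask, and a
# 0.25 mm shift between the two images -> the point is ~50 mm away.
print(depth_from_offset(5.0, 2.5, 0.25))  # 50.0
```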
The liquid crystals themselves serve as a lens, generating a black-and-white pattern that lets light through to the sensors. That pattern alternates rapidly with whatever image the LCD is displaying, so quickly that the viewer never notices it.
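A rough sketch of that time multiplexing, under assumed frame sizes and a simple pinhole-style tiling (none of which come from the MIT system): the display interleaves content frames with the sensing pattern, and the eye integrates the sequence into slightly dimmed content.

```python
import numpy as np

# Hypothetical illustration: the LCD alternates between the content frame
# and a black-and-white sensing pattern. At a high enough refresh rate the
# eye averages the two, so the viewer perceives only the (dimmed) content.
# Frame size, tiling, and rate are assumptions made for this sketch.

H, W, TILE = 480, 640, 8

def sensing_pattern(h, w, tile):
    """Tiled binary mask: one open cell per tile, pinhole-array style."""
    mask = np.zeros((h, w), dtype=np.float32)
    mask[tile // 2 :: tile, tile // 2 :: tile] = 1.0
    return mask

def frame_sequence(content, n_frames):
    """Yield display frames, alternating content with the sensing pattern."""
    mask = sensing_pattern(*content.shape, TILE)
    for t in range(n_frames):
        yield content if t % 2 == 0 else mask

content = np.random.rand(H, W).astype(np.float32)
# What the eye roughly perceives: the time average of the frame sequence.
perceived = np.mean(list(frame_sequence(content, 120)), axis=0)
```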
The pattern also lets the system decode the captured images more effectively: it recovers the same depth information a pinhole array would, but much more quickly, say the MIT researchers.
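One way to read that claim, sketched under the assumption that the sensor image is a circular convolution of the pinhole-equivalent image with the mask's code: a broadband code passes far more light than a pinhole (so exposures can be shorter), and a frequency-domain division undoes the coding. The random binary code below is a stand-in, not the actual MIT pattern.

```python
import numpy as np

# Hedged sketch of decoding a coded mask. If the sensor image is the
# (circular) convolution of the pinhole-equivalent image with the code,
# a regularized division in the Fourier domain recovers that image.

rng = np.random.default_rng(0)

def decode(sensor_img, code, eps=1e-3):
    """Wiener-style deconvolution: recover the pinhole-equivalent image."""
    S = np.fft.fft2(sensor_img)
    C = np.fft.fft2(code, s=sensor_img.shape)
    return np.real(np.fft.ifft2(S * np.conj(C) / (np.abs(C) ** 2 + eps)))

# Simulate: a scene convolved with a binary code plays the sensor image.
scene = rng.random((64, 64))
code = (rng.random((64, 64)) > 0.5).astype(float)
sensor = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(code)))
recovered = decode(sensor, code)
print(np.abs(recovered - scene).max())  # small reconstruction error
```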
The MIT researchers haven't yet been able to obtain LCDs with built-in optical sensors for testing, though they say companies such as Sharp and Planar plan to produce them soon.
For now, doctoral candidate Matthew Hirsch and his colleagues at MIT have mocked up a display in the lab to run their experiments. The mockup uses a camera placed some distance from the screen to record the images passing through the blocks of black-and-white squares. MIT will present the idea at the Siggraph conference on Dec. 19.
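Here is a guess at how a capture from such a mockup could be turned into multi-view data, assuming a simple pinhole-style tiling: the patch of pixels behind each mask tile is a tiny image of the scene from that tile's vantage point, and regrouping pixels by their position within the tile yields one low-resolution view per angle. Shifts between those views then give depth, as in the geometry sketch above. Tile size and image shape are made up for illustration.

```python
import numpy as np

# Assumed layout: the capture is divided into TILE x TILE blocks, one per
# mask opening. Pixel (u, v) within a block samples one viewing angle, so
# gathering pixel (u, v) from every block yields one low-resolution view.

TILE = 8

def views_from_capture(capture: np.ndarray) -> np.ndarray:
    """Rearrange an (H, W) capture into (TILE, TILE, H//TILE, W//TILE) views."""
    h, w = capture.shape
    tiles = capture.reshape(h // TILE, TILE, w // TILE, TILE)
    # views[u, v] is the scene as seen from angular sample (u, v)
    return tiles.transpose(1, 3, 0, 2)

capture = np.random.rand(480, 640)  # stand-in for a recorded frame
views = views_from_capture(capture)
print(views.shape)  # (8, 8, 60, 80): 64 angular views, each 60x80 pixels
```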