Tuesday, April 24, 2018

Sony Stacked Vision Chip Paper

The MDPI Special Issue on the 2017 International Image Sensor Workshop keeps publishing papers presented at the workshop. The Sony paper "Design and Performance of a 1 ms High-Speed Vision Chip with 3D-Stacked 140 GOPS Column-Parallel PEs" by Atsushi Nose, Tomohiro Yamazaki, Hironobu Katayama, Shuji Uehara, Masatsugu Kobayashi, Sayaka Shida, Masaki Odahara, Kenichi Takamiya, Shizunori Matsumoto, Leo Miyashita, Yoshihiro Watanabe, Takashi Izawa, Yoshinori Muramatsu, Yoshikazu Nitta, and Masatoshi Ishikawa presents:

"We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip comprises a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps, 2 × 2 binning) vision chip with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operation per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning."

1 comment:

  1. It seems various players are trying to bring camera images into 'robotic' motion control loops. 1 kHz is quite slow if the camera image is to replace an encoder for a servo axis; nevertheless, it would not be bad for a control feedback loop cascaded with the encoder, e.g. to actively compensate vibration into the 100s-of-Hz range.
    It will be interesting to see whether event-based image sensors pick up in this area. Prophesee, for example, just recently announced a demo kit (there was a posting on this blog yesterday) offering 100 µs temporal resolution at VGA resolution.
    One big advantage of event-based sensors is the removal of redundant information in every frame; one disadvantage is that without this redundancy, subpixel interpolation might get difficult...
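The cascaded arrangement the commenter describes — a fast encoder-based inner servo loop with a slower 1 kHz vision correction trimming its setpoint — can be sketched roughly as below. All plant constants, gains, and loop rates here are illustrative assumptions for a toy 1D axis, not taken from the Sony paper:

```python
# Toy cascaded control loop: a 10 kHz encoder-based PD inner loop, with a
# 1 kHz vision-based outer P loop correcting the inner setpoint to remove
# residual position error. All numbers are illustrative assumptions.

def run(steps=10000):
    dt = 1e-4            # inner loop period: 10 kHz (assumed encoder servo rate)
    pos, vel = 0.0, 0.0  # plant state: 1D unit mass, arbitrary units
    target = 1.0         # desired position
    vision_corr = 0.0    # outer-loop correction, refreshed every 1 ms
    for k in range(steps):
        if k % 10 == 0:                          # vision sample at 1 kHz
            vision_corr = 0.5 * (target - pos)   # P-only outer loop
        # inner loop: PD control on encoder position plus vision correction
        u = 400.0 * (target + vision_corr - pos) - 40.0 * vel
        vel += u * dt    # simple Euler integration of the unit-mass plant
        pos += vel * dt
    return pos

# After 1 s of simulated time the axis has settled near the 1.0 target.
print(run())
```

The point of the cascade is that the inner loop only needs the encoder to be stable and stiff, while the slower camera loop removes steady-state error the encoder cannot see (e.g. drift between the encoder and the actual tool point).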

