Wednesday, June 22, 2016

Image Sensor Architecture for Continuous Mobile Vision

Robert LiKamWa has published his presentation from the ACM/IEEE International Symposium on Computer Architecture (ISCA) 2016, Seoul, Korea: "RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision" by Robert LiKamWa, Yunhui Hou, Julian Gao, Mia Polansky, and Lin Zhong (Rice University). A few slides:


  1. The title of my 1984 PhD dissertation was "Charge-coupled analog computer elements and their application to smart image sensors," which also included a "dream device" concept that is essentially a pixel-parallel stacked smart image sensor. Of course, I eventually decided that, despite the low-power and compactness advantages of discrete-time analog signal processing, the best thing we could do would be to put the ADC on chip and do the rest of the processing in digital. That way we could ride Moore's Law and take advantage of all the digital device and circuit advancements taking place around the world.
    I still feel this way, but there is always a gravitational pull toward re-exploring analog focal-plane image processing. I will also bet these authors have no idea about the history in this area.

  2. Doesn't analog ride the Moore's Law wave too? And didn't you become known for re-exploring CMOS image sensors? What's bad about re-exploring?

    1. Analog does not surf the Moore's Law wave the way digital does, and generationally speaking, analog is falling further behind due to voltage-rail constraints and various size-related issues.
      There is nothing wrong with re-exploring. It is always good to know the history, however, especially if you are an academic.

