Lists

Saturday, August 05, 2017

Chronocam in Novus Light

Novus Light publishes an article "Vision Inspired by Biology" based on a conversation with Luca Verre, the CEO and co-founder of Chronocam. A few quotes:

"Based on the new technology concept, the company recently released a QVGA resolution (320 by 240 pixels) sensor with a pixel size of 30-microns on a side and quoted power efficiency of less than 10mW.

Because the sensor is able to detect the dynamics of a scene at a temporal resolution of a few microseconds (approximately 10 µs), depending on the lighting conditions, the device can achieve the equivalent of 100,000 frames/s.

In Chronocam’s vision sensor, the incident light intensity is not encoded in amounts of charge, voltage, or current but in the timing of pulses or pulse edges. This scheme allows each pixel to autonomously choose its own integration time. By shifting performance constraints from the voltage domain into the time domain, the dynamic range is no longer limited by the power supply rails.

Therefore, the maximum integration time is limited by the dark current (typically seconds) and the shortest integration time by the maximum achievable photocurrent and the sense node capacitance of the device (typically microseconds). Hence, a dynamic range of 120dB can be achieved with the Chronocam technology.

Because the imager also reduces the redundancy in the transmitted video data, it performs the equivalent of a 100x video compression on the image data on chip.

The company announced it had raised $15 million in funding from Intel Capital, along with iBionext, Robert Bosch Venture Capital GmbH, 360 Capital, CEAi and Renault Group.
"


From the company presentation at the Event-based Vision Workshop 2017:

9 comments:

  1. Curious what technology node they're using, just to get an idea of the complexity of the circuitry in each pixel based on the micrograph.

  2. Based on the dimensions of the pixel, I'd guess that Intel gave them access to their 10 µm process from back in 1971.

  3. It is not a 10 µm process; it is more in the direction of 90 nm. You have to put a lot of stuff in each pixel; the presentation shows the layout.

    The sensor itself reminds me of a similar device from the Austrian Institute of Technology in Vienna from 2012. Having each pixel catch the differences makes sense in a fixed environment, but in a moving car I think you get too much data. And color reconstruction in a nonlinear space is really tricky.

    Replies
    1. The base event-driven vision technology was invented by Tobi Delbrück's group at Uni Zurich and ETH Zurich. It is licensed to AIT. Delbrück trained the CTO of Chronocam (who was at AIT at the time).

    2. Yes, so what... Sony, Samsung, Omnivision... they all do a lot of business based on CMOS image sensors invented by someone else!

    3. Just like you said, the CMOS image sensor is not an invention of Sony, Samsung, or Omnivision. So why are you rebutting when people point out the factual origin of this technology, which is very often completely omitted by articles for some reason?

  4. @Anonymous #1 - I had a look at Chronocam's video; it's basically per-pixel difference detection, where new information updates the pixel's output.

    Same as the DVS from Unitectra (et al.): https://youtu.be/0ZEM57DZJes?list=PLWa6uO3ZUweAZ-VXnnBsDDsBbz32BLlYf - there are a few videos explaining this.

  5. The data event is created by a pure intensity change at the pixel; this intensity change can come either from a moving edge or from a local intensity modulation. The events generated by a moving edge are basically equivalent to an edge detector; I remember that Mahowald and Tobi built a Marr stereo-matching chip using the moving edges directly inside the chip.
    If we ignore the local intensity changes, the output of this sensor is equivalent to an address-coded edge image of a scene. The triggering instant should vary a little bit according to the sub-pixel position of the corresponding edge, but what salient information can we get from this phase difference?

    We have built such difference detection inside our logarithmic sensor; here are 2 real videos:
    https://www.youtube.com/watch?v=9AD8WfFKWhY&t=189s
    https://www.youtube.com/watch?v=XaV_ZP1CQ1c

    Obviously these images are frame-based and not event-based, but I still have some difficulty believing that this can be a solution for general-purpose vision applications. Maybe someone can help me get a better view of this topic.

    Thanks!

    -yang ni

  6. Perhaps the biggest advantage is that the camera consumes very little power when nothing happens.
    The second advantage, for a stationary camera, is that only the relevant information is delivered.
    Last but not least, the effective 'frame rate' can be rather high.

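To make the event-generation model discussed in comment 5 concrete, here is a minimal toy sketch (my illustration with assumed parameters such as the 0.15 contrast threshold, not Chronocam's actual pixel circuit). Each pixel remembers the log intensity at which it last fired and emits an ON or OFF event once the current log intensity differs from that level by more than the threshold, so a moving bright square produces events only along its leading and trailing edges:

import numpy as np

THRESHOLD = 0.15   # assumed log-intensity contrast threshold

def events_from_frame(ref_log, frame):
    """Emit (x, y, polarity) events where log intensity moved past the
    threshold since each pixel's last event; fired pixels reset their level."""
    log_i = np.log(frame.astype(np.float64) + 1e-6)
    diff = log_i - ref_log
    on = diff > THRESHOLD     # brightness increased
    off = diff < -THRESHOLD   # brightness decreased
    ref_log = np.where(on | off, log_i, ref_log)  # only fired pixels update
    events = [(x, y, +1) for y, x in zip(*np.nonzero(on))]
    events += [(x, y, -1) for y, x in zip(*np.nonzero(off))]
    return events, ref_log

# A bright square shifted 5 pixels to the right between two frames:
frame0 = np.zeros((240, 320)); frame0[100:140, 100:140] = 255.0
frame1 = np.zeros((240, 320)); frame1[100:140, 105:145] = 255.0
ref = np.log(frame0 + 1e-6)
events, ref = events_from_frame(ref, frame1)
print(len(events))   # 400: only the trailing (OFF) and leading (ON) strips fire

The output is exactly the address-coded edge image that comment 5 describes: the 76,800-pixel frame produces only 400 events, all at the edges of the moving square.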
