
Monday, October 12, 2015

Chronocam Startup Presents Event-Driven Sensor

EETimes: Chronocam SA (Paris, France) is a 2014 spin-off of the University Pierre et Marie Curie and the University of Vienna. The startup builds on 15 years of research on asynchronous image sensors.

"The initial image sensor is QVGA resolution (320 by 240 pixels) with a pixel size of 30-microns on a side and sampling circuit alongside each pixel. The sensor is not clocked and does not send frames of data, said Christop Posch, chief technology officer. Each of the pixels in the array acts independently and sends information that is time-based. In addition, the pixel only sends information when there is a significant change. The result is scene-dependent data compression that results in time-continuous but sparse stream of events sent over an asynchronous data bus. Chronocam calls the technology CCAM EyeOT.

Combined performance figures include update speeds equivalent to 100k frames per second, a dynamic range of greater than 120dB, video compression by a factor of 100 compared with conventional image sensors, and a power consumption of less than 10mW.
"

The first sensor is manufactured in a UMC process and measures 1cm by 0.8cm. The company is working on reducing the pixel size.
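
To make the event-driven operation described above concrete, here is a minimal Python sketch of how such a change-detection pixel array behaves. The contrast threshold, event tuple, and step-by-step sampling are assumptions made for simulation; the real sensor is clockless, and this is not Chronocam's actual design.

    import numpy as np

    # Illustrative sketch of event-driven pixels: each pixel remembers the
    # last log-intensity it signaled and emits an event only when the scene
    # changes by more than a contrast threshold. Unchanged pixels stay
    # silent, which is where the scene-dependent compression comes from.

    THRESHOLD = 0.15  # log-intensity contrast step (illustrative value)

    def events_from(samples, timestamps, threshold=THRESHOLD):
        """samples: iterable of 2D illumination arrays; yields (t, x, y, polarity)."""
        reference = None
        for t, frame in zip(timestamps, samples):
            log_i = np.log(np.maximum(frame, 1e-6))
            if reference is None:
                reference = log_i.copy()
                continue
            diff = log_i - reference
            ys, xs = np.nonzero(np.abs(diff) >= threshold)
            for y, x in zip(ys, xs):
                yield (t, int(x), int(y), 1 if diff[y, x] > 0 else -1)
                reference[y, x] = log_i[y, x]  # pixel resets its own reference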


The company's funding includes €750,000 from Robert Bosch Venture Capital and CEA Investissement.

20 comments:

  1. This looks a lot like the presentation we had at the IISW this year from Chenghan Li: http://www.imagesensors.org/Past%20Workshops/2015%20Workshop/2015%20Papers/Sessions/Session_13/13-05_Li_Delbruck.pdf

    Really interesting technology.

  2. Tobi of ETH has designed a sensor for the University of Vienna in a European project.

  3. Well, I am the founder of an event-driven computing technology company that is based on software. For the longest time, I thought it should all be done at the hardware level--the closer to the very front end, the better. This sensor is a step in the right direction.

    Replies
    1. Maybe you would be interested to know that you can buy prototypes of event-based vision and audition sensors at iniLabs (www.inilabs.com), a spin-off of the Institute of Neuroinformatics (INI) in Zurich, where the first version of this event-based sensor originated.
      Chronocam's sensor was originally developed at the Austrian Institute of Technology by Posch et al., and is itself based on the sensor developed by Tobi and Patrick at the INI.

      Li's IISW paper differs from Chronocam's sensor in that it combines an asynchronous change-detection pixel with a standard (synchronous) APS pixel, while Chronocam combines an asynchronous change-detection pixel with a likewise asynchronous 'time-to-first-spike' pixel.

      For reference, I am a former PhD student of Tobi's and a co-author of Li's paper.

    2. Thanks a whole bunch for all this information!
      I bookmarked this page and will check out iniLabs' stuff. Good to see universities in Europe getting out of their ivory towers and actively turning their discoveries into good technology/products. I have a naive question: does the asynchronous nature of the pixels let you subscribe to changes as they happen, at the individual pixel level?

    3. I don't really understand your question, but I'll try to answer anyway. I can only speak for the sensors coming out of the INI, and this holds for the vision as well as the audition sensors: at the output of the chip you get the addresses of the pixels, with very short latency after a change in illumination at the corresponding pixel.
      The cameras sold by iniLabs have a USB interface, and from the driver you get packets of events, where an event is an address and a timestamp with microsecond resolution.
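
      For illustration, a minimal Python sketch of unpacking such event packets. The byte layout here (a big-endian 32-bit address followed by a 32-bit microsecond timestamp) is an assumption for the example; the actual format is documented by iniLabs.

        import struct

        # Assumed layout: each event is a 32-bit pixel address followed by
        # a 32-bit timestamp in microseconds, big-endian. Check iniLabs'
        # documentation for the real AEDAT format before relying on this.
        EVENT = struct.Struct(">II")  # (address, timestamp_us)

        def parse_packet(packet: bytes):
            """Yield (address, timestamp_us) pairs from one driver packet."""
            for offset in range(0, len(packet) - EVENT.size + 1, EVENT.size):
                yield EVENT.unpack_from(packet, offset)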

    4. I was using event-driven computing's terms. Basically, a subscribe model means the downstream applications only get the specific events that are of interest, e.g., changes of pixels in the center area but not at the edges. Based on your reply, this type of sensor does not do that; it streams out all events, asynchronously.

    5. You are right, there is no subscription. Something like that would have to be implemented off-chip, in an FPGA or so (a rough sketch of such filtering follows below). An earlier sensor had a way to shut off rows, but that's not really the same thing, and since it was never used we abandoned it in later chips.

      Btw, there is a new startup that will try to use these sensors for visual positioning, navigation, etc.:
      http://www.insightness.com/
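
      As a sketch of the off-chip filtering mentioned above: a 'subscription' can be emulated by dropping events outside a region of interest. The event tuple shape is hypothetical.

        # Hypothetical off-chip "subscription": keep only events inside a
        # region of interest and discard the rest of the stream.
        def subscribe(events, x_min, x_max, y_min, y_max):
            """events: iterable of (t, x, y, polarity) tuples."""
            for t, x, y, polarity in events:
                if x_min <= x <= x_max and y_min <= y <= y_max:
                    yield (t, x, y, polarity)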

    6. What is the relation between Chronocam and Insightness?

    7. There is no direct relation between Chronocam and Insightness, except that they use closely related vision sensors and that the founders used to collaborate in their research. Insightness is a spin-off from the Institute of Neuroinformatics in Zurich, and it involves Tobi Delbruck, the co-inventor of the Dynamic Vision Sensor.

  4. Congrats on your work on the log sensor, Dr. Delbruck!

  5. It was actually initiated by a guy called J. Kramer, who passed away, and then carried on by Delbruck, Lichtsteiner, and Posch. Posch took it further by integrating time-based coding of gray levels, making the sensor a fully asynchronous camera, while Delbruck followed a more conventional path, adding conventional APS pixels to the initial chip, which looks like a step backward for the technology....

    Replies
    1. True, Kramer initiated the event-based sensor, but I would argue that it was Patrick Lichtsteiner's pixel design that really made it work. After all, all subsequent publications I am aware of use his differentiator circuit, Posch's included.

      Whether adding a conventional APS pixel is a step backward is debatable; I think both approaches have their advantages and disadvantages. Which one is more suitable will depend on the application...

  6. What is the definition of DR for an asynchronous sensor?
    Can all pixels change simultaneously and constantly, as with flicker?
    Also, what is the lowest light-level change a pixel will respond to (in electrons)?

    Is such a sensor used in any product today? Or anticipated to be widely used in any product?

    Lastly, the work of Jurg Kramer is from the early 2000s, right?
    Carver Mead's group (incl. Tobi) looked at changes in signal before then.
    And even Bell Labs/JPL published a paper on a motion-sensitive CMOS APS at the 1995 ISSCC.
    Finally, event-driven readout was well understood by 1988, as discussed by several groups at a conference in Leuven on detectors for physics experiments.
    I am just not sure what timeline Berner is on. Is it just the biomimetic work?

    Replies
    1. About DR: we never encountered a scene that surpassed the DR of our sensors, so the quoted numbers run from the scene illumination in full sunlight down to the scene illumination at which the sensor still perceives 50% contrast, without changing any bias parameter. The numbers are academic; let's say these sensors have enough DR... (the dB arithmetic is spelled out after this comment).

      Pixel response time is light-dependent and usually fast enough to react to flicker, which can cause quite a lot of headaches.

      The lowest light level in electrons, I have to admit, we never assessed.

      The Austrian Institute of Technology tried to market traffic sensors (essentially car or people counters) using such a sensor. I think they sold some, but I have no idea how many. Otherwise, I am not aware of products using such a sensor, largely, I think, because there were no algorithms available that could really make use of the new kind of output data. Recently there have been more and more publications about algorithms, so I am curious whether we will see products in the near future.

      Yes, Kramer's latest work, to which I referred above, is from 2003, I think, but as you mentioned there have been silicon retinas since the '80s. As far as I know, though, Kramer's, and much more so Lichtsteiner's, sensors were the first that were really usable in real-world situations.

      I don't know about event-based readout in particle detectors. During my PhD I developed the readout circuits used in Tobi's group nowadays; they are based on Boahen's work and use word-serial address events, but compared to Boahen's they use fewer transistors in the pixel to make it smaller. And it was hard enough to get that to work reliably...

      So my timeline is the last 10-15 years, and in our work there is not much biomimetics left... The early silicon retinas were much more biomimetic.
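
      On the DR figures quoted above: image-sensor dynamic range is conventionally 20*log10 of the illumination ratio, so 120dB corresponds to a factor of one million between the brightest and dimmest usable scene illumination. A quick Python check (the lux values are only an example):

        import math

        def dynamic_range_db(i_max, i_min):
            """Dynamic range in dB for an illumination ratio."""
            return 20 * math.log10(i_max / i_min)

        # e.g. full sunlight (~100,000 lux) down to ~0.1 lux:
        print(dynamic_range_db(100_000, 0.1))  # -> 120.0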

    2. Thanks for the information. Historical event-driven and data-driven detectors can be found via a Google Scholar search, but I understand your area of interest. Also, it is easy to make a decent sensor that operates under relatively bright light; what separates the wheat from the chaff is low-light performance, which involves fill factor, QE, and read noise. Anyway, the sensors are interesting, but I still think scanned sensors are better. Another axiom: never do in analog what can be done digitally, despite the allure of analog solutions!

    3. Well, we followed your axiom and built single-bit AD converters into the pixels! ;-)
      The ATIS by Posch et al. really is a fully digital pixel where even the intensity information is transmitted off-pixel (not just off-chip) in a digital manner, at the cost of a complex pixel (a rough sketch of this time-based encoding follows this comment).

      Our dynamic range numbers of 120dB or so actually mean that the sensors work quite decently in low light; however, I don't know how well they compare to the latest image sensors in the Sony A7s and the like. But I am curious to see the output of the latest sensor from Tobi's group, which will be BSI. In combination with the relatively big pixel, that makes for a big photodiode...

      Scanned sensors surely take nicer pictures and are better for lots of applications, but the event-based sensors do have a latency and data-reduction advantage, which could make them useful especially in robotics. The future will tell whether event-based sensors catch on or not...
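
      For readers unfamiliar with the ATIS idea mentioned above: after a change event, the pixel measures intensity by timing an integration between two reference thresholds, and the gray level is inversely proportional to that interval. A toy Python sketch, with an illustrative constant:

        # Time-based intensity encoding, roughly: the integration time
        # between two threshold crossings is inversely proportional to
        # photocurrent, so relative intensity can be recovered as k / dt.
        K = 1.0  # constant set by the two thresholds (illustrative)

        def intensity_from_crossings(t_first, t_second, k=K):
            """Recover relative intensity from two threshold-crossing times."""
            dt = t_second - t_first
            if dt <= 0:
                raise ValueError("second crossing must come after the first")
            return k / dt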

  7. We have developed a commercially available logarithmic differential sensor which can operate correctly down to 1 lux. But we are always looking for applications for this sensor. If you have any ideas, please let me know. Here is an example: https://www.youtube.com/watch?v=XaV_ZP1CQ1c

    Thanks!
    -Yang Ni

    Replies
    1. Dear Yang Ni, is there more information available on this sensor?

      Thanks
      Raphael

  8. Hi,
    does anyone know what the difference is between the Chronocam and the dynamic vision sensor from iniLabs?

    I mean, the DVS is the sensor in the Chronocam, is that right?

    thx
    Peter

