
Thursday, October 25, 2018

iniVation Ultra-Low-Power 65 nm DVS with Built-in Noise Filter

iniVation, a spin-off of the University of Zurich and the ETH Zurich, announces the successful development of next-generation Dynamic Vision Sensor technology. Highlights of the design include:
  • Event-based architecture enables ultra-high speed (equivalent frame rate beyond 10 kfps)
  • Pixel-parallel noise and spatial redundancy suppression
  • Ultra-low power, 18 nW per pixel, 26 pJ per event @ 1.2 V
  • Bandwidth up to 180 M events per second
  • Dynamic range beyond 100 dB
  • Compact 10 µm pixel design, fabricated in a 65 nm process
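For scale, here is a rough power budget implied by these numbers (a plain-Python sketch; the VGA-sized array is an assumed example, since the announcement does not state the sensor resolution):

  pixels = 640 * 480               # hypothetical array size, not from the announcement
  p_pixel = 18e-9                  # W per pixel, from the spec list
  e_event = 26e-12                 # J per event at 1.2 V
  max_rate = 180e6                 # peak bandwidth, events per second

  static_mw = pixels * p_pixel * 1e3    # ~5.5 mW for the whole array
  event_mw = max_rate * e_event * 1e3   # ~4.7 mW at the full event rate
  print(static_mw, event_mw)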

13 comments:

  1. A fan running at 1800 rpm turns 30 revolutions per second, so in one millisecond the fan moves only about 1/30 of a turn. The exposure time is therefore short enough to nearly freeze the motion (a quick calculation follows below).
    On the other hand, 300 lux is not a very low-light condition. The number of photons reaching the pixel may be on the order of several tens of thousands, and any sCMOS sensor would capture a very detailed image, even with only a fraction of that exposure.
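    A back-of-the-envelope version of the motion argument above (plain Python; my own sketch of the arithmetic, not from the original comment):

      rpm = 1800
      rev_per_s = rpm / 60           # 30 revolutions per second
      t_exp = 1e-3                   # 1 ms exposure
      turns = rev_per_s * t_exp      # 0.03 of a turn, roughly 1/30
      print(turns * 360)             # ~10.8 degrees of blade motion per ms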

    Replies
    1. The exposure is enough to show what is shown in the figures. AER-based detectors output unprocessed event images like this, which act as a temporal high-pass filter: the fan blade moving into pixels produces the green events and leaving them produces the red ones, while pixels that see no change emit nothing.
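      A minimal model of that temporal-contrast behaviour (illustrative Python/NumPy; the threshold value and the event-polarity convention are my assumptions, not iniVation's actual design):

        import numpy as np

        def dvs_events(prev_log_i, curr_log_i, theta=0.2):
            """Per-pixel events: +1 (ON), -1 (OFF), 0 (no output)."""
            d = curr_log_i - prev_log_i           # change in log intensity
            events = np.zeros_like(d, dtype=int)
            events[d > theta] = 1                 # brightness rose past threshold
            events[d < -theta] = -1               # brightness fell past threshold
            return events                         # unchanged pixels emit nothing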

    2. The purpose of the figure is to demonstrate the effect of the built-in noise filtering in a regular setting, not an extreme one, so it is by no means showing the performance limits.
      It's true that an APS/CCD sensor can also capture a clear image of the fan in this particular setting. But an APS/CCD sensor would have a hard time doing so at, e.g., 1 kfps while consuming less than 1 mW and producing less than 1 MB/s of data.
      Btw, under 300 lux, a photodiode of that size could only collect a few thousand photons in 1 ms, not a few tens of thousands, if you take into account the scene reflectance and lens attenuation.

    3. If you use a highly efficient processor integrated on the same chip, or 3D-stacked onto it, then you can also do 1 kfps easily at this resolution. In an optical mouse sensor, the frame rate is 5 kfps and the power consumption is low without using a 65 nm process ...

    4. Certainly. No doubt there is more than one way to achieve high speed, or high speed and low power at the same time, or even high speed, low power and small data volume all together (like the DVS). The mouse sensor, btw, typically burns >10 mW with only a few hundred pixels, so it is probably not the most compelling example.
      DVS is a relatively young technology with its unique combination of strengths and weaknesses. Part of the philosophy behind this technology is to solve vision tasks in the most efficient way possible. I think this idea has considerable application potential in the era of IoT ubiquitous vision.

    5. I agree that address-event vision is very promising. However, if you show an example of a new technology, the example must illustrate the strong points of that technology. If the example produces results that are worse than those obtained with conventional technology, then it is not representative.

      On the other hand, 300 lux of green light (the worst case), with an exposure time of 1 ms and an f/1.4 lens, will produce about 12,500 photons, and if the light is red (640 nm) it will produce more than 70,000. That is high enough to obtain much more than a binary image (a worked version of this estimate follows below).
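      A worked version of this estimate (my own sketch; the camera-equation form, the luminous efficacy of 683 lm/W at 555 nm, and the reflectance values are assumptions). The scene-reflectance term is also roughly what separates this figure from the "few thousand photons" quoted earlier in the thread:

        h, c = 6.626e-34, 3.0e8    # Planck constant (J*s), speed of light (m/s)

        def photons(e_scene_lux, efficacy_lm_per_w, wavelength_m,
                    reflectance, f_number, t_exp, pixel_side):
            # camera equation: illuminance reaching the sensor plane
            e_sensor = e_scene_lux * reflectance / (4 * f_number**2)
            irradiance = e_sensor / efficacy_lm_per_w   # W/m^2
            e_photon = h * c / wavelength_m             # J per photon
            return irradiance / e_photon * t_exp * pixel_side**2

        # 555 nm green light on a 10 um pixel at f/1.4 for 1 ms:
        print(photons(300, 683, 555e-9, 1.0, 1.4, 1e-3, 10e-6))  # ~15,700
        print(photons(300, 683, 555e-9, 0.2, 1.4, 1e-3, 10e-6))  # ~3,100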

    6. I take your point that this figure does not present the strengths effectively; in my view those lie more at the system level. I would also like to point out a difference in our views. If I understand you correctly, you consider an image captured by a conventional camera better than the binary data from the DVS. I think it depends. For certain applications, a stream of smart binary data may achieve the task goal while using the fewest system resources. In those applications, such binary data is better than 12-bit images, even if the sensors consume the same amount of power to produce the two types of information. Sometimes, less is more.

  2. I think the core advantage of this camera is also one of its disadvantages. It removes most of the redundancy and only transports changes. So, for example, in robotics you can act on the few changed pixels and do not have to transport the same 99% of the image from the camera to a PC or processor. But... with a frame-based camera you can do image processing on every frame and get, for example, the position of fiducial marks. You can get high precision through interpolation over many image edges (see the toy sketch below), and you still get accurate information once movement stops.
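    A toy illustration of that sub-pixel precision (my own sketch, not from the comment): the intensity-weighted centroid of a fiducial blob lands between pixel centres once edge pixels are partially covered.

      import numpy as np

      img = np.zeros((9, 9))
      img[3:6, 3:6] = 1.0                # a bright 3x3 fiducial mark
      img[3:6, 6] = 0.4                  # a partially covered edge column
      ys, xs = np.mgrid[0:9, 0:9]
      cx = (xs * img).sum() / img.sum()  # ~4.24, i.e. between pixel centres
      cy = (ys * img).sum() / img.sum()  # 4.0
      print(cx, cy)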

    DVS object tracking gets tricky once there is no longer any movement. But well, this is the basis of the technology.

    A second weakness I experienced when using it on a PC is the latency of non-real-time components. USB, for example, introduces a few ms of delay every now and then, and the OS introduces more. If you want to feed the image-processing results into robotic actuators, e.g. servo drives, you have real-time systems on both edges of the problem (camera, servo) and non-real-time components in between. It is a bit tricky here as well to make the link in a proper way.

    One remarkably nice point about iniVation is its open-software approach: most of the software is open source on Git, with a lot of examples to get started and a lot of papers covering the basics.

    It will be interesting to learn in which applications and in which hardware/OS environments this technology will get used. It might require deeper integration into the target hardware environment than a standard industrial camera in order to realize its advantages. But it also promises approaches that are impossible with frame-based imaging. I'm looking forward to visiting the iniVation booth in Stuttgart to see their demos.

    Replies
    1. Thank you for your feedback. We look forward to talking to you in Stuttgart too :)

    2. I agree that this technology is interesting. But because of the binary output, it would probably be better to combine it with other technologies, such as photon-counting devices. For example, if the photoactive device is a SPAD, it produces binary output in a natural way and also allows operation under ultra-low-light conditions. This is a suggestion for improving the technology.

  3. There is some discrepancy between the text "Compact 10 µm pixel design" and the image, which shows a 20 µm pixel with 4 photodiodes. If iniVation has a 10 µm pixel, then it is state of the art; if it is 20 µm, then they are a little behind.

    Replies
    1. Sorry for the confusion. The 20 µm unit is actually a group of 4 pixels, so each pixel is still 10 µm.

  4. Artificial vision is a complex task which requires a lot of processing of different forms. A smart sensor is only OK for some specific applications with a well-defined task in a well-defined environment.
    For more general artificial vision, such smart sensing is not only insufficient but also reduces the information too much. If you include the power consumed by the auxiliary processing, there is NO power-consumption benefit for the whole system at all.

