
Thursday, July 20, 2017

Event-based Vision Workshop Materials On-Line

It came to my attention that the International Workshop on Event-based Vision at ICRA'17 was held on June 2, 2017 in Singapore. The workshop materials have kindly been made available on-line, including PDF presentations and videos.

The workshop organizers have also created a very good GitHub-hosted list of Event-based Vision Resources.

Chronocam, ETH Zurich, and Samsung are among the presenters of event-driven cameras.

ETH Zurich and the University of Zurich also announce the Misha Award for achievements in Neuromorphic Imaging. The 2017 Award goes to the "Event-based Vision for Autonomous High Speed Robotics" work by Guillermo Gallego, Elias Mueggler, Henri Rebecq, Timo Horstschäfer, and Davide Scaramuzza of the University of Zurich, Switzerland.

Thanks to TD and GG for the info!

5 comments:

  1. I really question what the practical applications of such an approach are, since the temporal-change thresholding considerably reduces the richness of the image information and makes further processing quasi-impossible.

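The "temporal change thresholding" mentioned above is, in DVS terms, a per-pixel contrast threshold on log intensity: a pixel emits an event only when its log brightness has moved by more than a threshold C since that pixel last fired. Below is a minimal frame-based simulation of that model; it is only a sketch, since a real sensor operates asynchronously per pixel, and the function name and the reset-to-current-level simplification are assumptions of this example:

```python
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Toy DVS model: emit an (x, y, t, polarity) event wherever the
    log intensity has moved more than the contrast threshold C away
    from the level at which that pixel last fired."""
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + eps)
        diff = log_now - log_ref
        for pol, mask in ((+1, diff >= C), (-1, diff <= -C)):
            ys, xs = np.nonzero(mask)
            events.extend((x, y, t, pol) for x, y in zip(xs, ys))
            log_ref[mask] = log_now[mask]  # reset the reference where events fired
    return events
```

The output stream carries only pixel address, timestamp, and polarity. That is exactly the loss of "richness" the comment worries about, and also why the data rate can stay so low between scene changes.
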
  2. This problem has been part of the struggle Tobi and others have had to face, and it is still holding them back. But there are interesting ideas coming up.

    The root of the misunderstanding lies in your choice of words "the richness of image information", which frames its use in terms of image capture. Note that they are not even calling these devices "image sensors", but rather "vision sensors". If you need to capture an image of the scene, this is probably a bad choice. If you need to perceive the environment fast, it can get interesting.

    Bigger problems with this device are the pixel size and, I think, still the readout backplane, which has to be sized for peak-load corner cases (when everything in the scene changes, e.g., when panning across a complex texture). I suspect these two issues are keeping DVS at relatively small array sizes, which in turn limits their usability. I hope stacking will happen soon and relax some of these constraints.

    Replies
    1. The fact that all human beings can admire the 7th art means that our brains are perfectly adapted to understanding the world from frame-based video.

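To put rough numbers on the peak-load corner case mentioned above, here is a back-of-envelope calculation. All figures (array size, per-pixel event rate, bytes per address-event packet) are illustrative assumptions, not the spec of any shipping sensor:

```python
# Peak-load corner case: panning across a complex texture makes
# essentially every pixel fire. Illustrative numbers only.
width, height = 640, 480          # hypothetical DVS array size
events_per_pixel_per_s = 1_000    # every pixel firing about once per ms
bytes_per_event = 4               # rough AER packet: x, y, polarity, timestamp bits

peak_rate = width * height * events_per_pixel_per_s   # events per second
bandwidth = peak_rate * bytes_per_event               # bytes per second

print(f"peak event rate: {peak_rate / 1e6:.0f} Meps")    # ~307 Meps
print(f"readout bandwidth: {bandwidth / 1e9:.2f} GB/s")  # ~1.23 GB/s
```

A readout sized for that worst case sits idle most of the time, which is one way to see why the backplane, rather than the pixel, can become the limiting design problem.
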
  3. Kudos to the conference for making all this material available. But that picture on their front page just causes flashbacks. Jet-lagged in YABBA (Yet Another Bloody BAllroom). Is there still some coffee at the back?

  4. I think it is a perfect fit for robotics. It is hard to use a conventional camera "as an encoder" for servo drives, since it's hard to achieve a frame rate of, let's say, 1 kHz while also running conventional image processing like a feature matcher on every frame. This approach solves that problem; the 'pencil balancer' paper, for example, demonstrates it. You can never get into the low-millisecond range with a small microcontroller using conventional imaging.

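As a sketch of the "camera as an encoder" idea in the last comment: the appeal is that a tiny state estimate can be refreshed on every incoming event, so control latency is bounded by the event rate rather than by a frame period. Everything below (the class, the centroid heuristic, the gains) is hypothetical and far simpler than the actual pencil-balancer algorithm:

```python
from dataclasses import dataclass

@dataclass
class EventCentroidTracker:
    """Exponentially weighted centroid of recent events; a crude
    position estimate, e.g., of a balanced object's image location."""
    x: float = 0.0
    y: float = 0.0
    alpha: float = 0.05  # blending weight applied per event

    def update(self, ex: int, ey: int) -> None:
        self.x += self.alpha * (ex - self.x)
        self.y += self.alpha * (ey - self.y)

def control_step(tracker: EventCentroidTracker, setpoint_x: float, kp: float = 0.8) -> float:
    """Proportional command toward the setpoint; cheap enough to run
    after every event instead of once per frame."""
    return kp * (setpoint_x - tracker.x)

tracker = EventCentroidTracker()
for ex, ey, t, pol in [(120, 40, 0.0010, +1), (122, 41, 0.0012, +1)]:  # fake events
    tracker.update(ex, ey)
    command = control_step(tracker, setpoint_x=128.0)
```

Per-event updates like these cost a few additions and multiplies, which is why a small microcontroller can close the loop in the sub-millisecond range where a frame pipeline cannot.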
