Comments on Image Sensors World: Chronocam in Novus Light (blog by Vladimir Koifman)

Omer Korech (2017-08-15):
Perhaps the biggest advantage is that the camera consumes very little power when nothing happens. The second advantage, for a stationary camera, is that only relevant information is delivered. Last but not least, the effective "frame rate" can be rather high.

Yang Ni (2017-08-09):
The data event is created by a pure intensity change at the pixel; this change can come either from a moving edge or from a local intensity modulation. Events generated by a moving edge are basically equivalent to an edge detector's output; I remember that Mahowald and Tobi built a Marr stereo-matching chip using the moving edges directly inside the chip. If we ignore the local intensity changes, the output of this sensor is equivalent to an address-coded edge image of the scene.
The triggering instant should vary a little according to the sub-pixel position of the corresponding edge, but what salient information can we get from this phase difference?

We have built such difference detection into our logarithmic sensor; here are two real videos:
https://www.youtube.com/watch?v=9AD8WfFKWhY&t=189s
https://www.youtube.com/watch?v=XaV_ZP1CQ1c

Obviously these images are frame-based and not event-based, but I still find it hard to believe that this can be a solution for general-purpose vision applications. Maybe someone can help me get a better view of this topic. Thanks!
-Yang Ni

Anonymous (2017-08-09):
Just like you said, the CMOS image sensor is not an invention of Sony, Samsung, or Omnivision. So why are you rebutting when people point out the factual origin of this technology, which articles very often omit completely for some reason?

Rob (2017-08-08):
@Anonymous #1 - I had a look at Chronocam's video; it is basically a pixel-based difference where new information updates the pixel's output. Same as the DVS from Unitectra (et al.): https://youtu.be/0ZEM57DZJes?list=PLWa6uO3ZUweAZ-VXnnBsDDsBbz32BLlYf - there are a few videos explaining this.

Anonymous (2017-08-07):
Yes, so what ... Sony, Samsung, Omnivision, ...
they all make a lot of business based on CMOS image sensors, invented by someone else!

Kynan Eng, inilabs.com (2017-08-07):
The base event-driven vision technology was invented by the group of Tobi Delbrück at Uni Zurich and ETH Zurich. It is licensed to AIT. He trained the CTO of Chronocam (who was at AIT at the time).

Dana Diezemann, ids-imaging.com (2017-08-07):
It is not a 10 µm process; it is more in the direction of 90 nm. You have to put a lot of circuitry in each pixel; the presentation shows the layout. The sensor itself reminds me of a similar device from the Austrian Institute of Technology in Vienna from 2012. Having each pixel capture the differences makes sense in a fixed environment, but in a moving car I think you get too much data. And color reconstruction in a nonlinear space is really tricky.

Anonymous (2017-08-07):
Based on the dimensions of the pixel, I'd guess that Intel gave them access to their 10 µm process from 1971.

Anonymous (2017-08-06):
Curious what technology node they're using, just to get an idea of the complexity of the circuitry in each pixel based on the micrograph.
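[Editor's note] Several comments in this thread describe the same mechanism: a change-detection pixel that emits an event only when its (logarithmic) intensity moves past a contrast threshold, which is why a static scene produces no data and a moving edge produces a burst of events. A minimal single-pixel sketch of that behavior is below; the threshold value, function names, and per-sample timing are illustrative assumptions, not Chronocam's or the DVS's actual circuit parameters.

```python
# Sketch of event generation in a change-detection ("DVS"-style) pixel:
# an event fires whenever the log intensity drifts from the last stored
# reference level by more than a contrast threshold. Threshold and names
# are illustrative, not taken from any real sensor's datasheet.
import math

THRESHOLD = 0.15  # contrast threshold in log-intensity units (assumed)

def pixel_events(intensities, threshold=THRESHOLD):
    """Yield (sample_index, polarity) events for one pixel's intensity trace."""
    ref = math.log(intensities[0])          # stored reference log intensity
    events = []
    for i, value in enumerate(intensities[1:], start=1):
        diff = math.log(value) - ref
        while abs(diff) >= threshold:       # one event per threshold crossing
            polarity = 1 if diff > 0 else -1
            events.append((i, polarity))
            ref += polarity * threshold     # move reference toward new level
            diff = math.log(value) - ref
    return events

# A brightening step produces a burst of ON (+1) events, a darkening step
# a burst of OFF (-1) events, and the static samples produce nothing.
print(pixel_events([100, 100, 100, 160, 160, 90]))
# → [(3, 1), (3, 1), (3, 1), (5, -1), (5, -1), (5, -1)]
```

Because events encode only log-intensity *changes*, this also illustrates Dana Diezemann's point above: in a fixed scene the output is sparse, but global motion (e.g. a camera on a moving car) makes nearly every pixel cross its threshold continuously.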