
Tuesday, April 09, 2019

aiCTX Neuromorphic CNN Processor for Event-Driven Sensors

Swiss startup aiCTX announces a fully asynchronous, event-driven neuromorphic AI processor for low-power, always-on, real-time applications. DynapCNN opens new possibilities for dynamic vision processing, bringing event-based vision applications to power-constrained devices for the first time.

DynapCNN is a 12 mm² chip, fabricated in 22 nm technology, housing over 1 million spiking neurons and 4 million programmable parameters, with a scalable architecture optimally suited for implementing Convolutional Neural Networks. It is a first-of-its-kind ASIC that brings the power of machine learning and the efficiency of event-driven neuromorphic computation together in one device. DynapCNN is the most direct and power-efficient way of processing data generated by event-based and dynamic vision sensors.
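
For a sense of scale, a modest convolutional network already fits comfortably within a four-million-parameter budget. The sketch below is a minimal illustration in PyTorch; the layer sizes and 128x128 input are assumptions for the example and are not tied to the chip's actual architecture or toolchain. It simply counts the parameters of a small CNN:

```python
# Hypothetical sketch: checking that a small CNN's parameter count fits
# within a 4-million-parameter budget like the one quoted for DynapCNN.
# Layer sizes are illustrative only, not the chip's actual constraints.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1),   # 2 input channels (event polarity)
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 10),                  # assumes a 128x128 input
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # well under the 4M budget
assert n_params < 4_000_000
```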

As a next-generation vision processing solution, DynapCNN is said to be 100–1000 times more power-efficient than the state of the art and to deliver 10 times shorter latencies in real-time vision processing. Based on fully asynchronous digital logic, the event-driven design of DynapCNN, together with custom IPs from aiCTX, allows it to perform ultra-low-power AI processing.

For real-time vision processing, almost all applications are movement-driven tasks (for example, gesture recognition, face detection/recognition, presence detection, and movement tracking/recognition). Conventional image processing systems analyse video data on a frame-by-frame basis. “Even if nothing is changing in front of the camera, computation is performed on every frame,” explains Ning Qiao, CEO of aiCTX. “Unlike conventional frame-based approaches, our system delivers always-on vision processing with close to zero power consumption if there is no change in the picture. Any movement in the scene is processed using the sparse computing capabilities of the chip, which further reduces the dynamic power requirements.”
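
A toy sketch (not aiCTX's actual implementation) makes the difference concrete: a frame-based loop does work on every frame regardless of scene activity, while an event-driven loop only runs when change events arrive, so a static scene costs essentially nothing:

```python
# Illustrative comparison: frame-based processing computes on every frame,
# while event-driven processing only runs when pixel-change events arrive.
from dataclasses import dataclass

@dataclass
class Event:
    x: int
    y: int
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 brightness increase, -1 decrease

def frame_based(frames, process):
    ops = 0
    for frame in frames:           # every frame is processed,
        process(frame)             # even if nothing changed
        ops += 1
    return ops

def event_driven(events, process):
    ops = 0
    for ev in events:              # computation is triggered only by changes
        process(ev)
        ops += 1
    return ops

# A static scene: 30 frames captured, but zero change events generated.
static_frames = [object()] * 30
static_events = []                                   # no movement -> no work
print(frame_based(static_frames, lambda f: None))    # 30 invocations
print(event_driven(static_events, lambda e: None))   # 0 invocations

# A brief movement: a handful of change events, processed as they arrive.
moving_events = [Event(10, 20, t_us=i * 100, polarity=+1) for i in range(5)]
print(event_driven(moving_events, lambda e: None))   # 5 invocations, nothing more
```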

Those savings in energy mean that applications based on DynapCNN can be always-on and crunch data locally on battery-powered, portable devices. “This is something that is just not possible using standard approaches like traditional deep learning ASICs,” adds Qiao.

Computation in DynapCNN is triggered directly by changes in the visual scene, without using a high-speed clock. Moving objects give rise to sequences of events, which are processed immediately by the processor. Since there is no notion of frames, DynapCNN’s continuous computation enables ultra-low latency of below 5 ms. This represents at least a 10x improvement over the deep learning solutions currently available on the market for real-time vision processing.
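
A back-of-the-envelope comparison with assumed numbers shows where a figure of roughly 10x can come from: a 30 fps frame-based pipeline pays at least one frame period before inference can even start, whereas per-event processing can respond within the quoted 5 ms. The frame rate and inference time below are assumptions for illustration only:

```python
# Back-of-the-envelope latency comparison (illustrative numbers).
# A frame-based pipeline must wait for the frame to complete before it can
# start processing, so at 30 fps the latency floor is one frame period
# (~33 ms) plus inference time. An event-driven pipeline can start
# processing the first events of a movement immediately.
fps = 30
frame_period_ms = 1000 / fps          # ~33.3 ms just to capture the frame
inference_ms = 15                     # assumed inference time per frame
frame_latency_ms = frame_period_ms + inference_ms

event_latency_ms = 5                  # latency figure quoted for DynapCNN

print(f"frame-based : >= {frame_latency_ms:.1f} ms")
print(f"event-driven: <  {event_latency_ms} ms "
      f"(~{frame_latency_ms / event_latency_ms:.0f}x lower)")
```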

Sadique Sheik, a senior R&D engineer at aiCTX, explains why having their processors do the computation locally would be a cost- and energy-efficient solution and would bring additional privacy benefits. “Providing IoT devices with local AI allows us to eliminate the energy used to send heavy sensory data to the cloud for processing. Since our chips do all that processing locally, there’s no need to send the video off the device. This is a strong move towards providing privacy and data protection for the end user.”
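
Rough arithmetic with assumed figures illustrates the bandwidth side of that argument: streaming even a modest video feed to the cloud moves orders of magnitude more data than sending only the local inference results. The bitrate and message sizes below are hypothetical:

```python
# Rough bandwidth arithmetic (assumed numbers) for why local inference
# saves transmission energy: streaming video to the cloud moves far more
# data than sending only classification results off the device.
video_bitrate_kbps = 1000            # assumed modest 1 Mbit/s video stream
results_per_second = 10              # assumed 10 classification messages/s
bytes_per_result = 32                # label + timestamp + confidence

video_bytes_per_hour = video_bitrate_kbps * 1000 / 8 * 3600
result_bytes_per_hour = results_per_second * bytes_per_result * 3600

print(f"cloud video stream : {video_bytes_per_hour / 1e6:,.0f} MB/hour")
print(f"local inference out: {result_bytes_per_hour / 1e6:.2f} MB/hour")
```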

DynapCNN Development Kits will be available in Q3 2019.

3 comments:

  1. Why are there like 260 wire bonds? That's insane...

  2. 4 million programmable parameters? Is it not 4 billion?

