Monday, May 07, 2018

Microsoft Announces 3D Camera "Project Kinect for Azure"

PRNewswire: Microsoft announces its latest 3D ToF camera: "A new initiative, Project Kinect for Azure — a package of sensors from Microsoft that contains our unmatched time of flight depth camera, with onboard compute, in a small, power-efficient form factor — designed for AI on the Edge. Project Kinect for Azure brings together this leading hardware technology with Azure AI to empower developers with new scenarios for working with ambient intelligence."

Microsoft AR visionary and architect Alex Kipman reveals the new ToF camera spec in his LinkedIn post:

  • Highest number of pixels (megapixel resolution 1024x1024)
  • Highest Figure of Merit (highest modulation frequency and modulation contrast resulting in low power consumption with overall system power of 225-950 mW)
  • Automatic per pixel gain selection enabling large dynamic range allowing near and far objects to be captured cleanly
  • Global shutter allowing for improved performance in sunlight
  • Multiphase depth calculation method enables robust accuracy even in the presence of chip, laser and power supply variation.
  • Low peak current operation even at high frequency lowers the cost of modules
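The "multiphase depth calculation" in the list above refers to the standard continuous-wave ToF approach: the sensor correlates the returned light against several phase-shifted copies of the modulation signal and recovers depth from the measured phase. A minimal sketch of the common four-phase variant is below; the sample values and the 100 MHz modulation frequency are illustrative assumptions, not Microsoft's published parameters.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def tof_depth(q0, q90, q180, q270, f_mod):
    """Estimate depth from four ToF correlation samples.

    q0..q270: correlation samples taken at 0/90/180/270 degree
              phase offsets of the modulation signal.
    f_mod:    modulation frequency in Hz.
    """
    # Phase of the returned signal, wrapped to (-pi, pi]
    phase = math.atan2(q90 - q270, q0 - q180)
    if phase < 0:
        phase += 2 * math.pi  # unwrap to [0, 2*pi)
    # Round-trip distance corresponds to phase/(2*pi) of one
    # modulation wavelength; one-way depth is half of that.
    return (phase / (2 * math.pi)) * C / (2 * f_mod)


# Unambiguous range at 100 MHz modulation: c / (2*f) ~ 1.5 m.
# These samples are synthesized for a phase of pi, i.e. a target
# near 0.75 m (a constant offset of 1 is added to all samples).
f = 100e6
print(round(tof_depth(0.0, 1.0, 2.0, 1.0, f), 4))
```

Because the phase wraps every modulation wavelength, practical systems combine measurements at two or more modulation frequencies to extend the unambiguous range, which is presumably part of what the "multiphase" wording covers.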

Some of the claims are questionable. For example, Odos Imaging sold a higher-resolution 4MP ToF camera a few years ago. The claim that a global shutter improves performance in sunlight is somewhat unclear too.

"Earlier this year, Cyrus Bamji, an architect on our team, presented a well-received paper to the International Solid-State Circuits Conference (ISSCC) on our latest depth sensor. This is the sensor... that will give the next version of HoloLens new capabilities."

And some more Microsoft marketing:

"Microsoft announced Project Kinect for Azure, a package of sensors, including our next-generation depth camera, with onboard compute designed for AI on the Edge. Building on Kinect's legacy that has lived on through HoloLens, Project Kinect for Azure empowers new scenarios for developers working with ambient intelligence. Combining Microsoft's industry-defining Time of Flight sensor with additional sensors all in a small, power-efficient form factor, Project Kinect for Azure will leverage the richness of Azure AI to dramatically improve insights and operations. It can input fully articulated hand tracking and high-fidelity spatial mapping, enabling a new level of precision solutions."

A clip from the Microsoft Build keynote of company CEO Satya Nadella:


  1. Does anyone happen to know the depth accuracy specs?

  2. According to the previously published 2018 ISSCC paper, the error is about +/-2mm up to 4m distance, see fig. 5.8.4.

    1. Thanks for the information

  2. So that would be 0.1%... I'll have to see it to believe it. And of course that depends on ambient light, etc. Sorry if I'm jaded after many, many years of hyped-up depth sensor specs that were theoretically possible in the most ideal situations, but fell apart in real-life conditions.

      ToF is typically pretty noisy in the Z, while pretty good in the XY, so I really will be interested to see how this massive ToF slab does.

  3. Dual lasers for close and far range. Why are two lasers needed? How do the two lasers work?

  4. What are the dimensions of the new Kinect for Azure device?


All comments are moderated to avoid spam and personal attacks.