Saturday, February 16, 2019

WKA - Last Call for Nominations

The International Image Sensor Society's Call for Nominations for the Walter Kosonocky Award is nearing its deadline of February 18, 2019:

"The Walter Kosonocky Award is presented biennially for THE BEST PAPER presented in any venue during the prior two years representing significant advancement in solid-state image sensors. The award commemorates the many important contributions made by the late Dr. Walter Kosonocky to the field of solid-state image sensors. Personal tributes to Dr. Kosonocky appeared in the IEEE Transactions on Electron Devices in 1997.

Founded in 1997 by his colleagues in industry, government and academia, the award is also funded by proceeds from the International Image Sensor Workshop. (See International Image Sensor Society’s website for detail and past recipients)

The award is selected from nominated papers by the Walter Kosonocky Award Committee, announced and presented at the International Image Sensor Workshop (IISW), and sponsored by the International Image Sensor Society (IISS).
"

Your nominations should be sent to Rihito Kuroda (2019nominations@imagesensors.org), Chair of the IISS Award Committee.

Friday, February 15, 2019

Google Pixel 3 XL Cameras Cost Estimated at 14% of BOM

TechInsights publishes a teardown report of the Google Pixel 3 XL smartphone, with the cameras accounting for 14.2% of the total BOM cost.

The phone also includes a dedicated ISP chip designed by Intel and Google and manufactured in a TSMC 28nm process.

TrinamiX Managing Director on Distance Measuring

TrinamiX Managing Director Ingmar Bruder explains how organic solar cells can be used for 3D distance measurement.

Thursday, February 14, 2019

MIT Sub-THz Imager

MIT researchers have developed a sub-terahertz-radiation receiving system that could help driverless cars see through fog and dust clouds.

In a paper published online on Feb. 8 in the IEEE JSSC, the researchers describe a two-dimensional, sub-terahertz receiving array on a chip that is orders of magnitude more sensitive than existing on-chip sub-terahertz arrays. To achieve this, they implemented a scheme of independent signal-mixing pixels, called “heterodyne detectors,” which are usually very difficult to densely integrate into chips. The researchers drastically shrank the size of the heterodyne detectors so that many of them can fit into a chip. The trick was to create a compact, multipurpose component that can simultaneously down-mix input signals, synchronize the pixel array, and produce strong output baseband signals.
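The key operation in each pixel is heterodyne down-conversion: the received sub-terahertz signal is mixed with a local oscillator, and the difference frequency lands in a low-frequency baseband that is easy to amplify and read out. A minimal numerical sketch of that mixing step; all frequencies, amplitudes, and filter values below are illustrative and not taken from the paper:

```python
import numpy as np

# Illustrative numbers only -- not from the JSSC paper.
fs = 2e12          # simulation sample rate, 2 THz
f_rf = 240e9       # received sub-THz carrier, 240 GHz
f_lo = 239e9       # local oscillator, 239 GHz
t = np.arange(0, 2e-8, 1 / fs)   # 20 ns of signal

rf = np.cos(2 * np.pi * f_rf * t)   # incoming carrier
lo = np.cos(2 * np.pi * f_lo * t)   # local oscillator

# Mixing (multiplication) produces sum and difference frequencies.
mixed = rf * lo

# A simple moving-average low-pass filter keeps only the 1 GHz
# difference term -- the baseband output the pixel reports.
kernel = np.ones(400) / 400
baseband = np.convolve(mixed, kernel, mode="same")
```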

The researchers built a prototype, which has a 32-pixel array integrated on a 1.2-square-millimeter device. The pixels are approximately 4,300 times more sensitive than the pixels in today’s best on-chip sub-terahertz array sensors. With a little more development, the chip could potentially be used in driverless cars and autonomous robots.

“A big motivation for this work is having better ‘electric eyes’ for autonomous vehicles and drones,” says co-author Ruonan Han, an associate professor of electrical engineering and computer science, and director of the Terahertz Integrated Electronics Group in the MIT Microsystems Technology Laboratories (MTL). “Our low-cost, on-chip sub-terahertz sensors will play a complementary role to LiDAR for when the environment is rough.”

Joining Han on the paper are first author Zhi Hu and co-author Cheng Wang, both PhD students in Han’s research group.

More about AIStorm

VentureBeat, Electronics Weekly, and EE Times report more details on the AI-on-sensor startup AIStorm:

  • AIStorm was founded in 2011 and had been in stealth mode until the recent announcement of its $13.2M Series A financing.
  • AIStorm’s patented chip design is capable of 2.5 Tera Ops and 10 Tera Ops per watt, which is said to be 5x to 10x lower power than an average GPU-based system.
  • The company uses a technique called switched charge processing, which allows the chip to control the movement of electrons between storage elements (see the sketch after this list).
  • "The TowerJazz pixel is part of our input layer, so the charge comes from sensors, they produce electrons, and we multiply and move them," says AIStorm.
  • AIStorm tested its first chip this month and plans to ship production orders next year.
  • The company’s first products are to be made in a 65nm or 180nm process.
  • AIStorm is planning a Series B round for follow-up products in 28nm and possibly finer nodes.
  • The production chips are intended to be compatible with popular AI frameworks such as TensorFlow.
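AIStorm has not published circuit details, but the description above amounts to a charge-domain multiply-accumulate: photo-generated charge packets are scaled and summed directly, without an ADC in the path, and only the final result needs to reach the digital side. A purely conceptual sketch of that arithmetic; the packet sizes and weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical charge packets from a small patch of pixels, in electrons.
pixel_charge = rng.integers(100, 5000, size=9).astype(float)

# Hypothetical synapse weights; in switched charge processing these would
# correspond to how much of each packet is steered onto the summing node.
weights = rng.uniform(-1.0, 1.0, size=9)

# The multiply-accumulate happens in the charge domain: scaled packets are
# merged on one storage element instead of being digitized first.
accumulated_charge = np.dot(pixel_charge, weights)

print(f"accumulated charge: {accumulated_charge:.1f} e-")
```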

Wednesday, February 13, 2019

Digitimes: Sales of Android Smartphones with ToF Camera to Reach 20M Units in 2019

Digitimes: Shipments of Android phones with 3D cameras are set to boom in 2019, propelled by the increasing adoption of rear ToF cameras, according to Digitimes Research.

Oppo led all other Android phone makers with its rear ToF camera introduced in November 2018. Oppo's competitors, including Huawei, Xiaomi, and Vivo, are likely to follow with their own ToF-enabled models in 2019.

Shipments of Android smartphones with ToF cameras are expected to reach 20M units in 2019, Digitimes Research estimates.

Xenomatix Raises 5M Euros

The De Rijkste Belgen newspaper reports that LiDAR startup XenomatiX has raised 5M euros. 2M euros comes from the conversion of a bond loan, while 3M euros is a fresh investment by Carl Van Hool and AGC Automotive Europe, a part of Asahi Glass (Japan). XenomatiX and AGC have partnered to develop a windshield-mounted LiDAR.

To commercialize its LiDAR, XenomatiX is said to need 10M euros; after the current round of financing, the company has 6.8M euros. In the fiscal year that ended in June 2018, XenomatiX was profitable with an income of 0.6M euros. The company has 22 employees.

AIStorm Raises $13.2M to Develop AI-on-Sensor Technology

BusinessWire: San Jose, CA-based startup AIStorm raises $13.2M in a Series A round from Egis Technology, TowerJazz, Meyer Corp., and Linear Dimensions Semiconductor Inc.

“This investment will help us accelerate our engineering & go-to-market efforts to bring a new type of machine learning to the edge. AIStorm’s revolutionary approach allows implementation of edge solutions in lower-cost analog technologies. The result is a cost savings of five to ten times compared to GPUs, without any compromise in performance,” said David Schie, CEO of AIStorm.

Using sensor data directly, without digitization, is said to enable real-time processing at the edge. AI systems require information to be available in digital form before they can process it, but sensor data is analog. Processing this digitized information requires advanced and costly GPUs that are not suitable for mobile devices: they need continuous digitization of the input data, which consumes significant power and introduces unavoidable digitization delay (latency). AIStorm aims to solve these problems by processing sensor data directly in its native analog form, in real time.
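A rough back-of-envelope illustration of the digitization-latency argument; every figure below is an assumption chosen for illustration, not a number from AIStorm's materials:

```python
# All figures are assumptions for illustration only.
pixels_per_frame = 1_000_000      # assumed 1 MP sensor
frame_rate_hz = 30
adc_rate_sps = 50_000_000         # assumed aggregate ADC throughput, 50 MS/s

samples_per_second = pixels_per_frame * frame_rate_hz
adc_busy_fraction = samples_per_second / adc_rate_sps
frame_digitization_s = pixels_per_frame / adc_rate_sps

print(f"ADC utilisation: {adc_busy_fraction:.0%}")
print(f"time to digitize one frame: {frame_digitization_s * 1e3:.0f} ms")
# A path that processes the analog sensor output directly skips this
# conversion step, which is the latency AIStorm claims to remove.
```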

“It makes sense to combine the AI processing with the imager and skip the costly digitization process. For our customers, this will open up new possibilities in smart, event-driven operation and high-speed processing at the edge,” said Avi Strum, SVP/GM of the sensors business unit of TowerJazz.

“The reaction time saved by AIStorm’s approach can mean the difference between an advanced driver-assistance system detecting an object and safely stopping versus a lethal collision,” said Russell Ellwanger, CEO of TowerJazz.

“Edge applications must process huge amounts of data generated by sensors. Digitizing that data takes time, which means that these applications don’t have time to intelligently select data from the sensor data stream, and instead have to collect volumes of data and process it later. For the first time, AIStorm’s approach allows us to intelligently prune data from the sensor stream in real time and keep up with the massive sensor input tasks,” said Todd Lin, COO of Egis Technology Inc.

AIStorm’s management includes CEO David Schie, a former senior executive at Maxim, Micrel and Semtech; CFO Robert Barker, formerly with Micrel and WSI; Andreas Sibrai, formerly with Maxim and Toshiba; and Cesar Matias, founder of ARM’s Budapest design center. AIStorm is based in San Jose, CA with offices in Austria, Taiwan, Phoenix and soon Dresden and Israel.

Adobe Unveils AI-based Demosaicing

Adobe presents an AI-powered demosaicing algorithm for Bayer and Fujifilm X-Trans CFAs:

"...we’re introducing an all-new Sensei-powered feature, Enhance Details. Harnessing the power of machine learning and computational photography, Enhance Details... takes a brand new approach to demosaicing raw photos.

The new Enhance Details algorithm enables you to increase the resolution of both Bayer and X-Trans based photos by up to 30%. Applying Enhance Details to your photos can greatly improve fine detail rendering, improve the reproduction of fine colors, and resolve issues that some customers reported with their Fujifilm X-Trans based cameras.
"


Via Imaging Resource.
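Adobe has not published the Enhance Details network, but the problem it attacks is standard demosaicing: each sensor pixel records only one color, and the other two must be interpolated from neighbors. Below is a minimal bilinear demosaic of an RGGB Bayer mosaic in NumPy/SciPy, shown purely as the conventional baseline such an ML approach would be compared against, not as Adobe's method:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(raw):
    """Conventional bilinear demosaic of an RGGB Bayer mosaic.

    `raw` is a 2-D float array of single-channel sensor data.
    This is the classical baseline, not Adobe's Enhance Details.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Interpolation kernels: green from its 4-connected neighbors,
    # red/blue from adjacent and diagonal neighbors.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4

    def interp(samples, kernel, mask):
        # Normalize by the local sum of available samples.
        num = convolve(samples, kernel, mode="mirror")
        den = convolve(mask, kernel, mode="mirror")
        return num / den

    r = interp(raw * r_mask, k_rb, r_mask)
    g = interp(raw * g_mask, k_g, g_mask)
    b = interp(raw * b_mask, k_rb, b_mask)
    return np.stack([r, g, b], axis=-1)
```

Simple interpolation like this is exactly where fine detail and color artifacts arise, which is the gap a learned demosaicer aims to close.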

Smartsens Interview

Electronic Design publishes an interview with Leo Bai, SmartSens’ AI BU General Manager. A few quotes:

"...single-frame HDR Global Shutter technology is better for image-recognition-based AI applications than conventional CMOS image sensors that use Multiple-Exposure HDR technology. Combined with a DVP/MIPI/LVDS interface, single-frame HDR Global Shutter technology can be adapted to various types of SoC platforms.

...adoption of global-shutter technology is growing rapidly, in comparison to rolling-shutter technology. One of the main reasons is that a global-shutter CMOS image sensor is able to achieve excellent real-time performance without the jelly effect, especially in AI and machine-vision applications. With advanced manufacturing process technology and reduced cost, it’s expected to see increasing market demand for global-shutter CMOS image sensors.
"