Thursday, August 16, 2018

Panasonic Long-Range ToF Sensor Article

Nikkei publishes an article on Panasonic's 250m-range ToF solution, first presented two months ago.

"Panasonic Corp developed a range image sensor that can take an image of a 10cm object located 250m away in the dark. [there is no info on the range and resolution in a bright sunlight - ISW]


In the field of autonomous driving, the company considers that the sensor can supplement the functions of existing sensors because the new sensor (1) supports a longer distance than LiDAR (light detection and ranging), which makes it possible to obtain range images, and (2) can take images in complete darkness, unlike CMOS image sensors.

Panasonic expects to start to ship samples in fiscal 2019 and begin volume production in fiscal 2021.

...the new sensor uses a principle similar to the principle of flash-type LiDAR. In other words, near-infrared-light pulse (wavelength: 940nm, output: 1,200W, pulse width: 10ns, GaAs-based laser device in the case of the prototype) is applied to the entire imaging area.

With the prototype, a near-infrared pulse is emitted with a cycle of 167μs to measure distance for each distance range. Based on a calculation conducted by Panasonic, when the viewing angle of the prototype is set at 20°, the number of photons coming from a distance of more than 100m and entering one pixel is 1 or less. Therefore, for distances at which the number of incoming photons is 1 or less, the measurement is repeated several times for the same distance range.
"

ToF APD sensor with 260,000 11.2μm² pixels
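For readers less familiar with pulsed ToF, here is a minimal numerical sketch of the range-gated accumulation described in the article. It is not Panasonic's actual algorithm: the gate width, hit probability and cycle count are assumptions for illustration. The only quoted figures used are the 250m range, the 167μs pulse cycle and the "1 photon or less per pulse" condition; the round trip to 250m takes only about 1.67μs, well within the 167μs cycle, so repeating the measurement over many cycles restores a usable count for far range gates.

```python
# Minimal sketch (not Panasonic's actual algorithm) of pulsed ToF range gating:
# distance follows from the photon round-trip time, and weak far-range returns
# (on average <1 photon per pulse) are recovered by accumulating many cycles.
import random

C = 3.0e8          # speed of light, m/s
CYCLE = 167e-6     # pulse repetition period quoted in the article, s
MAX_RANGE = 250.0  # m
GATE_WIDTH = 7.5   # m per range gate (an assumption for this example)
N_GATES = int(MAX_RANGE / GATE_WIDTH) + 1

print("round trip to 250 m: %.2f us (pulse cycle is %.0f us)"
      % (2 * MAX_RANGE / C * 1e6, CYCLE * 1e6))

def accumulate(distance_m, photons_per_pulse=0.5, n_cycles=1000):
    """Histogram photon returns into range gates over many pulse cycles."""
    counts = [0] * N_GATES
    target_gate = int(distance_m // GATE_WIDTH)
    for _ in range(n_cycles):
        # crude Bernoulli stand-in for the <1 photon/pulse far-range return
        if random.random() < photons_per_pulse:
            counts[target_gate] += 1
    return counts

counts = accumulate(distance_m=200.0)
best = max(range(N_GATES), key=counts.__getitem__)
print("estimated distance: ~%.0f m" % ((best + 0.5) * GATE_WIDTH))
```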

Wednesday, August 15, 2018

SmartSens Raises Tens of Millions of Dollars in a New Financing Round

SmartSens reports that it has closed a new investment round of "tens of millions of dollars". The lead investors are the National Core Industry Investment Fund (Big Fund) and the Beijing Core Dynamic Energy Investment Fund, with participation from venture capital institutions such as Lenovo Venture Capital Group.

Li Sheng, COO of SmartSens, said: “SmartSens has successfully completed a new round of financing, which reflects the recognition of the capital market. This recognition derives not only from SmartSens' past achievements, but also from its deep technical accumulation and its grand vision of becoming a globally leading high-performance image sensor supplier.”

SmartSens and IBM reached an IP cooperation agreement in July 2018, under which SmartSens will receive more than 40 CMOS image sensor related patents in 14 categories. The patents are mainly basic technology patents, covering pixel design, semiconductor processing and manufacturing, and chip packaging.

"CIS is a key area for the future development of the semiconductor industry. Under the background of the government's support for local chip companies, the development prospects of SmartSens are undoubtedly worth looking forward to," Core Dynamics Investment Director Manager Wang Jiaheng said. "Core kinetic energy investment will.. continue to help SmartSens's technological innovation and market operation level, and make SmartSens a unicorn enterprise in China's semiconductor industry."

Wang Guangxi, Managing Director of Lenovo Ventures, said: "In the era of the smart Internet, with the rise of 5G, the Internet of Things, artificial intelligence and edge computing, the importance of image recognition has become more prominent. CIS chips are key components for image recognition and are widely used in machine vision, intelligent transportation, autonomous driving, AR/VR and other fields, a model application of technology innovation and industry integration. We are very optimistic about the development prospects of SmartSens and are willing to help SmartSens become a force that cannot be ignored in the CIS market through Lenovo's deep scientific resources and industrial advantages."

Tuesday, August 14, 2018

DR Extension for SPAD Arrays

OSA Optics Express publishes a paper "Dynamic range extension for photon counting arrays" by Ivan Michel Antolovic, Claudio Bruschini, and Edoardo Charbon from TU Delft and EPFL.

"In this paper, we present a thorough analysis, which can actually be applied to any photon counting detector, on how to extend the SPAD dynamic range by exploiting the nonlinear photon response at high count rates and for different recharge mechanisms. We applied passive, active event-driven and clock-driven (i.e. clocked, following quanta image sensor response) recharge directly to the SPADs. The photon response, photon count standard deviation, signal-to-noise ratio and dynamic range were measured and compared to models. Measurements were performed with a CMOS SPAD array targeted for image scanning microscopy, featuring best-in-class 11 V excess bias, 55% peak photon detection probability at 520 nm and >40% from 440 to 640 nm. The array features an extremely low median dark count rate below 0.05 cps/μm2 at 9 V of excess bias and 0°C. We show that active event-driven recharge provides ×75 dynamic range extension and offers novel ways for high dynamic range imaging. When compared to the clock-driven recharge and the quanta image sensor approach, the dynamic range is extended by a factor of ×12.7-26.4. Additionally, for the first time, we evaluate the influence of clock-driven recharge on the SPAD afterpulsing."

Quanergy Troubles

Bloomberg reports on troubling signs at OPA-based LiDAR developer Quanergy, which "has raised $160 million to date at a peak valuation of more than $1.5 billion."

"Quanergy has struggled to deliver products along the timelines it has set out for itself, and has shipped devices that don’t work as well as advertised. Numerous employees have left over the last 18 months, including several at key positions. But Quanergy’s biggest challenge is that its autonomous car business hasn’t developed the way it thought it would.

Quanergy has stopped talking about an IPO and has been pursuing new investments in recent months. It has had talks about finding a buyer, according to people with knowledge of the situation. Quanergy backers Samsung Ventures and Sensata Technologies Holding Plc, an auto sensor maker, have expressed disillusionment with the startup, according to people familiar with those firms.

Bloomberg also spoke to a half-dozen former employees, all of whom asked to remain anonymous, most of them citing the fear of retaliation. They said execution was a consistent problem at Quanergy. Several former employees described Eldada [the CEO] as a combustible and intimidating presence, stymying debate about product development and seeing any disagreement as intolerable dissent.

One former employee said he never saw a single device come off the line at Quanergy that met all of its stated specifications, an allegation the company denies.
"

The company's CEO Louay Eldada publishes a "Statement from Quanergy on Bloomberg Story," mostly denying the Bloomberg analysis and conclusions.

MIT Time-Folded Optics Said to Offer New Possibilities

Optics.org: MIT Media Lab researchers propose new optics for fast cameras and say it adds new capabilities:

Monday, August 13, 2018

SmartSens Article Translation

SmartSens' representative, The Hoffman Agency, kindly sent me a more accurate translation of the company's article "Let China no longer miss the era of CIS." It replaces the half-broken Google translation in my previous post.

"Due to the late start and weak infrastructure of the semiconductor industry in China, the Chinese development of commercial CCD chips was completely buried and behind. The market used to be basically monopolized by Japanese manufacturers such as Sony, Panasonic and Sharp. Therefore, China completely missed the CCD era. With the rise of CIS, how to break the technology and market monopoly by Japanese and European manufacturers in the image sensor field has become the biggest challenge for the Chinese semiconductor industry.

Soon after graduating from the Hong Kong University of Science and Technology with his doctorate, Dr. Xu Chen went to Silicon Valley in the United States to pursue his own engineer dream. He joined the world's first company that launched commercial CIS chips, and engaged in the research and development of pixel components, the most important component in CIS development. During this time, Dr. Xu and his team developed and applied for nearly 30 patents. Since then, Dr. Xu has been engaged in technology research and development at leading CIS companies.

With the rise of Sony in the CIS field, the "Silicon Valley Power" has gradually declined, and "Asian Power" has risen to the front stage. It was at this time that Dr. Xu Chen first developed the idea of creating a Chinese brand to challenge the Japanese and European CIS giants.

In 2011, opportunities arose as China accelerated development in its tech industry. The central government introduced a series of policies designed to attract overseas talents, including the “Thousand Talents Plan.” Local governments also launched various policies to support the homecoming of oversea talents. It is at this prime time that Dr. Xu Chen returned to his motherland with his own visions, beliefs and core CIS innovations.

To Dr. Xu, successful Silicon Valley companies often share such characteristics: tech- and market-savvy founders, cohesive and go-getting teams, generous and people-oriented benefits, and compatible and diverse cultures. Not only has SmartSens, a company founded in China, inherited the Silicon Valley spirit from Dr. Xu Chen, but it continues to absorb globally educated talent to create a "Chinese core" in the CIS field. Founded on quality products and technological innovations, SmartSens is breaking the monopoly of Japanese and European manufacturers and leading China into the CIS era.
"

SmartSens founder Xu Chen

Sunday, August 12, 2018

Sharp ToF SPAD-based Proximity Sensor

Sharp and Socle/Foxconn come up with an ST-like SPAD ToF proximity sensor, the MTOF171000C0. Sharp also makes a similar ToF proximity device, the GP2AP01VT10F, with a quite detailed spec. An application guide is available on GitHub. Samples are supposed to be available in August 2018.

CMOSIS/FillFactory Key Team Joins Gpixel

Gpixel: A team of CMOS image sensor industry veterans creates Gpixel NV. Gpixel NV is structured as a privately-held company and started operation on August 9th, 2018, providing turn-key solutions to industrial and professional markets, ranging from sensor design, prototyping, characterization and packaging to qualification and volume production.

Gpixel NV founders are Tim Baeyens, Tim Blanchaert, Jan Bogaerts, Bart Ceulemans and Wim Wuyts. Together they have more than 75 person-years of relevant experience in CMOS imaging technology, development, operations and commercialization. Gpixel NV is set up with financial and operational backing of Gpixel Inc. (Gpixel Changchun Optotech), a CMOS sensor supplier based in Changchun, China, founded by Xinyang Wang in 2012.

“Imaging and CMOS image sensors are ubiquitous today,” states Tim Baeyens, CEO of Gpixel NV. “Nevertheless, there is still a strong need for dedicated companies such as Gpixel to address high-end markets like industrial and professional imaging. Through our wide industry network and strong collaboration with Gpixel Inc, we anticipate growing Gpixel rapidly to become one of the key players in solid state imaging.”

Xinyang Wang, founder and CEO of Gpixel Inc. states, “I am very pleased to join forces with Gpixel NV to grow Gpixel to become a dominant player in our application areas. I am also convinced that the addition of Jan Bogaerts as Chief Technical Officer (CTO) and Wim Wuyts as Chief Commercial Officer (CCO) for Gpixel worldwide will foster our company’s innovation and global sales significantly.”

Saturday, August 11, 2018

Synaptics Rethinks its Under-Display Optical Fingerprint Business in Search of Better ROI

SeekingAlpha: Synaptics quarterly earnings call has interesting info on its optical under-display fingerprint sensor business:

"...we really take a big scrub on all of our products in the ROI and what provides the best investment going forward. And as we did that analysis, it was becoming clear ...that optical was going to be one of those boom and bust cycles. And to a certain degree, we lived through that with our capacitive solutions a few years back, and we did fantastic. But invariably, because it's somewhat of an optional solution and there's alternatives, it quickly went from a multi-dollar solution to a sub-$1 solution. And so, we enjoyed good money.

But if you look over the entire period, it wasn't the type of sticky highly differentiated business that we now seek as a company. And so, it would've taken additional investment or continued investment from our perspective. It somewhat hurts because we clearly were the innovators in the industry, and yes, we do see broader adoption of in-display fingerprint in the marketplace from a unit perspective and so on. But we can see the ASP erosion has begun, and there'll be multiple suppliers in it. Just from a long-term investment, we have better fish to fry right now. And so, it was purely an ROI decision.

...the revenues were fairly minimal. I'd say kind of in the sub $15 million to $20 million range is what's going away. We have bigger plans for it, as you saw at our Analyst Day, so we were expecting it to contribute about $100 million in fiscal 2019, and then more than that in fiscal 2020. But the actual impact year-over-year is fairly minimal at a Synaptics level.

...Now, that doesn't mean we're stopping. From the very beginning, when we went into this business, we said the ultimate solution was when fingerprint was truly integrated into the display. And eventually, when the market was right, we would have TDDI FP, so we're going to continue the investments in research in that particular area when we think the market might be ready, so you could have true in-display across the entire screen with multiple cost to the – minimal cost, excuse me, to the end user.
"

Friday, August 10, 2018

Ouster on LiDAR Specmanship

Ouster's presentation at the Autonomous Vehicle Sensors Conference held in San Jose, CA in June 2018 defines the requirements for LiDARs and proposes realistic measurement conditions, so that different products and technologies can be compared:


Links to a few other interesting presentations at the conference:

- LiDAR for ADAS and Autonomous Driving by Hamamatsu
- AEye iDAR
- Frequency-Modulated Automotive Lidar by Blackmore
- Road to Robots by Yole
- Sony Automotive Sensors
- Camera-based Active Real-Time Driver Monitoring Systems by Seeing Machines

Thursday, August 09, 2018

Himax Compares Smartphone ID Solutions, 3D Sensing

Himax's quarterly earnings report presents the company's vision of the 3D sensing market:

"Leading Android smartphone makers are exploring various 3D sensing technologies, namely structured light, active stereo camera (ASC) and, to a lesser extent, time-of-flight (ToF), trying to strike a good balance of cost, specifications and application. More software players are entering the ecosystem to develop 3D sensing applications beyond the existing applications, namely facial recognition, online payment and camera performance enhancement.

Himax has been working with an industry leading fingerprint solution provider to develop an under-display optical fingerprint product in the last two years, targeting smartphones using OLED displays. Himax provides a customized low-power image sensor in the solution. The Company is pleased to announce that the solution has entered into mass production with a major Android smartphone OEM for their new flagship model with shipment expected in the coming months. The CMOS image sensor used in the solution will have a notably higher ASP than the Company’s traditional display driver IC products.
"

SeekingAlpha: In the Q&A session, Himax CEO Jordan Wu gives more details about its optical fingerprint business:

"It appears to be to enjoy pretty good momentum right now in the Android market. Okay, because of cost issue. The business can change overnight in how they are, firstly, the cost, I’m talking about the total solution, which includes most importantly the CMOS image sensor, and as I said optics and certainly algorithm chip combined at the module. Initially, when the technology was promoted, we’re talking about $10 in total, but now, we are seeing $10 plus in total.

Now, we’re seeing the total cost is now coming down to $10 or even below. And because of certain cost saving measures adopted by module houses, in particular, in the optics side. So for that reason, again, customers want full screen design and they don’t want their capacity touch to be on the back, which is not convenient to use. So for full screen design, if you feel structured light too expensive, ToF is also quite expensive and even ASC is relatively expensive because ASC now, we’re still talking about $10 plus and this thing, fingerprint is already below $10.

So for that reason, it seems to be picking up momentum. However, I will have to say that the limitation of fingerprint is that it can do nothing out of the fingerprint, whether it’s all three kinds of 3D sensing, you can have a lot of other applications beyond unlocking your smartphone and payment. Right. So I think that is the most important thing. And certainly, under glass fingerprint, one reason why it is getting traction by it, it is not really becoming overwhelming. One of the issues is it still suffers from its lesser satisfactory accuracy, meaning when you try to unlock your phone, the failure rate still is too high. And when it fails, you have to – the user will have to key in the password, which people hate, right.

So in comparison, 3D sensing or face authentication, the accuracy level is a lot higher. And so they are certainly a big wildcard will be our first launch in expected September, right, in the new phones, whether they can introduce interesting attractive new features, application to 3D sensing. But I think it is still slightly too early to tell who is going to dominate which segment of the market as of today.
"

Wednesday, August 08, 2018

How to Bring CIS Industry to China

Update: The correct translation, coming from the company's representative, is posted here. The one below is based on the automatic Google translation and is not accurate.

SmartSens Founder Xu Chen publishes an article on the company web site "Let China no longer miss the era of CIS." A few interesting quotes, with the help of Google translation:

"Due to the late start and poor foundation of Chinese semiconductors, the development of commercial CCD chips is completely behind the scenes. The market is basically monopolized by Japanese manufacturers such as Sony, Panasonic and Sharp, completely missing the CCD era. With the rise of CIS , how to break the technology and market monopoly of Japanese and European manufacturers in the field of image sensors has become a difficult problem for Chinese semiconductors.

In the year of graduation from Hong Kong University of Science and Technology, Dr. Xu Chen came to Silicon Valley in the United States with his own dream of engineers. He joined the world's first company to launch commercial CIS chips, and engaged in the research and development of pixel components, the most important component in CIS development, developing and applying for nearly 30 patents during this period. Since then, Dr. Xu Chen has been engaged in technical development work at the CIS giant.

With the rise of Sony in the CIS field, the "Silicon Valley Power" has gradually declined, and "Asian Power" has entered the stage. It was at this time that Dr. Xu Chen first developed the idea of creating a Chinese brand to challenge CIS Japanese and European giants.

In 2011, it coincided with the introduction of a series of overseas talents introduction policies including the “Thousand Talents Plan” in order to accelerate the innovation of high-tech fields. Local governments also launched support policies to support the return of overseas talents. It is in this spring of the policy that Dr. Xu Chen returned to the motherland with his own ideals, beliefs and core CIS innovations.

Throughout Silicon Valley's successful innovation companies, they often have such characteristics — the founders who are proficient in technology and market-savvy, the united and powerful innovation team, the generous and people-oriented employee benefits, and the compatible and diverse corporate culture. The native of SmartSens only be obtained from Dr. Xu Chen from Silicon Valley, where such a temperament, but also has the world continue to absorb the educational background of creative talents, inclusive, and finally create a CIS in the field of "Chinese core" team, the quality of quality Based on the core of technological innovation, it has broken the monopoly of Japanese and European manufacturers, so that China will not miss the CIS era.
"

Tuesday, August 07, 2018

TPSCo Pixel Offerings

TowerJazz-Panasonic publishes the parameters of pixels it offers to its customers:

1.12μm-pixel 65nm CIS Platform

Global shutter pixel offerings (* means 4 additional masks for the designated pixel)

Update: TPSCo site has been changed on August 8, 2018, and more pixels are presented now:

Oculus Presents Solution for Recognizing Mirrors in 3D Imaging

Facebook Reality Labs, formerly Oculus, presents its solution for mirrors in 3D imaging. Mirrors and other reflective surfaces are a major problem for 3D cameras:

"Mirrors are typically skirted around in 3D reconstruction, and most earlier work just ignores them by pretending they don’t exist. But in the real world, they exist everywhere and ruin the majority of reconstruction approaches. So in a way, we broke the mold and tackled one of the oldest problems in 3D reconstruction head-on," says Research Scientist Thomas Whelan.

“It’s surprisingly difficult to describe how a human recognizes a mirror as distinct from simply a window or doorway into a different space,” adds Research Scientist Steven Lovegrove.

Monday, August 06, 2018

Infineon Demos ToF Sensor with 14um Pixels

Infineon demos its new ToF sensor for face recognition based on 14um pixels, the IRS238xC:


Sunday, August 05, 2018

$0.068 for a QVGA Sensor with ISP

I missed this news at the time. On Dec 1, 2017, Superpix announced the SP0821, "the most competitive 80,000-pixel image sensor chip," that costs just 6.8 cents. The sensor is based on 2.5um pixels and includes an ISP with AE, AWB, noise reduction, and other usual ISP functions:


In the beginning of 2018, Superpix reported that the "new 80,000-pixel image sensor chip SP0821, which was launched less than two months ago, set off a wave of enthusiasm in the market and ushered in a good start to 2018."

MIPI A-PHY Aims at 15m Range, 48Gbps Speed

MIPI Alliance has initiated development of a physical layer specification (up to 15m distance) targeted at ADS, ADAS and other surround sensor applications. Initially, the standard supports a 12-24 Gbps data rate, with work underway to extend the speed to 48 Gbps.

MIPI A-PHY v1.0 is expected to be available to developers in late 2019. It’s anticipated that the first vehicles using A-PHY components will be in production in 2024.

A-PHY v1.0 Features:
  • 12-24 Gbps speed
  • Asymmetric data link layer
  • Wiring, cost and weight optimization
  • High-speed data, control data and optional power share the same physical wiring
  • Point-to-point topology
  • Reuses generations of mobile protocols
  • Low EMI
  • Supports LiDAR, Radar and camera integration for autonomous driving

ADAS in China

ResearchInChina releases "ADAS and Autonomous Driving Industry Chain Report 2018 (II)– Automotive Vision" report.

"As a main sensor of ADAS autonomous driving era, the camera is widely used in lane detection, traffic sign recognition, obstacle monitoring, pedestrian recognition, fatigue driving monitoring, occupant monitoring, rear-view mirror replacement, reverse image, 360-degree panorama and so forth.

Camera installations amounted to 6.39 million units in the Chinese passenger car market in 2017, and the figure is expected to grow to 31.80 million units by 2021 with an AAGR of 49.3%, according to ResearchInChina.

Lens: Sunny Optical Technology, a leading automotive lens manufacturer in the world, shipped 18 million lenses in the first half of 2018. It has been a supplier of automotive lenses for Magna, Continental, Delphi, Mobileye, Autoliv, Steel-mate, TTE, Panasonic and Fujitsu. The company is also foraying into the field of LiDAR optical parts.

Image sensor: ON Semiconductor and Sony are the leaders, with the former sweeping a 44% share of the automotive image sensor market. Sony has continued to expand image sensor production capacity in recent years with heavy investment in automotive image sensors.
"

Saturday, August 04, 2018

Friday, August 03, 2018

Oppo Find X Uses Orbbec 3D Camera

Orbbec reports that the new Oppo Find X smartphone uses Orbbec's structured light 3D camera for face recognition. This explains an unusually large funding round for a 3D sensing company:

Thursday, August 02, 2018

How to Reach a Trillion fps

Arxiv.org publishes a nice review paper from Reports on Progress in Physics, "A trillion frames per second: the techniques and applications of light-in-flight photography" by Daniele Faccio (University of Glasgow, UK) and Andreas Velten (University of Wisconsin, Madison, WI).

"Cameras capable of capturing videos at a trillion frames per second allow to freeze light in motion, a very counterintuitive capability when related to our everyday experience in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge such as three dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high speed photography with a particular focus on `light-in-flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow to freeze it's motion and therefore extract information that would otherwise be blurred out and lost."

Some of the exotic fast imaging devices from the review:

Interview with Albert Theuwissen

IEEE SSCS publishes an interesting interview with Albert Theuwissen, recorded at ISSCC 2018, talking about the early days of imaging at Philips, the IISS and IISW, the Harvest Imaging company and Forum, and more.

Toyota 2nd Generation ADAS Changes Sensor from ON Semi to Sony

Nikkei: The second generation of "Toyota Safety Sense" (TSS2) ADAS switches to Sony image sensors in its monocular camera. The TSS2 was launched in 2018. The automatic braking function of the TSS2 (one of the main functions of the system) can now detect pedestrians at night. The new low-light functionality is said to be enabled by the higher DR of the Sony sensors and by combining multiple images to increase the signal.
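A quick, hedged illustration of why multi-frame combination helps in low light (the per-frame signal and noise values below are assumptions, not Nikkei or Sony figures): averaging or summing N frames with independent noise improves SNR by roughly the square root of N while the static scene content is preserved.

```python
# Hedged sketch: why combining multiple frames raises low-light SNR.
# Assuming independent noise per frame, averaging N frames reduces the noise by
# sqrt(N) while the (static) signal is preserved, so SNR grows by sqrt(N).
import math

read_noise_e = 3.0   # assumed per-frame noise, e- rms
signal_e = 10.0      # assumed per-frame signal from a dim object, e-

for n_frames in (1, 2, 4, 8):
    snr = signal_e / (read_noise_e / math.sqrt(n_frames))
    print(f"{n_frames} frame(s): SNR = {snr:.1f}")
```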

Wednesday, August 01, 2018

Sony, Infineon Report Quarterly Results

Sony's quarterly earnings report brings no surprises in the image sensor business; everything goes as expected:


Infineon reports a major automotive design win for its PMD-based ToF sensor:

Tuesday, July 31, 2018

Mantis Vision Acquires Alces Technology

Globes: Israeli 3D sensing company Mantis Vision completes the acquisition of Jackson Hole, Utah-based Alces Technology. The newspaper's sources suggest the acquisition was for about $10m. Earlier this month Mantis Vision raised $55m bringing its total investment to $84m. Alces is a depth sensing startup developing a high-resolution structured light technology.

Mantis Vision CEO Gur Bitan says, "Mantis Vision is leading innovation in 3D acquisition and sharing. Alces is a great match and we look forward to bringing their innovations to market. Alces will be rebranded Mantis Vision, Inc. and operate as an R&D center and serve as a base for commercial expansion in the US."

Alces CEO Rob Christensen says, “Our combined knowledge in hardware and optics, along with Mantis Vision’s expertise in algorithms and applications, will enable an exciting new class of products employing high-performance depth sensing.”


Here is how Alces used to compare its technology with Microsoft Kinect:

Samsung Expects 10% of Smartphones to Have Triple Camera Next Year

SeekingAlpha: Samsung reports an increase in image sensor sales in Q2 2018 and a strong forecast both for its own image sensors and for the image sensors manufactured at Samsung foundry. Regarding the triple camera trend, Samsung says:

"First of all, regarding the triple camera, the triple camera offers various advantages, such as optical zoom or ultra-wide angle, also extreme low-light imaging. And that's why we're expecting more and more handsets to adopt triple cameras not only in 2018 but next year as well.

But by next year, about 10% of handsets are expected to have triple cameras. And triple camera adoption will continue to grow even after that point. Given this market outlook, actually, we've already completed quite a wide range of sensor line-up that can support the key features, such as optical zoom, ultra-wide viewing angle, bokeh and video support so that we're able to supply image sensors upon demand by customers.

At the same time, we will continue to develop higher performance image sensors that would be able to implement even more differentiating and sophisticated features based on triple camera.

To answer your second part of your question about capacity plan from the Foundry business side, given the expected increase of sensor demand going forward, we are planning additional investments to convert Line 11 from Hwaseong DRAM to image sensors with the target of going into mass production during first half of 2019. The actual size of that capacity will be flexibly managed depending on the customers' demand.
"

Monday, July 30, 2018

4th International Workshop on Image Sensors and Imaging Systems (IWISS2018)

4th International Workshop on Image Sensors and Imaging Systems (IWISS2018) is to be held on November 28-29 at Tokyo Institute of Technology, Japan. The invited and plenary part of the Workshop program has many interesting presentations:

  • [Plenary] Time-of-flight single-photon avalanche diode imagers by Franco Zappa (Politecnico di Milano (POLIMI), Italy)
  • [Invited] Light transport measurement using ToF camera by Yasuhiro Mukaigawa (Nara Institute of Science and Technology, Japan)
  • [Invited] A high-speed, high-sensitivity, large aperture avalanche image intensifier panel by Yasunobu Arikawa, Ryosuke Mizutani, Yuki Abe, Shohei Sakata, Jo. Nishibata, Akifumi Yogo, Mitsuo Nakai, Hiroyuki Shiraga, Hiroaki Nishimura, Shinsuke Fujioka, Ryosuke Kodama (Osaka Univ., Japan)
  • [Invited] A back-illuminated global-shutter CMOS image sensor with pixel-parallel 14b subthreshold ADC by Shin Sakai, Masaki Sakakibara, Tsukasa Miura, Hirotsugu Takahashi, Tadayuki Taura, and Yusuke Oike (Sony Semiconductor Solutions, Japan)
  • [Invited] RTS noise characterization and suppression for advanced CMOS image sensors (tentative) by Rihito Kuroda, Akinobu Teranobu, and Shigetoshi Sugawa (Tohoku Univ., Japan)
  • [Invited] Snapshot multispectral imaging using a filter array (tentative) by Kazuma Shinoda (Utsunomiya Univ., Japan)
  • [Invited] Multiband imaging and optical spectroscopic sensing for digital agriculture (tentative) by Takaharu Kameoka, Atsushi Hashimoto (Mie Univ., Japan), Kazuki Kobayashi (Shinshu Univ., Japan), Keiichiro Kagawa (Shizuoka Univ., Japan), Masayuki Hirafuji (UTokyo, Japan), and Jun Tanida (Osaka Univ., Japan)
  • [Invited] Humanistic intelligence system by Hoi-Jun Yoo (KAIST, Korea)
  • [Invited] Lensless fluorescence microscope by Kiyotaka Sasagawa, Ayaka Kimura, Yasumi Ohta, Makito Haruta, Toshihiko Noda, Takashi Tokuda, and Jun Ohta (Nara Institute of Science and Technology, Japan)
  • [Invited] Medical imaging with multi-tap CMOS image sensors by Keiichiro Kagawa, Keita Yasutomi, and Shoji Kawahito (Shizuoka Univ., Japan)
  • [Invited] Image processing for personalized reality by Kiyoshi Kiyokawa (Nara Institute of Science and Technology, Japan)
  • [Invited] Pixel aperture technique for 3-dimensional imaging (tentative) by Jang-Kyoo Shin, Byoung-Soo Choi, Jimin Lee (Kyungpook National Univ., Korea), Seunghyuk Chang, Jong-Ho Park, and Sang-Jin Lee (KAIST, Korea)
  • [Invited] Computational photography using programmable sensor by Hajime Nagahara, (Osaka Univ., Japan)
  • [Invited] Image sensing for human-computer interaction by Takashi Komuro (Saitama Univ., Japan)

Now, once the invited and plenary presentations are announced, IWISS2018 calls for posters:

"We are accepting approximately 20 poster papers. Submission of papers for the poster presentation starts in July, and the deadline is on October 5, 2018. Awards will be given to the selected excellent papers presented by ITE members. We encourage everyone to submit latest original work. Every participant needs registration by November 9, 2018. On-site registration is NOT accepted. Only poster session is an open session organized by ITE."

Thanks to KK for the link to the announcement!

ON Semi Renames Image Sensor Group, Reports Q2 Results

ON Semi renames its Image Sensor Group to "Intelligent Sensing Group," suggesting that other businesses might be added to it in search of revenue growth:


The company reports:

"During the second quarter, we saw strong demand for our image sensors for ADAS applications. Out traction in ADAS image sensors continues to accelerate. With a complete line of image sensors, including 1, 2, and 8 Megapixels, we are the only provider of complete range of pixel densities on a single platform for next generation ADAS and autonomous driving applications. We believe that a complete line of image sensors on a single platform provides us with significant competitive advantage, and we continue working to extend our technology lead over our competitors.

As we have indicated earlier, according to independent research firms, ON Semiconductor is the leader in image sensors for industrial applications. We continue to leverage our expertise in automotive market to address most demanding applications in industrial and machine vision markets. Both of these markets are driven by artificial intelligence and face similar challenges, such as low light conditions, high dynamic range and harsh operating environment.
"

Cepton to Integrate its LiDAR into Koito Headlights

BusinessWire: Cepton, a developer of 3D LiDAR based on a stereo scanner, announces it will provide Koito with its miniaturized LiDAR solution for autonomous driving. The compact design of Cepton’s LiDAR sensors enables direct integration into a vehicle’s lighting system. Its Micro-Motion Technology (MMT) platform is said to be free of mechanical rotation and frictional wear, producing high-resolution imaging of a vehicle’s surroundings to detect objects at a distance of up to 300 meters away.

“We are excited to bring advanced LiDAR technology to vehicles to improve safety and reliability,” said Jun Pei, CEO and co-founder of Cepton. “With the verification of our LiDAR technology, we hope to advance the goals of Koito, a global leader within the automotive lighting industry producing over 20 percent of headlights globally and 60 percent of Japanese OEM vehicles.”

Before Cepton, Koito cooperated with Quanergy with similar claims a year ago. Cepton's technology is based on mechanical scanning, a step away from Quanergy's optical phased array scanning.

Cepton ToF scanning solution is presented in a number of patent applications. 110a,b are the laser sources, while 160a,b are the ToF photodetectors:

Sunday, July 29, 2018

SensibleVision Disagrees with Microsoft Proposal of Facial Recognition Regulation

BusinessWire: SensibleVision, a developer of 3D face authentication solutions, criticized Microsoft President Brad Smith's call for government regulation of facial recognition technology:

“Why would Smith single out this one technology for external oversight and not all biometrics methods?” asks George Brostoff, CEO and Co-Founder of SensibleVision. “In fact, unlike fingerprints or iris scans, a person's face is always in view and public. I would suggest it’s the use cases, ownership and storage of biometric data (in industry parlance, “templates”) that are critical and should be considered for regulation. Partnerships between private companies and the public sector have always been key to the successful adoption of innovative technologies. We look forward to contributing to this broader discussion.”

Saturday, July 28, 2018

Column-Parallel ADC Architectures Comparison

The Japanese IEICE Transactions on Electronics publishes Shoji Kawahito's paper "Column-Parallel ADCs for CMOS Image Sensors and Their FoM-Based Evaluations."

"The defined FoM are applied to surveyed data on CISs reported and the following conclusions are obtained:
- The performance of CISs should be evaluated with different metrics to high pixel-rate regions (∼> 1000MHz) from those to low or middle pixel-rate regions.
- The conventional FoM (commonly-used FoM) calculated by (noise) x (power) /(pixel-rate) is useful for observing entirely the trend of performance frontline of CISs.
- The FoM calculated by (noise)2 x (power) /(pixel-rate) which considers a model on thermal noise and digital system noise well explain the frontline technologies separately in low/middle and high pixel-rate regions.
- The FoM calculated by (noise) x (power)/ (intrascene dynamic range)/ (pixel-rate) well explains the effectiveness of the recently-reported techniques for extending dynamic range.
- The FoM calculated by (noise) x (power)/ (gray-scale range)/ (pixel-rate) is useful for evaluating the value of having high gray-scale resolution, and cyclic-based and deltasigma ADCs are on the frontline for high and low pixel-rate regions, respectively.
"

Friday, July 27, 2018

TowerJazz CIS Update

SeekingAlpha publishes TowerJazz Q2 2018 earnings call transcript with an update on its CIS technology progress:

"We had announced the new 25 megapixel sensor using our state-of-the-art and record smallest 2.5 micron global shutter pixels with Gpixel, a leading industrial sensor provider in China.

The product is achieving very high traction in the market with samples having been delivered to major and to customers. Another leading provider in this market, who has worked with us for many years will soon release a new global shutter sensor based on the same platform. Both of the above mentioned sensors are the first for families of sensors with different pixel count resolutions for each of those customers next generation industrial sensor offering ranging from 1 megapixel to above 100-megapixel.

We expect this global shutter with this outstanding performance based on our 65-nanometer 300- millimeter wafers to drive high volumes in 2019 and the years following. We see this as a key revenue driver from our industrial sensor customers. In parallel, e2v is ramping to production with its very successful Emerald sensor family on our 110-nanometer global shutter platform using state-of-the-art 2.8 micron pixel with best in class shutter efficiency and noise level performance. We recently released our 200-millimeter backside illumination for selected customers.

We are working with them on new products based on this technology, as well as on upgrading existing products from our front side illumination version to a BSI version, increase in the quantum efficiency of the pixels by using BSI, especially for the near IR regime within the industrial and surveillance markets, enabling our customers improve performance of their existing products. As a bridge to the next generation family of sensors in our advanced 300-millimeter platform.

In the medical X-ray market, we are continually gaining momentum and are working with several market leaders on large panel dental and medical CMOS detectors based on our one die per wafer sensor technology, using our well established and high margin stitching with best in class high dynamic range pixels, providing customers with extreme value creation and high yield both in 200-millimeter and 300-millimeter wafer technology.

We presently have a strong business with market leadership in this segment and expect substantial growth in 2019 on 200-millimeter with 300 millimeter initial qualifications that will drive an incremental growth over the next multiple years.

For mid to long-term accretive market growth, we are progressing well with a leading DSLR camera supplier and have as well begun a second project with this customer, using state-of-the-art stacked wafer technology on 300-millimeter wafers. For this DSLR supplier, the first front side illumination project is progressing according to plan, expecting to ramp to volume production in 2020, while the second stacked wafer based project with industry leading alignment accuracy and associated performance benefits is expected to ramp to volume production a year after.

In addition, we are progressing on two very exciting programs in the augmented and virtual reality markets, one for 3D time of flight-based sensors and one for silicon-based screens for a virtual reality, head-mount displays.
"

Thursday, July 26, 2018

Loup Ventures LiDAR Technologies Comparison

Loup Ventures publishes its analysis of LiDAR technologies and how they compete with each other on the market:


There is also a comparison of camera, LiDAR and Radar technologies of autonomous vehicles:


Another Loup Ventures article tries to answer the question "If a human can drive a car based on vision alone, why can’t a computer?"

"While we believe Tesla can develop autonomous cars that “resemble human driving” primarily driven by cameras, the goal is to create a system that far exceeds human capability. For that reason, we believe more data is better, and cars will need advanced computer perception technologies such as RADAR and LiDAR to achieve a level of driving far superior than humans. However, since cameras are the only sensor technology that can capture texture, color and contrast information, they will play a key role in reaching level 4 and 5 autonomy and in-turn represent a large market opportunity."