Sunday, July 15, 2018

Panasonic PIR and Thermopile Sensor Presentation

Panasonic publishes a video presenting its PIR and Thermopile sensor lineup:

Saturday, July 14, 2018

TI Unveils AFE for ToF Proximity Sensor

TI's OPT3101 ToF proximity sensor AFE integrates most of the ToF system on a single chip:

Thursday, July 12, 2018

Magic Leap Gets Investment from AT&T

Techcrunch reports that AT&T has made a strategic investment in Magic Leap, a developer of AR glasses. Magic Leap's latest Series D round valued the startup at $6.3b, and the companies have confirmed that the AT&T investment completes the $963m Series D round.

So far, Magic Leap has raised $2.35b from a number of strategic backers including Google, Alibaba and Axel Springer.

AutoSens Announces its Awards Finalists

AutoSens Awards reveals the shortlisted finalists for 2018, with several of them related to imaging:

Most Engaging Content:

  • Mentor Graphics, Andrew Macleod
  • videantis, Marco Jacobs
  • Toyota Motor North America, CSRC, Rini Sherony
  • 2025AD, Stephan Giesler
  • EE Times, Junko Yoshida

Hardware Innovation:

  • NXP Semiconductors
  • Cepton
  • Renesas
  • OmniVision
  • Velodyne Lidar
  • Robert Bosch

Software Innovation:

  • Dibotics
  • Algolux
  • Brodmann17
  • Civil Maps
  • Dataspeed
  • Immervision
  • Prophesee

Most Exciting Start-Up:

  • Hailo
  • Metamoto
  • May Mobility
  • AEye
  • Ouster
  • Arbe Robotics

Game Changer:

  • Siddartha Khastgir, WMG, University of Warwick, UK
  • Marc Geese, Robert Bosch
  • Kalray
  • Prof. Nabeel Riza, University College Cork
  • Intel
  • NVIDIA and Continental partnership

Greatest Exploration:

  • Ding Zhao, University of Michigan
  • Prof Philip Koopman, Carnegie Mellon University
  • Prof Alexander Braun, University of Applied Sciences Düsseldorf
  • Cranfield University Multi-User Environment for Autonomous Vehicle Innovation (MUEAVI)
  • Professor Natasha Merat, Institute for Transport Studies
  • Dr Valentina Donzella, WMG University of Warwick

Best Outreach Project:

  • NWAPW
  • Detroit Autonomous Vehicle Group
  • DIY Robocars
  • RobotLAB
  • Udacity

Image Sensors America Agenda

Image Sensors America, to be held on October 11-12, 2018 in San Francisco, announces its agenda with many interesting papers:

State of the Art Uncooled InGaAs Short Wave Infrared Sensors
Dr. Martin H. Ettenberg | President of Princeton Infrared Technologies

Super-Wide-Angle Cameras – The Next Smartphone Frontier Enabled by Miniature Lens Design and the Latest Sensors
Patrice Roulet Fontani | Vice President, Technology and Co-Founder of ImmerVision

SPAD vs. CMOS Image Sensor Design Challenges – Jitter vs. Noise
Dr. Daniel Van Blerkom | CTO & Co-Founder of Forza Silicon

sCMOS Technology: The Most Versatile Imaging Tool in Science
Dr. Scott Metzler | PCO Tech

Image Sensor Architecture
Presentation By Sub2R

Using Depth Sensing Cameras for 3D Eye Tracking
Kenneth Funes Mora | CEO and Co-founder of Eyeware

Autonomous Driving: The Development of Image Sensors?
Ronald Mueller | CEO of Vision Markets and Associate Consultant of Smithers Apex

SPAD Arrays for LiDAR Applications
Carl Jackson | CTO and Founder of SensL, a division of ON Semiconductor

Future Image Sensors for SLAM and Indoor 3D Mapping
Vitaliy Goncharuk | CEO & Founder of Augmented Pixels

Future Trends in Imaging Beyond the Mobile Market
Amos Fenigstein | Senior Director of R&D for Image Sensors at TowerJazz

Presentation by Gigajot

Wednesday, July 11, 2018

ST FlightSense Presentation

ST publishes its presentation on ToF proximity sensor products:

Four Challenges for Automotive LiDARs

DesignNews publishes a list of four challenges that LiDARs have to overcome on the way to wide acceptance in vehicles:

Price reduction:

“Every technology gets commoditized at some point. It will happen with LiDAR,” said Angus Pacala, co-founder and CEO of LiDAR startup Ouster. “Automotive radars used to be $15,000. Now, they are $50. And it did take 15 years. We’re five years into a 15-year lifecycle for LiDAR. So, cost isn’t going to be a problem.”

Increase detection range:

“Range isn’t always range,” said John Eggert, director of automotive sales and marketing at Velodyne. “[It’s] dynamic range. What do you see and when can you see it? We see a lot of ‘specs’ around 200 meters. What do you see at 200 meters if you have a very reflective surface? Most any LiDAR can see at 100, 200, 300 meters. Can you see that dark object? Can you get some detections off a dark object? It’s not just a matter of reputed range, but range at what reflectivity? While you’re able to see something very dark and very far away, how about something very bright and very close simultaneously?”
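A rough radiometric sketch illustrates the point (all numbers below are illustrative assumptions, not any vendor's specs): for a beam-filling Lambertian target, the returned power scales with reflectivity over range squared, so a headline range figure only means something together with the target reflectivity at which it was measured.

```python
# Illustrative-only sketch of why a LiDAR range spec depends on target reflectivity.
# All parameters are made-up assumptions, not the specs of any real sensor.
import math

def received_power(p_tx, reflectivity, range_m, aperture_d=0.025, eta=0.8):
    """Return power from a beam-filling Lambertian target:
    P_rx ~ P_tx * rho * A_rx / (pi * R^2) * eta  (atmospheric loss ignored)."""
    a_rx = math.pi * (aperture_d / 2) ** 2              # receive aperture area, m^2
    return p_tx * reflectivity * a_rx / (math.pi * range_m ** 2) * eta

def max_range(p_tx, reflectivity, p_min, aperture_d=0.025, eta=0.8):
    """Range at which the return just reaches the detection threshold p_min."""
    a_rx = math.pi * (aperture_d / 2) ** 2
    return math.sqrt(p_tx * reflectivity * a_rx * eta / (math.pi * p_min))

p_tx, p_min = 50.0, 4e-8    # assumed peak power [W] and detection threshold [W]
for rho in (0.8, 0.5, 0.1): # near-retroreflective, typical, dark target
    print(f"reflectivity {rho:.0%}: max range ~ {max_range(p_tx, rho, p_min):.0f} m")
```

With these made-up numbers, the detectable range for a 10% target comes out at roughly a third of that for an 80% target, which is exactly the gap a single "200 m" spec can hide.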

Improve robustness:

“It comes down to vibration and shock, wear and tear, cleaning—all the aspects that we see on our cars,” said Jada Smith, VP engineering and external affairs at Aptiv, a Delphi spin-off. “LiDAR systems have to be able to withstand that. We need perfection in the algorithms. We have to be confident that the use cases are going to be supported time and time again.”

Withstand the environment and different weather conditions:

Jim Schwyn, CTO of Valeo North America, said: “What if the LiDAR is dirty? Are we in a situation where we are going to take the gasoline tank from a car and replace it with a windshield washer reservoir to be able to keep these things clean?”

The potentially fatal LiDAR flaws that need to be corrected:

  • Bright sun against a white background
  • A blizzard that causes whiteout conditions
  • Early morning fog

Another article on a somewhat similar matter has been published by Lidarradar.com:

SmartSens Unveils GS BSI VGA Sensor

PRNewswire: SmartSens launches the SC031GS, calling it "the world's first commercial-grade 300,000-pixel Global Shutter CMOS image sensor based on BSI pixel technology." While other companies have announced GS BSI sensors before, those have higher-than-VGA resolution.

The SC031GS is aimed at a wide range of commercial products, including smart barcode readers, drones, smart modules (Gesture Recognition/vSLAM/Depth Information/Optical Flow) and other image recognition-based AI applications, such as facial recognition and gesture control.

The SC031GS uses large 3.75µm pixels (1/6-inch optical format) and SmartSens' single-frame HDR technology, combined with a global shutter. The maximum frame rate is 240fps.
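A quick back-of-the-envelope check ties the quoted figures together; the 640 x 480 array layout assumed below is the usual VGA arrangement, not stated in the press release:

```python
# Rough consistency check of the quoted SC031GS figures.
# The 640 x 480 layout is an assumption (standard VGA), not from the press release.
h_pix, v_pix, pitch_um = 640, 480, 3.75
width_mm  = h_pix * pitch_um / 1000                 # 2.40 mm
height_mm = v_pix * pitch_um / 1000                 # 1.80 mm
diag_mm   = (width_mm**2 + height_mm**2) ** 0.5     # 3.00 mm, consistent with a 1/6" type
pix_rate  = h_pix * v_pix * 240                     # ~74 Mpixel/s at the quoted 240 fps
print(f"array {width_mm:.2f} x {height_mm:.2f} mm, diagonal {diag_mm:.2f} mm")
print(f"pixel rate at 240 fps: {pix_rate/1e6:.0f} Mpix/s")
```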

Leo Bai, GM of SmartSens' AI Image Sensors Division, stated: "SmartSens is not only a new force in the global CMOS image sensor market, but also a company that commits to designing and developing products that meet the market needs and reflect industry trends. We partnered with key players in the AI field to integrate AI functions into the product design. SC031GS is such a revolutionary product that is powered by our leading Global Shutter CMOS image sensing technology and designed for trending AI applications."

SC031GS is now in mass production.

Tuesday, July 10, 2018

SenseTime to Expand into Automotive Applications

South China Morning Post: Face recognition startup SenseTime announces its plans to expand into automotive applications.

“Our leading algorithms for facial recognition have already proven a big success,” said SenseTime co-founder Xu Bing, “and now comes [new technologies for] autonomous driving, which enable machines to recognise images both inside and outside cars, and an augmented reality engine, integrating know-how in reading facial expressions and body movement.”

SenseTime raised $620m in May, calling itself the world’s most valuable AI start-up, with a valuation of $4.5b. Known for providing AI-powered surveillance software for China’s police, SenseTime said it achieved profitability last year, selling AI-powered applications for smart cities, surveillance, smartphones, internet entertainment, finance, retail and other industries.

Last year, Honda announced a partnership with SenseTime for automated driving technologies.

Nvidia AI-Enhanced Noise Removal

DPReview quotes the Nvidia blog presenting joint research with MIT and Aalto University on AI-enhanced noise removal, with quite impressive results:


Omnivision Releases Sensor Optimized for Structured Light FaceID Applications

OmniVision announces a global shutter sensor targeting facial authentication in mobile devices, along with other machine vision applications such as AR/VR, drones and robotics. The high-resolution OV9286 sensor, with 20% more pixels than the previous-generation sensor, is said to enable a new level of accuracy in facial authentication for smartphone applications requiring high levels of security. The OV9286 is optimized for payment-level facial authentication using a structured light solution for high-quality 3D images.

The market for facial recognition components is expected to grow rapidly to $9.2b by 2022, according to a report from Allied Market Research.

“A higher level of image-sensing accuracy is required to safely authenticate smartphones for payment applications, compared to using facial authentication for unlocking a device,” said Arun Jayaseelan, senior marketing manager at OmniVision. “The increased resolution of the OV9286 image sensor meets these requirements, while using the global shutter technology to optimize system power consumption as well as to eliminate motion artifacts and blurring.”

The sensor is available in two versions: the OV9286 for smartphone applications, and the OV9285 for other machine vision applications that also need high-resolution sensors to enable a broad range of image-sensing functions. The OV9286 has a high CRA of 26.7 degrees for low z-height and slim-profile smartphone designs. The OV9285 has a lower CRA of 9 degrees for applications where that tight z-height restriction does not apply, supporting wide field-of-view lens designs.

Both the OV9285 and the OV9286 incorporate 1328 x 1120 resolution at 90 fps, an optical format of 1/3.5-inch and 3x3-micrometer OmniPixel3-GS technology. These global shutter sensors, in combination with excellent NIR sensitivity at 850 nm and 940 nm, reduce device power consumption to extend battery life.

The OV9285 and OV9286 sensors are available now.

Monday, July 09, 2018

3D Imaging Fundamentals (Open Access)

OSA Advances in Optics and Photonics publishes "Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems" by Manuel Martínez-Corral (University of Valencia, Spain) and Bahram Javidi (University of Connecticut, Storrs).

"This tutorial is addressed to the students and researchers in different disciplines who are interested to learn about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide the readers with a tutorial that teaches fundamental principles as well as more advanced concepts to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial is organized to begin with reviewing the fundamentals of imaging, and then it progresses to more advanced topics in 3D imaging and displays. More specifically, this tutorial begins by covering the fundamentals of geometrical optics and wave optics tools for understanding and analyzing optical imaging systems. Then, we proceed to use these tools to describe integral imaging, light-field, or plenoptics systems, the methods for implementing the 3D capture procedures and monitors, their properties, resolution, field of view, performance, and metrics to assess them. We have illustrated with simple laboratory setups and experiments the principles of integral imaging capture and display systems. Also, we have discussed 3D biomedical applications, such as integral microscopy."


The OSA Advances in Optics and Photonics site also has a 2011 open access paper "Structured-light 3D surface imaging: a tutorial" by Jason Geng. Since the structured light approach has progressed a lot over recent years, the information in this tutorial is largely obsolete. Still, it could be a good start for learning the basics or for history-inclined readers.
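For readers starting from scratch, the relation at the heart of structured-light (and, more generally, stereo or multi-view light-field) depth recovery is plain triangulation. A minimal sketch with made-up camera parameters, not taken from either tutorial:

```python
# Minimal triangulation sketch: depth from the disparity between a projector
# (or second camera view) and the camera. All numbers are illustrative only.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """z = f * b / d, the basic stereo / structured-light relation."""
    return focal_px * baseline_m / disparity_px

f_px = 800.0    # assumed focal length in pixels
b_m = 0.075     # assumed projector-to-camera baseline, 7.5 cm
for d_px in (120.0, 60.0, 30.0):    # observed shift of the projected pattern
    z = depth_from_disparity(f_px, b_m, d_px)
    print(f"disparity {d_px:5.1f} px -> depth {z:.2f} m")
```

Because depth is inversely proportional to disparity, the depth error grows roughly as z²/(f·b) per pixel of disparity error, which is why triangulation-based systems work best at short range.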

Sunday, July 08, 2018

Espros on ToF FaceID Calibration Challenges

Espros' Dieter Kaegi presentation "3D Facial Scanning" at the Swiss Photonics Workshop, held in Chur on June 21, 2018, talks about the many challenges in developing a ToF-based FaceID module:

Friday, July 06, 2018

Leti-CNRS Full-Frame Curved Sensor Paper

The Leti-CNRS curved sensor paper "Curved detectors developments and characterization: application to astronomical instruments" by Simona Lombardo, Thibault Behaghel, Bertrand Chambion, Wilfried Jahn, Emmanuel Hugot, Eduard Muslimov, Melanie Roulet, Marc Ferrari, Christophe Gaschet, Stephane Caplet, and David Henry is available online. This work was first announced a year ago.

"We describe here the first concave curved CMOS detector developed within a collaboration between CNRS-LAM and CEA-LETI. This fully-functional detector 20 Mpix (CMOSIS CMV20000) has been curved down to a radius of Rc =150 mm over a size of 24x32 mm2. We present here the methodology adopted for its characterization and describe in detail all the results obtained. We also discuss the main components of noise, such as the readout noise, the fixed pattern noise and the dark current. Finally we provide a comparison with the flat version of the same sensor in order to establish the impact of the curving process on the main characteristics of the sensor.

The curving process of these sensors consists of two steps: firstly the sensors are thinned with a grinding equipment to increase their mechanical flexibility, then they are glued onto a curved substrate. The required shape of the CMOS is, hence, due to the shape of the substrate. The sensors are then wire bonded keeping the packaging identical to the original one before curving. The final product is, therefore, a plug-and-play commercial component ready to be used or tested (figure 1B).
"


"The PRNU factor of the concave sensor shows an increase of 0.8% with respect to the flat sensor one. The difference between the two is not significant. However more investigations are required as it might be due to the curving process and it could explain the appearance of a strong 2D pattern for higher illumination levels."

Yole Webcast on Autonomous Driving

Yole Developpement publishes a recording of its April 2018 webcast "Core Technologies for Robotic Vehicle" that talks about cameras and LiDARs, among other key technologies:


Thursday, July 05, 2018

Harvard University Proposes Flat Lens

Photonics.com: Harvard University Prof. Federico Capasso and his group present "a single flat lens that can focus the entire visible spectrum of light in the same spot and in high resolution. Professor Federico Capasso and members of the Capasso Group explain why this breakthrough in metalenses could have major implications in the field of optics, and could replace bulky, curved lenses currently used in optical devices."

CEA-Leti with Partners to Develop LiDAR Benchmarks

LiDAR performance claims are a bit of a Wild West today, as there are no standardized performance tests. Every company can claim basically anything, measuring the performance in its own unique way. Not anymore: Leti is aiming to change that.

CEA-Leti and its partners Transdev and IRT Nanoelec are to develop a list of criteria and objective parameters by which various commercial LiDAR systems can be evaluated and compared. Leti teams will focus on perception requirements and challenges from a LiDAR system perspective and evaluate the sensors in real-world conditions. Vehicles will be exposed to objects with varying reflectivity, such as tires and street signs, as well as environmental conditions, such as weather, available light, and fog.

e2v Unveils 67MP APS-C Sensor with 2.5um Global Shutter Pixels

Teledyne e2v announces its Emerald 67MP CMOS image sensor. The new sensor features the smallest global shutter pixel (2.5µm) on the market, ideal for high end automated optical inspection, microscopy and surveillance.

Emerald 67M has 2.8e- of readout noise, 70% QE, and high speed, which significantly enhances production line throughput.
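A quick sanity check of the headline numbers (assuming a square 8192 x 8192 array at the quoted 2.5µm pitch):

```python
# Back-of-the-envelope check of the Emerald 67M headline figures.
pixels_h = pixels_v = 8192
pitch_um = 2.5
mpix = pixels_h * pixels_v / 1e6          # ~67.1 Mpix
side_mm = pixels_h * pitch_um / 1000      # 20.48 mm per side
diag_mm = (2 * side_mm**2) ** 0.5         # ~29 mm diagonal
print(f"{mpix:.1f} Mpix, {side_mm:.2f} x {side_mm:.2f} mm, diagonal {diag_mm:.1f} mm")
```

A roughly 29 mm diagonal sits right around typical APS-C diagonals of 27-28 mm, which is how a square global shutter array earns the APS-C label in the headline.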

Vincent Richard, Marketing Manager at Teledyne e2v, said, “We are very pleased to widen our sensor portfolio with the addition of Emerald 67M, the first 8192 x 8192 global shutter sensor, running at high frame rates and offering a comprehensive set of features. Developed through close discussions with leading OEMs in the automated optical inspection market, this new sensor offers application features such as our unique Region of Interest mode, which helps to improve customer yield. Combined with its 67M resolution, our newest Emerald sensor tackles the challenge of image instability as a result of inspection system vibration.”

Wednesday, July 04, 2018

EVG Wafer Bonding Machine Alignment Accuracy Improved to 50nm

PRNewswire: EV Group (EVG) unveiled the SmartView NT3 aligner, which is available on the company's GEMINI FB XT integrated fusion bonding system for high-volume manufacturing (HVM) applications. The SmartView NT3 aligner provides sub-50-nm wafer-to-wafer alignment accuracy—a 2-3X improvement—as well as significantly higher throughput (up to 20 wafers per hour) compared to the previous-generation platform.

Eric Beyne, imec fellow and program director for 3D system integration, says: "[An] area of particular focus is wafer-to-wafer bonding, where we are achieving excellent results in part through our work with industry partners such as EV Group. Last year, we succeeded in reducing the distance between the chip connections, or pitch, in hybrid wafer-to-wafer bonding to 1.4 microns, which is four times smaller than the current standard pitch in the industry. This year we are working to reduce the pitch by at least half again."

"EVG's GEMINI FB XT fusion bonding system has consistently led the industry in not only meeting but exceeding performance requirements for advanced packaging applications, with key overlay accuracy milestones achieved with several industry partners within the last year alone," stated Paul Lindner, executive technology director, EV Group. "With the new SmartView NT3 aligner specifically engineered for the direct bonding market and added to our widely adopted GEMINI FB XT fusion bonder, EVG once again redefines what is possible in wafer bonding—helping the industry to continue to push the envelope in enabling stacked devices with increasing density and performance, lower power consumption and smaller footprint."

Digitimes Image Sensor Market Forecast

Digitimes Research forecasts global CMOS sensor and CCD sales to reach $15b in 2020. Sales increased by over 15% YoY to $12.2b in 2017. Sony's market share in CMOS sensors is estimated at 45% in both 2016 and 2017.

As the smartphone market slows down, Sony is moving its resources to the automotive CIS market, where its share was a relatively low 9% in 2017. Sony sells its image sensors to Toyota and looks to expand its customer base to include Bosch, Nissan and Hyundai this year.

Apple to Integrate Rear 3D Camera in Next Year's iPhone

DeviceSpecifications quotes the Korean site ETNews saying that the Hynix-group assembly house JSCK is working with Apple on the next-generation 3D sensing camera:

"Apple has revealed the iPhone of 2019 will have a triple rear camera setup with 3D sensing capability that will be a step ahead of the technology that was used for the front-facing camera of the iPhone X released in 2017. The front camera will be used for unlocking purposes, and the rear ones will be used to provide augmented reality (AR) experience. According to industry sources, Jesset Taunch Chippak Korea (JSCK), a Korean company in China, has been developing the 3D sensing module since the beginning of this year. It will be placed in the middle of the rear triple camera module... Apple used infrared (IR) as a light source for the iPhone's front-facing camera 3D sensing, but the rear camera plans to use a different light source than the IR because it needs to sense a wider range."

Tuesday, July 03, 2018

Peter Noble to Publish a Book

Peter Noble, the inventor of the active pixel sensor in 1966, is about to publish his autobiography:

MTA Special Section on Advanced Image Sensor Technology

Japanese ITE Transactions on Media Technology and Applications publishes a Special Section on Advanced Image Sensor Technology with many interesting papers, all in open access:

Statistical Analyses of Random Telegraph Noise in Pixel Source Follower with Various Gate Shapes in CMOS Image Sensor
Shinya Ichino, Takezo Mawaki, Akinobu Teramoto, Rihito Kuroda, Shunichi Wakashima, Tomoyuki Suwa, Shigetoshi Sugawa
Tohoku University

Random telegraph noise (RTN) that occurs at in-pixel source follower (SF) transistors and column amplifier is one of the most important issues in CMOS image sensors (CIS) and reducing RTN is a key to the further development of CIS. In this paper, we clarified the influence of transistor shapes on RTN from statistical analysis of SF transistors with various gate shapes including rectangular, trapezoidal and octagonal structures by using an array test circuit. From the analysis of RTN parameter such as amplitude and the current-voltage characteristics by the measurement of a large number of transistors, the influence of shallow trench isolation (STI) edge on channel carriers and the influence of the trap location along source-drain direction are discussed by using the octagonal SF transistors which have no STI edge and the trapezoidal SF transistors which have an asymmetry gate width at source and drain side.

Impacts of Random Telegraph Noise with Various Time Constants and Number of States in Temporal Noise of CMOS Image Sensors
Rihito Kuroda, Akinobu Teramoto, Shigetoshi Sugawa
Tohoku University

This paper describes the impacts of random telegraph noise (RTN) with various time constants and number of states to temporal noise characteristics of CMOS image sensors (CISs) based on a statistical measurement and analysis of a large number of MOSFETs. The obtained results suggest that from a trap located relatively away from the gate insulator/Si interface, the trapped carrier is emitted to the gate electrode side. Also, an evaluation of RTN using only root mean square values tends to underestimate the effect of RTN with large signal transition values and relatively long time constants or multiple states especially for movie capturing applications in low light environment. It is proposed that the signal transition values of RTN should be incorporated during the evaluation.

Quantum Efficiency Simulation and Electrical Cross-talk Index Development with Monte-Carlo Simulation Based on Boltzmann Transport Equation
Yuichiro Yamashita, Natsumi Minamitani, Masayuki Uchiyama, Dun-Nian Yaung, Yoshinari Kamakura
TSMC and Osaka University

This paper explains a new method to model a photodiode for accurate quantum efficiency simulation. Individual photo-generated particles are modeled by Boltzmann transport equation, and simulated by Monte-Carlo method. Good accuracy is confirmed in terms of similarities of quantum efficiency curves, as well as color correction matrices and SNR10s. Three attributes - "initial energy of the electron", "recombination of electrons at the silicon surface" and "impurity scattering" - are tested to examine their effectiveness in the new model. The theoretical difference to the conventional method with drift-diffusion equation is discussed as well. Using the simulation result, the relationship among the cross-talk, potential barrier, and distance from the boundary has been studied to develop a guideline for cross-talk suppression. It is found that a product of the normal distance from the pixel boundary and the electric field perpendicular to the Z-axis needs to be more than 0.02V to suppress the probability of electron leakage to the adjacent pixel to less than 10%.

A Multi Spectral Imaging System with a 71dB SNR 190-1100 nm CMOS Image Sensor and an Electrically Tunable Multi Bandpass Filter
Yasuyuki Fujihara, Yusuke Aoyagi, Maasa Murata, Satoshi Nasuno, Shunichi Wakashima, Rihito Kuroda, Kohei Terashima, Takahiro Ishinabe, Hideo Fujikake, Kazuhiro Wako, Shigetoshi Sugawa
Tohoku University

This paper demonstrates a multi spectral imaging system utilizing a linear response, high signal to noise ratio (SNR) and wide spectral response CMOS image sensor (CIS), and an electrically tunable multi bandpass optical filter with narrow full width at half maximum (FWHM) of transmitted waveband. The developed CIS achieved 71dB SNR, 1.5x107 e- full well capacity (FWC), 190-1100nm spectral response with very high quantum efficiency (QE) in near infrared (NIR) waveband using low impurity concentration Si wafer (~1012 cm-3). With the developed CIS, diffusion of 5mg/dl glucose into physiological saline solution, as a preliminary experiment for non-invasive blood glucose measurement, was successfully visualized under 960nm and 1050nm wavelengths, at which absorptions of water molecules and glucose appear among UV to NIR waveband, respectively.
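The quoted 71dB follows directly from the full-well capacity: at full well, the shot-noise-limited SNR is the square root of the collected electrons (a standard relation, assuming shot-noise-limited operation, which the paper's linear-response claim supports):

```python
# Quick check: shot-noise-limited SNR at full well for the quoted 1.5e7 e- FWC.
import math
fwc_electrons = 1.5e7
snr_max = math.sqrt(fwc_electrons)                 # ~3873 : 1
print(f"SNR_max = {snr_max:.0f} : 1 = {20 * math.log10(snr_max):.1f} dB")  # ~71.8 dB
```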

Single Exposure Type Wide Dynamic Range CMOS Image Sensor With Enhanced NIR Sensitivity
Shunsuke Tanaka, Toshinori Otaka, Kazuya Mori, Norio Yoshimura, Shinichiro Matsuo, Hirofumi Abe, Naoto Yasuda, Kenichiro Ishikawa, Shunsuke Okura, Shinji Ohsawa, Takahiro Akutsu, Ken Wen-Chien Fu, Ho-Ching Chien, Kenny Liu, Alex YL Tsai, Stephen Chen, Leo Teng, Isao Takayanagi
Brillnics Japan

In new markets such as in-vehicle cameras, surveillance camera and sensing applications that are rising rapidly in recent years, there is a growing need for better NIR sensing capability for clearer night vision imaging, in addition to wider dynamic range imaging without motion artifacts and higher signal-to-noise (S/N) ratio, especially in low-light situation. We have improved the previously reported single exposure type wide dynamic range CMOS image sensor (CIS), by optimizing the optical structure such as micro lens shape, forming the absorption structure on the Si surface and adding the back side deep trench isolation (BDTI). We achieved high angular response of 91.4%, high Gr/Gb ratio of 98.0% at ±20°, 610nm, and high NIR sensitivity of QE 35.1% at 850nm, 20.5% at 940nm without degrading wide dynamic range performance of 91.3dB and keeping low noise floor of 1.1e-rms.

Separation of Multi-path Components in Sweep-less Time-of-flight Depth Imaging with a Temporally-compressive Multi-aperture Image Sensor
Futa Mochizuki, Keiichiro Kagawa, Ryota Miyagi, Min-Woong Seo, Bo Zhang, Taishi Takasawa, Keita Yasutomi, Shoji Kawahito
Shizuoka University

This paper demonstrates to separate multi-path components caused by specular reflection with temporally compressive time-of-flight (CToF) depth imaging. Because a multi-aperture ultra-high-speed (MAUHS) CMOS image sensor is utilized, any sweeping or changing of frequency, delay, or shutter code is not necessary. Therefore, the proposed scheme is suitable for capturing dynamic scenes. A short impulse light is used for excitation, and each aperture compresses the temporal impulse response with a different shutter pattern at the pixel level. In the experiment, a transparent acrylic plate was placed 0.3m away from the camera. An objective mirror was placed at the distance of 1.1 m or 1.9m from the camera. A set of 15 compressed images was captured at an acquisition rate of 25.8 frames per second. Then, 32 subsequent images were reconstructed from it. The multi-path interference from the transparent acrylic plates was distinguished.

CMOS Image Sensor with Pseudorandom Pixel Placement for Image Measurement using Hough Transform
Junichi Akita, Masashi Toda
Kanazawa University, Kumamoto University

The pixels in the conventional image sensors are placed at lattice positions, and this causes the jaggies at the edge of the slant line we perceive, which are hard to resolve by pixel size reduction. The authors have been proposing the method of reducing the jaggies effect by arranging the photodiode at pseudorandom positions, while keeping the lattice arrangement of pixel boundaries that is compatible with the conventional image sensor architecture. In this paper, the authors discuss the design of a CMOS image sensor with pseudorandom pixel placement, as well as the evaluation of image measurement accuracy of line parameters using the Hough transform.
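For readers unfamiliar with the measurement method mentioned in the abstract, here is a minimal Hough-transform sketch for estimating line parameters from sampled point positions, which need not lie on a regular lattice. It illustrates the standard technique on synthetic data and is not the authors' code:

```python
import numpy as np

def hough_line(points, n_theta=180, n_rho=200):
    """Vote in (theta, rho) space for the line rho = x*cos(theta) + y*sin(theta)
    that best fits the (x, y) samples. The samples may sit at arbitrary
    (e.g. pseudorandom) positions; no pixel lattice is assumed."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).max() * np.sqrt(2)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        r = x * np.cos(thetas) + y * np.sin(thetas)      # rho for every candidate theta
        acc[np.arange(n_theta), np.digitize(r, rhos) - 1] += 1
    t, k = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], rhos[k]

# Noisy samples of the line y = 0.5*x + 10 taken at irregular x positions.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, 300)
y = 0.5 * x + 10.0 + rng.normal(0.0, 0.5, x.size)
theta, rho = hough_line(np.column_stack([x, y]))
print(f"theta = {np.degrees(theta):.1f} deg, rho = {rho:.1f}")
```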

Monday, July 02, 2018

2018 Harvest Imaging Forum Agenda

The 6th Harvest Imaging Forum is to be held on Dec. 6th and 7th, 2018 in Delft, the Netherlands. The agenda includes two topics, each one taking one day:

"Efficient embedded deep learning for vision applications" by Prof. Marian VERHELST (KU Leuven, Belgium)
Abstract:

Deep learning has become popular for smart camera applications, showing unprecedented recognition, tracking and segmentation capabilities. Deep learning, however, comes with significant computational complexity, which until recently made it feasible only on power-hungry server platforms. In the past years, we however see a trend towards embedded processing of deep learning networks. It is crucial to understand that this evolution is not enabled by either novel processing architectures or novel deep learning algorithms alone. The breakthroughs clearly come from a close co-optimization between algorithms and implementation architectures.

After an introduction into deep neural network processing and its implementation challenges, this forum will give an overview of recent trends enabling efficient network evaluations in embedded platforms such as smart cameras. This discussion involves a tight interplay between newly emerging hardware architectures and emerging implementation-driven algorithmic innovations. We will review a wide range of recent techniques which make the learning algorithms implementation-aware towards a drastically improved inference efficiency. This forum will give the audience a better understanding of the opportunities and implementation challenges of embedded deep learning, and enable them to follow research on deep learning processors.


"Image and Data Fusion" by Prof. Wilfried PHILIPS (Ghent University, Belgium)
Abstract:

Large scale video surveillance networks are now commonplace, and smart cameras and advanced video analytics have been introduced to alleviate the resulting problem of information overload. However, the true power of video analytics comes from fusing information from various cameras and sensors, with applications such as people tracking over wide areas or inferring 3D shape from multi-view video. Fusion also helps to overcome the limitations of individual sensors. For instance, thermal imaging helps to detect pedestrians in difficult lighting conditions, while pedestrians are more easily (re)identified in RGB images. Automotive sensing and traffic control applications are another major driver for sensor fusion. Typical examples include lidar, radar and depth imaging to complement optical imaging. In fact, as the spatial resolution of lidar and radar is gradually increasing, these devices these days (can) produce image-like outputs.

The workshop will introduce the theoretical foundations of sensor fusion and the various options for fusion, ranging from fusion at the pixel level, over decision fusion, to more advanced cooperative and assistive fusion. It will address handling heterogeneous data, e.g., video with different spatial, temporal or spectral resolution and/or representing different physical properties. It will also address fusion frameworks to create scalable systems based on communicating smart cameras and distributed processing. This cooperative and assistive fusion facilitates the integration of cameras in the Internet-of-Things.

Interview with Eric Fossum

LDV Capital publishes an interview with Eric Fossum:



Update: IEEE Spectrum posts that the Photobit PB-100 CMOS sensor has been added to its Chip Hall of Fame: "PB-100 popularized the tech that became the way people capture photos and video."

Omnivision Publishes its Patent Statistics

OmniVision adds an Intellectual Property page to its website. The big jump between 2010 and 2011 is probably due to the purchase of the Kodak CMOS sensor patent portfolio:

Sunday, July 01, 2018

Light Co. Prepares 9-cameras Smartphone

Washington Post presents Light Co.'s future plans: Light "showed me concept and working prototype phones with between five and nine lenses — yes, nine — on the back. It says its phone design is capable of capturing 64 megapixel shots, better low-light performance and sophisticated depth effects.

Light, which counts giant phone manufacturer Foxconn as an investor, says a smartphone featuring its multi-lens array will be announced later this year.
"

Saturday, June 30, 2018

3D Camera Maker Orbbec Raises $200m

AllTechAsia, ChinaMoneyNetwork: Alibaba's investment arm Ant Financial leads Orbbec's $200m Series D financing round. Other investors include SAIF Partners, R-Z Capital, Green Pine Capital Partners, and Tianlangxing Capital. The structured light 3D camera maker Orbbec is said to be the fourth company in the world to mass-produce 3D sensors for consumer use. Orbbec says that its solution is now used by over 2,000 companies globally and can be applied in various fields, including unmanned retail, autonomous driving, home systems, smart security, robotics, Industry 4.0, VR/AR, etc.

Friday, June 29, 2018

Hayabusa2 Takes Pictures of Ryugu

JAXA: The Japanese asteroid explorer Hayabusa2 has arrived within 20km of Ryugu, its target asteroid, after three and a half years en route. The explorer is expected to bring a sample of the asteroid back to Earth in December 2020.

Here is one of the first Ryugu images from a short distance:


Talking about Hayabusa2's cameras, there are four of them:


Thursday, June 28, 2018

Omnivision Loses Lawsuit Against SmartSens

SmartSens kindly sent me an official update on the OmniVision lawsuit against SmartSens:

"SHANGHAI, June 25, 2018 – SmartSens, a leading provider of high-performance CMOS image sensors, responded to its patent infringement lawsuit today.

With regard to the recent lawsuit that accused SmartSens of infringing patents No. 200510052302.4 (titled “CMOS Image Sensor Using Shared Transistors between Pixels”) and No. 200680019122.9 (titled “Pixel with Symmetrical Field Effect Transistor Placement”), the Patent Reexamination Board of State Intellectual Property Office of the People’s Republic of China has ruled the two patents in question invalid.


According to the relevant judicial interpretation of the Supreme People’s Court, the infringement case regarding the above-mentioned patents will be dismissed.


“As a company researching, developing and utilizing technology, we pay due respect to intellectual property. However, we will not yield to any false accusations and misuse of intellectual property law,” said SmartSens CTO Yaowu Mo, Ph.D. “SmartSens will take legal measures to not only defend the company and its property, but also protect our partners’ and clients’ interest. SmartSens attributes its success to our talent and intellectual property. In this case, we are pleased to see that justice is served and a law-abiding company like SmartSens is protected.”

Thus far, SmartSens has applied for more than 100 patents, of which 75 were submitted in China and more than 20 have been granted. More than 30 patent applications were submitted in the United States, and more than 20 have been granted.

About SmartSens

SmartSens Technology Co., Ltd, a leading supplier of high-performance CMOS imaging systems, was founded by Richard Xu, Ph.D. in 2011. SmartSens’ R&D teams in Silicon Valley and Shanghai develop industry-leading image sensing technology and products. The company receives strong support from strategic partners and an ISO-Certified supply chain infrastructure, and delivers award-winning imaging solutions for security and surveillance, consumer products, automotive and other mass market applications.
"

LiDARs in China

ResearchInChina publishes "Global and China In-vehicle LiDAR Industry Report, 2017-2022." A few quotes:

"Global automotive LiDAR sensor market was USD300 million in 2017, and is expected to reach USD1.4 billion in 2022 and soar to USD4.4 billion in 2027 in the wake of large-scale deployment of L4/5 private autonomous cars. Mature LiDAR firms are mostly foreign ones, such as Valeo and Quanergy. Major companies that have placed LiDARs on prototype autonomous driving test cars are Velodyne, Ibeo, Luminar, Valeo and SICK.

Chinese LiDAR companies lag behind key foreign peers in terms of time of establishment and technology. LiDARs are primarily applied to autonomous logistic vehicles (JD and Cainiao) and self-driving test cars (driverless vehicles of Beijing Union University and Moovita). Baidu launched Pandora (co-developed with Hesai Technologies), the sensor integrating LiDAR and camera, in its Apollo 2.5 hardware solution.

According to ADAS and autonomous driving plans of major OEMs, most of them will roll out SAE L3 models around 2020. Overseas OEMs: PAS SAE L3 (2020), Honda SAE L3 (2020), GM SAE L4 (2021+), Mercedes Benz SAE L3 (Mercedes Benz new-generation S in 2021), BMW SAE L3 (2021). Domestic OEMs: SAIC SAE L3 (2018-2020), FAW SAE L3 (2020), Changan SAE L3 (2020), Great Wall SAE L3 (2020), Geely SAE L3 (2020), and GAC SAE L3 (2020). The L3-and-above models with LiDAR are expected to share 10% of ADAS models in China in 2022. The figure will hit 50% in 2030.
"


13NewsNow talks about Tesla denying the need for LiDAR altogether: "Tesla has looked to cameras and radar — without lidar — to do much of the work needed for its Autopilot driver assistance system.

But other automakers and tech companies rushing to develop autonomous cars — Waymo, Ford and General Motors, for instance — are betting on lidar.

"Tesla's trying to do it on the cheap," [Sam Abuelsamid, an analyst with Navigant Research] said. "They're trying to take the cheap approach and focus on software. The problem with software is it's only as good as the data you can feed it.
"

Wednesday, June 27, 2018

Velodyne CTO Promotes High Resolution LiDAR

Velodyne publishes a video with its CTO Anand Gopalan talking about VLS-128 improvements and features:

Vivo Smartphone ToF Camera is Official Now

PRNewswire: Vivo reveals its TOF 3D Sensing Technology "with the promise of a paradigm shift in imaging, AR and human-machine interaction, which will elevate consumer lifestyles with new levels of immersion and smart capability."

Vivo's TOF 3D camera features 300,000-pixel depth resolution, which is said to be 10x the point count of existing structured light technology. It enables 3D mapping at up to 3m from the phone while having a smaller baseline than structured light. TOF 3D Sensing Technology is also simpler and smaller in structure and allows for more flexibility when embedded in a smartphone. This will enable much broader application of this technology than was previously possible.
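For context on the quoted 3m figure: in a continuous-wave ToF camera, depth is derived from the phase shift of the modulated illumination, and the modulation frequency sets the unambiguous range. A minimal sketch (the 50 MHz modulation below is an assumption for illustration, not a Vivo spec):

```python
# Minimal CW (phase-shift) ToF relations; the 50 MHz figure is an assumption.
import math
C = 299_792_458.0                     # speed of light, m/s

def unambiguous_range(f_mod_hz):
    """Beyond this distance a single-frequency CW-ToF phase measurement wraps."""
    return C / (2 * f_mod_hz)

def depth_from_phase(phase_rad, f_mod_hz):
    """d = c * phi / (4 * pi * f_mod) for a CW (phase-shift) ToF sensor."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

f_mod = 50e6                          # assumed modulation frequency, 50 MHz
print(f"unambiguous range: {unambiguous_range(f_mod):.2f} m")               # ~3.0 m
print(f"phase pi/2 -> depth {depth_from_phase(math.pi / 2, f_mod):.2f} m")  # ~0.75 m
```

With the 50 MHz assumption the phase wraps at almost exactly 3m; real systems typically combine several modulation frequencies to extend the unambiguous range.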

Vivo's TOF 3D Sensing Technology is no mere proof of concept. The technology has been tested, meets industry standards, and will be ready for integration with current apps soon. Beyond facial recognition, TOF 3D Sensing Technology will open up new possibilities for entertainment as well as work.


Samsung Cooperates with Fujifilm to Improve Color Separation

BusinessWire: Samsung introduces its new 'ISOCELL Plus' technology. The original ISOCELL technology, introduced circa 2013, forms a physical barrier between neighboring pixels, reducing color crosstalk and expanding the full-well capacity.

With the introduction of ISOCELL Plus, Samsung improves isolation through an optimized pixel architecture. In the existing pixel structure, metal grids are formed over the photodiodes to reduce interference between the pixels, which can also lead to some optical loss as metals tend to reflect and/or absorb the incoming light. For ISOCELL Plus, Samsung replaced the metal barrier with an innovative new material developed by Fujifilm, minimizing optical loss and light reflection.

“We value our strategic relationship with Samsung and would like to congratulate on the completion of the ISOCELL Plus development,” said Naoto Yanagihara, CVP of Fujifilm. “This development is a remarkable milestone for us as it marks the first commercialization of our new material. Through continuous cooperation with Samsung, we anticipate to bring more meaningful innovation to mobile cameras.”

The new ISOCELL Plus delivers higher color fidelity along with up to a 15% enhancement in light sensitivity. The technology is said to enable pixel scaling down to 0.8µm and smaller without a loss in performance.

“Through close collaboration with Fujifilm, an industry leader in imaging and information technology, we have pushed the boundaries of CMOS image sensor technology even further,” said Ben K. Hur, VP of System LSI marketing at Samsung Electronics. “The ISOCELL Plus will not only enable the development of ultra-high-resolution sensors with incredibly small pixel dimensions, but also bring performance advancements for sensors with larger pixel designs.”

Tuesday, June 26, 2018

Vivo to Integrate ToF Front Camera into its Smartphone

Gizchina reports that Vivo is about to announce the integration of a ToF 3D camera into its smartphone at the end of this month:

"In addition to being able to recognize human faces better and more quickly, the recognition distance can be enhanced. Moreover, it does not require the face to be in front of the screen to process. this technology is very likely to be used in a flagship model launched in the second half of the year."

Monday, June 25, 2018

Samsung to Use EUV in Image Sensors Manufacturing

Samsung's investor presentation on June 4, 2018 in Singapore talks about an interesting development at the company's S4 fab:

"S4 line provides CMOS image sensor using 45-nanometer and below process node, and we are building EUV line. We started constructing in February this year."

Samsung has been using 28nm design rules in its image sensors for quite some time. EUV seems to be a natural next step to avoid double patterning limitations on the pixel layout (double patterning is commonly used starting from the 22nm node).

Sunday, June 24, 2018

EUV Lithography Drives EUV Imaging

CNET: As Samsung endorses EUV at its 7nm process node, the new generation of photolithography is finally here, after 30 years in development:


ASML cooperates with Imec to develop image sensors for the 13.5nm wavelength used in its EUV machines: