Monday, January 23, 2017

Sony Kumamoto Fab after April 2016 Earthquake

Nikkei publishes a picture from the Sony Kumamoto fab after the April 2016 earthquake. The damage is quite significant, explaining why it took Sony so long to recover from it:

Sony fab after April 16, 2016 earthquake

"The clean room on the third floor was shaken far more intensely than it was designed to withstand -- 1,396 Gal versus 900 Gal, a gal being a measure of ground acceleration.

With the walls and ceiling damaged, equipment toppled and semiconductor wafers scattered around, "I thought we might have to withdraw from Kumamoto when I first stepped inside," said Yasuhiro Ueda, president of Sony Semiconductor Manufacturing, which runs the factory in the town of Kikuyo.
"

Rice University FlatCam Lecture

Austrian JKU publishes a webinar by Rice University Professor Ashok Veeraraghavan on flat cameras and long-distance imaging with array cameras:

PerkinElmer Interview

MediSens posts an interview with PerkinElmer representatives about the medical image sensors developed at its London, UK design center (the former Dexela):

Thursday, January 19, 2017

Rumor: iPhone X to Feature Gesture Recognition, Optical Fingerprint Sensor

Mashable, Apple Insider, and Business Insider quote Cowen and Company analyst Timothy Arcuri claiming that Apple iPhone X, to be released this fall, is "to include some form of facial/gesture recognition supported by a new laser sensor and an infrared sensor mounted near the front-facing camera."

The iPhone X is said to feature a fingerprint sensor hidden under the OLED screen. Apple may switch to Synaptics' optical-based fingerprint reader for the new Touch ID sensor, cited as "currently the only workable solution" for detecting a fingerprint through a smartphone screen.

The just-released Yole Développement report "Fingerprint Sensor Applications and Technologies – Consumer Market Focus" also points to optical sensing as one of the contenders for under-screen fingerprint devices:


Thanks to AM for the link!

Forza on CIS Trends

Semiconductor Packaging News posts Forza Silicon President Barmak Mansoorian's view on image sensor market trends:

"This year we celebrate Forza Silicon's 16th anniversary in the business of delivering innovative custom CMOS image sensor (CIS) and IC design solutions.

Technological advancements in devices and architectures are still remarkable in the CIS marketplace. Stacked sensor design technology was once a new concept and is now a viable option in high volume applications.

Forza has been working on stacked CIS for several years and we believe it will dominate advances in the CIS marketplace for the next 3-5 years. While stacking has its challenges, Forza is developing custom stacking flows involving intelligent design choices and good product engineering to help our customers take advantage of enhanced technology options.

Additionally, while the cellphone camera has taken center stage with its upgrades we expect a strong response from the DSLR/digital still camera with high-speed 4K video and higher dynamic range. Its performance might also challenge the incumbents in the digital cinema and broadcast markets.

Lastly, as more devices get connected everyday, the Internet of Things (IoT) will remain an important application for the growing network of sensors. At Forza, we continue to look at ways to leverage the stacked chip technology for our customers while still focusing on power, pixel performance, yield, noise and other specific application requirements.
"

Panasonic Aims to Supply Organic Image Sensor to Tesla

Reuters: Panasonic CEO Kazuhiro Tsuga said in an interview that the company would like to adapt its organic photoconductive film CMOS sensors for automotive applications as they can capture high-speed moving objects without distortion. Panasonic believes that these sensors are a good fit for Tesla cars.

Light Co. Talks about L16 Camera Internals

Light Co. posts an update on its progress toward mass production of the L16 multi-aperture camera, giving some details of its internal design:

"The ASIC is the “brain” of the camera and is what we use to control all of the L16’s camera modules. Consider that your traditional (non-computational) camera only has to control a single lens and a single sensor as you compose, focus, adjust and capture. The L16 requires simultaneous control of at least 10 discrete cameras (lens barrels, sensors, mirrors, etc.). Needless to say this requires an extremely advanced “brain,” which is why we designed a highly advanced ASIC specifically for this purpose.

There are 3 ASIC’s in each L16 Camera, each made using industry leading semiconductor processes. Each ASIC is comprised of a 533 MHz processor with multiple levels of internal caches and has up to 4GB of DDR memory support. Light has devised a proprietary MIPI data handling mechanism to be very power efficient. In fact, Light’s ASIC has more MIPI camera interfaces than any leading media or application processor in the semiconductor industry. In addition, each ASIC is loaded with Light’s exclusive lens, mirror, and sensor controls that enable the L16 to work its magic. The development of this chip marks major breakthrough and required an enormous amount of effort from the Light team.
"

Melexis Announces ToF Chipset and Evaluation Kit

Melexis announces a chipset and an evaluation kit for ToF 3D vision. Representing a complete ToF sensor and control solution, the chipset supports QVGA resolution and offers unsurpassed sunlight robustness and operation over a -40°C to +105°C temperature range. The evaluation kit lets designers test this automotive-qualified chipset.

The Melexis chipset includes the MLX75023, a 1/3-inch optical format ToF sensor, and the MLX75123, a companion IC that controls the sensor and illumination unit and delivers data to a host processor. The EVK75123 QVGA evaluation kit combines a sensor board featuring the chipset, a 12-LED illumination module, an interface board, and a processor module:


The MLX75023 sensor has QVGA resolution and background light rejection of up to 120klux. The IC can provide raw data output in less than 1.5 ms, enabling it to track rapid movement.

Melexis also publishes a YouTube video with a demo of its ToF system:



Update: EVB pictures changed, as written in comments.

Technavio Market Report

BusinessWire: Technavio publishes a report on the global optoelectronics market, which is expected to grow at a CAGR of close to 17% over 2017-2021. Technavio compares the image sensor market size with that of other optoelectronic components:


In some parts, the Technavio report reads as if it came out of a time capsule from 15 years ago:

"CCD image sensors were the first high-quality image sensors, which were initially used in cameras. They are being replaced by CMOS sensors gradually in every application. Though CCD sensor is superior in factors like light sensitivity, quality, and noise, CMOS image sensors have low power consumption and low manufacturing cost, leading to their increased adoption. So, in the future, CCD sensors are likely to be replaced entirely by CMOS sensors.

The growing advances in the image sensors are attributed to their increasing implementation in several imaging devices such as camera modules for consumer electronic devices and digital cameras.
"

Wednesday, January 18, 2017

TSMC RTS Noise Research

TSMC publishes an open-access paper "CMOS Image Sensor Random Telegraph Noise Time Constant Extraction From Correlated To Uncorrelated Double Sampling" by Calvin Yi-Ping Chao, Honyih Tu, Thomas Wu, Kuo-Yu Chou, Shang-Fu Yeh, and Fu-Lung Hsueh in the IEEE Journal of the Electron Devices Society, Jan. 2017 issue.

Abstract:

A new method for on-chip random telegraph noise (RTN) characteristic time constant extraction using the double sampling circuit in an 8.3 Mpixel CMOS image sensor is described. The dependence of the measured RTN on the time difference between the double sampling and the key equation used for time constant extraction are derived from the continuous time RTN model and the discrete event RTN model. Both approaches lead to the same result and describe the data reasonably well. From the detailed study of the noisiest 1000 pixels, we find that about 75% to 85% of them show the signature of a single-trap RTN behavior with three distinct signal levels, and about 96% of the characteristic time constants fall between 1 μs and 500 μs with the median around 10 μs at room temperature.
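To make the single-trap signature concrete: a two-level trap with exponentially distributed dwell times, read out by double sampling with time difference Δt, can only produce three output levels (−amplitude, 0, +amplitude). A toy Monte Carlo sketch of this behavior (not the paper's extraction method; the time constants and amplitude below are illustrative):

```python
import random

def simulate_rtn(tau, t_max, rng):
    """Switching times of a symmetric two-level RTN trap with mean dwell
    time tau in each state, starting in state 0. Returns (time, state) pairs."""
    t, state, trace = 0.0, 0, [(0.0, 0)]
    while t < t_max:
        t += rng.expovariate(1.0 / tau)  # exponential dwell in current state
        state ^= 1
        trace.append((t, state))
    return trace

def state_at(trace, t):
    """Trap state at time t (state set by the last switch at or before t)."""
    s = trace[0][1]
    for time, st in trace:
        if time > t:
            break
        s = st
    return s

rng = random.Random(42)
amp = 1.0       # RTN amplitude, arbitrary signal units
tau = 10e-6     # mean dwell time, near the paper's median time constant
delta = 10e-6   # double-sampling time difference
diffs = []
for _ in range(2000):
    trace = simulate_rtn(tau, t_max=100e-6, rng=rng)
    t0 = rng.uniform(0.0, 50e-6)
    diffs.append(amp * (state_at(trace, t0 + delta) - state_at(trace, t0)))
```

A histogram of `diffs` shows exactly three distinct levels (−amp, 0, +amp), which is the single-trap signature the authors report for 75% to 85% of the noisiest pixels.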

AltaSens Announcement

AltaSens website posted an official JVC-Kenwood announcement:

"Dear Valued Customer:

Please be advised that, in connection with restructuring our business strategy concerning CMOS sensors at our wholly-owned subsidiary AltaSens, Inc. (California, USA), we have made changes to our management team at AltaSens, Inc. as well as changes to our strategy for operations in the United States.

With respect to our current product offerings (AL41410C, AL-CM460), as well as products that are still under warranty, JVC KENWOOD will support AltaSens in honoring its commitments.

We appreciate your continued patronage.

JVC KENWOOD Corporation
"

Thanks to SD for the link!

Image Sensor Auto 2017 Confirmed Speakers

The Image Sensor Auto event, to be held in Dusseldorf, Germany, on April 24-26, 2017, publishes a list of confirmed speakers:
  • Jens Benndorf
    COO, Managing Director
    DreamChip Technologies GmbH
  • Judd Heape
    Senior Director, Imaging and Vision Group
    ARM
  • Carl Jackson
    CTO and Founder
    SensL
  • Marco Jacobs
    VP of Marketing
    Videantis
  • Akhilesh Kona
    Senior Analyst, Automotive Electronics & Semiconductor
    IHS Markit Technology
  • Frédéric Large
    Research and Advanced Engineering Department, Vision Systems and ADAS Applications
    PSA Group
  • Gregory Roffet
    Camera System Expert
    STMicroelectronics
  • Igor Tryndin
    Camera Architect
    NVIDIA
  • Senthil Yogamani
    Technical Lead
    Valeo Vision Systems
  • Young-Jun Yoo
    Head of Strategic Planning for R&D
    Nextchip

Digitimes on CMOS Sensors Demand

Digitimes publishes info from its sources on the image sensor market this year:
  • Handsets with dual-lens cameras spur demand for CMOS sensors
  • More mid-range smartphones are expected to feature dual-lens cameras in 2017
  • Mobile devices consume 70% of CMOS sensors
  • Automotive market has expanded for CMOS image sensors
  • The supply of CMOS sensors is becoming tight

Tuesday, January 17, 2017

MediSens Conference Videos

The MediSens conference, held in Dec. 2016 in London, UK, publishes ON Semi and Hamamatsu interviews on its YouTube channel:





In addition, MediSens filmed all the presentations; they can be accessed by attendees for free and by non-attendees for a fee.

Also, News-Medical publishes its review of the conference.

EETimes on ADAS Vision Processors Race

An EETimes article on the ADAS processor race discusses emerging competitors to Mobileye's ADAS vision processors:

"During the Consumer Electronics Show earlier this month, companies ranging from MediaTek and Renesas to NXP and Ambarella, told us they are working on “alternatives” to Mobileye’s EyeQ chips. On the opening day of CES, On Semiconductor announced that it has licensed CEVA's imaging and vision platform for its automotive advanced driver assistance (ADAS) product lines."

Monday, January 16, 2017

Oscar-Winning Image Sensors

Oscars.org: The Academy of Motion Picture Arts and Sciences announces its scientific and technical awards. Among them, two are image sensor related: RED's for its upgradeable image sensor, and Sony's for the CineAlta F65's high performance and unique color pattern.

  • To RED Digital Cinema for the pioneering design and evolution of the RED Epic digital cinema cameras with upgradeable full-frame image sensors.
    RED’s revolutionary design and innovative manufacturing process have helped facilitate the wide adoption of digital image capture in the motion picture industry.
  • To Sony for the development of the F65 CineAlta camera with its pioneering high-resolution imaging sensor, excellent dynamic range, and full 4K output.
    Sony’s unique photosite orientation and true RAW recording deliver exceptional image quality.

Update: The ARRI and Panavision camera awards are said to be image sensor related too:

  • To ARRI for the pioneering design and engineering of the Super 35 format Alexa digital camera system.
    With an intuitive design and appealing image reproduction, achieved through close collaboration with filmmakers, ARRI’s Alexa cameras were among the first digital cameras widely adopted by cinematographers.
  • To Panavision and Sony for the conception and development of the groundbreaking Genesis digital motion picture camera.
    Using a familiar form factor and accessories, the design features of the Genesis allowed it to become one of the first digital cameras to be adopted by cinematographers.

Sunday, January 15, 2017

Reportedly, Altasens Closed Down

As noted in the comments to the AltaSens news post, the company apparently closed down on Friday, January 13, 2017.

From the AltaSens website, still active for now:

"AltaSens is a wholly owned subsidiary of JVC KENWOOD Corporation.

The company was originally founded in 2004 as a fabless global supplier of imaging sensors. Our earliest roots trace back to our original formation as the CMOS Imaging Sensors Group within the Rockwell Scientific Company in 2000.

AltaSens spun off from Rockwell in 2004, at the precise time to make its mark in the nascent HD broadcasting market, just as the FCC set its mandate for broadcast-industry HD transmission. Adding to the technology provided by AltaSens’ technical team and the associated intellectual property from Rockwell were the business acumen and initial funding that were provided by ITX Corporation of Tokyo, Japan. The new company introduced the world’s first 1080p60 CMOS sensors at the NAB show in April of 2004.

Subsequently, AltaSens became a wholly owned subsidiary of ITX Corporation. During this time, the company delivered several unique sensors meeting specialized requirements for prototype imaging cameras that were not made available for commercial sale.

AltaSens is currently a leading supplier of imaging sensors for HD videoconferencing and has supplied imaging sensors for many types of HDTV cameras, including the first Blu Ray camcorders offered in the global marketplace.

The leadership team at AltaSens has decades of international business and advanced technology development experience in the semiconductor industry. We have successfully created many innovative imaging sensor designs for leading-edge cameras and scientific instruments. Working closely with our customers as well as our talented sensor design, wafer production, sensor packaging, sensor production, logistics, and quality assurance teams, we bring together world-class expertise and creativity to supply the best possible CMOS imaging sensor for each specific application.
"

Lester Kozlowski and Gregory Chow
in an early AltaSens Lab

Nintendo Switch Controller Features 3D Camera

Arstechnica: The newly announced Nintendo Switch gaming console features a 3D camera in its controllers:

"The most intriguing surprise inside the Joy-Con controller is a motion-depth infrared camera, which Nintendo's designers insist can differentiate between distinct hand shapes. To illustrate this, Nintendo reps showed off the controller recognizing hand shapes for rock, paper, and scissors. The tracker will also be able to detect exactly how far an object is from the controller. Nintendo says these will be able to record full video "in the future."

The Joy Con's motion-tracking IR camera.
The IR camera is able to recognize hand shapes
The camera also measures distance to an object

Framos to Distribute Sony Consumer Sensors in North America

Pressebox: FRAMOS becomes Sony North American Stocking Distributor of consumer image sensors, in both packaged and bare die form.

"We are very proud to grow with Sony and its Semiconductor and Sensor division with the addition of the Consumer Imaging Sensors offering. We believe that easy access to world leading sensors such as the IMX177, IMX277, IMX377 and IMX477 will be popular with clients in all consumer based verticals," says Sebastien Dignard, President of FRAMOS Technologies.

Saturday, January 14, 2017

Altasens News

PRNewswire: AltaSens has adopted the Cadence Modus Test Solution for its next-generation 90nm mixed-signal image sensors. Modus enabled AltaSens to deliver its first digital-on-top (DOT) image sensor much more efficiently; the design team was able to meet its fault coverage goals with greater than 98 percent static coverage.

Nikkei: JVC Kenwood Corp will release the GW-MD100, a lens-interchangeable 4K camera module targeted at systems such as aerial cameras on cranes and drones, observation cameras for academic use, and surveying/monitoring cameras for roads, bridges, etc. It is equipped with a Super 35mm 13.5MP image sensor developed by AltaSens, an affiliate of JVC Kenwood.

AltaSens published a demo of this sensor about 3 years ago, while the GW-MD100 is scheduled for release in late March 2017:

Google RAISR and Draco Reduce Amount of 2D and 3D Data

Google says it has developed RAISR, a machine learning technology that can upsample an image by 4x with visually pleasing results:

Top: Original, Bottom: RAISR super-resolved 2x.
Original image from Andrzej Dragan

The machine learning algorithm is said to be able to recognize and remove aliasing artifacts from the downsampled image:

Left: Low res original, with strong aliasing.
Right: RAISR output, removing aliasing.

Google paper "RAISR: Rapid and Accurate Image Super Resolution" by Yaniv Romano, John Isidoro, Peyman Milanfar is to be published in IEEE Transactions on Computational Imaging.

Google also announces Draco, an open-source library for compressing 3D geometric meshes and point clouds.

"With Draco, applications using 3D graphics can be significantly smaller without compromising visual fidelity. For users this means apps can now be downloaded faster, 3D graphics in the browser can load quicker, and VR and AR scenes can now be transmitted with a fraction of the bandwidth, rendered quickly and look fantastic."

Friday, January 13, 2017

World's First Graphene Integration onto CMOS Image Sensor

arXiv.org publishes a paper "Image sensor array based on graphene-CMOS integration" by Stijn Goossens, Gabriele Navickaite, Carles Monasterio, Shuchi Gupta, Juan José Piqueras, Raúl Pérez, Gregory Burwell, Ivan Nikitskiy, Tania Lasanta, Teresa Galán, Eric Puma, Alba Centeno, Amaia Pesquera, Amaia Zurutuza, Gerasimos Konstantatos, and Frank Koppens. The authors are affiliated with the Barcelona Institute of Science and Technology, Institució Catalana de Recerca, and Graphenea SA, all based in Spain.

From the abstract:

"Here, we show for the first time the monolithic integration of a CMOS integrated circuit with graphene, operating as a high mobility phototransistor. We demonstrate a high-resolution image sensor and operate it as a digital camera that is sensitive to UV, visible and infrared light (300 – 2000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2d materials into the next generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and even terahertz frequencies."

On closer inspection, the sensing layer is actually PbS colloidal quantum dots: upon light absorption, an electron-hole pair is generated; due to the built-in electric field, the hole transfers to the graphene while the electron remains trapped in the quantum dots:


"Due to the high mobility of graphene (here ~1000 cm2/Vs), this photoconductor structure exhibits ultra-high gain of 10^8 and responsivity above 10^7 A/W, which is a strong improvement compared to photodetectors and imaging systems based on quantum dots only."

The sample images are quite nice for a first graphene image sensor ever produced:


"Future graphene-based image sensors can be designed to operate at higher resolution, in a broader wavelength range, and potentially even with a form factor that fits inside a smartphone or smartwatch (Supplementary Notes, Figure S9). In contrast to current hybrid imaging technologies (which are not monolithic), we do not encounter fundamental limits with respect to shrinking the pixel size and increasing the imager resolution. Graphene patterning and contacting, i.e. lithography, will ultimately be the limiting factor. Therefore, competitively performing image sensors with multi-megapixel resolutions and pixel pitches down to 1 µm are within reach."

A somewhat similar paper, albeit from different authors, is going to be presented at ISSCC 2017 Session 15 on Feb. 7:

"15.7 Heterogeneous Integrated CMOS-Graphene Sensor Array for Dopamine Detection," B. Nasri, T. Wu, A. Alharbi, M. Gupta, R. Ranjit Kumar, S. Sebastian, Y. Wang, R. Kiani, D. Shahrjerdi, New York University, Brooklyn, NY.

Thursday, January 12, 2017

Corephotonics Raises $15M

Reuters, Globes: Israeli startup Corephotonics, which develops dual-camera technologies for smartphones, raises a $15M round, bringing the total raised to $50M since its founding in 2012.

The investor list in this round is impressive: Samsung Ventures, Foxconn, and MediaTek. Corephotonics’ current investors include Magma VC, Samsung Ventures, Amiti Ventures, Chinese billionaire Li Ka-shing and Solina Chau’s Horizon Ventures, OurCrowd, SanDisk, Chinese telephony services provider CK Telecom, and additional private investors.

The investment, along with the existing cash on hand and revenue forecast for 2017, will be used for developing next generation cameras for smartphones, and for expanding existing products’ penetration. In addition, the new funding will help Corephotonics expand into the automotive, drone, surveillance, and action camera markets.

Corephotonics, which currently has 50 employees, intends to recruit dozens of additional engineers for the company’s headquarters in Tel Aviv, as well as support and integration engineers in China and South Korea, following an expected significant growth in sales. In addition, the company is exploring opportunities to acquire complementary technologies.

The company co-founder and CEO David Mendelovic said, “We established Corephotonics in order to improve the image quality in smartphones, and to provide consumers with a unique user experience, after we identified this need of device manufacturers. We successfully predicted the use of dual cameras, and we currently see such cameras being integrated into a broad range of smartphones by all leading manufacturers. We are pleased that top tier investors have expressed confidence in our capabilities, allowing us to develop next generation camera technologies, which will reach the market within the next few years.”

Corephotonics Hummingbird module

IISW 2017 Final Call for Papers

The 2017 International Image Sensor Workshop, to be held May 30 - June 2, 2017 in Hiroshima, Japan, announces its Final Call for Papers and registration details:

"Abstracts should be submitted electronically by January 19, 2017 (JST).

Registration is limited to approximately 160 attendees on a first-come, first-served basis. Registration will be guaranteed for presenters, but they are still required to register. Past experience shows that registration is often filled to capacity within a few days’ time.

Pre-registration will start on January 24, 2017 and end on February 10, 2017.

You will receive confirmation of your registration, and registration payment instructions, after all 160 attendees have been identified. It will be around February 20, 2017.
"

Grand Prince Hotel Hiroshima, IISW 2017 place

Wednesday, January 11, 2017

DxOMark Puzzled by Red Helium 8K Sensor Score

DxOMark publishes its scores for the Red Helium 8K image sensor prototype and tries to explain the exceptionally high RAW performance of this APS-H sensor, even in comparison with Sony full-frame sensors:


Puzzled by the seemingly impossible improvements in SNR and DR over Sony's BSI full-frame sensor, DxO checked the RAW files for pixel noise correlation, a sure sign of applied spatial noise reduction. However, no correlation was found:


DxO then suspects that Red has applied temporal noise reduction: "This technique, called temporal noise reduction (TNR), is most commonly used in video, since there are many successive frames to work with. However, temporal correlations across a time axis are not relevant when analyzing the image quality of a single RAW image, as they do not impact any RAW converters.

Whatever noise reduction system RED employs creating the RAW images from the Helium sensor, its presence means that we aren’t measuring just the RED sensor, so its results aren’t directly comparable to those from camera sensors we have tested from other vendors, whose RAW results come straight from the sensor with no prior noise reduction processing.
"

Apple AR Glasses Rumored to be Released in 2017

Forbes, CNET, and other news sites quote veteran tech journalist Robert Scoble's Facebook post claiming that "A Zeiss employee confirmed the rumors that Apple and Carl Zeiss AG are working on a light pair of augmented reality/mixed reality glasses that may be announced this year."

A few other bits of info originating from Robert Scoble talk about 3D camera integration into Apple products:

"they are building the PrimeSense sensor right into the television, into the iPad and into the iPhone… so you can do this kind of mixed reality."

"I interviewed the guy who runs PrimeSense and they have 600 engineers in Israel working on just the 3D sensor. What’s coming in 11 months is going to blow even my mind."

Apple Swift Head Becomes Tesla Autopilot SW Lead

Tesla welcomes Chris Lattner joining the company as VP of Autopilot Software. He comes to Tesla after 11 years at Apple where he was primarily responsible for creating Swift, the programming language for building apps on Apple platforms. Prior to Apple, Chris was lead author of the LLVM Compiler Infrastructure, an open source umbrella project that is widely used in commercial products and academic research today.

Tesla believes that the new appointment will accelerate the future of autonomous driving.

Huawei on Smartphone Camera Future

The Image Sensors Europe conference publishes an interview with Atsushi Kobayashi, Senior Image Sensor Specialist at Huawei, about future trends for smartphone cameras. A few quotes:

"Most of key players like Huawei are eager to find out new camera feature and new key technology which improves camera performance.

Another area we are very excited about is ‘depth sensing’, especially due to the current demand for AR/VR applications. We anticipate in the next few years 3D cameras will become standard on all smart phones.

Improvements of pixel performance is still important, especially as Huawei is very open to new ideas which improves camera especially image sensor.
"

Tuesday, January 10, 2017

Canon on 250MP APS-H Sensor Plans

Imaging Resource publishes a PGN video interview with a Canon representative at CES on how Canon plans to commercialize the 250MP CMOS sensor it developed a few years ago:

Monday, January 09, 2017

IBM 5 in 5

IBM 5 in 5 - five innovations that IBM predicts will change our lives in the next five years - includes at least two based on image sensor technology:

- Hyperimaging - In five years, new imaging devices using hyperimaging technology and AI will help us see broadly beyond the domain of visible light by combining multiple bands of the electromagnetic spectrum to reveal valuable insights or potential dangers that would otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and accessible, so superhero vision can be part of our everyday experiences.



- Medical labs “on a chip” - In 5 years, new medical labs on a chip will serve as nanotechnology health detectives – tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyze a disease that would normally be carried out in a full-scale biochemistry lab. Most of the sensing elements in these chips are similar to the pixel designs and based on the image sensor technology.

Augmented Reality Vanity Mirror

PRNewswire: Element Electronics, in partnership with FaceCake Marketing Technologies, debuts a NextGen AR vanity mirror, based on a camera inside:

"Most of us have been there. Wishing for a magic mirror that shows you what your makeup looks like before you apply it – potentially avoiding that dreaded lipstick or eye shadow mistake.

This traditional and sophisticated vanity mirror with adjustable LED lights and a sensor that accounts for the natural or artificial lighting conditions of the room can instantly transition into a personalized AR environment that lets consumers virtually see cosmetics and accessories – whether they own them or not – on their real-time reflection while receiving personalized recommendations and purchase options.
"

Sunday, January 08, 2017

PMD/Infineon ToF Sensor in Asus ZenFone AR

An Infineon and PMD REAL3 ToF image sensor is at the heart of the Asus ZenFone AR, said to be the world’s thinnest smartphone with a 3D ToF camera and the first Google Daydream- and Tango-ready smartphone.

“3D scanning with semiconductors from Infineon helps to interconnect the real and virtual worlds,” said Martin Gotschlich, Director, 3D Imaging at Infineon Technologies. “Mobile devices with an integrated 3D image sensor have spatial awareness of their surroundings and the capability for augmented reality applications with an impressive realistic quality. They pave the way for numerous applications and innovations that were not previously possible.”

Scandy publishes a use case example for the new Asus smartphone:



Omnivision Announces 1.4MP HDR Sensor for Automotive Apps

PRNewswire: OmniVision introduces the OV9716 sensor, which brings 1392 x 976 resolution at up to 60fps, a 1/3.8" optical format, and more than 120dB dynamic range to automotive imaging applications. The sensor is built on 2.8um OmniBSI-2 Deep Well pixel technology.

Saturday, January 07, 2017

Jot Pixel to Reach 0.15e- Noise with 1.38 mV/e- Conversion Gain

The open-access IEEE Journal of the Electron Devices Society publishes a paper "Analytical Modeling and TCAD Simulation of a Quanta Image Sensor Jot Device With a JFET Source-Follower for Deep Sub-Electron Read Noise" by Jiaju Ma and Eric R. Fossum, Dartmouth College. From the abstract:

"The device is proposed to further reduce the read noise of QIS jots and ultimately realize a read noise of 0.15e-r.m.s. for accurate photoelectron counting. We take advantage of the small gate capacitance in a p-channel JFET SF to reduce the total capacitance of the floating diffusion, which yields a greatly improved conversion gain of 1.38 mV/e- in TCAD simulation compared to MOSFET SF with the same pitch size. Lower 1/ f noise is also anticipated yielding a low input-referred read noise. The device is designed in a 45 nm CMOS image sensor process."

ToF Camera Error Analysis and Correction

Sensors journal publishes an open-access paper "Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras" by Ying He, Bin Liang, Yu Zou, Jin He, and Jun Yang from Harbin Institute of Technology and Tsinghua University, China. From the abstract:

"This paper analyzes the influence of typical external distractions including material, color, distance, lighting, etc. on the depth error of ToF cameras. Our experiments indicated that factors such as lighting, color, material, and distance could cause different influences on the depth error of ToF cameras. However, since the forms of errors are uncertain, it’s difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on Particle Filter-Support Vector Machine (PF-SVM). Moreover, the experiment results showed that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within its full measurement range (0.5–5 m)."

The authors use Mesa/Heptagon/AMS SR-4000 camera to get their experimental data:

Nvidia, Toyota Self-Driving Car Progress

EETimes: Nvidia CEO Jen-Hsun Huang's CES keynote was largely devoted to the company's automotive platform for ADAS and autonomous driving. Most of Nvidia's Auto-Pilot and Co-Pilot features are vision-based:



Nvidia also announces its new partners for the car computing platform: Bosch, ZF, and Audi.

EETimes: In contrast, Gil Pratt, head of Toyota Research Institute, expressed skepticism about the speed of progress in this area. Acknowledging that every carmaker is shooting for Level 5, Pratt said, "None of us is close. Not even close." He added, "It's going to take many years of machine learning and many, many more miles" of testing. Pratt's talk starts at 14:40 in the Toyota CES keynote video.

Omnivision Announces Pair of Sensors for Dual Cameras, High Speed GS Sensor

PRNewswire: OmniVision announces the OV12A10 and OV12A1B, two 1.25um PureCel Plus pixel sensors specifically designed for dual-cameras, which OmniVision sees as a booming trend in both high-end and mainstream mobile markets.

"Mainstream devices are offering exceptional image quality by adopting powerful imaging technologies such as dual cameras with sensors that feature optical zoom and low-light enhancements," said James Liu, senior technical marketing manager at OmniVision. "By equipping the OV12A10 and OV12A1B with these features, OmniVision hopes to democratize high-end imaging, no longer limiting it to flagship phones."

The OV12A family's PureCel Plus technology implements a buried color filter array (BCFA) and deep trench isolation (DTI) for dramatically reduced color crosstalk, as well as improved SNR and sensor angular response.

The OV12A10 and OV12A1B (monochrome sensor) are currently available for sampling and are expected to enter volume production in Q1 2017.


PRNewswire: OmniVision introduces the OV9281 and OV9282 global shutter sensors built on OmniPixel3-GS technology. The new 1MP sensors are aimed at consumer and industrial computer vision applications, such as AR, VR, collision avoidance in drones, bar code scanning, and factory automation.

Friday, January 06, 2017

Varioptic Changes Hands

GlobeNewsWire, IMV-Europe, Optics.org: Invenios LLC, a micro-fabrication foundry, announces that it has completed the acquisition of Varioptic, previously a Business Unit of Parrot Drones SAS for an undisclosed amount.

Founded in 2002 in Lyon, France, Varioptic was acquired in May 2011 by Parrot. Varioptic is the manufacturer of the liquid lens technology which enables variable focus, variable tilt, or variable cylindrical lenses for compact cameras used in industrial or consumer devices. The main markets for the company today are barcode readers, medical devices, industrial cameras and defense.

Invenios has been working on improving Varioptic’s OIS lens with an enhanced, low cost and scalable design. This acquisition will streamline liquid lens development efforts, and enable broader market access with an extended product portfolio.

“We are very happy to have completed the acquisition of Varioptic,” says Ray Karam, President of Invenios LLC. “Combining the expertise of Varioptic and Invenios will further grow the Liquid Lens business, and permit us to access new markets; therefore, this acquisition is excellent news for Varioptic, Invenios, and for our customers.”

The Varioptic business, assets, and employees have been transferred to Invenios France SAS, a fully owned subsidiary of Invenios LLC. The Liquid Lens products will continue to be promoted under the Varioptic and Optilux brands.

The videos from the Optilux website explain the electrowetting principle used in the lens: a voltage variation changes the contact angle of the fluid, creating a lens with the desired properties:



Intel PERC Head on Merged Reality and RealSense 3D Camera

Intel publishes a video featuring Achin Bhowmik, VP and GM of Intel's Perceptual Computing Group, on the future of merged reality. One of the promises in the video is to increase the next-generation RealSense camera throughput to 50M 3D points per second, up from today's 18M points per second:

Mobileye CES 2017 Presentation

Mobileye publishes the CES keynote by its CTO and Chairman Amnon Shashua. The presentation itself starts at 13:15 in the video. An interesting part about on-road negotiation challenges starts at ~41:00.

Update #2: The above video has been removed by Mobileye. It's still available here, though it is unclear for how long. The only version remaining on YouTube is the short one with no introduction and Q&A session:



In terms of camera requirements, Mobileye setup is evolving quite rapidly:


Thanks to DS for the news!

Update #1: Mobileye publishes a shorter version of this video with no introduction and questions parts.

Update #3: Mobileye now publishes a Q&A session in a separate video:


Omnivision Launches Two New Sensors

PRNewswire: OmniVision announces the 5MP OS05A BSI sensor for commercial and consumer surveillance applications. Built on PureCel pixel, the 1080p120-capable OS05A (aka OS05A10) features very low power consumption and a double-exposure staggered mode for HDR rendering.


PRNewswire: OmniVision also introduces an 8MP OS08A (aka OS08A10) BSI sensor built on PureCel technology, delivering true 4K2K 60fps video for a variety of applications, including commercial surveillance, smart home and IoT cameras, drones, action cameras, and AR/VR cameras.


Both sensors are available for sampling and are expected to enter volume production in Q2 2017.

Heptagon Expands its 3D Vision Platforms, Introduces Compact Spectrometer

BusinessWire: Heptagon announces the BELICE, a stereoscopy illumination platform including a suite of advanced algorithms and software solutions for cost-efficient 3D vision. The proprietary infrared dot pattern reduces the computational complexity compared to passive stereoscopy. This extends battery life in mobile applications and increases refresh frequencies. The solution is said to deliver high-quality 3D vision at competitive price points, enabling 3D vision in new applications. The platform can be optimized according to cost, resolution, field-of-illumination, power, and frequency, depending on specific application needs.

One of the main challenges in infrared-based image systems is interference by other systems (e.g., other 3D vision systems, security cameras). The BELICE proprietary pattern solves this issue without needing to enable communications between the systems. In addition, the pattern allows fast identification of outdoor issues through a simple Fast Fourier Transform, enabling adaptive stereo where the stereoscopy system incrementally moves from active to passive stereo for outdoor performance.
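Heptagon does not publish the detection algorithm, but the FFT idea stated above — a projected dot pattern with a known spatial period shows up as a strong spectral peak, and its disappearance flags washed-out outdoor conditions — can be illustrated with a naive 1-D DFT on a synthetic scanline (all values below are made up for the sketch):

```python
import cmath

# Naive DFT magnitudes for the first half of the spectrum (real input).
def dft_magnitudes(signal):
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# synthetic scanline: a dot every 8 pixels -> expect a peak at bin 64/8 = 8
line = [1.0 if i % 8 == 0 else 0.0 for i in range(64)]
mags = dft_magnitudes(line)
peak_bin = max(range(1, len(mags)), key=mags.__getitem__)
print(peak_bin)  # 8 -- the dot pattern's spatial frequency
```

When sunlight washes out the projected dots, this peak collapses, which is the kind of cue an adaptive system could use to fall back from active to passive stereo.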

“We are delighted to introduce the new BELICE platform,” says Matthias Gloor, GM of the Illumination Business Unit. “The defocus-free dot pattern is generated by Heptagon’s wafer-level integration with proprietary replicated optics, and with its high density of dots, BELICE delivers superior performance.”


BusinessWire: Heptagon also announces the Smart Handheld Spectrometer solution, the first in Heptagon’s family of Smart Spectral Solutions targeting both demanding industrial uses as well as consumer applications.

“We have identified spectrometry as a new technology that may eventually enable a new wave of innovation in mobile phones. This handheld spectrometer is the precursor to a highly miniaturized consumer model currently under development,” says Peter Roentgen, Manager of Advanced Research at Heptagon.