Wednesday, August 31, 2016

In Light of Camera Market Decline, Canon to Start Selling its CMOS Sensors

Nikkei reports that Canon will supply image sensors to other manufacturers for the first time, anticipating demand for the technology in self-driving cars, robots and other smart devices. The plan is to start selling sensors within two years. The company has already assembled a team to launch the business.

Canon manufactures its image sensors at two plants in Kanagawa Prefecture near Tokyo and one in southern Japan's Oita Prefecture. So far, all the capacity is used in the company's digital cameras and some video cameras. With the camera market shrinking, Canon aims to offset a decline in sensor output.

To avoid clashing with Sony and other companies already holding substantial shares of the market for general-purpose CMOS sensors, Canon intends to supply specialized devices for automotive and industrial applications. Besides cars and robots, it envisions its sensors guiding drones and sharpening the vision of traffic-monitoring systems. However, other image sensor manufacturers are also pursuing automotive and industrial applications.

Canon's in-house supply of CMOS sensors ranks fifth in the world in terms of value, with a roughly 5% market share, according to Tokyo-based TSR. Sony leads the market, with a 40%-plus share, followed by Samsung Electronics at nearly 20%.

Canon is working on Super Machine Vision (SMV), a next-generation vision system that surpasses the abilities of human vision, by leveraging its dual-pixel AF technology from cameras and business machines while also taking advantage of the image-recognition and data-processing capabilities employed in face-detection and character-recognition technologies.

In unrelated news, Canon has developed a global shutter CMOS sensor that achieves expanded DR through a new drive method. When the sensor converts light into electrical signals and stores the signal charge in memory, the new drive system is said to achieve a significant expansion in full well capacity. Because the sensor employs a structure that efficiently captures light, and each pixel incorporates an optimized internal configuration, it also achieves increased sensitivity with reduced noise. The expanded full well capacity, realized through the new drive system, and the substantial noise reduction, enabled by the new pixel structure, combine to deliver a wide dynamic range, facilitating the capture of high-quality, high-definition footage even in scenes with large variations in brightness.
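To see why a larger full well capacity and a lower noise floor widen dynamic range, the usual engineering figure of merit can be computed directly. The numbers below are illustrative assumptions, not Canon's published specifications:

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range in dB: ratio of the largest storable signal
    (full well capacity) to the noise floor, both in electrons."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# Hypothetical figures: a conventional global shutter pixel vs. one
# whose drive method expands full well capacity and cuts noise.
conventional = dynamic_range_db(20_000, 5)   # ~72 dB
new_drive    = dynamic_range_db(100_000, 2)  # ~94 dB
print(round(conventional, 1), round(new_drive, 1))
```

Roughly a 22 dB improvement in this hypothetical case, coming partly from the deeper well and partly from the quieter pixel.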

Canon will explore various industrial and measurement applications for the newly developed CMOS sensor and consider deploying it in the field of video production for cinema production applications, TV dramas, commercials and more.

Canon GS WDR sensor prototype

Update: The GS WDR sensor announcement appears to be Canon's marketing answer to criticism of its new full-frame 5D Mark IV DSLR. DPReview, among other sites, says that the camera's rolling shutter artifacts are quite significant and its DR is noticeably lower than that of competing full-frame cameras.

Demand for Iris and Face Recognition Solutions Grows

Digitimes reports that demand for iris and face recognition processors for smartphones is expected to surge, according to the newspaper's industry sources.

Pixart is expected to be among the first China- and Taiwan-based players capable of rolling out related solutions. The company has already filed iris recognition and eye tracking patent applications in the US, said the sources, and is set to launch related solutions as early as 2017.

Another Digitimes article says that Xintec is to start fulfilling orders for iris-recognition solutions in Q4 2016, according to a Chinese-language report. Mass production of the iris-recognition chips is expected in 2017, which will boost the backend house's revenues for the year. New orders for the iris-recognition sensors include those for the 2017 iPhone model, market watchers were quoted as saying in the report.

Tuesday, August 30, 2016

Low Cost AR Design Challenges

Innsbruck University, Austria publishes the BSc thesis "Developing a Low-Cost Augmented Reality System" by Carsten Fischer, discussing camera design issues, among other topics:

"The goal of this document is to give the reader a better understanding of the underlying theory of augmented reality systems and which adaptations can decrease the cost of such systems, while maintaining a good experience. The requirements on the including hardware parts will be explained, before summarizing the manufacturing process and highlighting features of the including software. Finally the system will be evaluated, by conducting a user study on depth perception."

Monday, August 29, 2016

FiveFocal Offers Camera Simulator

Ex-CDM Optics (Omnivision) employees have founded FiveFocal, a company offering Imager, camera simulator software. The image sensor pixel model is fairly basic, making it simple enough for the general public to use and understand:

The company also has a very nice blog covering different camera and optics design topics, such as camera optimizations for vision algorithms:

Sunday, August 28, 2016

Peter Centen Receives SMPTE David Sarnoff Medal

SMPTE announces its 2016 Honors & Awards Recipients. The David Sarnoff Medal Award recognizes outstanding contributions to the development of new techniques or equipment that have improved the engineering phases of television technology, including large-venue presentations. The award will be presented to Peter G.M. Centen (Grass Valley VP R&D, Cameras) in recognition of his work in image sensors, imaging, and broadcast camera innovation. Centen has been at the forefront of CCD and CMOS sensor technology, and in 2003 he was awarded an Emmy for the development of high-definition dynamic pixel management (HD-DPM) for CCD sensors.

Below is Peter Centen's HPA 2015 presentation on 4K HDR image sensors:

Update: Peter G.M. Centen (left), GV's VP R&D, Cameras receiving the SMPTE David Sarnoff Medal Award in recognition of his work in image sensors:

Interview with ULIS PM

Yole Developpement publishes an interview with Cyrille Trouilleau, Product Manager at ULIS. A few quotes:

"ULIS, a subsidiary of Sofradir, specializes in designing and manufacturing innovative thermal image sensors for commercial and defense applications... Founded in 2002, ULIS has grown to become the second largest producer of thermal image sensors (microbolometers)... ULIS is active in the surveillance, thermography, defense and outdoor leisure markets, where we have already sold more than 500,000 thermal sensors worldwide.

...ULIS experienced strong growth in 2015, with close to a 20% increase in volume sales over 2014. This was not due to any exceptional event; the growth we saw is supported by increased demand across all our markets. The signs for 2016 remain just as positive, and we expect to reach at least the same growth."

Saturday, August 27, 2016

Huawei P9 Dual Camera Reverse Engineering, More

Systemplus publishes a reverse engineering report on the dual camera module extracted from Huawei P9 smartphone:

"The P9 camera module, with dimensions of 18 x 9.2 x 5.1mm, is equipped with two sub-modules each including a Sony CIS, a closed loop voice coil motor (VCM) and a 6-element lens. Doubling the number of cameras gives more light, vivid colors and crisper details. Moreover, it compensates for the fact that the module is provided without optical image stabilization (OIS). The CISs are assembled on a copper metal core 4-layer PCB using a wire bonding process. An external image processor chip is present on the phone’s printed circuit board (PCB)."

Another Systemplus report talks about I3system's Thermal Expert camera for smartphones:

"The thermal camera uses a new 17µm pixel design from I3system. The I3BOL384_17A microbolometer features 384 x 288 pixel resolution, 6 times the resolution of the FLIR Lepton 3. The sensor technology in the I3system component is a titanium oxide microbolometer, technology which is not covered by Honeywell patents. The I3BOL384_17A is the consumer version of a military microbolometer."

Friday, August 26, 2016

Race to Self-Driving Car Accelerates

FoxNews: US startup Nutonomy has beaten Uber to launching an autonomous taxi trial in Singapore. Currently, its autonomous fleet has just six Mitsubishi i-MiEV electric cars, with a full launch of the service planned for 2018.

Meanwhile, Electronics Weekly publishes details on Uber's acquisition of Otto, a startup retrofitting trucks with self-driving equipment. The 6-month-old startup, based in a garage south of Market Street in San Francisco, was acquired for $680M plus 20% of any profits it makes from trucking. Otto has retrofitted five trucks so far.

Thursday, August 25, 2016

Uncooled IR Imaging Market Report

Yole Developpement publishes the report "Uncooled IR imaging industry: the market is taking off." A few quotes:

After a strong downturn in 2012 and 2013 due to the collapse of the military market, the uncooled IR imaging industry came back into a growth phase in 2014 and 2015. Today, the infrared business is still driven by commercial markets, which will continue to expand quickly, with shipments growing at 16.8% CAGR to account for 92% of the overall market by 2021. The commercial market is divided into three major sub-segments:

  • Thermography, which will account for 521,000 units in 2021. In 2015, thermography was still by far the main commercial market in terms of both value and shipments. “Since 2013, Fluke and FLIR have introduced several new products with lower pricing, which has boosted sales,” comments Dr Mounier. The trend towards lower-end thermography cameras has also prompted the introduction of low-resolution technologies such as pyroelectric sensors, thermopiles, and thermodiodes.
  • From its side, the automotive market segment will account for 284,000 units by 2021, according to Yole’s analysts. Automotive market shipments grew 15% in 2015, although the growth rate was down from 30% in 2014. Total automotive sales, including OEM and aftermarket, accounted for less than 100,000 units in 2015, generating US$61 million, which reflects strong price erosion.
  • Ultimately, surveillance and security applications will account for 248,000 units in 2021. Surveillance market shipments grew 32% in 2015 due to price erosion and the growing number of suppliers.
Until recently, thermal cameras have primarily been used in high-end surveillance for critical and government infrastructure. However, new municipal and commercial applications with lower price points are now appearing, including traffic, parking, power stations and photovoltaic plants.
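As a sanity check on the report's figures, compounding at the quoted 16.8% CAGR can be sketched as below. The 2015 base volume here is a back-of-the-envelope assumption, not a figure from the report:

```python
def project_units(base_units, cagr, years):
    """Compound annual growth: units shipped after `years` at rate `cagr`."""
    return base_units * (1 + cagr) ** years

# Assumed 2015 base of ~415k commercial units; at 16.8% CAGR over six
# years this lands near the report's ~1.05M combined 2021 total
# (521k thermography + 284k automotive + 248k surveillance).
units_2021 = project_units(415_000, 0.168, 6)
print(round(units_2021))
```

The three 2021 sub-segment figures quoted above do indeed sum to a total consistent with roughly six years of compounding at the stated rate.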

Wednesday, August 24, 2016

Graphene Photodetectors Review

Open-access Sensors journal publishes a paper "Towards a Graphene-Based Low Intensity Photon Counting Photodetector" by Jamie O. D. Williams, Jack A. Alexander-Webber, Jon S. Lapington, Mervyn Roy, Ian B. Hutchinson, Abhay A. Sagade, Marie-Blandine Martin, Philipp Braeuninger-Weimer, Andrea Cabrero-Vilatela, Ruizhi Wang, Andrea De Luca, Florin Udrea, and Stephan Hofmann from University of Leicester and University of Cambridge, UK. The paper reviews graphene photodetecting approaches for visible, Terahertz and X-ray bands.

"The future applications of single photon counting photodetectors requires high detection efficiency with wavelength specificity, good temporal resolution and low dark counts. Graphene’s high mobility, tunable band gap (in bilayer graphene), strong dependence of conductivity on electric field, and other properties make it particularly suitable for this application. Here graphene acts as an (indirect) photoconductor with a high gain of transconductance due to the sharp field."
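The "high gain" the authors mention is the classic photoconductive gain mechanism: a photogenerated carrier recirculates through the channel many times before recombining, so gain is the ratio of carrier lifetime to transit time. A minimal sketch with illustrative numbers (not values taken from the paper):

```python
def photoconductive_gain(lifetime_s, channel_len_m, mobility_m2, bias_v):
    """Photoconductive gain = carrier lifetime / transit time,
    where transit time = L^2 / (mu * V)."""
    transit_s = channel_len_m ** 2 / (mobility_m2 * bias_v)
    return lifetime_s / transit_s

# Assumed values: 1 ns trapped-carrier lifetime, 2 um channel length,
# graphene-like mobility of 1 m^2/Vs (10,000 cm^2/Vs), 1 V bias.
gain = photoconductive_gain(1e-9, 2e-6, 1.0, 1.0)
print(round(gain))
```

With these assumptions a single absorbed photon yields on the order of hundreds of carriers crossing the channel, which is what makes low-intensity detection plausible despite graphene's weak absorption.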

Microsoft Talks About Hololens Vision Processing

EETimes: Microsoft says that the HoloLens holographic processing unit (HPU) fuses input from five cameras, a depth sensor and a motion sensor, compacting the data and sending it to the Intel SoC. It also recognizes gestures and maps environments, including multiple rooms. The TSMC 28nm HPU packs 24 Tensilica DSP cores and 8MB of cache into a 12x12mm package with 65M transistors. A GByte of LPDDR3 is included in the HPU's package.

HPU die

Mobileye and Delphi Partnership to Invest Hundreds of Millions of Dollars into Self-Driving Technology

The SeekingAlpha transcript of the Mobileye and Delphi partnership announcement includes a statement that the two companies will invest significant funds to develop self-driving car technology:

"When you think about what is needed to bring Level 4/5 autonomy to series production, there is sensing – interpreting sensing on one hand, building an environmental model with all the moving objects and obstacles and all the paths and semantic meaning – but there is another component to it, which is being able to merge into traffic in a way that mimics human driving behavior. And there is machine intelligence to be ported into this. And there is a synergy between Delphi's core IP in that area and Mobileye's core IP in that area, and together we can bring a new class of machine intelligence into this project. You can imagine, just given the level of technology required and the amount of integration, on a combined basis [our investment] is hundreds of millions of dollars."

Tuesday, August 23, 2016

TowerJazz and TPSCo Announce Stacked Deep PD Technology

GlobeNewsWire: TowerJazz and TowerJazz Panasonic Semiconductor Co. (TPSCo) announce a new state-of-the-art CIS process based on a stacked deep PD, allowing customers to achieve very high NIR sensitivity and extremely low cross-talk while keeping low dark current characteristics, using small pixels at high resolution.

This solution targets 3D gesture recognition and gesture control for the consumer, security, automotive and industrial sensor markets. NIR is becoming more and more popular in 3D gesture recognition applications and in automotive active vision applications for better visibility in harsh weather conditions. These applications use an NIR light source and time-of-flight (ToF) measurement to create a 3D image.

Current solutions generally use a thick epi layer on a p-type substrate to achieve high sensitivity, but this creates high cross-talk (low resolution) and high dark current. The novel pixel structure developed by TowerJazz and TPSCo has a stacked deep photodiode, providing both high sensitivity and low cross-talk at NIR wavelengths. It also allows very low dark current, especially at the elevated temperatures required in the automotive market.
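The thick-epi tradeoff follows from Beer-Lambert absorption: silicon absorbs NIR weakly, so a pixel needs a deep collection region to catch 850-940nm photons, and deeper regions mean more lateral carrier diffusion, i.e. cross-talk. A rough sketch, using an approximate absorption coefficient for silicon at 850nm (~535/cm; an illustrative textbook value, not TowerJazz data):

```python
import math

def absorbed_fraction(depth_um, alpha_per_cm):
    """Beer-Lambert law: fraction of incident photons absorbed
    within a silicon layer of thickness `depth_um` (micrometers)."""
    return 1 - math.exp(-alpha_per_cm * depth_um * 1e-4)

# At ~850nm (alpha ~535/cm), a 3um-deep photodiode catches only ~15%
# of the light, while a 15um collection depth catches ~55% -- hence
# the pull toward thick epi, and the cross-talk penalty it brings.
shallow = absorbed_fraction(3, 535)
deep = absorbed_fraction(15, 535)
print(round(shallow, 2), round(deep, 2))
```

This is why a structure that collects deep-generated carriers without letting them wander laterally, as the stacked deep PD claims to, is attractive for NIR.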

“The tremendously fast growth of 3D gesture applications in the consumer market, such as PC and mobile, as well as in the automotive area, will allow us to attract many customers with this technology, which is the best the market has to offer,” said Avi Strum, SVP and GM, CMOS Image Sensor Business Unit, TowerJazz.

The process was developed on TPSCo’s 65nm CIS technology on 300mm wafers in its Uozu, Japan fab and is already in production for leading edge automotive and security sensors. It will also be available for new designs in TPSCo’s 110nm fab in Arai, Japan and in TowerJazz’s 180nm fab in Migdal Haemek, Israel.

Monday, August 22, 2016

2016 Harvest Imaging Forum

Agenda of 2016 Harvest Imaging Forum has been published. The Forum is devoted to "Robustness of CMOS Technology and Circuitry outside the Imaging Core : integrity, variability, reliability." The 2016 Harvest Imaging forum is split into two parts, divided over two days:
  1. As all CMOS robustness topics are related to the basic CMOS devices and their operation, an in-depth knowledge of the most important fundamentals of CMOS physics, device and circuit operation, fabrication and design is necessary to ease the understanding of the robustness topics. For that reason, the first part of the forum will concentrate on topics that have to do with CMOS physics, devices, circuits, fabrication and design, such as:

    CMOS device physics including the basic MOS device operation of nMOS and pMOS transistors, transistor current expression, the MOS diode and the MOS capacitor, the temperature dependence of the devices, the effect of the continuous scaling of CMOS technology and its problems, such as mobility reductions and leakage mechanisms,

    CMOS process technology including the basic CMOS process flow, advanced planar and FinFET technologies,

    CMOS circuit design, including basic logic gates, cell libraries, design flow and terminology.

  2. The robustness of advanced CMOS integrated circuits. The second part of the forum covers many CMOS problems that can show up as artefacts in the final captured image. Most imaging engineers are familiar with the effects on a display or hard copy, but what can be the root cause of the image quality problems? Topics that will be discussed in the forum are:

    Signal integrity issues such as cross-talk, signal propagation, interference between ICs, current peaks, supply noise, substrate and ground bounce, on-chip decoupling capacitors and design consequences,

    Variability issues including difference between random variations and systematic variations, causes of process parameter spread, proximity effects, random dopant fluctuations, transistor matching and design consequences

    Reliability issues and topics such as electro-migration, latch-up, hot-carrier effects, NBTI, soft-errors (by cosmic rays and alpha-particles), electro-static discharge, etc.

The 2016 Harvest Imaging Forum registration will include a copy of “Nanometer CMOS ICs, from Basics to ASICs” (Springer, 2016) and “Bits on Chips” (Springer, 2016), as well as a hard copy of all slides presented.

Hikvision Secures $6b Credit Lines

China-based surveillance camera maker Hikvision secures a credit facility of RMB ¥20b (more than USD $3b) with the Export-Import Bank of China. In November 2015, Hikvision secured another USD $3b line of credit with China Development Bank.

Such large credit lines from China's state-owned banks reportedly raise concerns in the industry that Hikvision is getting an unfair advantage over its competitors.

The company's 2015 revenues were USD $3.88b, representing a YoY growth rate of 47%. Hikvision also has liquid funds available in the amount of RMB ¥11.8b (more than USD $1.78b). Hikvision is the world’s largest provider of video surveillance products and solutions for the fifth consecutive year, and the No. 1 global provider of IP cameras, according to IHS Research.

Sunday, August 21, 2016

High Speed Image Sensor Applications

Tokyo University's Ishikawa Watanabe Lab publishes a couple of YouTube videos exploring high speed image sensor applications. "High-speed 3D Sensing with Three-view Geometry Using a Segmented Pattern" demos a 1000fps 3D camera:

"High-Speed Image Rotator for Blur-Canceling Roll Camera" demos a rotation-compensating camera:

Saturday, August 20, 2016

e2v Onyx 10um Pixel Demo

e2v publishes a YouTube video showing the night vision capabilities of its NIR 1.3MP Onyx sensor with 10um pixels:

Friday, August 19, 2016

Canon Proposes "Teardrop" Microlens

Canon patent application US20160233259 "Solid state image sensor, method of manufacturing solid state image sensor, and image capturing system" by Yasuhiro Sekine proposes a "teardrop" shaped microlens in order to reduce lens shading on the periphery of the pixel array:

A Race to Self-Driving Taxi Has Begun

Bloomberg, IEEE Spectrum: Starting later this month, Uber will allow customers in downtown Pittsburgh to summon self-driving cars from their phones, crossing an important milestone that no automotive or technology company has yet achieved.

“The minute it was clear to us that our friends in [Google] Mountain View were going to be getting in the ride-sharing space, we needed to make sure there is an alternative [self-driving car],” says Uber CEO Travis Kalanick. “Because if there is not, we’re not going to have any business.” Developing an autonomous vehicle, he adds, “is basically existential for us.”

Uber’s modified Volvo XC90 SUV.

Thursday, August 18, 2016

Ford Self-Driving Car Plans

Ford announces steps to mass-produce a fully autonomous car in 2021. To get there, the company is investing in or collaborating with four startups to enhance its autonomous vehicle development, doubling its Silicon Valley team and more than doubling its Palo Alto campus.

On the imaging side, Ford announces four key investments and collaborations in advanced algorithms, 3D mapping, LiDAR, and radar and camera sensors:

  • Velodyne: Ford has invested in Velodyne, the Silicon Valley-based company dealing with light detection and ranging (LiDAR) sensors. The aim is to quickly mass-produce a more affordable automotive LiDAR sensor. Ford has a longstanding relationship with Velodyne, and was among the first to use LiDAR for both high-resolution mapping and autonomous driving beginning more than 10 years ago
  • SAIPS: Ford has acquired the Israel-based computer vision and machine learning company to further strengthen its expertise in artificial intelligence and enhance computer vision. SAIPS has developed algorithmic solutions in image and video processing, deep learning, signal processing and classification. This expertise will help Ford autonomous vehicles learn and adapt to the surroundings of their environment
  • Nirenberg Neuroscience LLC: Ford has an exclusive licensing agreement with Nirenberg Neuroscience, a machine vision company founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code the eye uses to transmit visual information to the brain. This has led to a powerful machine vision platform for performing navigation, object recognition, facial recognition and other functions, with many potential applications. For example, it is already being applied by Dr. Nirenberg to develop a device for restoring sight to patients with degenerative diseases of the retina. Ford’s partnership with Nirenberg Neuroscience will help bring humanlike intelligence to the machine learning modules of its autonomous vehicle virtual driver system
  • Civil Maps: Ford has invested in Berkeley, California-based Civil Maps to further develop high-resolution 3D mapping capabilities. Civil Maps has pioneered an innovative 3D mapping technique that is scalable and more efficient than existing processes. This provides Ford another way to develop high-resolution 3D maps of autonomous vehicle environments

Project Alloy Headset Has Two RealSense Cameras

VentureBeat has had a chance to take a closer look at the Intel Project Alloy headset and publishes a clear picture of its two RealSense cameras at the front. The headset is said to be controlled by hand gestures:

Wednesday, August 17, 2016

Novatek to Improve Vision Capabilities of its Camera Processors

PRNewswire: CEVA announces that Novatek, Taiwan's 2nd largest fabless IC design house, has licensed and deployed the CEVA-XM4 intelligent vision DSP for its next-generation vision-enabled SoCs targeting a range of end markets requiring advanced visual intelligence capabilities. Novatek's current camera SoC lineup for car DVR and surveillance systems integrates the 3rd generation CEVA-MM3101 imaging & vision DSP and is shipping in volume.

By integrating CEVA-XM4 as a dedicated vision processor in their next-generation SoC designs, Novatek deploys vision algorithms to enable advanced applications such as surveillance systems with face detection and authentication, drone anti-collision systems and ADAS. These types of applications are built utilizing CEVA's Deep Neural Network (CDNN2), a proprietary framework that enables deep learning tasks to run on the CEVA-XM4 and is said to outperform any GPU or CPU-based system in terms of speed, power consumption and memory bandwidth requirements.

Intel RealSense and Project Alloy Highlights

Intel publishes highlights of RealSense camera technology features in its CEO Brian Krzanich presentation. Among other stuff, it features the company Project Alloy "merged reality" headset with embedded RealSense 3D camera. Project Alloy will be offered as an open platform in 2017.

Tuesday, August 16, 2016

Intel Presents RealSense 400 Camera

Intel announces the next-generation RealSense 400 3D camera offering improved accuracy with more than double the number of 3D points captured per second and more than double the operating range compared with the previous generation. Coupled with support for indoor and outdoor uses, RealSense 400 will enable developers to create new applications.

poLight Raises $20M

ArcticStartup: Norway’s poLight raises $20M at a valuation of $74M. The company has raised a total of $60.4M and aims to go public in the near future. “poLight aims to complete an IPO within one year,” the company said.

“Over the past 18 months, poLight has been able to significantly mature its technology and has brought a commercial breakthrough for the company’s autofocus lens closer,” the company said. In cooperation with STMicro and THEIL, the company’s assembly partner in Taiwan, production of the first product, TLens Silver, has been qualified.

“The priorities going forward will be to ramp up production and secure the first customer. poLight is in dialog with several potential customers, and several processes to qualify the company’s technology are ongoing,” the firm said.

TLens Silver Spec @ 25C

Automotive Night Vision System Industry Report

ResearchInChina releases "Global and China Automotive Night Vision System Industry Report, 2016-2020." A few quotes:

"Night vision systems solve the vision problem of night driving and thus were first used by Mercedes-Benz, BMW and Audi, as well as in other luxury cars such as the Rolls-Royce Ghost/Wraith, Cadillac CT6, Lexus LS/GS and Maybach S-Class. As the core part, the detector, is costly, night vision systems haven't been popularized yet. According to the survey, in 2016 the global penetration rate of automotive night vision systems is only 0.47%; Mercedes-Benz, Audi and BMW boast the highest assembly volume, Autoliv serves as the top system provider, and FLIR is the primary supplier of thermal infrared imagers.

Global automotive night vision system suppliers are mainly Autoliv, Delphi, Bosch, Valeo and Visteon. Autoliv, the biggest, serves primarily Audi and BMW and accounts for roughly 60% of the market. In 2016, Autoliv launched its third-generation night vision solution, which is said to be the world’s first night vision system able to detect traffic danger and living things in total darkness or fog.

In the future, with the growth of the ADAS market, night vision systems will usher in new development opportunities, resulting in fast-growing demand but also a change in product form, e.g. fusion as a function of driving safety assistance systems, integration with HUD and intelligent headlamps. Besides, the technical defects of both active and passive night vision systems have not been effectively addressed, which does not rule out their being replaced by other technologies, such as millimeter-wave radar and cameras, in the years to come."

High Speed Imaging 50 Years Ago

A 1965 film shows how high speed cameras were implemented and used at that time. To my surprise, they were able to achieve 1ns exposure times and speeds of millions of frames per second in purely mechanical systems:

Monday, August 15, 2016

Samsung Research Uses IBM Multicore Processor for Vision Apps

CNET: Samsung has adapted IBM's 4,096-core TrueNorth processor to its Dynamic Vision Sensor, which processes video imagery quite differently from traditional digital cameras. "Each pixel operates independently and pipes up only if it needs to report a change in what it's seeing," said Eric Ryu, a VP of research at the Samsung Advanced Institute of Technology. The camera is able to track objects at 2,000fps while consuming 300mW.
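The per-pixel, change-driven behavior Ryu describes is the standard event-camera model: each pixel fires an event only when its log intensity moves past a contrast threshold. A toy sketch of that rule (the threshold value is an assumption for illustration, not Samsung's):

```python
import math

def dvs_events(prev_frame, curr_frame, threshold=0.2):
    """Emit (pixel_index, polarity) events wherever the log-intensity
    change since the last frame exceeds the contrast threshold.
    Static pixels produce no output at all."""
    events = []
    for i, (prev, curr) in enumerate(zip(prev_frame, curr_frame)):
        delta = math.log(curr) - math.log(prev)
        if abs(delta) >= threshold:
            events.append((i, +1 if delta > 0 else -1))
    return events

# Pixel 0 brightens, pixel 1 is static (no event), pixel 2 dims.
print(dvs_events([100, 100, 100], [150, 100, 70]))
```

Because unchanged pixels are silent, the data rate scales with scene activity rather than resolution times frame rate, which is how such a sensor can track motion at effective kilohertz rates within a 300mW budget.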

Saturday, August 13, 2016

ON Semi Image Sensor Business Results

ON Semi Q2 2016 earnings report updates on image sensor business:

"Now let me provide you an update on performance of our business units, starting with Image Sensor Group, or ISG. Revenue for ISG was approximately $173 million, up approximately three percent as compared to the first quarter.

We have clearly established ourselves as a technology and market leader in ADAS. We continue to reinforce our leadership position and we are now enabling future autonomous driving vehicles through our expertise in automotive CMOS image sensors. We are working with all major auto OEMs and tier-1 integrators to define next generation platforms. In Korea, we are benefitting from adoption of surround view cameras in vehicles, with strong wins for our CMOS Image Sensor in multiple upcoming models. We are seeing acceleration in revenue for our recently launched 1MP and 2MP CMOS image sensors for in-cabin driver monitoring applications.

In the machine vision market, our Python series of CMOS image sensors continues to grow at a rapid pace. Our CCD image sensors for industrial applications also grew at an impressive pace in the second quarter, driven by demand for machine vision applications such as flat panel inspection. We expect continued growth in our machine vision revenue, driven by increased automation in manufacturing and investments by industrial companies in upgrading their manufacturing capabilities."

However, it appears that other product groups were more successful than ISG, as the image sensor business's share has shrunk to 20%:

Himax on AR/VR Market Opportunity

The Himax Q2 2016 earnings call and an official press release contain a few unusually long statements on the AR/VR business potential, unusual for such documents, that is:

"The recent staggering success of Pokémon Go has provided a looking glass into the future trajectory of the AR technology and given one early answer for why and how you’d want it to. Since its launch just over a month ago, the AR game has taken the digital world by storm with already more than 100 million app downloads and 20 million active users. Thanks to the viral popularity of Pokémon Go, AR is now getting the attention and consumer validation that we, at Himax, have always known to be possible. While we must give credit where it is due, the AR technology used by Pokémon Go today is still quite primitive.

Compared to the AR/MR technologies being developed by our customers and partners, Pokémon Go pales in comparison in terms of how AR can bring alive the consumer experience to interact directly with the physical environment with more sophisticated holographic imagery, 3D sensing and real-time surroundings detection. If you have not seen demonstration of AR devices already, its holographic imagery will actually appear on your desk, your chair or walking next to you on the street. Moreover, the world of AR is much more than just gaming. It represents a next generation computing platform. Future versions of the technology will cover both commercial and consumer uses and will be much more sophisticated and produce an endless stream of uses. These could include daily computing in a virtual office, social networking, teleconferencing, etc.

Due to the eye-opening effect of Pokémon Go, those who thought AR required several more years to gain traction are changing their models as the game, almost overnight, elevated AR to mass-market and added 10's of billions of dollars to its market potential in the next few years. A new and lucrative marketing tool on top of AR software and applications are being created that will catapult AR device development and intensify further investment in the sector. We believe the path Pokémon Go started will prompt an AR industry that most didn’t think possible before.

"Last but not least, we continue to make good progress in two new smart sensor areas which we announced earlier by collaborating with certain heavyweight partners, including leading consumer electronics brands and a leading international smartphone chipset maker. By pairing a DOE integrated WLO laser diode collimator with a near infrared (NIR) CIS, we are offering the most effective total solution for 3D sensing and detection in the smallest form factor which can be easily integrated into next generation smartphones, AR/VR devices and consumer electronics. Similarly, the ultra-low-power QVGA CMOS image sensor can also be bundled with our WLO module to support super low power computer vision to enable new applications across mobile devices, consumer electronics, surveillance, drones, IoT and artificial intelligence. We will report the business developments in these new territories in due course. Regarding other CIS products, we maintain a leading position in laptop application and will increase shipments for multimedia applications."

On the earnings side, the company reported a decline in its image sensor sales.

Speaking of AR/VR investments, in somewhat unrelated news, VR sports broadcasting startup NextVR has closed an Asia-centric Series B round of around $80M, valuing the company at $800M. The round included CITIC Group, Softbank Corp, China Assets Holdings, Time Warner Ventures and The Madison Square Garden Company. (source: MIDIA Research, Techcrunch)

Pixart Reports Rise in Non-Mouse Products

Pixart reports its Q2 2016 results with a rise in sales of non-mouse products. Revenue increased by 2.3% QoQ to NT$1,051.7 million, while gross margin decreased from 50.9% in the previous quarter to 49.9% in Q2 2016.

Friday, August 12, 2016

Almalence on Future of Mobile Imaging publishes an interview with Eugene Panich, CEO of Almalence, on mobile imaging trends. A few quotes:

"The software onboard smartphones today is revolutionary. The ability to take pictures in low light, digitally zoom (or crop) without significant resolution loss, and capture moving objects without blur is in large part a function of software, not hardware. In fact many smartphone cameras are designed from the ground up based on software requirements."

But Panich believes that the value added by software is likely to slow in the future. The great achievements of the past five years cannot be matched going forward, at least not without a total reconfiguration of the smartphone camera, including the hardware.

“There are a lot of ideas being tested right now, but the industry has not picked a direction yet. But the industry understands that the next step is a total overhaul of the smartphone camera.”

Some of the ideas being tested are wide angle cameras, attachable camera pieces, pop up cameras, and array cameras. Each has its merits and demerits, and none has set itself apart from the pack so far.

SMIC on LFoundry Plans

SMIC's quarterly earnings call does not give many details on the LFoundry acquisition and plans, just a few brief statements:

"Since the acquisition of LFoundry, we have officially entered the automotive IC market. LFoundry manufactures more than 25% of the world's automotive CIS.

They will contribute two months of revenue to us in Q3, roughly $20 million to $30 million. In terms of profit contribution, because they are currently running at low utilization, we expect the contribution to our profit to be quite minimal.

There will be a limited amount of CapEx in LFoundry to bring its technology into alignment with SMIC: a few missing tools need to be procured, but the total CapEx for next year will be fairly limited. We will be trying to leverage the present unused capacity in the most efficient way.

Let me also comment a little more on LFoundry. We have identified some technologies and products that we are going to transfer to LFoundry, which hopefully will help bring up its utilization over the next three to four quarters. At the same time, we are leveraging LFoundry's strength in the automotive and CIS areas to cross-sell these technologies to our own customer set in China. So we believe this acquisition will create good value for us."

AutoSens Interview on ISO 26262 Standard

The AutoSens conference, to be held in Brussels, Belgium on Sept. 20-22, 2016, is close to selling out, with roughly twice as many people registered as the original target.

The conference site publishes an interview on the new ISO 26262 standard with ON Semi's Michael Brading, Technology Strategist, and Kenneth Boorom, Functional Safety Manager at the company's facility in Corvallis, Oregon. A few quotes:

"More and more image sensors are going into vehicles, covering applications from backup cameras to pedestrian detection and lane keeping. Combining that mix of signals into a single integrated system is a challenge.

One example of a failure mode that we have identified concerns the readout mechanism. CMOS sensors are essentially designed around a CMOS memory architecture. The data flows off the chip one row at a time, which means rows can be susceptible to duplication: at first glance the visual output might seem OK, but the net effect is that an error across multiple rows might obscure an object in the field of view."

View from an in-car camera, without any visible defects in the signal.
In this illustration, a row address aliasing fault leads to a failure in which the full scene is replaced by a subset of the scene replicated several times.

"The safety design process, common in industry and also established within ISO 26262, helps us find the problems that otherwise might not be detectable: A defect invisible to the human eye could upset the behavior of an algorithm. And critically, a defect in the behavior of an algorithm could impact the resulting data.

This image does not exhibit unexpected noise artefacts.
The same street scene showing faint bands of noise caused by a 'bit flip', a symptom of a signal timing error.

"Image sensor safety mechanisms that ON Semiconductor provides in support of the ISO 26262 design process can detect certain random hardware failures that result in image quality degradation, such as that shown here, where a 'bit flip' in the signal timing produces noise in the second image."
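The row-duplication failure mode described above can be illustrated with a toy model. The sketch below is our own hypothetical simulation (not ON Semi's implementation): a stuck-at-0 bit in the row address makes the upper half of the address space re-read rows from the lower half, replicating a subset of the scene exactly as the caption describes.

```python
import numpy as np

def read_out(frame, row_addr_fault_bit=None):
    """Simulate row-by-row readout of a CMOS sensor frame.

    If row_addr_fault_bit is set, that bit of the row address is stuck
    at 0, so rows alias onto each other: addresses with that bit set
    re-read rows with the bit cleared, replicating a subset of the scene.
    """
    rows, cols = frame.shape
    out = np.empty_like(frame)
    for r in range(rows):
        addr = r
        if row_addr_fault_bit is not None:
            addr &= ~(1 << row_addr_fault_bit)  # stuck-at-0 address bit
        out[r] = frame[addr]
    return out

# A simple gradient scene, 8 rows tall; each pixel stores its row index.
scene = np.arange(8)[:, None] * np.ones((1, 4), dtype=int)

good = read_out(scene)
bad = read_out(scene, row_addr_fault_bit=2)  # address bit 2 stuck at 0

# With bit 2 stuck, rows 4..7 re-read rows 0..3: the top half of the
# scene is replaced by a replica of the bottom half.
print(bad[:, 0])  # [0 1 2 3 0 1 2 3]
```

Each faulty frame still looks like a plausible image, which is why such faults need a safety mechanism rather than visual inspection to be caught.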

Computer Vision Cores for Licensing

PRNewswire: CEVA announces that Rockchip, one of China's low-cost SoC companies, has licensed the CEVA-XM4 imaging and vision DSP to enhance the imaging and computer vision capabilities of its SoC products for smartphones, ADAS, drones, robotics and other smart camera devices.

Rockchip will leverage the CEVA-XM4 for low-light enhancement, digital video stabilization, object detection and tracking, and 3D depth sensing. In addition, the CEVA-XM4 will enable Rockchip to deploy deep learning technologies using CEVA's Deep Neural Network (CDNN2) software framework.

"Rockchip is determined to deliver ever-more compelling solutions for mobile and consumer devices, using the best-of-breed technologies available,” said Feng Chen, CMO of Rockchip. “The CEVA-XM4 imaging and vision processor and comprehensive software offering allows us to truly embrace the potential of computational photography, computer vision and machine learning in our product designs, seamlessly handling even the most complex use cases and algorithms.

"CEVA and Rockchip have a long and successful partnership incorporating multiple generations of our imaging and vision DSPs shipping in tens of millions of Rockchip-based devices to date," said Gideon Wertheizer, CEO of CEVA. "This latest agreement will enable Rockchip to significantly strengthen its offering in the exciting realm of computer vision and provide the platform with which they can improve the performance, power consumption and feature sets of their next-generation SoCs."

Synopsys publishes a video on its vision cores platform, also available for licensing:

Imaging Conference at SEMICON Europa

SEMICON Europa is to be held in Grenoble, France on October 25-26, 2016. The Imaging Conference at the event has quite an impressive image sensor agenda:

  • Drivers for Vision based Applications in the Automotive Environment
    Heinrich Gotzig, Valeo Master Expert, Valeo Schalter und Sensoren GmbH
  • The direction of CMOS image sensor evolution
    Teruo Hirayama, Corporate Executive / President of Device & Material R&D Group, Sony Corporation
  • Camera module technologies and trends comprising assembly technologies, testing, and automation
    Kathrin Rieken, Design Engineer, Jabil Optics Germany GmbH
  • Driving by numbers
    Richard Bramley, Safety Architect, NVIDIA
  • Why image quality KPIs are a must for digital camera tuning
    Nicolas Touchard, VP Marketing, DxO Labs
  • Next Generation Human Activity Sensing For Smart Buildings
    Guillaume CROZET, VP Sales & Marketing, IRLYNX
  • Introducing vivaMOS - a CMOS image sensor spin-out from Rutherford Appleton Laboratory (RAL)
    Dan Cathie, CEO, vivaMOS Ltd
  • Enhanced features of camera modules using ultrasonic ceramic motors
    Jean-Michel Meyer, CEO, miniswys SA
  • Design strategies for low cost infrared cameras
    Guillaume DRUART, Research Scientist, ONERA
  • Is there anything beyond? Terahertz imaging: potential and perspectives
    Matteo Perenzoni, Senior Researcher, FBK
  • Democratization of optical spectroscopy for material analysis
    Damian Goldring, CTO, Consumer Physics Inc.
  • A compact, 4 channels fluorescence imaging acquisition system with no moving parts for molecular biology applications.
    Marco Bianchessi, R&I Manager, STMicroelectronics
  • Innovation in imaging on sensors, spectral filters, software and vision systems
    Maarten Willems, business director, imec
  • Analogue and Digital Pixels for Time Resolved SPAD Sensors
    Robert Henderson, Professor, University of Edinburgh
  • Neural Networks for Industry 4.0 : Analytics at the edge of the network
    Philippe LAMBINET, CEO, Cogito Instruments SA
  • Event-Driven Sensing and Processing for Vision
    Bernabe Linares-Barranco, Researcher, CSIC
    Beat De Coi, CEO, ESPROS Photonics AG
  • A scientific HDR Multi-spectral imaging platform
    Benoit Dupont, Business Development, Pyxalis
  • Optimization of CMOS image sensor utilizing Variable Temporal Multi-Sampling Partial Transfer Technique to Achieve Full-frame High Dynamic Range with Superior Low Light and Stop Motion Capability
    Salman Kabir, Systems Engineer, Imaging Division, Rambus
  • Zero delay Focus with poLight TLens
    Jacques Dumarest, System Principal Engineer, poLight
  • CMOS Image Sensor Scaling Enabled by Direct Bond Technology
    Paul Enquist, VP 3D R&D, Invensas
  • Advanced Wafer Level Chip Scale Packaging Solution for Industrial CMOS Image Sensors
    Jérôme VANRUMBEKE, Professional Imaging Sensors Project Manager, e2v Grenoble
  • High Dynamic Range (HDR) stereo camera system for applications in robotics
    Markus Strobel, Head of Department Vision Sensors, Institut für Mikroelektronik Stuttgart
  • Deep submicron CMOS for novel types of smart image sensors
    Peter Seitz, Adjunct professor of optoelectronics, EPFL - Institute of Microengineering

Thursday, August 11, 2016

Image Sensor Training at CEI-Europe

CEI-Europe publishes a YouTube video on image sensor courses by Albert Theuwissen:

Omnivision Announces 1/18-inch Wafer Level Module for Endoscopic Applications

PRNewswire: OmniVision announces the OVM6946, its first wafer-level camera module for medical applications. With a compact size of 1.05 mm x 1.05 mm, a z-height of 2.27 mm, a 120-deg FOV and an extended focusing range of 3 mm to infinity, the 1/18-inch 400x400 1.75µm-pixel OVM6946 is suited for minimally invasive endoscopes. Built on OmniVision's OmniBSI+T pixel architecture, the OVM6946 is said to be the industry's most cost-effective single-chip imaging solution for single-use endoscopes.

"Cross-contamination risks, downtime inefficiencies, and high costs associated with repairs, pre-procedure testing, and sterilization of reusable endoscopes are fueling market interests in disposable endoscopes," said Aaron Chiang, director of marketing at OmniVision. "As a cost-effective and compact camera module with excellent image quality, we view the OVM6946 as an ideal solution for next-generation single-use endoscopes."

The OVM6946 is currently available for sampling, and is expected to enter volume production in Q1 2017.

DIN Standard for Gesture Control Interface

Gestigon reports that the German standards body DIN has approved DIN SPEC 91333, a standard for gesture control interfaces. Translated from German:

"The DIN SPEC provides instructions and recommendations for the design of the user interface of touchless gesture control in human-system interaction. The document covers central concepts, an illustration of the process of contactless gesture control, and the description, identification and representation (illustration) of human gestures. It also defines general rules for the design of usable gestures and presents examples of touchless gestures. The DIN SPEC constitutes neither a comprehensive gesture catalog nor a comprehensive list of applications, since these must be created per industry or application. It is intended for developers, product manufacturers, buyers, testers and end users of systems with gesture control. This DIN SPEC should be applied together with E DIN EN ISO 9241-960 and ISO/IEC 30113-1, and deals exclusively with the aspects of the user interface in human-system interaction that are specific to touchless gesture control."

The title page of the document shows a list of contributors to the standard:

Wednesday, August 10, 2016

Every Photon Counts

Sensors Journal publishes a paper "The Quanta Image Sensor: Every Photon Counts" by Eric Fossum, Jiaju Ma, Saleh Masoodian, Leo Anzagira and Rachel Zizza from Dartmouth College. "This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture."

Illustration of photoelectron counting. The signal is the continuously sampled FD voltage from a TPG jot (with 0.28 e− r.m.s. read noise when operated in CDS mode). The FD voltage is changed by photoelectrons from the SW (and possibly dark-generated electrons). Each single electron generates a fixed voltage jump on the FD, and with deep sub-electron read noise the electron quantization effect is visible.
Scatter plot of voltage read noise vs. CG for PG jots and TPG jots. The read noise levels in e− r.m.s. are shown with dashed lines.
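Why deep sub-electron read noise matters for photon counting can be shown with a short simulation. The sketch below is our own illustration using the conversion gain (420 µV/e−) and read noise (0.22 e− r.m.s.) quoted in the abstract; the mean exposure of 1.5 photoelectrons per jot is an assumed value for the demo, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

CG_uV = 420.0        # conversion gain, microvolts per electron (from abstract)
READ_NOISE_E = 0.22  # read noise, e- r.m.s., deep sub-electron (from abstract)
MEAN_PHOTONS = 1.5   # mean photoelectrons per jot (assumed for this demo)

n_jots = 100_000
electrons = rng.poisson(MEAN_PHOTONS, n_jots)            # true counts per jot
signal_uV = electrons * CG_uV                            # fixed jump per electron
noisy_uV = signal_uV + rng.normal(0.0, READ_NOISE_E * CG_uV, n_jots)

# With read noise well below 0.5 e- r.m.s., rounding the FD voltage to the
# nearest electron step recovers the photoelectron count almost every time.
counted = np.rint(noisy_uV / CG_uV).astype(int)
accuracy = np.mean(counted == electrons)
print(f"photon-counting accuracy: {accuracy:.4f}")
```

At 0.22 e− r.m.s. the probability of a counting error per jot is roughly 2% (the Gaussian tail beyond ±0.5 e−), which is why the quantization steps in the FD voltage remain clearly resolvable.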

Tuesday, August 09, 2016

Sony to Use New Funds to Expand Stacked Sensor Manufacturing and Image Sensor R&D

The Wrap, Nikkei: Sony plans to raise $3.6B (440B yen) through the issuance of new shares and bonds, as part of its ongoing Mid-Term Corporate Strategy. "Sony plans to use the money from the issuance of new shares to increase its production capacity of stacked CMOS sensors in its Devices segment, which it hopes will lead to additional profits. Further, the company wants to use the funds raised by the convertible bonds portion for capital expenditures in the segment and the repayment of debts.

Specifically, Sony aims to use about 188 billion yen from the common stock offering to fund capital expenditures in the Devices segment, and the remainder to fund R&D there. From the convertible bonds issuance, Sony will use 51 billion yen to fund capital expenditures on devices, 25 billion yen to redeem outstanding bonds upon maturity, and the remainder to repay long-term debts."

Socionext Demos 8K Video Encoder Solution

Socionext publishes a NAB presentation of its 8K real time video encoding solution based on 4 boards with MB86M31 processor:

Monday, August 08, 2016

Pokémon Go and the Imaging Industry

Yole Développement publishes its take on how Pokémon Go can transform the mobile imaging industry. A few quotes:

"In the quest for Pokémon, some have suggested that 3D cameras would do a better job at catching the small monsters. Lenovo's shares did take a good 10% upward move on this rumor. With the recent release of the Phab 2 Pro, Lenovo integrated a PMD-made 3D camera, making it the first Google Tango-enabled phablet.

We are entering a new era of mixed reality interactions. The cameras and sensors that fit in our smartphones will play a major role in this new era; they might even be its centerpiece."

Oppo F1s Front Camera Has Higher Resolution Than Rear One

The newly announced Oppo F1s "Selfie Expert" smartphone features a 16MP front selfie camera, while the rear one is only 13MP. The front camera has a number of software extensions, such as a selfie panorama mode.

Socionext Demos 4-Camera Fusion

Socionext publishes a YouTube video showing 4-camera image fusion to create different views in automotive applications:

Saturday, August 06, 2016

Recent Image Sensor Theses

There are two more interesting image sensor theses published recently:

"Low-Power and Compact CMOS Circuit Design of Digital Pixel Sensors for X-Ray Imagers" PhD Thesis by Roger Figueras Bague, Barcelona University, Spain. Here is the pixel proposal:

"A 1-Mega Pixels HDR and UV Sensitive Image Sensor With Interleaved 14-bit 64Ms/s SAR ADC" MSc Thesis by Ruijun Zhang, Delft University, The Netherlands, presents his work at Caeleste with a fairly complete circuit-level overview of all blocks of the image sensor, from pixel to digital output.