Tuesday, December 06, 2016

ON Semi Q3 2016 Results

ON Semi reported its Q3 results at the beginning of November. This time, the company presents its sales figures in $M for its main business groups:


The SeekingAlpha earnings call transcript mentions the imaging business status in the Q&A session:

Tristan Gerra - Robert W. Baird & Co., Inc. (Broker)

Great. And then if I heard the number well, your Image Sensor Group revenue implies a little bit of a year-over-year decline in the quarter. Could you talk about the trends in that business? Are we still on track in terms of the accretion for the year and also the type of growth drivers that you see going forward?

Keith D. Jackson - ON Semiconductor CEO

Yeah. We had a very substantial double-digit growth in the automotive sector, which is the focus in that business. The slight declines year-on-year were all coming out of handsets in the consumer side, which again we mentioned at the time of purchase we wanted to de-emphasize those because they were quite low margin. And so, really, everything remains on track as we look at it another year long.

Mobileye on Challenges for Artificial Intelligence in Self-Driving Cars

Mobileye publishes a YouTube lecture by its CTO Amnon Shashua on the challenges in creating algorithms for autonomous cars. The company remains optimistic about self-driving cars arriving in 5-7 years and is somewhat critical of the current autonomous driving experiments, such as the one in Pittsburgh:



One of the slides, at the 11:20 mark, talks about the 8-camera configuration that Mobileye is developing for self-driving cars:


Thanks to DS for the link!

Monday, December 05, 2016

Galaxycore Announces 1.12um Pixel

With the new Galaxycore product line, it seems that most of the 1st, 2nd, and 3rd tier mainstream image sensor makers now have 1.12um pixel products in their portfolios. Galaxycore's first 1.12um product appears to be the 5MP 1/5-inch GC5005 based on TSI technology:

Sunday, December 04, 2016

Apple Admits Working on Autonomous Driving Cars

BBC: An Apple spokesman said that a letter to the US transport regulator, the National Highway Traffic Safety Administration (NHTSA), was prompted by the company's "heavy investment in machine learning and autonomous systems" and that it wanted to help define best practices in the industry. The company has already registered several car-related internet domains, including apple.car and apple.auto.

Quotes from the letter:

"Apple uses machine learning to make its products and services smarter, more intuitive, and more personal. The company is investing heavily in the study of machine learning and automation, and is excited about the potential of automated systems in many areas, including transportation.

Apple affirms that, in order to best protect the traveling public and keep up with the pace of innovation, NHTSA should expedite requests for exemption and interpretation and petitions for rulemaking.

Apple agrees that companies should share de-identified scenario and dynamics data from crashes and near-misses. By sharing data, the industry will build a more comprehensive dataset than any one company could create alone. This will allow everyone in the industry to design systems to better detect and respond to the broadest set of nominal and edge-case scenarios. Apple looks forward to collaborating with other stakeholders to define the specific data that should be shared."

Thanks to ND for the link!

Friday, December 02, 2016

DR Explanation

Albert Theuwissen discusses the definition of an imager's Dynamic Range and the factors that often make the measured DR lower than the one specified in the datasheet.
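
As a reminder of the conventional definition (which may differ in detail from the one Albert uses in his post), DR is usually taken as the ratio of the full-well capacity to the dark noise floor, expressed in dB. A minimal Python sketch with purely illustrative numbers:

import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Conventional sensor DR: ratio of the largest non-saturating signal
    (full-well capacity) to the dark noise floor (read noise), in dB."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Illustrative numbers only (not from Albert's post): 10,000 e- full well and
# 3 e- read noise give ~70 dB. Once analog gain, dark current shot noise and
# fixed-pattern noise enter a real measurement, the DR seen in the lab is
# typically lower than this datasheet-style figure.
print(dynamic_range_db(10000, 3))  # ~70.5 dB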

Thursday, December 01, 2016

Canon Applies for Curved Edge Sensor Patent

The Canon Watch site noticed an Egami article on a Canon sensor with a curved periphery portion, said to mitigate lens vignetting:


Update: The Mirrorless Rumors site posted another patent from Egami, this time filed by Toshiba for a curved sensor:

Tamron Develops 140dB DR Image Sensor

Nikkei: Tamron announces the successful development of an innovative image sensor technology, with the support of the Japan Science and Technology Agency (JST), a National Research and Development Agency, that enhances imaging capability far beyond that of human vision, featuring ultra-high light sensitivity and WDR.

The new technology is said to enhance both sensitivity and DR simultaneously, which has been considered difficult in the past. The ultra-high sensitivity enables capturing clear color images even at an extremely low illuminance of 0.003 lux, which is darker than starlight. The WDR performance exceeds 140dB.
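
For scale (my back-of-the-envelope arithmetic, not a figure from Tamron), 140dB corresponds to a linear intensity ratio of roughly 10 million to one between the brightest and darkest resolvable signals:

import math

# 140 dB expressed as a linear ratio: 10**(140/20) = 1e7, i.e. the brightest
# resolvable signal is about 10 million times larger than the darkest one.
print(10 ** (140 / 20))      # 10000000.0
print(20 * math.log10(1e7))  # 140.0, the reverse conversion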


Three key technologies - Optics, Image Sensor, and Image Processing - have been identified as essential, and their effective integration has been studied for the new imaging technology:


This development is supported by the Japan Science and Technology Agency (JST) Adaptable and Seamless Technology Transfer Program through Target-driven R&D (A-STEP).

Wednesday, November 30, 2016

Brigates Publishes Image Sensor Datasheets

Brigates publishes fairly detailed datasheets of its image sensors with characterization data. It's a brave move as some of the parameters are quite far from the state of the art.

BG0703 is a 1MP 1/2.7-inch rolling shutter image sensor aimed at the high-end surveillance and industrial imaging markets:


A newer and improved 1080p 1/3-inch BG0803 targets the same market:

Challenges: ST Signs Huge Contract with Apple

The French-language site Challenges reports that ST has signed a huge contract with Apple for image sensors, which will boost the workload at its Crolles plants, where fifty new machines are being installed. The influx of this and other new orders allows the Crolles 200mm fab to reach a utilization rate of 100%, while the Crolles 300mm plant's utilization exceeds 80%. When questioned, the ST group refused to comment.

Possibly, the French site is talking about the ST AF ToF sensors used in the iPhone 7.

Thanks to SL for the link!

Tuesday, November 29, 2016

Mediatek to Enter ADAS Vision Solutions Market

MediaTek announces its plan to bring ADAS vision solutions to the automotive industry beginning in Q1 2017:

"Reimagined from the ground up, MediaTek’s ADAS system will feature cutting-edge, decentralized Vision Processing Unit (VPU) solutions to optimally handle large amounts of real-time visual streaming data. MediaTek employs Machine Learning to increase the accuracy and speed of detection, recognition and tracking, making it more comparable to human decision-making performance."