Wednesday, February 20, 2019

Samsung Galaxy S10 5G Features 6 Cameras

Samsung announces the S10 generation of its flagship Galaxy phones, including the top-of-the-range 5G model with a ToF rear camera. CNET put together a nice table summarizing the camera differences across the S10 family:


"The Galaxy S10 5G truly packs the power of a professional-grade camera into your phone by offering a total of six lenses – two on the front and four on the back. In addition to featuring all of the lenses included in the Galaxy S10+, the 5G model introduces Samsung’s next-generation 3D Depth Camera.

This innovative camera allows the device to accurately capture depth by measuring the length of time it takes for an infrared light signal to bounce off the photograph’s subject. The camera uses the resulting depth information to improve the quality of portrait-style images, and to power exciting new features like Video Live focus and Quick Measure. The former allows you to apply cinema-quality bokeh effects to recorded videos, while the latter enables you to use your phone to instantly measure distance, area or volume.
"
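Direct ToF ranging, as described above, is simple arithmetic: the sensor measures the round-trip time of an infrared pulse, and the distance is half that time multiplied by the speed of light. A minimal sketch with illustrative values (not Samsung's implementation):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance from the measured round-trip time of the light pulse."""
    return C * round_trip_s / 2.0

# A subject about 1.5 m away returns the pulse after roughly 10 ns:
d = tof_distance_m(1e-8)  # ~1.499 m
```

The nanosecond-scale timing is why dedicated ToF pixels, rather than ordinary image sensor readout, are needed for this kind of depth capture.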

Albert Theuwissen Reviews ISSCC 2019 - Part 2

The second part of Albert Theuwissen's review of image sensor session at ISSCC 2019 talks about “A Data Compressive 1.5b/2.75b Log-Gradient QVGA Image Sensor with Multi-Scale Readout for Always-On Object Detection” by Christopher Young, Alex Omid-Zohoor, Pedram Lajevardi, and Boris Murmann from Stanford University and Robert Bosch:


“A 76mW 500fps VGA CMOS Image Sensor with Time-Stretched Single-Slope ADCs Achieving 1.95 e– Random Noise” by Injun Park, Chanmin Park, Jimin Cheon, and Youngcheol Chae from Yonsei University and Kumoh National Institute of Technology, Korea:

Omnivision Announces 32MP 0.8um Pixel Sensor for Smartphones

PRNewswire: OmniVision announces the OV32A, its first 0.8um pixel sensor, with 32MP resolution. The new sensor is built on PureCel Plus stacked-die technology and aimed at high-end smartphones. The sensor also uses a 4-cell CFA and features on-chip re-mosaic, which can provide full-resolution, 32MP Bayer output in normal lighting conditions. In low-light conditions, the OV32A can use near-pixel binning to output an 8MP image with 4 times the sensitivity.
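The near-pixel binning described above sums four same-color pixels into one, trading resolution for signal. A minimal numpy sketch of 2x2 block binning (an illustration of the idea, not OmniVision's actual on-chip implementation):

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block: quarter the resolution, ~4x the signal."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.full((4, 4), 100.0)   # uniform 4x4 patch of a 4-cell CFA plane
binned = bin_2x2(raw)          # 2x2 output, each pixel 400.0
```

This is why a 32MP quad-Bayer sensor outputs an 8MP image in low light: each output pixel collects the charge of four input pixels.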

"The demand for high-performance smartphone cameras is very strong, with consumers clamoring for ever-higher resolution in high-end mobile phones—including selfie cameras," said Arun Jayaseelan, senior marketing manager at OmniVision. "At the same time, circuitry space within smartphones is at a premium. With the OV32A’s submicron 0.8µm pixel size, it can provide leading-edge 32MP resolution in a 1/2.8-inch optical format. This compact form factor, combined with many advanced features such as on-chip re-mosaic and near-pixel binning, makes the OV32A ideal for high-end smartphones."

The OV32A runs at 15fps at full 32MP resolution, or at 60fps in 8MP 4-cell binning mode; it also supports 4K2K video at 60fps, 1080p video at 120fps, and 720p video at 240fps. Pad locations on the top and bottom of the image sensor reduce module size in the x-direction, which is ideal for front-facing cameras in thin-bezel, infinity-display smartphones.

OV32A samples are expected to be available in March 2019.

Trioptics Demos its Camera Module Tester

Trioptics video shows how its camera module tester for flexible low to mid volume production works:

TowerJazz Reports Q4 Results

SeekingAlpha: TowerJazz's quarterly earnings report gives some info on its image sensor business:

"Moving on to the sensors business, which includes both visual CMOS image sensors and non-visual sensors: in 2018, these revenues represented 18% of corporate revenue, or approximately $230 million, compared with 15% in 2017, or approximately $210 million, with the majority of revenues related to the industrial, medical, security and high-end professional markets.

In 2019, we will begin to harvest from the previous years’ efforts as many of these customers’ excellent products ramp into mass production. We provide the most advanced global shutter pixel platform, boasting a 2.5 micron pixel size, built on a 65 nanometer platform.

These sensors are geared toward both industrial and commercial markets, covering applications such as machine vision, facial recognition, 3D mapping and augmented and virtual reality. We expect our global shutter platform to drive increased sales volume in 2019 and beyond and with the strong related market growth we see this as a key revenue driver from our industrial sensor customers.

In the medical X-ray market, we are continually gaining momentum, working with several market leaders on large-panel dental and medical CMOS detectors. These are based on our one-die-per-wafer sensor technology using our well-established, higher-margin stitching with best-in-class high dynamic range pixels, providing our customers with extreme value creation and high yield in both 200 millimeter and 300 millimeter wafer technology.

We presently have strong business with market leadership in this segment and expect substantial growth in 2019 on 200 millimeter and with 300 millimeter single die per wafer initial qualifications that will drive incremental growth over the next multiple years.

In terms of upcoming growth drivers, there is now a major trend in the market toward time-of-flight and structured-light 3D sensing technologies. Markets driven by these technologies are generally mobile, mainly facial recognition, but also front-looking 3D mapping, commercial, and augmented and virtual reality.

We're well-positioned with our unique global shutter pixel technology to address the structured-light market and have developed single-photon avalanche diodes for direct time of flight. In particular, we are progressing well on two very exciting opportunities in the augmented and virtual reality markets: one for 3D time-of-flight-based sensors and one for silicon-based screens for VR head-mounted displays.

In the neural network field, we have made some substantial progress in partnership with AIStorm, demonstrating the unique building block for full onboard analog AI that can be embedded into our sensors to make smarter sensors.
"

Tuesday, February 19, 2019

Albert Theuwissen Reviews ISSCC Presentations - Part 1

Albert Theuwissen reviews two papers from ISSCC 2019 Image Sensor session held yesterday:

1. Smartsens "A Stacked Global-Shutter CMOS Imager with SC-Type Hybrid-GS Pixel and Self-Knee Point Calibration Single-Frame HDR and On-Chip Binarization Algorithm for Smart Vision Applications" by C. Xu, Y. Mo, G. Ren, W. Ma, X. Wang, W. Shi, J. Hou, K. Shao, H. Wang, P. Xiao, Z. Shao, X. Xie, X. Wang, and C. Yiu:


2. University of Michigan “Energy-efficient low-noise CMOS image sensor with capacitor array-assisted charge-injection SAR ADC for motion-triggered low-power IoT applications” by Kyojin D. Choo, Li Xu, Yejoong Kim, Ji-Hwan Seol, Xiao Wu, Dennis Sylvester, and David Blaauw:

ST ALS Senses Light Flicker

GlobeNewswire: STM releases a multispectral ambient light sensor (ALS) that simultaneously provides scene color temperature, ultra-violet (UVA) radiation level, and lighting frequency information. The new VD6281 helps the camera to set appropriate exposure times to avoid flicker artefacts and eliminate banding in pictures and videos, especially in scenes lit with contemporary LED sources.

"Leveraging the Company's extensive camera system know-how, ST's new VD6281 offers customers a state-of-the-art multispectral ambient light sensor," said Eric Aussedat, GM of ST's Imaging Division. "Our roadmap for ALS and flicker sensors is an ideal complement to ST's market-leading FlightSense Time-of-Flight (ToF) product portfolio. With a growing number of high-quality, high-resolution cameras per phone, our goal is to offer an advanced solution to assist white-balance correction and remove flicker artefacts in smartphone camera images. Mobiles equipped with VD6281 were already released in 2018, and many others will be coming soon."

With its form factor of 1.83 x 1.0 x 0.55mm, the VD6281 is said to be the smallest multispectral ambient light sensor available, allowing integration in bezel-free smartphones with small notches and inside smartwatches, where a high screen ratio is at a premium. The sensor uses direct-deposition interferometric filters to create 6 independent color channels: Red, Green, Blue, NIR, UVA, and Clear, providing color-sensing capability and CCT (Correlated Color Temperature) measurement over a wide field of view of 120 degrees. The VD6281 is in production and available now.
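ST doesn't disclose the VD6281's internal CCT math, but a standard way to estimate correlated color temperature from sensed CIE 1931 (x, y) chromaticity is McCamy's cubic approximation, sketched here as an illustration:

```python
def mccamy_cct(x: float, y: float) -> float:
    """Correlated color temperature (K) from CIE 1931 (x, y) chromaticity,
    via McCamy's cubic approximation."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 daylight (x=0.3127, y=0.3290) should come out near 6500 K:
cct_d65 = mccamy_cct(0.3127, 0.3290)
```

A camera's auto-white-balance can use such a CCT estimate from the ALS to pick the right color correction for the scene illuminant.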

Himax Quarterly Update

Himax updates on its imaging business:

"On CMOS image sensor business updates, we continue to make great progress with our machine-vision sensor product lines. Combining Himax’s industry-leading super-low-power CIS and ASIC designs with Emza’s unique AI-based, ultra-low-power computer vision algorithm, we are uniquely positioned to provide ultra-low-power smart imaging sensing total solutions. We are pleased with the status of engagement with leading players in areas such as connected home, smart building and security, all of which are new frontiers for Himax.

For traditional human vision segments, we see strong demand in laptops and increasing shipments for multimedia applications such as car recorders, surveillance, drones, home appliances, and consumer electronics, among others.


3D Sensing Business

We have participated in most of the smartphone OEMs’ ongoing 3D sensing projects covering all three types of technologies, namely structured light, active stereo camera (ASC) and time-of-flight. Depending on the customers’ needs, we provide 3D sensing total solution or just the projector module or optics inside the module. We have highlighted in the last earnings call that the 3D sensing adoption for Android smartphone market remains low. The adoption is hindered primarily by the prevailing high hardware cost of 3D sensing and the long development lead time required to integrate it into the smartphone. Instead of 3D sensing, most of the Android phone makers have chosen the lower cost fingerprint technology which can achieve similar phone unlock and online payment functions with somewhat compromised user experience.

Reacting to their lukewarm response, we are working on next-generation 3D sensing with our platform partners, aiming to leapfrog the market by providing high-performance, easy-to-adopt and yet cost-friendly total solutions targeting the majority of Android smartphone players. We have a solid product roadmap and plan, including a new architecture and new algorithms, to make it happen. The development progress is on track and the new solution is aimed at smartphone customers’ 2020 models.

We believe that 3D sensing will be widely used by more Android smartphone makers when more killer applications become available and the ecosystem is able to substantially lower the cost of adoption while offering easy-to-use, fully-integrated total solutions, for which Himax is playing a key part.

I have mentioned previously that 3D sensing can have a wide range of applications beyond smartphone. We have started to explore business opportunities in various industries by leveraging our SLiM™ 3D sensing total solution. Such industries are typically less sensitive to cost and always require a total solution. We are collaborating with Kneron, an industry leader in edge-based artificial intelligence in which we have made an equity investment, to develop an AI-enabled 3D sensing solution targeting security and surveillance markets. We are also working with partners/customers on new applications covering home appliances and industrial manufacturing.
"

Monday, February 18, 2019

AIStorm Patent Application

AIStorm sent me a nice summary of its recent press announcements:

AIStorm addresses the enormous opportunity of enabling more 'intelligence' right at the edge of the network.

1. Company and team
  • This low-profile AI tech startup is poised to tackle a big problem: enabling faster processing of complex AI problems at the very edge of the network — within sensors.
  • AIStorm has developed and patented a new approach that will disrupt the GPU-based approach that is typically used today.
  • Founded/led by David Schie, an expert in analog and mixed-signal hardware design who has led large teams at Maxim, Micrel, and Semtech.
  • The team also includes proven veterans that round out the company's design and fabrication expertise.

2. Market & technology
  • AIStorm has solved a growing problem: the need to process sensor information at the edge of the network to reduce the cost and security risk of transmitting large amounts of raw data.
  • AI systems require information be available in digital form before they can process data, but sensor data is generally analog. AIStorm solves this problem by processing sensor data directly in its native analog form, in real time.
  • AIStorm is targeting some of the world’s largest handset, machine vision, wearable, IoT, automotive, food service, AI assistant, security, biometric device, & imager applications.
  • Gyrfalcon, Mythic-AI, Syntiant and Google have all announced they are pursuing AI engine platforms at the edge. But... none of their approaches incorporate sensors into the solution. AIStorm changes that.

3. Investors
AIStorm is backed by four large sensor and industrial companies that are eager to integrate AIStorm's technology into upcoming products…
  • Egis Technology, a major biometrics supplier to handsets, gaming, and advanced driver-assistance systems.
  • TowerJazz, the global specialty foundry leader that specializes in image sensors for commercial, industrial, AR, and medical markets.
  • Meyer Corporation, a world leader in food preparation equipment.
  • Linear Dimensions Semiconductor, a leader in biometric authentication and digital health products.

David Schie's US patent application 20140344200, "Low power integrated analog mathematical engine," shows a convolution engine based on partial charging of capacitors by controlled current sources, said to eliminate the need for capacitor scaling to change coefficients:

"The switched capacitor charge controls allow for nodal control of charge transfer based switched capacitor circuits. The method reduces reliance on passive component programmable arrays to produce programmable switched capacitor circuit coefficients. The switched capacitor circuits are dynamically scaled without having to rely on unit passives, such as unit capacitors, and the complexities of switching these capacitors into and out of circuit. The current, and thus the charge transferred is controlled at a nodal level, and the current rather than the capacitors are scaled providing a more accurate result in addition to saving silicon area. Furthermore, the weightings and biases now set as currents may be saved and recalled by coupling current source bias circuits to non-volatile memory means such as analog non-volatile memory."
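The claim's core idea, setting coefficients by scaling currents rather than capacitors, can be sketched as a charge-domain multiply-accumulate, where Q = I·t is deposited on a single fixed capacitor. This is only an illustrative model with made-up values, not the patented circuit:

```python
# Charge-domain multiply-accumulate: Q = I * t on a fixed capacitor.
# The weight is set by scaling the current source, not by switching
# differently sized unit capacitors in and out of the circuit.
def charge_mac(inputs, weight_currents_a, t_unit_s=1e-9, cap_f=1e-12):
    """Voltage on the fixed cap after each input gates its weight current
    for a time proportional to the input value."""
    q = sum(i_w * x * t_unit_s for x, i_w in zip(inputs, weight_currents_a))
    return q / cap_f  # V = Q / C

# Three inputs with weights +1, +2, -1 (in microamps):
v_out = charge_mac([1.0, 2.0, 3.0], [1e-6, 2e-6, -1e-6])  # 2 mV
```

Because the weights live in the current sources, they can be stored and recalled as analog bias values, which matches the patent's mention of coupling the bias circuits to non-volatile memory.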

Saturday, February 16, 2019

WKA - Last Call for Nominations

The International Image Sensor Society's Call for Nominations for the Walter Kosonocky Award is nearing its deadline on February 18, 2019:

"The Walter Kosonocky Award is presented biennially for THE BEST PAPER presented in any venue during the prior two years representing significant advancement in solid-state image sensors. The award commemorates the many important contributions made by the late Dr. Walter Kosonocky to the field of solid-state image sensors. Personal tributes to Dr. Kosonocky appeared in the IEEE Transactions on Electron Devices in 1997.

Founded in 1997 by his colleagues in industry, government and academia, the award is also funded by proceeds from the International Image Sensor Workshop. (See the International Image Sensor Society’s website for details and past recipients.)

The award is selected from nominated papers by the Walter Kosonocky Award Committee, announced and presented at the International Image Sensor Workshop (IISW), and sponsored by the International Image Sensor Society (IISS).
"

Your nominations should be sent to Rihito Kuroda (2019nominations@imagesensors.org), Chair of the IISS Award Committee.

Friday, February 15, 2019

Google Pixel 3 XL Cameras Cost Estimated at 14% of BOM

TechInsights publishes a teardown report of Google Pixel 3 XL smartphone with cameras taking 14.2% of the total cost:


The phone includes a dedicated ISP chip designed by Intel and Google and manufactured in TSMC 28nm process:

TrinamiX Managing Director on Distance Measuring

TrinamiX Managing Director Ingmar Bruder explains how organic solar cells can be used for 3D distance measuring:

Thursday, February 14, 2019

MIT Sub-THz Imager

MIT researchers have developed a sub-terahertz-radiation receiving system that could help driverless cars see through fog and dust clouds.

In a paper published online on Feb. 8 by the IEEE JSSC, the researchers describe a two-dimensional, sub-terahertz receiving array on a chip that’s orders of magnitude more sensitive. To achieve this, they implemented a scheme of independent signal-mixing pixels — called “heterodyne detectors” — that are usually very difficult to densely integrate into chips. The researchers drastically shrank the size of the heterodyne detectors so that many of them can fit into a chip. The trick was to create a compact, multipurpose component that can simultaneously down-mix input signals, synchronize the pixel array, and produce strong output baseband signals.

The researchers built a prototype, which has a 32-pixel array integrated on a 1.2-square-millimeter device. The pixels are approximately 4,300 times more sensitive than the pixels in today’s best on-chip sub-terahertz array sensors. With a little more development, the chip could potentially be used in driverless cars and autonomous robots.
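The heterodyne detection the pixels perform is, at its core, multiplication: mixing the received signal with a local oscillator shifts it down to the difference frequency. A toy sampled-signal sketch at kilohertz-scale frequencies (real sub-THz front ends do this in analog hardware, at vastly higher frequencies):

```python
import numpy as np

fs = 1e6                      # sample rate (illustrative, not sub-THz)
t = np.arange(2000) / fs
f_rf, f_lo = 120e3, 119e3     # incoming tone and local oscillator

# Heterodyne mixing: the product contains f_rf - f_lo (1 kHz baseband)
# and f_rf + f_lo (239 kHz, normally removed by a low-pass filter).
mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
baseband_peak = freqs[np.argmax(spectrum[freqs < 50e3])]   # 1000.0 Hz
```

Down-mixing to a low-frequency baseband is what lets each pixel produce a strong, easily digitized output even when the incoming sub-terahertz signal is extremely weak.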

"A big motivation for this work is having better ‘electric eyes’ for autonomous vehicles and drones," says co-author Ruonan Han, an associate professor of electrical engineering and computer science, and director of the Terahertz Integrated Electronics Group in the MIT Microsystems Technology Laboratories (MTL). "Our low-cost, on-chip sub-terahertz sensors will play a complementary role to LiDAR for when the environment is rough."

Joining Han on the paper are first author Zhi Hu and co-author Cheng Wang, both PhD students in Han’s research group.

More about AIStorm

Venturebeat, ElectronicsWeekly, EETimes report more details on AIStorm AI-on-Sensor startup:

  • AIStorm was founded in 2011 and had been in stealth mode until the recent announcement of its $13.2M Series A financing.
  • AIStorm’s patented chip design is capable of 2.5 Tera-Ops and 10 Tera-Ops per watt, which is said to be 5x to 10x lower power than the average GPU-based system.
  • The company uses a technique called switched charge processing, which allows the chip to control the movement of electrons between storage elements.
  • "The TowerJazz pixel is part of our input layer, so the charge comes from sensors, they produce electrons, and we multiply and move them," says AIStorm.
  • AIStorm tested its first chip this month and plans to ship production orders next year.
  • The company’s first products are to be made in a 65nm or 180nm process.
  • AIStorm is planning a Series B round for follow-up products in 28nm and possibly finer nodes.
  • The production chips are aimed to be compatible with popular AI frameworks such as TensorFlow.

Wednesday, February 13, 2019

Digitimes: Sales of Android Smartphones with ToF Camera to Reach 20M Units in 2019

Digitimes: Shipments of Android phones with 3D cameras are set to boom in 2019, propelled by the increasing adoption of rear ToF cameras, according to Digitimes Research.

Oppo led all other Android phone makers with its rear ToF camera introduced in November 2018. Oppo competitors, including Huawei, Xiaomi, and Vivo are likely to follow with their ToF-enabled models in 2019.

Shipments of ToF camera Android smartphones are expected to reach 20M units in 2019, Digitimes Research estimates.

Xenomatix Raises 5M Euros

De Rijkste Belgen newspaper reports that LiDAR startup Xenomatix has raised 5M euros: 2M euros comes through the conversion of a bond loan, while 3M is a fresh investment by Carl Van Hool and AGC Automotive Europe, part of Japan's AGC (Asahi Glass). XenomatiX and AGC have partnered to develop a windshield-mounted LiDAR.

To commercialize its LiDAR, Xenomatix is said to need 10M euros; after the current round of financing, the company has 6.8M euros. In the fiscal year that ended in June 2018, Xenomatix was profitable with an income of 0.6M euros. Xenomatix has 22 employees.

AIStorm Raises $13.2M to Develop AI-on-Sensor Technology

BusinessWire: San Jose, CA-based startup AIStorm raises $13.2M in a Series A round from Egis Technology, TowerJazz, Meyer Corporation, and Linear Dimensions Semiconductor Inc.

"This investment will help us accelerate our engineering & go-to-market efforts to bring a new type of machine learning to the edge. AIStorm’s revolutionary approach allows implementation of edge solutions in lower-cost analog technologies. The result is a cost savings of five to ten times compared to GPUs — without any compromise in performance,” said David Schie, CEO of AIStorm.

Using sensor data directly, without digitization, is said to enable real-time processing at the edge. AI systems require information to be available in digital form before they can process data, but sensor data is analog. Processing this information digitally requires advanced and costly GPUs that are not suitable for mobile devices: they require continuous digitization of input data, which consumes significant power and introduces unavoidable digitization delay (latency). AIStorm aims to solve these problems by processing sensor data directly in its native analog form, in real time.

"It makes sense to combine the AI processing with the imager and skip the costly digitization process. For our customers, this will open up new possibilities in smart, event-driven operation and high-speed processing at the edge,” said Avi Strum, SVP/GM of the sensors business unit of TowerJazz.

"The reaction time saved by AIStorm’s approach can mean the difference between an advanced driver-assistance system detecting an object and safely stopping versus a lethal collision,” said Russell Ellwanger, CEO of TowerJazz.

"Edge applications must process huge amounts of data generated by sensors. Digitizing that data takes time, which means that these applications don’t have time to intelligently select data from the sensor data stream, and instead have to collect volumes of data and process it later. For the first time, AIStorm’s approach allows us to intelligently prune data from the sensor stream in real time and keep up with the massive sensor input tasks,” said Todd Lin, COO of Egis Technology Inc.

AIStorm’s management includes CEO David Schie, a former senior executive at Maxim, Micrel and Semtech; CFO Robert Barker, formerly with Micrel and WSI; Andreas Sibrai, formerly with Maxim and Toshiba; and Cesar Matias, founder of ARM’s Budapest design center. AIStorm is based in San Jose, CA with offices in Austria, Taiwan, Phoenix and soon Dresden and Israel.

Adobe Unveils AI-based Demosaicing

Adobe presents an AI-powered demosaicing algorithm for Bayer and Fujifilm X-Trans CFAs:

"...we’re introducing an all-new Sensei-powered feature, Enhance Details. Harnessing the power of machine learning and computational photography, Enhance Details... takes a brand new approach to demosaicing raw photos.

The new Enhance Details algorithm enables you to increase the resolution of both Bayer and X-Trans based photos by up to 30%. Applying Enhance Details to your photos can greatly improve fine detail rendering, improve the reproduction of fine colors, and resolve issues that some customers reported with their Fujifilm X-Trans based cameras.
"


Via Imaging Resource.

Smartsens Interview

Electronic Design publishes an interview with Leo Bai, SmartSens’ AI BU General Manager. A few quotes:

"...single-frame HDR Global Shutter technology is better for image-recognition-based AI applications than conventional CMOS image sensors that use Multiple-Exposure HDR technology. Combined with a DVP/MIPI/LVDS interface, single-frame HDR Global Shutter technology can be adapted to various types of SoC platforms.

...adoption of global-shutter technology is growing rapidly, in comparison to rolling-shutter technology. One of the main reasons is that a global-shutter CMOS image sensor is able to achieve excellent real-time performance without the jelly effect, especially in AI and machine-vision applications. With advanced manufacturing process technology and reduced cost, it’s expected to see increasing market demand for global-shutter CMOS image sensors.
"

Tuesday, February 12, 2019

Autosens Detroit Agenda

The Autosens Detroit conference, to be held on May 14-16, 2019, announces its agenda with a number of interesting imaging presentations:

  • Infrared Camera Sensing for ADAS and Driverless Vehicles: Applications, Challenges and Design Considerations. Workshop by Rajeev Thakur, OSRAM Opto
  • Material interactions for autonomous sensor applications. Workshop by Jim Howard and Jonah Shaver, 3M
  • Keeping eyes on the passengers - developing an in-cabin omni-sensor, Guy Raz, Guardian Optical Technologies
  • The FIR Revolution: How FIR Technology Will Bring Fully Autonomous Vehicles to the Mass Market, Yakov Shaharabani, Adasky
  • The Future of Driving: Enhancing Safety on the Road with Thermal Sensors, Tim Lebeau, Seek Thermal
  • RGB-IR Sensors for in Cabin Automotive Applications, Boyd Fowler, Omnivision
  • The Next Generation of SPAD Arrays for Automotive LiDAR, Wade Appleman, ON Semi
  • What’s in Your Stack? Why Lidar Modulation Should Matter to Every Engineer, Randy Reibel, Blackmore
  • Addressing LED flicker, Brian Deegan, Valeo
  • From Camera to LiDAR systems alignment and testing in mass production of ADAS Sensors, Dirk Seebaum, Trioptics
  • The influence of colour filter pattern and its arrangement on resolution and colour reproducibility, Tsuyoshi Hara, Sony
  • Highly Efficient Autonomous Driving with MIPI Camera Interfaces, Hezi Saar, Synopsys
  • Tuning image processing pipes (ISP) for automotive use, Manjunath Somayaji, GEO Semiconductor
  • Computational imaging through occlusions; seeing through fog, Guy Satat, MIT Media Lab
  • ISP optimization for ML/CV automotive applications, Alexis Lluis Gomez, ARM

Productive Use of Black Sun Effect

Finally, somebody has found a productive use for the black sun effect. MDPI publishes the Sungkyunkwan University, Korea, paper "Accurate and Cost-Effective Micro Sun Sensor based on CMOS Black Sun Effect" by Rashid Saleem and Sukhan Lee.

"An accurate and cost-effective micro sun sensor based on the extraction of the sun vector using a phenomenon called the “black sun” is presented. Unlike conventional image-based sun sensors where there is difficulty in accurately detecting the sun center, the black sun effect allows the sun center to be accurately extracted even with the sun image appearing irregular and noisy due to glare. This allows the proposed micro sun sensor to achieve high accuracy even when a 1 mm × 1 mm CMOS image sensor with a resolution of 250 × 250 pixels is used."
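The trick is that the overexposed sun core reads *dark* (the black sun effect), so its center can be recovered as the centroid of dark pixels inside the glare blob. A hypothetical numpy sketch of this idea (not the paper's actual algorithm):

```python
import numpy as np

def black_sun_center(img, dark_thr=50, bright_thr=200):
    """Centroid of the dark 'black sun' core inside the glare blob:
    bound the saturated region, then average the dark pixels within it."""
    ys, xs = np.nonzero(img >= bright_thr)            # glare blob
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    roi = img[y0:y1 + 1, x0:x1 + 1]
    cy, cx = np.nonzero(roi <= dark_thr)              # dark core pixels
    return y0 + cy.mean(), x0 + cx.mean()

# Synthetic sun: saturated glare patch with a dark overexposed core at (4, 4)
img = np.zeros((9, 9), dtype=np.uint8)
img[2:7, 2:7] = 255
img[4, 4] = 0
center = black_sun_center(img)   # (4.0, 4.0)
```

Because the dark core has a sharp boundary even when the surrounding glare is irregular and noisy, this centroid is far more stable than trying to find the brightness peak directly.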

Monday, February 11, 2019

Multispectral CFA Fabrication With Grayscale Mask

University of Cambridge paper "Grayscale-to-color: Single-step fabrication of bespoke multispectral filter arrays" by Calum Williams, George Gordon, Sophia Gruber, Timothy Wilkinson, and Sarah Bohndiek proposes grayscale photolithography for multispectral color filter manufacturing:

"Conventional cameras, such as in smartphones, capture wideband red, green and blue (RGB) spectral components, replicating human vision. Multispectral imaging (MSI) captures spatial and spectral information beyond our vision but typically requires bulky optical components and is expensive. Snapshot multispectral image sensors have been proposed as a key enabler for a plethora of MSI applications, from diagnostic medical imaging to remote sensing. To achieve low-cost and compact designs, spatially variant multispectral filter arrays (MSFAs) based on thin-film optical components are deposited atop image sensors. Conventional MSFAs achieve spectral filtering through either multi-layer stacks or pigment, requiring: complex mixtures of materials; additional lithographic steps for each additional wavelength; and large thicknesses to achieve high transmission efficiency. By contrast, we show here for the first time a single-step grayscale lithographic process that enables fabrication of bespoke MSFAs based on the Fabry-Perot resonances of spatially variant metal-insulator-metal (MIM) cavities, where the exposure dose controls insulator (cavity) thickness. We demonstrate customizable MSFAs scalable up to N-wavelength bands spanning the visible and near-infrared with high transmission efficiency (~75%) and narrow linewidths (~50 nm). Using this technique, we achieve multispectral imaging of several spectrally distinct target using our bespoke MIM-MSFAs fitted to a monochrome CMOS image sensor. Our unique framework provides an attractive alternative to conventional MSFA manufacture, by reducing both fabrication complexity and cost of these intricate optical devices, while increasing customizability."
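The exposure dose sets the insulator (cavity) thickness d, and an ideal Fabry-Perot cavity of refractive index n transmits near λ = 2nd/m at order m. A minimal sketch that ignores the phase shifts at the metal mirrors, which shift the peaks of real MIM cavities:

```python
def fp_peak_nm(n_index: float, thickness_nm: float, order: int = 1) -> float:
    """Transmission peak of an ideal lossless cavity: lambda = 2*n*d/m."""
    return 2.0 * n_index * thickness_nm / order

# A 200 nm SiO2-like cavity (n ~ 1.46) peaks near 584 nm at first order;
# thinning it to 170 nm shifts the peak to ~496 nm (blue-green).
peak_584 = fp_peak_nm(1.46, 200.0)
peak_496 = fp_peak_nm(1.46, 170.0)
```

This is why a single grayscale exposure suffices: every spectral band in the array corresponds to a different dose, hence a different cavity thickness, rather than a separate lithographic step.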

Sunday, February 10, 2019

Smartsens Announces 4MP 2um BSI Pixel Sensor

PRNewswire: After releasing a 4MP sensor with NIR-enhanced 3um pixels two months ago, Smartsens adds a similarly specced sensor with 2um pixels. Aimed at the AIoT (AI+IoT) market, the new 1/3-inch SC4238 4MP BSI sensor supports a 2-exposure HDR mode with a DR of up to 100dB, and has improved QE in the 850nm to 940nm band.

"Compared with other 1/3-inch 4MP HDR products on the market, the SC4238 has the advantage of good performance in low light: SNR1 of 0.47 vs 1.22, which is about 2.6 times better performance than our competitor."

SC4238 is in mass production now.

ADAS to Double Automotive Image Sensor Shipments by 2023

Counterpoint Research forecasts that global demand for passenger-car image sensors will double by 2023. Shipments of these image sensors will grow at a 19% CAGR, crossing 230M units by 2023:

"With rear cameras for basic parking assistance in cars becoming almost a standard feature on newer models, we are now seeing the advent of a higher proportion of cars fitted with front-facing and side cameras to enable enhanced ADAS features. For example, surround view cameras, which are currently an option on higher-end models, will see significant adoption over the next five to six years. The key factors driving this trend are, firstly, governments encouraging OEMs to integrate advanced safety features, and, secondly, growing awareness and preference for advanced safety features among customers.

With higher adoption of front ADAS camera solutions, by 2023, all new cars sold in the US are likely to have more than three cameras per car.
"
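As a back-of-envelope check on the forecast, a 19% CAGR compounds to roughly 2x over four years, consistent with shipments doubling by 2023. The 2019 baseline below is implied arithmetic, not a figure from the report:

```python
cagr = 0.19
growth_4yr = (1 + cagr) ** 4       # ~2.005: a 19% CAGR doubles in ~4 years
implied_2019_m = 230 / growth_4yr  # implied ~115M units shipped in 2019
```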

Saturday, February 09, 2019

Curve-One to Bring Curved Sensors to Market

CEA-Leti spin-off startup Curve-One aims to commercialize curved sensor technology and products:

"The fruit of 6 years of research and development, the Curve-One Fish-Eye benefits from the most optimized process. Based on no fewer than 8 patents and oriented toward mass production, Curve-One is the most technically advanced wide-field camera.

In terms of technologies, this is the dawn of a new era for cameras, camera phones, autonomous cars, drones, military instruments and bio-medical instruments, with access to wider fields, exquisite homogeneity of the optical properties across the image, and faster systems not possible with classical flat focal planes.

Also, fewer components are needed, and the remaining ones are less complex, improving the economic and technical performance of optical system design.

Soon to be available as off-the-shelf components for civil applications (cameras, civilian drones), these breakthrough devices will also address the technical needs of autonomous cars, military drones and advanced bio-medical applications.
"

11 Myths about LiDARs

ElectronicDesign: Cepton CEO and co-founder Jun Pei presents 11 myths about LiDAR as a backdrop to highlight his company's advantages:
  1. LiDAR is a very high-tech solution.
  2. LiDAR is expensive.
  3. Solid-state LiDAR is the best approach because it has no moving parts.
  4. Flash LiDAR is the best LiDAR for imaging.
  5. LiDAR must operate at infrared wavelengths.
  6. LiDAR isn’t safe for the human eye.
  7. LiDAR can’t work in poor weather conditions.
  8. LiDAR can only be used for automobiles.
  9. LiDAR won’t be incorporated into vehicles for another decade.
  10. LiDAR can be fully replaced by cameras, radar, or a combination of the two.
  11. FMCW LiDAR is better than ToF LiDAR.

Friday, February 08, 2019

LG Smartphone to Integrate Infineon ToF Camera for FaceID

PRNewswire: LG, Infineon, and PMD have teamed up to introduce a ToF camera in the LG G8 ThinQ smartphone. Infineon's REAL3 image sensor chip will be integrated into the front-facing camera of the upcoming LG G8 ThinQ.

"Infineon is poised to revolutionize the market," said Andreas Urschitz, division president of Infineon's Power Management & Multimarket division. "We have demonstrated service beyond the mere product level – specifically catering to phone OEMs, associated reference design houses and camera module manufacturers. Within five years, we expect 3D cameras to be found in most smartphones and Infineon is poised to contribute a significant share."

"Keeping in mind LG's goal to provide real value to its mobile customers, our newest flagship was designed with ToF technology from inception to give users a unique and secure verification system without sacrificing camera capabilities," said Chang Ma, SVP and head of product strategy at LG Mobile Communications Company. "With innovative technology like ToF, the LG G8 ThinQ will be the optimal choice for users in search of a premium smartphone that offers unmatched camera capabilities."

Photonics West Videos

Canon presents its current and future image sensor offerings:


FLIR presents its new InGaAs cameras:



Photron demos its polarized imaging use cases:


PCO talks about sCMOS sensor evolution:

Thursday, February 07, 2019

Sony 1.12um Pixel Reverse Engineering

A University of Adelaide (Australia) group of researchers has published an interesting arxiv.org paper with further research on Sony IMX219PQ 1.12um pixel non-uniformities: "Reverse Engineering the Raspberry Pi Camera V2: A study of Pixel Non-Uniformity using a Scanning Electron Microscope" by Richard Matthews, Matthew Sorell, and Nickolas Falkner. The paper is currently undergoing peer review.

"In this paper we reverse engineer the Sony IMX219PQ image sensor, otherwise known as the Raspberry Pi Camera v2.0. We provide a visual reference for pixel non-uniformity by analysing variations in transistor length, microlens optic system and in the photodiode. We use these measurements to demonstrate irregularities at the microscopic level and link this to the signal variation measured as pixel non-uniformity used for unique identification of discrete image sensors."
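
The pixel non-uniformity the authors trace to physical variations is commonly estimated for fingerprinting by averaging many uniformly lit frames, so that temporal noise cancels and the fixed per-pixel gain pattern remains. A minimal sketch of that generic flat-field approach (not the paper's exact pipeline):

```python
import numpy as np

def estimate_prnu(frames):
    """Estimate a PRNU pattern from a stack of uniformly lit (flat-field) frames.

    Temporal noise averages out across frames, leaving the fixed per-pixel
    gain variation; normalizing by the global mean yields the relative
    non-uniformity pattern used as a sensor fingerprint.
    """
    frames = np.asarray(frames, dtype=np.float64)
    mean_frame = frames.mean(axis=0)             # per-pixel mean over the stack
    return mean_frame / mean_frame.mean() - 1.0  # relative deviation per pixel
```

With a few hundred flat-field frames, the estimated pattern correlates strongly with the sensor's true gain non-uniformity, which is what makes it usable for unique identification of discrete sensors.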

Wednesday, February 06, 2019

Fairchild Imaging Unveils sCMOS 3.0 Sensors Featuring BSI & DTI

Fairchild Imaging is introducing two new 4/3” 10MP sensors, the LTN4323 and MST4323, the first to feature sCMOS 3.0 technology. The monochrome LTN4323 (for scientific and industrial cameras) has 0.7 e- read noise. A special BSI process enhancement delivers broad-spectrum NIR QE with greater than 2x sensitivity. Dark current at 30 °C is less than 2 e-/s, enabling compact camera designs without the need for TE cooling. The color MST4323 (for professional cinema cameras) provides 4K video at 120fps.
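
The low dark current matters because dark shot noise grows with exposure time; with the figures above, dark noise only catches up with the 0.7 e- read noise at roughly a quarter-second exposure. A rough check using the announced numbers:

```python
# Figures from the announcement: 0.7 e- read noise, <2 e-/s dark current at 30 C
read_noise = 0.7    # e- RMS
dark_current = 2.0  # e-/s (upper bound)

# Dark shot noise is Poisson: sigma_dark = sqrt(dark_current * t).
# It matches the read noise when t = read_noise**2 / dark_current.
t_crossover = read_noise ** 2 / dark_current
print(f"Dark noise equals read noise at t = {t_crossover * 1000:.0f} ms")  # -> 245 ms
```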

ON Semi Automotive Sensors in Challenging Lighting Transitions

ON Semi publishes a short video on its automotive CMOS sensor operation in challenging lighting transitions on the road:

Tuesday, February 05, 2019

Melexis Launches High Operating Temperature Thermal Sensor Array

Melexis announces a new version of its FIR thermal sensor array with lower thermal noise than the current MLX90640, an increased refresh rate of 64 Hz, and an elevated operating temperature of up to 125 °C.

The new MLX90641 is a small 16 x 12 pixel IR array housed in a 4-lead TO39 package that is able to accurately measure temperatures in the range -40 °C to +300 °C. The factory-calibrated devices ensure an accuracy of 1 °C in typical measurement conditions. The high accuracy is further supported by a Noise Equivalent Temperature Difference (NETD) of 0.1 K RMS.

Two different FoV options are available, a standard 55° x 35° and a wide angle 110° x 75°. The device is simple to use, operating from a single 3.3 V supply and storing all results in internal RAM for access via an I2C compatible digital interface. A proprietary algorithm ensures high thermal stability, even in conditions where the temperature is changing rapidly.
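
NETD is temporal noise, so it can be traded against effective refresh rate: averaging N consecutive frames reduces the 0.1 K RMS figure by roughly √N. A sketch of that standard trade-off at the 64 Hz refresh rate (a generic white-noise averaging relation, not a Melexis-specified mode):

```python
import math

netd = 0.1         # K RMS, single frame (from the announcement)
refresh_hz = 64.0  # maximum refresh rate

for n_frames in (1, 4, 16):
    effective_netd = netd / math.sqrt(n_frames)  # white-noise averaging
    effective_rate = refresh_hz / n_frames
    print(f"{n_frames:2d} frames: {effective_netd * 1000:.0f} mK at {effective_rate:.0f} Hz")
# -> 100 mK at 64 Hz, 50 mK at 16 Hz, 25 mK at 4 Hz
```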