Wednesday, March 21, 2018

ToF Depth Resolution Improved to 6.5nm

The OSA Optics Letters issue dated April 1, 2018 publishes a paper by Peking University (China), KAIST, and KRISS (Korea): "Time-of-flight detection of femtosecond laser pulses for precise measurement of large microelectronic step height."

"By using time-of-flight detection with fiber-loop optical-microwave phase detectors, precise measurement of large step height is realized. The proposed method shows uncertainties of 15 nm and 6.5 nm at sampling periods of 40 ms and 800 ms, respectively. This method employs only one free-running femtosecond mode-locked laser and requires no scanning of laser repetition rate, making it easier to operate. Precise measurements of 6 μm and 0.5 mm step heights have been demonstrated, which show good functionality of this method for measurement of step heights."
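For intuition, the quoted uncertainties correspond to extremely small timing differences: a step height h changes the round-trip time of flight by Δt = 2h/c. A minimal sketch of this conversion (the paper's actual detection relies on fiber-loop optical-microwave phase detectors, not this naive formula alone):

```python
# Step height from a round-trip time-of-flight difference.
# A minimal sketch, not the paper's detection scheme.
C = 299_792_458.0  # speed of light in vacuum, m/s

def step_height(delta_t_s: float, n: float = 1.0) -> float:
    """Height from round-trip ToF difference delta_t_s, medium index n."""
    return C * delta_t_s / (2.0 * n)

# A 6.5 nm height resolution corresponds to resolving a ToF difference
# of only tens of attoseconds:
dt = 2 * 6.5e-9 / C
print(f"{dt:.1e} s")  # ~4.3e-17 s
```

This makes clear why a phase-detection approach is needed rather than direct pulse timing.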

Google Reportedly Buys Lytro

Techcrunch sources report that Google is acquiring Lytro:

"One source described the deal as an “asset sale” with Lytro going for no more than $40 million. Another source said the price was even lower: $25 million. A third source tells us that not all employees are coming over with the company’s technology: some have already received severance and parted ways with the company, and others have simply left. Assets would presumably also include Lytro’s 59 patents related to light-field and other digital imaging technology.

The sale would be far from a big win for Lytro and its backers. The startup has raised just over $200 million in funding and was valued at around $360 million after its last round in 2017. Its long list of investors include Andreessen Horowitz, Foxconn, GV, Greylock, NEA, Qualcomm Ventures and many more.

Here is the NY Times illustration of Lytro's first product back in 2012:

Samsung Foundry CIS Offerings

Samsung publishes the CIS process features available at its 8-inch foundry:

Currently, all 8-inch foundry wafers are processed at Line 6 at the Giheung campus, Korea:

Recent Progress of Visible Light Image Sensors

CERN publishes Nobukazu Teranishi's 58-page presentation "Recent Progresses of Visible Light Image Sensors," given at the Detector Seminar at CERN on February 23, 2018. There are many interesting slides, including spares at the end. Here is just a small part of the content:

Tuesday, March 20, 2018

Nikkei Reviews Sony Paper at ISSCC

Nikkei publishes a 4-part review of Sony's ISSCC 2018 presentation on its event-driven sensor:

ST Talks about Dirty Glass in ToF Imaging

An ST video presents issues with a dirty cover glass in ToF devices, followed by a fairly obvious solution:

Monday, March 19, 2018

Up-Conversion Device to Give 1550nm Sensitivity to CMOS Sensors

Nocamels, The Times of Israel: Gabby Sarusi from Ben-Gurion University of the Negev "has developed a stamp-like device of which one side reads 1,500-nanometer infrared wavelengths, and converts them to images that are visible to the human eye on the other side of the stamp. This stamp — basically a film that is half a micron in thickness — is composed of nano-metric layers, nano-columns and metal foil, which transform infrared images into visible images."

An infrared sensor costs around $3,000, Sarusi said. A regular vision sensor used by autonomous cars costs $1-$2. So, by adding the nanotech layers, which cost around $5, Sarusi said, one can get an infrared sensor for about $7-$8.

Thanks to DS for the pointer!

Omnivision Nyxel Technology Wins Smart Products Leadership Award

Frost & Sullivan's Manufacturing Leadership Council honors Omnivision with the Smart Products and Services Leadership Award for its Nyxel NIR imaging technology.

Sunday, March 18, 2018

SF Current and RTN

Japanese Journal of Applied Physics publishes the Tohoku University paper "Effect of drain current on appearance probability and amplitude of random telegraph noise in low-noise CMOS image sensors" by Shinya Ichino, Takezo Mawaki, Akinobu Teramoto, Rihito Kuroda, Hyeonwoo Park, Shunichi Wakashima, Tetsuya Goto, Tomoyuki Suwa, and Shigetoshi Sugawa. It turns out that a lower SF current can reduce RTN, at least for the 0.18um process used in the test chip:

Saturday, March 17, 2018

ST Announces 4m Range ToF Sensor

The VL53L1X ToF sensor extends the detection range of ST's FlightSense technology to four meters, bringing high-accuracy, low-power distance measurement and proximity detection to an even wider variety of applications. The fully integrated VL53L1X measures only 4.9mm x 2.5mm x 1.56mm, allowing use even where space is very limited. It is also pin-compatible with its predecessor, the VL53L0X, allowing easy upgrading of existing products. The compact package contains the laser driver and emitter as well as the SPAD array light receiver that gives ST's FlightSense sensors their ranging speed and reliability. Furthermore, the 940nm emitter, operating in the non-visible spectrum, eliminates distracting light emission and can be hidden behind a protective window without impairing measurement performance.

ST publishes quite a detailed datasheet with the performance data:

GM 4th Gen Self-Driving Car Roof Module

GM has started production of a roof rack for its fourth-generation Cruise AV featuring 5 Velodyne LiDARs and at least 7 cameras:

Friday, March 16, 2018

MEMSDrive OIS Technology Presentation

MEMSDrive kindly sent me a presentation on its OIS technology:

Pictures from Image Sensors Europe 2018

A few assorted pictures from the Image Sensors Europe conference being held these days in London, UK.

From Ron's (Vision Markets) Twitter:

From the Image Sensors Twitter:

From X-Fab presentation:

Thursday, March 15, 2018

Rumor: Mantis Vision 3D Camera to Appear in Samsung Galaxy S10 Phone

Korean newspaper The Investor quotes local media reports that Mantis Vision and camera module maker Namuga are developing a 3-D sensing camera for Samsung's next-generation Galaxy S smartphone, tentatively called the Galaxy S10. Namuga is also providing 3-D sensing modules for Intel's RealSense AR cameras.

TechInsights: Samsung Galaxy S9+ Cameras Cost 12.7% of BOM

The TechInsights Samsung Galaxy S9+ cost table estimates the cameras' cost at $48 out of a $379 total. The previous-generation S8 camera was estimated at $25.50, or 7.8% of the total BOM.

TechInsights publishes a cost comparison of this year's and last year's flagship phones. The Galaxy S9+ appears to have the largest investment in camera and imaging hardware:
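As a quick sanity check, the quoted shares follow from the figures above (a back-of-the-envelope sketch using only the numbers in this post):

```python
# Sanity-checking the quoted BOM shares (figures from the TechInsights estimates).
s9_camera, s9_total = 48.00, 379.00
s8_camera, s8_share = 25.50, 0.078  # $25.50 quoted as 7.8% of the S8 BOM

print(f"S9+ camera share: {s9_camera / s9_total:.1%}")     # ~12.7%
print(f"Implied S8 total BOM: ${s8_camera / s8_share:.0f}")  # ~$327
```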

ICFO Graphene Image Sensors

ICFO food analyzer demo at MWC in Barcelona in February 2018:

UV graphene sensors:

Samsung CIS Production Capacity to Beat Sony

ETNews reports that Samsung is to convert its 300mm DRAM line 13 in Hwasung to CMOS sensor production. Since last year, the company has also been working to convert its DRAM line 11 in Hwasung into an image sensor line (named S4). Conversion of the S4 line will be done by the end of this year. Right after that, Samsung is going to convert its 300mm line 13, which can produce about 100,000 DRAM wafers per month. Because image sensors require more manufacturing steps than DRAM, production capacity is said to be reduced by about 50% after the conversion.

“At the end of last year, production capacity of image sensors from the 300mm plant based on wafer input was about 45,000 units,” said an ETNews source. “Because production capacities of image sensors that will be added from lines 11 and 13 will exceed 70,000 units per month, Samsung Electronics will have a production capacity of 120,000 units of image sensors after these conversion processes are over.”

Sony's CIS capacity is about 100,000 wafers per month. Even when Sony's capacity extension plans are accounted for, Samsung should be able to match or exceed Sony's production capacity.
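The quoted figures roughly add up; a back-of-the-envelope sketch (all numbers from the ETNews report, in wafers per month):

```python
# Rough capacity arithmetic implied by the ETNews report (wafers per month).
dram_line13 = 100_000                  # DRAM wafer starts on line 13
cis_from_line13 = dram_line13 * 0.5    # ~50% capacity loss after CIS conversion
current_cis = 45_000                   # 300mm CIS capacity at end of last year
added_lines_11_13 = 70_000             # additional capacity quoted for lines 11 and 13

total = current_cis + added_lines_11_13
print(f"{total:,}")  # 115,000 — close to the quoted 120,000 figure
```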

While increasing the production capacity of its 300mm CIS lines for 13MP and larger sensors, Samsung is planning to slowly decrease the output of its 200mm line located in Giheung.

Samsung's capacity expansion demonstrates its market confidence. Samsung believes that its image sensor capabilities approach those of Sony. The company has more than 10 outside CIS customers.

Wednesday, March 14, 2018

ULIS Video

ULIS publishes a promotional video about its capabilities and products:

Vivo Announces SuperHDR

One of the largest smartphone makers in China, Vivo, announces its AI-powered Super HDR that follows the same principles as regular multi-frame HDR but merges more frames.

The Super HDR’s DR is said to reach up to 14 EV. With a single press of the shutter, Super HDR captures up to 12 frames, significantly more than former HDR schemes. AI algorithms are used to adapt to different scenarios. The moment the shutter is pressed, the AI will detect the scene to determine the ideal exposure strategy and accordingly select the frames for merging.
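Vivo has not published its merging algorithm. As an illustration only, a textbook-style multi-frame merge that weights each pixel by how well-exposed it is might look like this (a generic sketch, not Vivo's actual Super HDR, and without the AI frame selection described above):

```python
import numpy as np

def merge_hdr(frames, exposures):
    """Naive multi-frame HDR merge: weight each pixel by how close it is
    to mid-gray (well-exposed), then average per-frame radiance estimates.
    frames: list of float arrays in [0, 1]; exposures: exposure times in s.
    A generic textbook-style merge, not Vivo's algorithm."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for f, t in zip(frames, exposures):
        w = 1.0 - np.abs(f - 0.5) * 2.0 + 1e-6  # hat weight; epsilon avoids /0
        num += w * f / t                         # radiance estimate per frame
        den += w
    return num / den

# Merge 12 synthetic bracketed frames, echoing the "up to 12 frames" claim.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (4, 4))                # toy linear-radiance scene
exposures = [2.0 ** k for k in range(-6, 6)]         # 12 exposures, ~12 EV span
frames = [np.clip(scene * t, 0.0, 1.0) for t in exposures]
radiance = merge_hdr(frames, exposures)
```

The weight function and frame count here are illustrative; production pipelines also align frames and deghost moving objects before merging.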

Alex Feng, SVP at Vivo, says: “Vivo continues to push the boundaries and provide the ultimate camera experience for consumers. This goes beyond just adding powerful functions, but to developing innovations that our users can immediately enjoy. Today’s showcase of Super HDR is an example of our continued commitment to mobile photography, to enable our consumers to shoot professional quality photos at the touch of a button. Using intelligent AI, Super HDR can capture more detail under any conditions, without additional demands on the user.”

Tuesday, March 13, 2018

Prophesee Expands Event Driven Concept to LiDARs

EETimes publishes an article on event-driven image sensors such as Prophesee's (formerly Chronocam) Asynchronous Time-Based Image Sensor (ATIS) chip.

The company's CEO Luca Verre "disclosed to us that Prophesee is exploring the possibility that its event-driven approach can apply to other sensors such as lidars and radars. Verre asked: “What if we can steer lidars to capture data focused on only what’s relevant and just the region of interest?” If it can be done, it will not only speed up data acquisition but also reduce the data volume that needs processing.

Prophesee is currently “evaluating” the idea, said Luca, cautioning that it will take “some months” before the company can reach that conclusion. But he added, “We’re quite confident that we can pull it off.”

Asked about Prophesee’s new idea — to extend the event-driven approach to other sensors — Yole Développement’s analyst Cambou told us, “Merging the advantages of an event-based camera with a lidar (which offers the “Z” information) is extremely interesting.”

Noting that problems with traditional lidars are tied to limited resolution — “relatively less than typical high-end industrial cameras” — and the speed of analysis, Cambou said that the event-driven approach can help improve lidars, “especially for fast and close-by events, such as a pedestrian appearing in front of an autonomous car.”

Samsung Galaxy S9+ Cameras

TechInsights publishes an article on Galaxy S9+ reverse engineering, including its 4 cameras - a dual rear camera, a front camera, and an iris recognition sensor:

"We are excited to analyze Samsung's new 3-stack ISOCELL Fast 2L3 and we'll be publishing updates as our labs capture more camera details.

Samsung is not first to market with variable mechanical apertures or 3-layer stacked image sensors; however, the integration of both elements in the S-series is a bold move to differentiate from other flagship phones.

The S9 wide-angle camera system, which integrates a 2 Gbit LPDDR4 DRAM, offers similar slo-mo video functionality with 0.2 s of video expanded to 6 s of slo-mo captured at 960 fps. Samsung promotes the memory buffer as beneficial to still photography mode where higher speed readout can reduce motion artifacts and facilitate multi-frame noise reduction.
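The slo-mo numbers are easy to verify: 0.2 s captured at 960 fps fills the DRAM buffer with 192 frames, which stretch to about 6.4 s when played back at 30 fps (an assumed standard playback rate; the quote does not state it):

```python
# Checking the quoted slo-mo arithmetic: 0.2 s of capture at 960 fps,
# played back at 30 fps (playback rate is an assumption, not from the quote).
capture_s, capture_fps, playback_fps = 0.2, 960, 30

frames = int(capture_s * capture_fps)   # frames held in the stacked DRAM buffer
playback_s = frames / playback_fps      # matches the quoted ~6 s of slo-mo
print(frames, playback_s)
```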

The iFixit reverse engineering report includes nice pictures showing the changing aperture on the wide-angle rear camera: