Friday, April 23, 2021

Characterization and Modeling of the Image Sensor Help to Achieve Lossless Image Compression

EETimes-Europe: Swiss startup Dotphoton claims to achieve 10x lossless image compression:

"Dotphoton’s Jetraw software starts before the image is created and uses information about the image sensor’s noise performance to compress the image data efficiently. The roots of the compression method date back to research questions in quantum physics, such as whether effects like quantum entanglement can be made visible to the human eye.

Bruno Sanguinetti, CTO and co-founder of Dotphoton, explained, “Experimental setups with CCD/CMOS sensors for quantifying the entropy and the relation between signal and noise showed that even with excellent sensors, the largest part of the entropy consists of noise. With a 16-bit sensor, we typically detected 9 bits of entropy that could be attributed solely to noise, and only 1 bit that came from the signal. One finding from our observations is that good sensors virtually ‘zoom’ into the noise.”

Dotphoton showed that, with their compression method, image files are not affected by loss of information even with compression by a factor of ten. In concrete terms, Dotphoton uses information about the sensor’s own temporal and spatial noise."
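Sanguinetti's "9 bits of noise entropy on a 16-bit sensor" figure can be illustrated with a back-of-envelope shot-noise model. This is not Dotphoton's actual method; the full-well capacity, ADC depth, and signal level below are assumed for illustration:

```python
import numpy as np

# Rough sketch: how many output bits does photon shot noise consume?
# Assumed sensor: 30,000 e- full well read out through a 16-bit ADC.
full_well = 30_000                    # electrons
adc_bits = 16
gain = full_well / 2**adc_bits        # e- per DN, ~0.46

signal_e = 10_000                     # assumed mean photoelectron count
shot_noise_e = np.sqrt(signal_e)      # Poisson shot noise, ~100 e- rms
shot_noise_dn = shot_noise_e / gain   # same noise in ADC counts

# Differential entropy of a Gaussian, 0.5*log2(2*pi*e*sigma^2),
# approximates the bits occupied by the quantized shot noise.
noise_bits = 0.5 * np.log2(2 * np.pi * np.e * shot_noise_dn**2)
print(f"gain: {gain:.2f} e-/DN")
print(f"shot noise: {shot_noise_dn:.0f} DN  (~{noise_bits:.1f} bits of entropy)")
```

With these numbers the shot noise alone occupies roughly 9-10 of the 16 output bits, in line with the quoted measurement.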

The company's Dropbox comparison document, dated January 2020, benchmarks its DPCV algorithm against other approaches:

"We rely on the calibration data present in the Dotphoton files to improve SNR without introducing artefacts or affecting delicate signals.

— per-pixel calibration and linearization. Even for high-end cameras, each pixel may have a different efficiency, offset and noise structure. Our advanced calibration method perfectly captures this information, which then allows us both to correct sensor defects and to better evaluate whether an observed feature arises from signal or from noise.

— quantitatively-accurate amplitude noise reduction. Many de-noising techniques produce visually stunning results but affect the quantitative properties of an image. Our noise reduction methods, on the other hand, are targeted at scientific applications, where the quantitative properties of an image are important and where producing no artefacts is critical.

— color noise reduction using amplitude data and spectral calibration data

Dotphoton CV is a lossless image compression algorithm; however, it relies on data having been pre-processed in-camera or in the driver. This pre-processing does modify the original raw data, and therefore introduces a small amount of loss. Pre-processing is adapted to the specific camera model: noise sources that can be corrected are corrected, and noise from sources that cannot be corrected is replaced with noise that has the same statistical distribution, so the output data presents no artefacts or bias. The maximum loss introduced by pre-processing is equivalent to having taken an image with an ’ISO’ setting 20% higher. In some situations (e.g. inhomogeneous sensors), correcting systematic errors may result in a Dotphoton-compressed image that is of higher quality than the original raw data."
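The "replace uncorrectable noise with statistically equivalent noise, then compress" step resembles the classic variance-stabilizing-transform approach to shot-noise-limited data. The sketch below is a generic illustration of that idea, not Jetraw's actual code; the gain and quantization step are assumed values:

```python
import numpy as np

# Generic sketch of noise-aware requantization (not Jetraw itself):
# a square-root transform makes Poisson shot noise roughly constant
# across the signal range, so one fixed quantization step can stay
# well below the local noise level everywhere.
rng = np.random.default_rng(0)
gain = 0.46                                  # assumed e- per DN
raw = rng.poisson(10_000, 100_000) / gain    # noisy raw values in DN

# Forward: sqrt transform, then quantize. In the sqrt domain the noise
# sigma is ~0.5*sqrt(1/gain) ~ 0.74, so a step of 0.25 is well below it.
step = 0.25
coded = np.round(np.sqrt(raw) / step).astype(np.int32)

# Inverse: the reconstruction error is far below the shot noise,
# while `coded` spans far fewer distinct values than the 16-bit raw.
recon = (coded * step) ** 2
quant_err = np.std(recon - raw)
shot_noise = np.std(raw)
print(f"quantization error: {quant_err:.1f} DN vs shot noise: {shot_noise:.1f} DN")
```

The coded values then feed an ordinary lossless entropy coder; the loss budget is the small, controlled quantization error rather than anything signal-dependent.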


  1. The headline could also read "New startup discovers photon shot noise is a thing" :-)

    In seriousness, it is good to see anything that addresses the madness and data-deluge that comes from 16 bits per pixel where the image sensor has maybe 30 ke- full well capacity. It has long been known that, even with on-board processing in the camera, maybe 6 or 7 LSBs will just be digitizing noise. Best case maybe a bit less for some multi-exposure HDR modes.

  2. Compression in the camera FPGA and decompression on the PC is quite a cool approach for industrial types of applications. The main bottlenecks are the cable and PC-side interfaces, e.g. PCI. FPGAs get stronger and sensors get faster, but cable bandwidth does not scale that much.

  3. Photon shot noise based compression is used a lot on-chip in many sensors. What is the added advantage here? There are many papers/patents out there about this. Can the company present the differences here?

    1. A lot? Can you name a few examples, e.g. actual IP cores for FPGAs? If there are many, I wonder why every industrial camera vendor sends raw data across the cables.

    2. Because they want full control of the data.
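The first comment's back-of-envelope numbers (a 30 ke- full well digitized to 16 bits, with the bottom LSBs digitizing noise) can be checked directly. This is a sanity-check sketch with the commenter's assumed parameters, not a measurement:

```python
import math

# How many LSBs sit below the photon shot noise at various signal
# levels, for an assumed 30 ke- full well and 16-bit output?
full_well = 30_000
gain = full_well / 2**16              # ~0.46 e- per DN

for signal_e in (100, 1_000, 10_000, 30_000):
    sigma_dn = math.sqrt(signal_e) / gain   # shot noise in ADC counts
    print(f"{signal_e:>6} e-: shot noise {sigma_dn:6.0f} DN "
          f"(~{math.log2(sigma_dn):.1f} LSBs toggling with noise)")
```

At mid-range signals this gives roughly 6-8 noise-dominated LSBs, rising toward 8-9 near full well, consistent with the comment's estimate.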
