Marketwired: Teledyne DALSA presents TurboDrive, a proprietary, patent-pending data-encoding technology that allows some DALSA GigE Vision cameras to achieve breakthrough speeds, increasing throughput by as much as 150% while retaining 100% of the image data.
"We're pleased to deliver an innovative speed advantage to customers who need to push beyond the current GigE bandwidth limitations with no loss of data," commented Mark Butler, Product Marketing Manager for Teledyne DALSA. "It's available now in our low-cost Linea line scan cameras, and will continue in future area cameras set to launch in the fall."
The company's technology primer explains how the compression works:
"Leveraging the neighborhood effect: Image entropy is the first principle used in TurboDrive. But to further reduce the number of bits required to encode pixel information (with no loss of information), TurboDrive also considers the neighborhood effect. The neighborhood of a pixel is the collection of pixels that surround it. Although the exact distance of a neighbor can vary, in this analysis we will limit our example to the adjacent pixels (i.e., those that directly touch the reference pixel).
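As a rough illustration of the entropy principle (not TurboDrive's actual encoder), the Shannon entropy of the pixel histogram gives a theoretical lower bound on the bits per pixel needed for lossless encoding. The function name `pixel_entropy` below is ours, added for illustration:

```python
from collections import Counter
from math import log2

def pixel_entropy(pixels):
    """Shannon entropy, in bits per pixel, of a flat list of pixel values.

    Lower entropy means the values are more predictable and can, in
    principle, be losslessly encoded in fewer bits than the raw 8.
    """
    counts = Counter(pixels)
    total = len(pixels)
    # p * log2(1/p) summed over the observed pixel values
    return sum((c / total) * log2(total / c) for c in counts.values())

# A perfectly uniform image is fully predictable: 0 bits per pixel.
print(pixel_entropy([128] * 64))  # 0.0

# An image split evenly between two values needs 1 bit per pixel.
print(pixel_entropy([0, 255] * 32))  # 1.0
```

A raw 8-bit sensor always spends 8 bits per pixel; the gap between 8 and the entropy is the redundancy a lossless encoder can exploit.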
For most pixels there is little pixel-to-pixel variation and a lot of redundancy. It is therefore possible to use the information in the adjacent pixels to encode the reference pixel more efficiently. One way to see this is by looking at a high-pass 2D filter implemented as a convolution. A simple high-pass filter has coefficients that sum to 0. The filter we use in our model has a 3x3 mask and gives the largest weight to the center pixel.
The result of this filter is the difference between the reference pixel at the center and four of its closest neighbors. For a uniform image, all 9 pixels have the same value and the result of this filtering operation is 0. Essentially, the less pixel-to-pixel variation, the smaller the value output by the high-pass filter, and one can intuitively understand that it takes fewer bits to encode a small value than a large one. The weights of the 9 filter coefficients of this model can, of course, be adjusted to adapt to the image content."
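The filtering argument in the primer can be sketched in a few lines of Python. The specific coefficients used here (4 for the center, -1 for each of the four directly adjacent pixels, 0 on the corners) are an illustrative Laplacian-style choice that satisfies the stated properties; the primer does not publish TurboDrive's actual mask:

```python
def high_pass_3x3(image, y, x):
    """Apply a simple 3x3 high-pass mask at pixel (y, x).

    The coefficients sum to 0, the center pixel gets the largest
    weight (4), and the four directly adjacent pixels each get -1,
    so the output is the difference between the reference pixel and
    its four closest neighbors.  (Illustrative weights only.)
    """
    return (4 * image[y][x]
            - image[y - 1][x]   # neighbor above
            - image[y + 1][x]   # neighbor below
            - image[y][x - 1]   # neighbor to the left
            - image[y][x + 1])  # neighbor to the right

# Uniform patch: every neighbor equals the center, so the output is 0.
uniform = [[50, 50, 50],
           [50, 50, 50],
           [50, 50, 50]]
print(high_pass_3x3(uniform, 1, 1))  # 0

# Small pixel-to-pixel variation yields a small residual, which takes
# fewer bits to encode than the raw pixel value.
smooth = [[100, 101, 102],
          [101, 103, 103],
          [102, 103, 104]]
print(high_pass_3x3(smooth, 1, 1))  # 4
```

Encoding these small residuals instead of the raw pixel values is the redundancy reduction the primer describes; because the original pixel can be reconstructed exactly from the residual and its neighbors, no image data is lost.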