Many 4T pixel designers face a difficult trade-off between full well capacity, dark current, and image lag: the bigger the full well and the lower the dark current, the larger the image lag. Small pixel sizes make this trade-off extremely painful, as virtually any combination sacrifices image quality in one way or another.
In that respect, it's interesting to look at Micron's patent US7,417,677, published just a few days ago. Its main idea is to combine the regular readout with a full discharge of the photodiode - see the pictures below:
After the photodiode charge is read out, a low voltage, for example ground, is applied to the floating diffusion and the photodiode, essentially filling the photodiode to its full well. The photodiode is then reset back to a high voltage by the usual readout sequence. So the pixel's history is mostly erased, apart from slowly detrapping charge, which takes a long time to wipe out.
What do we gain? All the conventional image lag measurements, such as the one described in the Nokia-ST SMIA standard, would show very small image lag, if any. This looks great - everybody knows that image lag is a thing to avoid.
What do we lose? Low-light linearity is badly distorted compared with a "true" lag-free sensor. In the best case, a small signal is reduced to an even smaller one, far out of proportion. In the worst case, the small signal virtually disappears. However, small-signal linearity is tricky to measure, and for some reason SMIA does not require linearity to be measured below 10% of full scale.
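A toy simulation can show both effects at once. This is my own illustrative model, not anything from the patent: image lag is modeled as a fixed transfer barrier that traps up to `Q_BARRIER` electrons in the photodiode at every readout, and the patent's "flush" is modeled as simply zeroing the residual charge after each read. The numbers are arbitrary.

```python
# Toy model (illustrative assumption, not from the patent): incomplete charge
# transfer leaves up to Q_BARRIER electrons behind in the photodiode per read.
Q_BARRIER = 50  # electrons trapped behind the transfer-gate barrier (assumed)

def run(exposures, flush):
    """Read out a sequence of per-frame exposures (in electrons).

    flush=False: residual charge stays in the photodiode (conventional pixel).
    flush=True:  full discharge + reset after each read wipes the history.
    """
    outputs, residual = [], 0
    for q_new in exposures:
        q_pd = q_new + residual            # new charge plus carried-over charge
        signal = max(0, q_pd - Q_BARRIER)  # incomplete transfer to the FD
        residual = q_pd - signal           # charge left behind the barrier
        if flush:
            residual = 0                   # the patent's full-discharge trick
        outputs.append(signal)
    return outputs

# Conventional lag test: a dark-to-bright step. Without the flush, the first
# bright frame comes up short and later frames recover -- visible image lag.
# With the flush, every frame reads the same, so the lag spec looks perfect.
print(run([1000, 1000, 1000], flush=False))  # [950, 1000, 1000]
print(run([1000, 1000, 1000], flush=True))   # [950, 950, 950]

# Low-light linearity: with the flush, every frame pays the barrier penalty
# anew, so small signals are crushed out of proportion or vanish entirely.
print(run([10, 30, 60, 100], flush=True))    # [0, 0, 10, 50]
```

Note how in the no-flush case the barrier gets "pre-filled" by the residual charge, so steady-state output is linear and only transitions show lag; the flush hides the lag from a step-response test but charges the same penalty to every frame.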
So, using this idea, one ends up with a great-looking spec but poor real-world low-light performance. Probably low-light linearity data should complement the image lag number to give a true picture of sensor behavior.
Update: Just because a patent exists does not mean it is used by Micron, or used in the manner described.