I had a look at what I could find on compression algorithms, and it seems that lossy compression can change the image slightly at the extremes (lossless formats, by contrast, reproduce the original exactly). I imagine that a totally black (or white) field with only noise would ideally be encoded as a totally black field plus mathematical noise. Then, when decompressed, the totally black field would have noise added back, which would be mathematically correct but in fact different. I understand that transforms related to the Fourier transform, such as the DCT used in JPEG, and their inverses are involved (I dropped out of maths at about that point), and that rounding the transformed values can introduce a level of uncertainty. The choice of where to draw the line that says two things are the same is open to some discussion. (It's 1.99999999999, let's call it 2.)
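To make the idea concrete, here's a minimal sketch of that "transform, round, invert" round trip. It's not any real codec: I'm using NumPy's FFT as a stand-in for JPEG's DCT, and a single coarse quantisation step `q` (my own choice) as the lossy part. The point is just that a nearly black, noisy block doesn't come back bit-identical:

```python
import numpy as np

rng = np.random.default_rng(0)
# A "totally black" 8x8 block with a little sensor noise on top
block = rng.normal(0.0, 1.0, (8, 8))

# Forward transform (FFT here as a stand-in for a codec's DCT)
coeffs = np.fft.fft2(block)

# The lossy step: round the coefficients to coarse steps of size q,
# which throws away the fine detail (mostly the noise)
q = 4.0
quantized = np.round(coeffs / q) * q

# Inverse transform gives back a block that is close, but not identical
restored = np.real(np.fft.ifft2(quantized))

max_err = np.abs(restored - block).max()
print("round trip changed the pixels:", max_err > 0)
```

The "where do we draw the line" question is then the choice of `q`: a small step keeps the noise and the file stays big, a large step calls nearly-black "black" and the decompressed image quietly differs from the original.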