We programmers tend to think our abstractions match reality somehow. Or even if they do leak, that leakage won't spill down several layers of abstraction.

I used to install T1 lines a long time ago. One day we had a customer that complained that their T1 was dropping every afternoon. We ran tests on the line for extended periods of time trying to troubleshoot the problem. Not a single bit error, no matter what test pattern we used. We monitored it while they used it and saw not a single error, except for when the line completely dropped. We replaced the NIU card, no change.

The customer then hit us with, "it looks like it only happens when Jim VNCs to our remote server". Obviously a userland program (VNC) could not possibly cause our NIU to reboot, right? It's several layers "up the stack" from the physical equipment sending the DS1 signal over the copper. But that's what it was. We reliably triggered the issue by running VNC on their network. We ended up changing the NIU and corresponding CO card to a different manufacturer (from Adtran to Soneplex, I think?) to fix the issue. I wish I had had time to really dig into that one, because obviously other customers used VNC with no issues. Nothing else was weird about this standard T1 install. But somehow the combination of our equipment, their networking gear, and that program on that workstation caused the local loop equipment to lose its mind.

This number-swapping story hit me the same way. We would all expect a compression bug to manifest as blurry text or weird artifacts. We would never suspect a clean substitution of a meaningful symbol in what is "just a raster image".

A few thoughts that aren't related to each other.

1. A significant limitation of the approach seems to be that it targets extremely low bitrates where other codecs fall apart, but at these bitrates it incurs problems of its own: artifacts take the form of meaningful changes to the source image instead of blur or blocking, and the decoder has very high computational complexity.

2. Ideally the image should be labeled at the pixel level with reconstruction probabilities, or presented in other ways that demonstrate the ratio of measured vs. fabricated information, like 95% confidence-interval extremal reconstructions or something. It's not clear that the community is doing this level of due diligence, so the voices here are right: it's not a good idea to use it.

3. It would be great to see the best codecs included in the comparison: AVIF and JPEG XL. No surprise that JPEG and WEBP totally fall apart at that bitrate.

You are talking about compressed sensing, which is not lossy compression (compressed sensing can be lossless unless you're dealing with noisy measurements). But say you're doing noisy measurements, you are under-measuring like you say, and you have to fabricate non-random, non-homogeneous reconstruction noise. In that case it would be a very good idea to produce, as they do for lossy compression, both the standard overall bit rate vs. PSNR characterization against alternate direct (non-sparse) measurement ground truths (which have to exist, or else the reconstruction method should be called into question), and the bit rate for each particular sparse measurement. That way people can see how reliable the reconstruction is.
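A minimal sketch of that kind of check, assuming NumPy and Pillow are installed and that `ground_truth.png` (a direct, non-sparse capture) and `reconstruction.png` are placeholder file names. Real per-pixel reconstruction probabilities would have to come from the reconstruction model itself; as a crude stand-in, this just saves a per-pixel absolute-error map alongside the overall PSNR.

```python
import numpy as np
from PIL import Image

# Placeholder inputs: a direct (non-sparse) ground-truth capture and the
# reconstruction produced from the sparse measurement.
ground_truth = np.asarray(Image.open("ground_truth.png").convert("L"), dtype=np.float64)
reconstruction = np.asarray(Image.open("reconstruction.png").convert("L"), dtype=np.float64)

# Overall fidelity, reported the same way lossy codecs are characterized.
mse = np.mean((ground_truth - reconstruction) ** 2)
psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
print(f"PSNR vs. direct measurement: {psnr:.2f} dB")

# Per-pixel deviation, scaled to 0..255 so it can be saved and inspected
# next to the reconstruction (a rough substitute for per-pixel
# reconstruction probabilities).
error = np.abs(ground_truth - reconstruction)
error_map = (255 * error / max(error.max(), 1e-9)).astype(np.uint8)
Image.fromarray(error_map).save("error_map.png")
```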
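And a rough sketch of the codec comparison asked for in point 3 above: encode the same image as JPEG and WebP at a very low quality setting and report bits per pixel and PSNR. This also assumes Pillow and NumPy; `input.png` is a placeholder, and AVIF and JPEG XL are omitted because they generally need extra plugins or their reference encoders rather than stock Pillow.

```python
import io
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    # Peak signal-to-noise ratio for 8-bit images.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Placeholder input image.
src = Image.open("input.png").convert("RGB")
ref = np.asarray(src)
pixels = src.width * src.height

# Very low quality settings push both codecs toward the bitrates
# where they are expected to fall apart.
for fmt, quality in [("JPEG", 5), ("WEBP", 5)]:
    buf = io.BytesIO()
    src.save(buf, format=fmt, quality=quality)
    size_bits = 8 * buf.getbuffer().nbytes
    buf.seek(0)
    decoded = np.asarray(Image.open(buf).convert("RGB"))
    print(f"{fmt}: {size_bits / pixels:.3f} bpp, PSNR {psnr(ref, decoded):.2f} dB")
```

Adding AVIF or JPEG XL to the same loop would mean shelling out to their encoders instead of `src.save`, but the bits-per-pixel and PSNR bookkeeping stays the same.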