Over the years we have had various posts on bit rot, the problem that digitally stored images become inaccessible after just a few years because the system that can decode them is no longer available. Essentially, this might be an ethical problem we cannot solve with technology.
Recently, steganography has been proposed as a possible solution for digital image archiving, so today we revisit the origins of this approach.
To avoid unnecessary excitement, let us reveal from the beginning that it cannot do anything about bit rot in image archives, because it requires a system to decode the data. Sure, you might interject that we can always implement a decoder, but as we saw with PhotoCD, unless there is a commercial product in the form of an operating system feature, this statement is useless in practice. For example, we can read punched cards by simply scanning them, but I doubt you would pursue this route if you still had a stack of cards in your basement.
In the early days of digital color printing, the Feds were the early adopters of technology, so we always had their requirements in mind. One requirement relating to copiers and telecopiers (digital facsimile machines) was to be able to subject a document to seven copy generations without degrading its readability.
At the time this led to religious wars over colorimetric reproduction versus preferred color reproduction, with the idea that colorimetric systems stood a better chance of surviving seven generations, while preferred color reproduction (e.g., saturation boost and contrast enhancement) would make more money, because most customers make just one generation and a better looking copy begets more business.
The main digital color print technologies at the time comprised liquid and dry xerography, acoustic inkjet, thermal transfer, and dye diffusion thermal transfer (D2T2). In industrial research labs we cared mostly about dry xerography, because that is where the biggest profits were.
At that time we were fighting with the triboelectric effect, so halftoning with dispersed dots, like dithering or error diffusion, did not work well and we had to use clustered dot halftoning. We were achieving the best results with Tom Holladay's rotated dots. They were ellipses at a 45º angle, which were robust against the triboelectric effect and prevented the human visual system from connecting the dots into unsightly patterns.
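For readers who have never worked with screens, the idea behind clustered dot halftoning is to threshold the image against a small tiled matrix whose thresholds are highest at the dot center, so toner nucleates there and the dot grows outward as a single blob instead of as scattered pixels. Here is a minimal sketch in Python with a toy 4×4 screen of my own making; it illustrates only the clustering idea, not Holladay's actual rotated-ellipse geometry:

```python
import numpy as np

# Toy 4x4 clustered-dot threshold matrix: the highest thresholds sit in the
# center of the cell, so toner appears there first and the dot grows outward
# as the input gets darker. (Illustrative only, not Holladay's screen.)
CLUSTERED = np.array([
    [ 3, 10,  9,  2],
    [11, 15, 14,  8],
    [ 4, 12, 13,  7],
    [ 0,  5,  6,  1],
], dtype=np.float64)

def clustered_dot_halftone(gray: np.ndarray) -> np.ndarray:
    """Threshold an 8-bit grayscale image against the tiled clustered-dot screen."""
    h, w = gray.shape
    thresh = (CLUSTERED + 0.5) / CLUSTERED.size * 255.0
    tiled = np.tile(thresh, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)  # 1 = paper, 0 = toner
```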
Research is about synergies and serendipity, so at this point I need to digress.
At that time (late 80s) PARC had a big cross-lab project called System 33. It played a big role in Xerox renaming itself The Document Company and had a big effect on society by introducing concepts like Mark Weiser's ubiquitous computing, document management, etc. The basic idea was to take all possible technologies currently in the research stage and connect them together in one big bet.
One of these concepts was Smart Paper (not to be confused with the SmartPaper that then became Gyricon). Every document would have a cover page that could act as a banner page for print and a cover page for fax. This page would also have a barcode universally identifying the document. On the one hand, this provided a solution for the copy generation problem, because a copier could reprint the original document referenced in the barcode instead of the document on the platen (annotations could be lifted from the paper document and overprinted onto the original document).
On the other hand, having a cover page on every document is ugly, and a barcode is even uglier. Although all this is done by document delivery services, you do not want it on all your office documents.
At this point the preceding two threads can be combined. Enter Rob Tow (click here for his account), who in 1988 came up with the idea of encoding information in the images on documents (all office documents tend to have at least a company logo) by using Holladay's rotated dots as a binary code, simply rotating them at ±45º. Rob called them glyphs.
Encoding a document's universal identifier in the logo was just a simple application. A more compelling application was to encode the CIELAB values of an image's pixels in the image's halftones. A dumb copier would simply do whatever it did to copy an image, but a smart copier would decode the image's colorimetric information from the halftones by interpreting them as glyphs.
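To make the encoding concrete, here is a small sketch of the idea: a bit stream is written as a grid of tiny marks slanted at ±45º, and read back by checking which diagonal of each cell carries the ink. This is my own toy reconstruction, not the DataGlyphs implementation; the cell size and layout are arbitrary:

```python
import numpy as np

TILE = 5  # glyph cell size in pixels (an arbitrary choice for this sketch)

def glyph_tile(bit: int) -> np.ndarray:
    """Return a TILE x TILE tile with a diagonal mark: 0 -> '/', 1 -> '\\'."""
    tile = np.zeros((TILE, TILE), dtype=np.uint8)
    for i in range(TILE):
        tile[i, TILE - 1 - i if bit == 0 else i] = 1
    return tile

def encode_bits(bits, cols=32) -> np.ndarray:
    """Lay out a bit sequence as a rectangular grid of glyph tiles."""
    rows = -(-len(bits) // cols)  # ceiling division
    canvas = np.zeros((rows * TILE, cols * TILE), dtype=np.uint8)
    for k, b in enumerate(bits):
        r, c = divmod(k, cols)
        canvas[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE] = glyph_tile(int(b))
    return canvas

def decode_bits(canvas: np.ndarray, n_bits: int, cols=32):
    """Recover bits by checking which diagonal of each tile holds more ink."""
    bits = []
    for k in range(n_bits):
        r, c = divmod(k, cols)
        tile = canvas[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE]
        anti = sum(tile[i, TILE - 1 - i] for i in range(TILE))  # '/' diagonal
        main = sum(tile[i, i] for i in range(TILE))             # '\' diagonal
        bits.append(0 if anti >= main else 1)
    return bits

# Round trip on a few bits:
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode_bits(encode_bits(bits, cols=4), len(bits), cols=4) == bits
```

In the real technology the marks are the halftone dots themselves, so the encoded data rides along with the printed image instead of being a separate pattern.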
At the time we coined the phrase "scan–think–print" for digital copying, so we could safely assume each one of our copiers would always have the additional intelligence to restore an image's colors from the glyphs. The actual image was then just a backup for the dumb copiers from the competition.
We even filed an invention disclosure for an Oliver North copier, which was a DocuTech with a built-in shredder. It would scan each page, encrypt the bitmap, and print the result using glyphs. The original document would be shredded right then and there, as part of the process. The copy could be stored and distributed in plain sight. It could even be copied for at least seven generations, but only when the operator inserted a token with the decryption key into the copier would the copy be the original readable document. This would have solved North's problem, because he would only have had to destroy his token.
Rob got US Patent 5,315,098 on the basic concept, but then it took a lot of work to turn the idea into a robust technology. For one, the glyphs had to survive the infamous seven copy generations. Meg Withgott had come up with the concept of document dry-cleaning, but it took Dan Bloomberg substantial work in mathematical morphology to achieve a robust implementation for glyphs.
Then there were the problems of optical distortions, self-clocking, and error correction, among many others. All told, by the time the technology was done, David Hecht and Noah Flores had obtained 51 more US patents solving all the details.
The final artifact became a Xerox product under the trademark DataGlyphs. A project called Express (Henry Sang, Jr. was one of its leaders) achieved a successful commercial deployment solving the problem of processing the field test reports for Syntex, with others following.
Bit rot refers to images meant for archival applications. In that sense the glyph technology was not invented for bit rot but for document management, i.e., with a limited time scope in mind.
I am also using the term steganography in a loose sense, because it really refers to hiding a secret payload image in a carrier image. As such, steganography has to work only over a very restricted time span, just long enough to smuggle an image through a hostile boundary.
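In that strict sense, the textbook example is least-significant-bit embedding. A minimal sketch, assuming an 8-bit grayscale carrier held in a NumPy array and a payload already serialized to bits:

```python
import numpy as np

def embed_lsb(carrier: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Hide one payload bit per pixel in the least significant bit of the carrier."""
    flat = carrier.flatten().copy()
    if payload_bits.size > flat.size:
        raise ValueError("payload too large for carrier")
    flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(carrier.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits least significant bits."""
    return stego.flatten()[:n_bits] & 1

# Round trip with random data:
carrier = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
payload = np.random.randint(0, 2, size=100, dtype=np.uint8)
stego = embed_lsb(carrier, payload)
assert np.array_equal(extract_lsb(stego, payload.size), payload)
```

The changes are imperceptible to a casual viewer, but they are also fragile: recompression, rescaling, or a single copy generation destroys them, which is exactly why this is a smuggling tool and not an archiving tool.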
A related concept is that of watermarks. Here the system has to be available only for the duration of a copyright, and the owner has a pecuniary incentive to keep the system working during this time.
As far as I know, currently the most promising remedy for bit rot is encoding the images in the DNG format and encapsulating them in a PDF file. However, this archiving path is not yet available at the operating system level. And there will always be the ethical issue of enabling digital image archiving.
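If you want to experiment with that path today, you can already do the encapsulation yourself. A hedged sketch, assuming the attachment API of the pikepdf library (version 3 or later); the file names are hypothetical:

```python
from pathlib import Path
import pikepdf  # assumes pikepdf >= 3, which exposes a PDF attachment API

def wrap_dng_in_pdf(dng_path: str, pdf_path: str) -> None:
    """Encapsulate a DNG file as an attachment inside a PDF wrapper."""
    dng = Path(dng_path)
    pdf = pikepdf.new()
    pdf.add_blank_page()  # a valid PDF needs at least one page
    spec = pikepdf.AttachedFileSpec.from_filepath(
        pdf, dng, description="archival raw image"
    )
    pdf.attachments[dng.name] = spec
    pdf.save(pdf_path)

# Hypothetical file names, just to show the call:
wrap_dng_in_pdf("holiday_001.dng", "holiday_001_archive.pdf")
```

Of course, this does not change the argument above: the wrapper only helps if decoders for PDF and DNG remain available, which is the same bet all over again.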