Color science has always required considerable computing power to calculate complex appearance models for every pixel in an image. For a while, like everybody else, we were riding along with Moore’s law giving us faster and faster machines. However, the intersection of a number of paradigm shifts is making our life particularly hard at the moment.
In this post I will list a few of these disruptive changes. The first is that in our nomadic society everything has become distributed; for example, CIELAB is no longer an adequate model for color proofing. Today a print job may have to be soft-proofed remotely, on a display on a different continent. This entails the use of more sophisticated appearance models like CIECAM02.
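To give a sense of why CIECAM02 is so much heavier than CIELAB, here is a minimal sketch in Python (the function and variable names are my own, not from any library) of just the viewing-condition factors CIECAM02 has to compute before it even looks at a pixel; the constants come from the published CIECAM02 surround table.

```python
import math

# CIECAM02 surround parameters (average / dim / dark) from the CIE specification.
SURROUNDS = {
    "average": {"F": 1.0, "c": 0.69,  "Nc": 1.0},
    "dim":     {"F": 0.9, "c": 0.59,  "Nc": 0.9},
    "dark":    {"F": 0.8, "c": 0.525, "Nc": 0.8},
}

def viewing_condition_factors(L_A, surround="average"):
    """Luminance-level adaptation factor F_L and degree of adaptation D
    for an adapting luminance L_A (cd/m^2) and a surround condition.
    CIELAB has no counterpart to these terms, which is why it cannot
    distinguish a dim proofing booth from a bright office display."""
    F = SURROUNDS[surround]["F"]
    k = 1.0 / (5.0 * L_A + 1.0)
    F_L = 0.2 * k**4 * (5.0 * L_A) + 0.1 * (1.0 - k**4)**2 * (5.0 * L_A)**(1.0 / 3.0)
    D = F * (1.0 - (1.0 / 3.6) * math.exp((-L_A - 42.0) / 92.0))
    return F_L, min(max(D, 0.0), 1.0)

# The same document soft-proofed in two viewing environments:
print(viewing_condition_factors(318.3, "average"))  # bright office display
print(viewing_condition_factors(31.83, "dim"))      # dim proofing booth
```

And this is only the setup; the full model then runs a chromatic adaptation transform and a non-linear response compression for every pixel.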
Moreover, today it is no longer sufficient to consider aperture color: we must account for complex colors in natural scenes. This makes the models even more computing-intensive, as you can appreciate in Wu and Wardman's paper with the CIECAM02-m2 proposal in the latest issue of Color Research and Application, Vol. 32, No. 2, pp. 121-129.
Concomitantly, in the same issue Nayatani and Sakai propose a new concept of color-appearance modeling they call Integrated CAM. This new model can predict tone and nuance in the NCS, which, as all of you who have painted your living room yellow know, is important for extending colorimetry to color design.
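For those who have not worked with the NCS: the nuance of a color is its blackness-chromaticness pair, with whiteness being whatever is left over to 100. Here is a tiny sketch, with a hypothetical parser of my own, of how nuance is read off an NCS notation:

```python
def ncs_nuance(notation):
    """Split an NCS notation such as 'S 1050-Y90R' into its nuance
    (blackness s, chromaticness c) and hue parts. Whiteness is what
    remains: w = 100 - s - c. A hypothetical helper, for illustration."""
    body = notation.replace("S", "").strip()   # drop the edition prefix
    nuance, hue = body.split("-")
    blackness = int(nuance[:2])
    chromaticness = int(nuance[2:])
    return {"blackness": blackness,
            "chromaticness": chromaticness,
            "whiteness": 100 - blackness - chromaticness,
            "hue": hue}

print(ncs_nuance("S 1050-Y90R"))
# {'blackness': 10, 'chromaticness': 50, 'whiteness': 40, 'hue': 'Y90R'}
```

A colorimetric model that can predict these design-oriented attributes is what bridges instrument readings and the way a decorator reasons about a yellow wall.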
Moreover, when calculating ink separations, we can no longer just compute the black component. Today we must separate into 12 or more inks. And to take into account the different directions of paper grain in Europe and the U.S., and the brighteners in papers and inks, the calculations have to be spectral.
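I will not go into any particular spectral model here, but as a sketch of why spectral separations are so much more expensive than computing a black channel, here is the Yule-Nielsen modified spectral Neugebauer predictor with made-up reflectance data; the function name and the toy spectra are mine.

```python
import numpy as np

def ynsn_reflectance(coverages, primaries, n=2.0):
    """Yule-Nielsen modified spectral Neugebauer model:
    R(lambda) = (sum_i a_i * R_i(lambda)**(1/n))**n
    coverages: fractional areas of the Neugebauer primaries (sum to 1)
    primaries: measured spectral reflectances, one row per primary
    n:         empirical Yule-Nielsen factor for optical dot gain
    With 12 inks there are 2**12 = 4096 Neugebauer primaries, each with
    its own measured spectrum, which is where the computing cost comes from."""
    a = np.asarray(coverages)[:, None]   # (primaries, 1)
    R = np.asarray(primaries)            # (primaries, bands)
    return (a * R ** (1.0 / n)).sum(axis=0) ** n

# Toy example: bare paper and one ink at 30% coverage, 31 spectral bands.
bands = 31
paper = np.full(bands, 0.85)             # made-up reflectances
ink   = np.linspace(0.10, 0.60, bands)
print(ynsn_reflectance([0.7, 0.3], [paper, ink]))
```

Multiply those 31 (or more) bands by thousands of primaries and millions of pixels and you see why the black-only shortcut is gone.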
Printers are not the only devices requiring complex calculations. Today we know that video conferencing only works well when we have a very high-quality video and audio channel; when we reduce cost by processing multiple video channels through a single PC, this requires writing very tight code and tuning the performance to the last cycle.
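To make "tuning the performance to the last cycle" concrete: at 30 frames per second each frame must be finished in about 33 ms, and when several channels share one PC the per-channel budget shrinks accordingly. A hypothetical timing harness, with made-up channel counts and a stand-in workload:

```python
import time

FPS = 30
CHANNELS = 4
budget_ms = 1000.0 / FPS / CHANNELS   # per-channel, per-frame budget: ~8.3 ms

def process_frame(frame):
    # Stand-in for the real per-frame imaging pipeline.
    return sum(frame) / len(frame)

frame = list(range(100_000))
t0 = time.perf_counter()
process_frame(frame)
elapsed_ms = (time.perf_counter() - t0) * 1000.0
print(f"frame took {elapsed_ms:.2f} ms of a {budget_ms:.2f} ms budget")
```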
Another disruptive change is that in the enterprise, computing is moving from the desktop to the server, and with the modern nomads wanting to compute on the go without schlepping a suitcase with the battery, this trend is also reaching the consumer. And now the system is multi-modal: it no longer processes just spreadsheets but also shows pictures, movies, and audio-video conferences with the antipodes.
And in the new mega-sized utility data centers the trend is towards virtualization, to keep the energy requirements in check while disentangling functions so that a failure does not become a catastrophe. Therefore, we have to understand virtualization. And because CPUs are now multi-core and hyper-threaded, we have to brush up our concurrent programming skills and learn performance well enough not to be surprised by those 25% of MIPS commonly lost to hyper-threading.
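As a minimal sketch of what brushing up those skills looks like, assuming a per-tile imaging operation of my own invention: spread the tiles across worker processes with the standard library, and remember that the operating system reports logical (hyper-threaded) processors, not physical cores.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def convert_tile(tile):
    """Stand-in for a per-tile color conversion (e.g. a per-pixel power function)."""
    return [((v / 255.0) ** 2.2) * 255.0 for v in tile]

def convert_image(tiles):
    # os.cpu_count() counts *logical* processors: on a hyper-threaded CPU
    # two of them share one physical core, so do not expect 2x throughput.
    workers = os.cpu_count()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert_tile, tiles))

if __name__ == "__main__":
    tiles = [list(range(256)) for _ in range(64)]
    out = convert_image(tiles)
    print(len(out), "tiles converted on", os.cpu_count(), "logical processors")
```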
In our case this is complicated further by yet another paradigm shift: the opening up of GPUs to developers and the move of imaging operations to the graphics card. Using a GPU effectively requires concurrent programming skills beyond those required for multi-threading on multi-core CPUs.
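I will not reproduce a full GPU program here, but to give a taste of the programming model, here is a sketch using CuPy, a NumPy-like GPU array library that I am picking purely for illustration; the hard part in practice is not the per-pixel arithmetic but keeping the data on the device and overlapping transfers with computation.

```python
import numpy as np
import cupy as cp   # NumPy-compatible GPU arrays; requires a CUDA device

def gpu_gamma(image, gamma=2.2):
    """Apply a per-pixel power function on the GPU.
    The expensive part in practice is not this kernel but deciding when
    to move data between host and device memory."""
    device_img = cp.asarray(image, dtype=cp.float32)   # host -> device copy
    result = cp.power(device_img / 255.0, 1.0 / gamma) * 255.0
    return cp.asnumpy(result)                          # device -> host copy

image = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
print(gpu_gamma(image).shape)
```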
The latest issue of the Computer Measurement Group’s MeasureIT online magazine has a good introductory article on these issues, which can get you started with pointers to the latest results appearing in the literature.
PS: as usual, since our software does not support links in comments, I am adding the links here