When I teach my color course, I always stress that the visual system does not work like a camcorder: there is no fixed pixel array, no bitmap, and no homunculus in our head watching bitmap frames on a biological display. True, in the lateral geniculate nucleus (LGN) and cortex we can record (distorted) maps of the visual field, but such maps do not explain color vision.
In the course I have diagrams illustrating how vision is not a hierarchy but a network of bidirectional paths. I also reorder the factors in the tristimulus formulæ so it becomes evident that the color matching functions are measures in the mathematical sense, i.e., probabilities of catching a photon of a given energy. The latter means that, for example, an M cone cannot know whether the photon it has just caught is green or some other color.
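As a sketch of that reordering in my own notation (Φ for the spectral photon flux reaching a retinal point, p_M for the M-cone fundamental read as an absorption probability; neither symbol is from the course material), the expected M-cone catch can be written as

$$N_M = \int p_M(\lambda)\, \Phi(\lambda)\, \mathrm{d}\lambda ,$$

where p_M(λ) plays the role of a measure weighting the incoming photons. Only the scalar N_M is passed on, not the wavelengths at which the absorptions happened.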
Photon detection in the retina is a quantum event, so we can only describe probabilities. The fact that the brain cannot determine the color of a point in the visual field from a single receptor's response at a given time is generally known as the principle of univariance, originally formulated by William Albert Hugh Rushton (1901–1980).
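A toy numerical sketch of univariance, under assumptions of my own (a Gaussian stand-in for the M-cone sensitivity peaking near 535 nm and two arbitrary test wavelengths; real cone fundamentals have different shapes):

```python
import numpy as np

# Hypothetical M-cone spectral sensitivity, modeled as a Gaussian peaking
# near 535 nm. Real cone fundamentals differ, but the argument only needs
# some fixed absorption-probability curve.
def m_cone_sensitivity(wavelength_nm, peak=535.0, width=40.0):
    return np.exp(-0.5 * ((wavelength_nm - peak) / width) ** 2)

# Expected photon catch = absorption probability times incident photon count.
def expected_catch(wavelength_nm, photons):
    return m_cone_sensitivity(wavelength_nm) * photons

# A dim light at the peak wavelength...
catch_at_peak = expected_catch(535.0, photons=1_000)

# ...and a brighter light far from the peak, scaled so that the expected
# catches coincide.
scale = m_cone_sensitivity(535.0) / m_cone_sensitivity(610.0)
catch_off_peak = expected_catch(610.0, photons=1_000 * scale)

print(catch_at_peak, catch_off_peak)  # identical up to rounding
```

Because the cone reports only the scalar catch, the dim greenish light and the brighter reddish light are indistinguishable to it; telling them apart requires comparing the responses of cones with different sensitivities.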
That said, the geometric distribution of the L, M, and S cones in the retina is puzzling. Science Now reports on a recent hypothesis: on an African savanna 10 million years ago, our ancestors awoke to the sun rising over dry, rolling grasslands, vast skies, and patterned wildlife. According to a new study, this complex scenery influenced the evolution of our eyes, guiding the arrangement of the light-sensitive cone cells. The findings might allow researchers to develop machines with more humanlike vision: efficient, accurate, and attuned to the natural world.
Read more at this link: Visions of Africa Shaped Eye Evolution