Social media has been on fire with a debate – not over ISIS, healthcare or global warming – but over the perceived color of a dress. The dress provides a unique opportunity to consider two big questions at the interface of philosophy, neuroscience and psychophysics: is there an objective reality, and do we all experience it the same way? You may see the dress differently the next time you look.
“Space and time are the framework within which the mind is constrained to construct its experience of reality.”
― Immanuel Kant
In the movie The Matrix, the core conceit is that a malevolent artificial intelligence keeps a sleeping humanity enslaved by feeding a simulation of reality directly into the brain. Morpheus confronts Neo with the enormity of the simulation in which he has been living when he says to him of the matrix, “It is the world that has been pulled over your eyes to blind you from the truth.”
It’s appropriate that the visual system is what Morpheus uses to drive his point home, because in many respects we do live in the matrix. Reality, as measured by the instruments of physics, ceases to exist in the first few millimeters beyond the surface of the skin. What we’re all left with is a kind of virtual reality simulation that runs on the wetware of the brain, driven in part by learned expectations of how the world is supposed to work. In other words, we operate not on the real world itself, but on a constantly updated model of reality built from our own expectations (and evolution’s construction). If you’re not cool with that, then the issue of what color a dress may be on the internet can produce a feeling of existential crisis colloquially known as “freaking out.”
There has been significant discussion of the differing perceptions of the dress, which I represent here with a link back to the original image:
[Here’s a link to an online study where you can test your color vision and weigh in for science.]
“What color is this dress?” has been the question buzzing around social media, with answers split between “white and gold” and “blue and black”. But the question isn’t quite fair, because the people asked might field it in one of two ways. One implication might be, “What color does the dress appear to be?” Another might be, “Regardless of how the dress appears in this photo, what color do you guess the dress actually is?”
The difference is subtle, but it may be important. The answer, assuming normal color vision, depends on at least two things: 1) how you think the photo was taken; and 2) how you expect a dress to look when photographed under that particular set of conditions.
Let’s unpack this a bit.
In the first steps of vision, photons that begin their journey at a light source (such as the sun) are reflected from something, perhaps the wing of a butterfly, and enter the eye. Photoreceptors are embedded in the back of the eye within the sheet of cells called the retina (the fact that our eye has a lens, and that the lens forms a light image on the retina, is often compared to a film camera – back in Victorian times a practice called optography even attempted to develop images from the retinas of the dead, to little avail). A photon of light produces a photochemical response as it interacts with proteins called opsins within the photoreceptor, altering the electrical potential of the photoreceptor and interrupting the release of the neurotransmitter glutamate. The resulting waves of signals in the retina are ultimately converted into patterns of spikes, which are transmitted to the rest of the brain. In vision, at least, this photochemical conversion is the last instance of what is “real” interacting with the excitable tissues of our brain. The rest is what you do with it.
And this is where it gets even more interesting, because in vision, context is everything. The spiking neurons of the eye, called retinal ganglion cells, encode images based on local contrast (where light and dark elements sit close together) rather than on absolute brightness. This is a wonderful system for keeping perceptions stable across the wide range of light levels we encounter, but it does produce some wicked illusions like this one:
The horizontal bar in the center has the same intensity throughout, but it does not appear to, because it is superimposed on a gradient that changes the local context as your eyes scan across the image. The local contrast can promote the perception of a dark bar on a light background (on the right), or of a light bar on a dark background (on the left). This dichotomy can be explained by the center-surround receptive fields of retinal ganglion cells, which sum the local contrast and respond vigorously to it, but respond only weakly when uniformly lit.
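The intuition behind that center-surround response can be sketched in a few lines of code. This is a toy one-dimensional filter, not a physiological model: the weights (a positive “center” minus two flanking “surround” terms) are invented for illustration, but they capture why a uniformly lit patch produces little response while an edge produces a strong one.

```python
# Toy 1-D center-surround filter: a hypothetical illustration of how a
# retinal ganglion cell can signal local contrast rather than absolute
# intensity. Kernel weights are illustrative, not physiological values.

def center_surround(signal, center=1.0, surround=0.5):
    """Response at each interior point: the center weight applied to the
    pixel, minus the surround weight applied to each neighbor."""
    return [
        center * signal[i] - surround * (signal[i - 1] + signal[i + 1])
        for i in range(1, len(signal) - 1)
    ]

uniform = [5, 5, 5, 5, 5, 5]   # an evenly lit patch
edge    = [2, 2, 2, 8, 8, 8]   # a dark/light boundary

print(center_surround(uniform))  # → [0.0, 0.0, 0.0, 0.0]: uniform light is "boring"
print(center_surround(edge))     # → [0.0, -3.0, 3.0, 0.0]: strong responses flank the edge
```

Note that the absolute intensities (5 versus 2 or 8) never appear in the output; only their local differences do, which is why the same gray can be reported as “darker” or “lighter” depending on its neighbors.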
Like the dress, a dark thing can appear light, or a light thing might appear dark, depending on its surroundings.
Other illusions point out that what our brain learns about the world can strongly influence our perceptions of what we encounter in it. Unlike the simple example above, these illusions sometimes involve the cerebral cortex, because they draw upon more complex expectations of how we reckon light should behave under real-world conditions.
One of these is the “same grey” illusion, shown here:
While for most people there is a strong sensation that the lower polygon is a lighter grey than the one above it, simply blocking the bright line between the two reveals that they are actually of equal luminance.
The Adelson illusion, shown here (a neat video of it can be found at this link), makes a similar point: our expectations powerfully influence our perception (A and B in the figure are the same shade of gray – see the video). In this case it’s not local contrast at work, but rather our expectations of how grays in shadow should appear:
So far I’ve avoided mentioning color. The retinal ganglion cells of primates also possess color sensitivities that work in opposition to one another (again assuming normal color vision rather than one of the color blindnesses in which it is compromised). Red/green and yellow/blue opponent cells are found in the primate retina, where they feed information into the same sorts of illusory relationships I’ve sketched out for greys. Like this one of two dogs that appear very different at first glance (surely the one on top is darker!):
But upon removing the background they look like this:
This tendency to contextualize color may explain the differences of opinion regarding the color of a dress. For the brain’s calculations to produce a result that is consistent with our personal “matrix”, our model must start with a limited set of assumptions. We might assume – based on the overexposed background – that the foreground image was darker than normal; that the dress was in deep shadow; or that the lighting had a particular set of qualities based on our experience. These expectations set the stage for something called color constancy, illustrated here:
The card second from the left continues to look pink even though the context changes, and many of the other colors remain recognizable even though the filter differs between the photos. Another phenomenon, called chromatic adaptation, can even cause us to see colors that are not there. But while many of the explanations of the differing perceptions of the dress (I think rightly) implicate the family of “color constancy” phenomena, my sense is that what we are seeing is actually a failure of color constancy, in that not everyone agrees on what the “constant” colors are.
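One way to get a feel for what color constancy is computing is a classic engineering stand-in for it: the “gray-world” heuristic, which guesses the illuminant from the average color of a scene and divides it out. The visual system is far more sophisticated than this, and the scene values below are made up, but the sketch shows the flavor of the computation – and why it goes wrong when the assumptions about the illuminant are wrong.

```python
# A minimal sketch of the gray-world color-constancy heuristic:
# estimate the illuminant as the scene's average color, then scale each
# channel so that average comes out neutral. Illustrative only.

def gray_world_correct(pixels):
    """pixels: list of (r, g, b) tuples. Returns illuminant-corrected pixels."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    mean_gray = sum(avg) / 3
    # Per-channel gains that map the scene average onto neutral gray.
    gains = [mean_gray / a for a in avg]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A "white" surface photographed under yellowish light reads as yellow
# (strong red and green, weak blue)...
scene = [(0.9, 0.9, 0.5), (0.45, 0.45, 0.25)]
print(gray_world_correct(scene))  # ...but comes back neutral after correction
```

The catch, relevant to the dress, is that the correction is only as good as the estimate of the illuminant: feed the same pixels in with a different assumption about the light, and the “corrected” colors come out different. Two brains making different illuminant assumptions can honestly report different colors from identical input.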
As for my experience of the dress, I imagined the photographer in a mall taking a photo in the typical dark lighting of a store, with the much brighter main promenade in the background. I can imagine a light dress (white, vanilla, chalk?) with gold (ocher, brown?) trim best fitting the set of conditions I just described, with the slight blue cast (periwinkle?) to the “white” dress reflecting the lighting within the store. My hypothesis is formed by these assumptions when confronted with this image, and shaped by what I have learned to name a color. According to those who took the picture I am apparently wrong – but my brain can’t immediately deal with the notion of a washed-out overexposure that would produce a golden ocher from black, or a white from a deep blue.
We can guess, but can’t say for sure, exactly why some people perceive the dress as white/periwinkle and ocher (as I do) while others maintain they clearly see blue and black. To resolve this, one would need to do a psychophysical experiment under tightly restricted conditions wherein the luminance, contrast, color (including the light source, to examine the degree of metamerism) and viewing distance are controlled. Or, we might manipulate the image in a way that accentuates its bistable nature. Rosa Lafer-Sousa of MIT kindly shared this image (color constancy background by Beau Lotto) in which she adjusted the background and the model’s skin tone to provide strong cues to the ambient light (either yellow or blue). This rendering may help if you are locked into one perception. The dress has not been altered, but I think you’ll agree that your perception of it is different based on the surroundings:
Demonstrations like this support the idea that the bistable nature of the perception of the dress depends heavily on the implicit assumptions one makes about the lighting conditions of the photograph.
In The Matrix, the feeling of déjà vu (literally “already seen”) was explained as a “glitch in the matrix”. Does this mean that our brains are somehow broken because we can’t analyze color like Photoshop? On the contrary, it points out something important about how the brain achieves its remarkable efficiency – by defining the “already seen” conditions under which perceptions are expected to occur. It says that despite the well-worn metaphor of the eye as a camera, we are far from being machines.