How the brain stores and manipulates categories of objects is a fascinating but difficult scientific question. Every noun in a language represents some sort of category. Oak trees, for instance, is a category that groups every individual oak tree. Other words then group those specific categories within more general ones: trees is the category that contains all trees, including oak trees. An even wider one is biological organisms, which includes bacteria, fungi, plants and animals. Semantic categories are related to each other, sometimes in a hierarchical fashion, up to the category everything, which includes all of them. Beyond this basic hierarchical organization, concepts are linked together by numerous rules. For instance, objects in the category tree can grow, look beautiful or age, but they cannot be sad or happy.
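To make the idea concrete, here is a minimal sketch (my own illustration, not anything from the study) of how such a category hierarchy can be modeled: each category points to its parent, and walking up the chain recovers every broader category an object belongs to.

```python
# Illustrative sketch: each semantic category maps to its broader parent.
# The specific links here are just examples for demonstration.
parents = {
    "oak tree": "tree",
    "tree": "plant",
    "plant": "biological organism",
    "biological organism": "everything",
    "animal": "biological organism",
}

def ancestors(category):
    """Walk up the hierarchy, collecting every broader category."""
    chain = []
    while category in parents:
        category = parents[category]
        chain.append(category)
    return chain

print(ancestors("oak tree"))
# ['tree', 'plant', 'biological organism', 'everything']
```

Of course, real semantic knowledge is far richer than a single tree: as noted above, categories also carry rules about what their members can and cannot do.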
Understanding the brain mechanisms responsible for handling those complex relations would likely tell us a great deal about how language and human cognition work. One of the researchers who have been exploring those questions is Jack Gallant. Like most researchers, the people in the Gallant lab publish their results in scientific journals, but in this case they have also taken the time to create a website that anyone can use to see how the brains of their subjects reacted to different semantic categories. Their study was published in Nature Neuroscience1, and we can already visualize brain activity using their Brain Viewer.
The question the authors wanted to answer in this study is whether the brain’s representation of categories is influenced by what we are attending to. Imagine you are viewing a movie and I ask you to attend to humans. Now imagine I show you the same movie and ask you to attend to cars. This should change how you watch the movie in many ways: you might attend to different parts of the screen, notice different aspects of the movie, and think about different things while it plays.
The study finds that attending to cars expands the semantic representation of categories close to cars, while attending to humans expands the representation of categories close to humans. Categories close to humans, for instance “Pedestrian”, “Child”, or “Animal”, were encoded in more brain areas when subjects were instructed to pay attention to humans. Conversely, categories like “Motorcycle”, “Hook” or “Furniture” were encoded better by some brain areas when subjects were paying attention to cars.
This suggests that attention to a specific object in a category also affects the way the brain encodes closely related objects. The tools developed by the lab are unique, and I recommend that anyone interested in the subject try their Brain Viewer tool.
1. Çukur T., Nishimoto S., Huth A.G., Gallant J.L. (2013) Attention during natural vision warps semantic representation across the human brain. Nature Neuroscience doi:10.1038/nn.3381.