Time to think about visual neuroscience

by Poppy Sharp, PhD candidate at the Center for Mind/Brain Sciences, University of Trento.

All is not as it seems

We all delight in discovering that what we see isn’t always the truth. Think optical illusions: as a kid I loved finding the hidden images in Magic Eye stereogram pictures. Maybe you remember a surprising moment when you realised you can’t always trust your eyes. Here’s a quick example. In the image below, cover your left eye and stare at the cross, then slowly move closer to the screen. At some point, instead of seeing what’s really there, you’ll see a continuous black line. This happens when the WAB logo falls on a small patch of the retina where the nerve fibres leave the eye in a bundle, so this patch has no light receptors – a blind spot. When the logo is in your blind spot, your visual system fills in the gap using the available information. Since there are lines on either side, it assumes that the line continues through the blind spot.

Illusions reveal that our perception of the world results from the brain building our visual experiences, making best guesses about what’s really out there. Most of the time you don’t notice, because the visual system has been shaped over millennia of evolution, honed by your lifetime of perceptual experience, and is pretty good at what it does.

WAB vision

For vision scientists, illusions can provide clues about the way the visual system builds our experiences. We refer to our visual experience of something as a ‘percept’, and use the term ‘stimulus’ for the thing which prompted that percept. The stimulus could be something as simple as a flash of light, or more complex like a human face. Vision science is all about carefully designing experiments so we can tease apart the relationship between the physical stimulus out in the world and our percept of it. In this way, we learn about the ongoing processes in the brain which allow us to do everything from recognising objects and people, to judging the trajectory of a moving ball so we can catch it.

We can get insight into what people perceived by measuring their behavioural responses. Take a simple experiment: we show people an arrow indicating whether to pay attention to the left or the right side of the screen, then one or two flashes of light appear briefly on one side, and they have to press a button to indicate how many flashes they saw. There are several behavioural measures we could record here. Did the cue help them tell the difference between one and two flashes more accurately? Did the cue allow them to respond more quickly? Were they more confident in their response? These are all behavioural measures. In addition, we can also look at another type of measure: their brain activity. Recording brain activity allows unique insights into how our experiences of the world are put together, and lets us investigate exciting new questions about the mind and brain.
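To make the idea of behavioural measures concrete, here is a minimal sketch in Python of how accuracy and mean reaction time might be summarised separately for validly and invalidly cued trials. The trial records and field names are invented for illustration; this is not the analysis code from any particular study.

```python
# Hypothetical trial records from the simple cueing experiment described above.
# Each trial stores whether the cue pointed to the flash location ("valid"),
# how many flashes were shown, what the participant reported, and their reaction time.
trials = [
    {"cue": "valid",   "n_flashes": 2, "report": 2, "rt_ms": 480},
    {"cue": "valid",   "n_flashes": 1, "report": 1, "rt_ms": 455},
    {"cue": "invalid", "n_flashes": 2, "report": 1, "rt_ms": 610},
    {"cue": "invalid", "n_flashes": 1, "report": 1, "rt_ms": 540},
]

def summarise(condition):
    """Accuracy and mean reaction time for one cue condition."""
    subset = [t for t in trials if t["cue"] == condition]
    accuracy = sum(t["report"] == t["n_flashes"] for t in subset) / len(subset)
    mean_rt = sum(t["rt_ms"] for t in subset) / len(subset)
    return accuracy, mean_rt

for condition in ("valid", "invalid"):
    acc, rt = summarise(condition)
    print(f"{condition}: accuracy={acc:.2f}, mean RT={rt:.0f} ms")
```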

Rhythms of the brain

Your brain is a complex network of cells using electrochemical signals to communicate with one another. We can take a peek at your brain waves by measuring the magnetic fields associated with the electrical activity of your brain. These magnetic fields are very small, so to record them we need an MEG (magnetoencephalography) scanner, which has many extremely sensitive sensors called SQUIDs (superconducting quantum interference devices). The scanner somewhat resembles a dryer for ladies getting their blue rinse done, but differs in that it’s filled with liquid helium and costs about three million euros.

A single cell firing off an electrical signal would have too small a magnetic field to be detected, but since cells tend to fire together as groups, we can measure these patterns of activity in the MEG signal. Then we look for differences in the patterns of activity under different experimental conditions, in order to reveal what’s going on in the brain during different cognitive processes. For example, in our simple experiment from before with a cue and flashes of light, we would likely find differences in brain activity when these flashes occur at an expected location as compared to an unexpected one.

One particularly fascinating way to characterise patterns of brain activity is in terms of the rhythms of the brain. Brain activity is an ongoing symphony of multiple groups of cells firing in concert. Some groups fire together often (i.e. at high frequency), whereas others fire together in an equally synchronised way but less often (at low frequency). These different patterns of brain waves, generated by cells forming different groups and firing at various frequencies, are vital for many important processes, including visual perception.
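As a rough illustration of what rhythms at different frequencies look like in a recorded signal, the sketch below builds a toy signal containing a slow (10 Hz) and a fast (40 Hz) rhythm and estimates its power spectrum with Welch’s method. This is a generic Python example using NumPy and SciPy, not the actual MEG analysis pipeline; the frequencies and amplitudes are made up.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                      # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)    # two seconds of "recording"

# Toy signal: a strong slow rhythm (10 Hz), a weaker fast rhythm (40 Hz), plus noise.
signal = (2.0 * np.sin(2 * np.pi * 10 * t)
          + 0.5 * np.sin(2 * np.pi * 40 * t)
          + 0.3 * np.random.randn(t.size))

# Welch's method estimates power as a function of frequency, so the two
# rhythms show up as peaks at 10 Hz and 40 Hz.
freqs, power = welch(signal, fs=fs, nperseg=512)
for f_target in (10, 40):
    idx = np.argmin(np.abs(freqs - f_target))
    print(f"power near {f_target} Hz: {power[idx]:.3f}")
```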

What I’m working on

For as many hours of the day as your eyes are open, a flood of visual information is continuously streaming into your brain. I’m interested in how the visual system makes sense of all that information and prioritises some things over others. Like many researchers, we show simple stimuli in a controlled setting in order to ask questions about fundamental, low-level visual processes. We then hope that our insights generalise to more natural processing in the busy and changeable visual environment of the ‘real world’. My focus is on temporal processing. Temporal processing can refer to a lot of things, but in my projects it means how you deal with stimuli occurring very close together in time (tens of milliseconds apart). I’m investigating how this is influenced by expectations, so in my experiments we manipulate expectations about where in space stimuli will appear, and also about when they will appear. This is achieved using simple visual cues that direct your attention to, for example, a certain area of the screen.

When stimuli rapidly follow one another in time, sometimes it’s important to parse them into separate percepts, whereas other times it’s more appropriate to integrate them together. There’s always a tradeoff between the precision and stability of the percepts built by the visual system. The right balance between splitting stimuli into separate percepts and blending them into a combined percept depends on the situation and what you’re trying to achieve at that moment.

Let’s illustrate some aspects of this idea about parsing versus integrating stimuli with a story, out in the woods at night. If some flashes of light come in quick succession from the undergrowth, this could be the moonlight reflecting off the eyes of a moving predator. In this case, your visual system needs to integrate these stimuli into a percept of the predator moving through space. But a similar set of several stimuli flashing up from the darkness could also be multiple predators next to each other, in which case it’s vital that you parse the incoming information and perceive them separately. Current circumstances and goals determine the mode of temporal processing that is most appropriate.

I’m investigating how expectations about where stimuli will be can influence your ability to either parse them into separate percepts or to form an integrated percept. Through characterising how expectations influence these two fundamental but opposing temporal processes, we hope to gain insights not only into the processes themselves, but also into the mechanisms of expectation in the visual system. By combining behavioural measures with measures of brain activity (collected using the MEG scanner), we are working towards new accounts of the dynamics of temporal processing and factors which influence it. In this way, we better our understanding of the visual system’s impressive capabilities in building our vital visual experiences from the lively stream of information entering our eyes.


What can the brain learn from itself? Neurofeedback for understanding brain function.

By: Dr. Kathy L. Ruddy

STEM editor: Francesca Farina

The human brain has a remarkable capacity to learn from feedback. During daily life, as we interact with our environment, the brain processes the consequences of our actions and uses this ‘feedback’ to update its stored representations, or ‘blueprints’, for how to perform certain behaviours optimally. This learning-by-feedback process occurs whether or not we are consciously aware of it.

The more interesting implication of this process is that the brain can also ‘learn from itself’, forming the basis of the ‘neurofeedback’ phenomenon.

Basically, if we stick an electrode on the head to record the brain’s electrical rhythms (or ‘waves’), the brain can learn to change the rhythm simply by watching feedback displayed on a computer screen. Because we know that the presence of particular types of brain rhythms can be beneficial or detrimental depending on the context and the task being performed, the ability to volitionally change them may have useful applications for enhancing human performance and treating pathological patterns of brain activity.

In recent years, however, neurofeedback has earned itself a bad reputation in scientific circles. This is mainly due to the premature commercialisation of the technique, which is now being ‘sold’ as a treatment for clinical disorders – for which the research evidence is currently still lacking – and even for home use to alleviate symptoms of stress, migraine, depression, anxiety, and essentially any other complaint you can think of! The problem with all of this is that we, as scientists, still understand very little about the brain rhythms in the first place. Where do they come from? What do they mean? Are they simply a by-product of other ongoing brain processes, or does the rhythm itself set the ‘state’ of a particular brain region, enhancing or inhibiting its processing capabilities?

In my own research, I am currently working towards bridging this gap by trying to make the connection between fundamental brain mechanisms, behaviours, and their associated electrical rhythms or brain ‘states’.

By training people to put their brain into different ‘states’, we were – for the first time – able to glimpse how brain rhythms directly influence these states in humans. We focused on the motor cortex, the part of the brain that controls movement, because there is a vast ongoing debate in the literature concerning whether changing the state of this region has implications for movement rehabilitation following stroke or other brain injury. Some argue that if the motor cortex is in a more ‘excitable’ state, traditional stroke rehabilitation therapies have enhanced effectiveness, compared to when the same region is more ‘inhibited’. Brain stimulation directly targeting the motor cortex has been used in the past in an attempt to achieve this more plastic, excitable state, but with mixed success and small effects that have proven difficult to reproduce.

In our investigation we used brain stimulation in a non-traditional way to achieve robust bidirectional changes in the state of the motor cortex. Transcranial magnetic stimulation (TMS) can be used to measure the excitability (state) of the motor system. By applying a magnetic pulse to the skull over the exact location in the brain that controls the finger, a response can be measured in the finger muscles, referred to as a motor-evoked potential (MEP). The size of the MEP tells us how excitable the system is. We developed a form of neurofeedback training in which the size of each MEP was displayed to participants on screen, and they were rewarded for either large or small MEPs with positive auditory feedback and a dollar symbol. This type of neurofeedback mobilises learning mechanisms in the brain, as participants develop mental strategies and observe the consequences of their thought processes on the state of their motor system. Over a period of five days, participants were able to make their MEPs significantly bigger or smaller by changing the excitatory/inhibitory state of the motor cortex.
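To sketch the logic of this kind of feedback loop, here is a toy Python example of a reward rule based on MEP size. The threshold, margin, amplitudes and reward are invented for illustration and are not the parameters used in the study.

```python
# Toy sketch of an MEP-based neurofeedback reward rule (illustrative only:
# the baseline, margin and amplitudes below are made up, not the study's values).

def give_feedback(mep_amplitude_mv, baseline_mv, train_up=True, margin=0.10):
    """Reward the participant when the MEP moves in the trained direction.

    train_up=True rewards MEPs larger than baseline (a more excitable cortex),
    train_up=False rewards smaller MEPs (a more inhibited cortex).
    """
    if train_up:
        success = mep_amplitude_mv > baseline_mv * (1 + margin)
    else:
        success = mep_amplitude_mv < baseline_mv * (1 - margin)
    return "positive tone + $" if success else "no reward"

baseline = 1.0  # pretend baseline MEP amplitude in millivolts
for amplitude in (0.8, 1.05, 1.3):
    print(amplitude, "->", give_feedback(amplitude, baseline, train_up=True))
```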

Our next question was: how exactly is this change of state being achieved in the brain? Are electrical brain rhythms changing in the motor cortex to mediate the changing brain state? Using this new tool to change brain state experimentally, we asked participants to return for one final training session, this time while we recorded their brain rhythms (using EEG) during the TMS-based neurofeedback. This revealed that when the motor cortex was more excitable, there was a significant local increase in high-frequency (gamma) brainwaves (30–50 Hz). By contrast, higher alpha waves (8–14 Hz) were associated with a more ‘inhibited’ brain state, but were not as influential in setting the excitability of the motor cortex as the gamma waves.
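The alpha and gamma ‘waves’ here are simply power in particular frequency ranges of the recorded signal. As a hedged illustration (generic Python, not the EEG pipeline used in the study), band power can be summarised by integrating a power spectrum over each range:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Approximate power between `low` and `high` Hz from a Welch spectrum."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)   # one-second windows
    mask = (freqs >= low) & (freqs <= high)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

# Toy EEG-like signal containing both an alpha and a gamma component.
fs = 500
t = np.arange(0, 4, 1 / fs)
eeg_like = (np.sin(2 * np.pi * 10 * t)
            + 0.4 * np.sin(2 * np.pi * 40 * t)
            + 0.2 * np.random.randn(t.size))

alpha = band_power(eeg_like, fs, 8, 14)    # band linked to the 'inhibited' state
gamma = band_power(eeg_like, fs, 30, 50)   # band linked to the 'excitable' state
print(f"alpha power: {alpha:.3f}, gamma power: {gamma:.3f}")
```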

The implications of these findings are twofold. Firstly, having a tool to robustly change the excitatory/inhibitory balance of the motor cortex gives us experimental control over this process, and thus opens several doors for new fundamental scientific research into the neural mechanisms that determine the state of the motor system. Secondly, this approach may have future clinical potential as a non-invasive and non-pharmacological way to ‘prime’ the motor cortex in advance of movement rehabilitation therapy, by putting the brain in a state that is more receptive to re-learning motor skills. As the training is straightforward, pain-free and enjoyable for the participant, we believe that this approach may pave the way for a new wave of research using neurofeedback in place of traditional electrical brain stimulation, both as a scientific tool and as an adjunct to commonly used stroke rehabilitation practices.

 


Got your hands full? – How the brain plans actions with different body parts

by Phyllis Mania

STEM editor: Francesca Farina

Imagine you’re carrying a laundry basket in your hand, dutifully pursuing your domestic tasks. You open the door with your knee, press the light switch with your elbow, and pick up a lost sock with your foot. Easy, right? Normally, we perform these kinds of goal-directed movements with our hands. Unsurprisingly, hands are also the most widely studied body part, or so-called effector, in research on action planning. We do know a fair bit about how the brain prepares movements with a hand (not to be confused with movement execution). You see something desirable, say, a chocolate bar, and that image goes from your retina to the visual cortex, which is roughly located at the back of your brain. At the same time, an estimate of where your hand is in space is generated in somatosensory cortex, which is located more frontally. Between these two areas sits an area called posterior parietal cortex (PPC), in an ideal position to bring these two pieces of information – the seen location of the chocolate bar and the felt location of your hand – together (for a detailed description of these so-called coordinate transformations see [1]). From here, the movement plan is sent to primary motor cortex, which directly controls movement execution through the spinal cord. What’s interesting about motor cortex is that it is organised like a map of the body, so the muscles that are next to each other on the “outside” are also controlled by neuronal populations that are next to each other on the “inside”. Put simply, there is a small patch of brain for each body part we have, a phenomenon known as the motor homunculus [2].


Photo of an EEG, by Gabriele Fischer-Mania

As we all know from everyday experience, it is pretty simple to use a body part other than the hand to perform a purposeful action. But the findings from studies investigating movement planning with different effectors are not clear-cut. Usually, the paradigm used in this kind of research works as follows: the participants look at a centrally presented fixation mark and rest their hand in front of the body midline. Next, a dot indicating the movement goal is presented to the left or right of fixation. The colour of the dot tells the participants whether they have to use their hand or their eyes to move towards the dot. Only when the fixation mark disappears are the participants allowed to perform the movement with the instructed effector. The delay between the presentation of the goal and the actual movement is important, because muscle activity affects the signal that is measured from the brain (and not in a good way). The subsequent analyses usually focus on this delay period, as the signal during it is thought to reflect movement preparation. Many studies assessing the activity preceding eye and hand movements have suggested that PPC is organised in an effector-specific manner, with different sub-regions representing different body parts [3]. Other studies report contradictory results, with overlapping activity for hand and eye [4].


EEG photo, as before.

But here’s the thing: we cannot stare at a door until it finally opens itself, and I imagine picking up that lost piece of laundry with my eye would be rather uncomfortable. Put more scientifically, hands and eyes are functionally different. Whereas we use our hands to interact with the environment, our eyes are a key player in perception. This is why my supervisor came up with the idea of comparing hands and feet, as virtually all goal-directed actions we typically perform using our hands can also be performed with our feet (e.g., see http://www.mfpa.uk for mouth and foot painting artists). Surprisingly, it turned out that the portion of PPC that was previously thought to be exclusively dedicated to hand movement planning showed virtually the same fMRI activation during foot movement planning [5]. That is, the brain does not seem to differentiate between the two limbs in PPC. Wait, the brain? Whereas fMRI is useful for showing us where in the brain something is happening, it does not tell us much about what exactly is going on in neuronal populations. Here, the high temporal resolution of EEG allows for a more detailed investigation of brain activity. During my PhD, I used EEG to look at hands and feet from different angles (literally – I looked at a lot of feet). One way to quantify possible effects is to analyse the signal in the frequency domain. Different cognitive functions have been associated with power changes in different frequency bands. Based on a study that found eye and hand movement planning to be encoded in different frequencies [6], my project focused on identifying a similar effect for foot movements.


Source: Pixabay

This is not as straightforward as it might sound, because there are a number of things that need to be controlled for. To make the comparison between the two limbs as valid as possible, movements should start from a similar position and end at the same spot. And to avoid expectancy effects, movements with both limbs should alternate randomly. As you can imagine, it is quite challenging to find a comfortable position to complete this task (most participants did still talk to me after the experiment, though). Another important thing to keep in mind is that foot movements are somewhat more sluggish than hand movements, owing to physical differences between the limbs. This can be accounted for by including different types of movements: some easy, some difficult. When the presented movement goal is rather big, it’s easier to hit than when it’s smaller. Unsurprisingly, movements to easy targets are faster than movements to difficult targets, an effect that has long been known for the hand [7] but had not yet been shown for the foot. Even though this effect is obviously observed during movement execution, it has been shown to arise already during movement planning [8].
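This speed–accuracy relationship is usually described by Fitts’ law [7], which says movement time grows with the ‘index of difficulty’ of the target. Here is a minimal Python sketch of the idea; the constants a and b below are made up, whereas in practice they are fitted to a person’s measured movements.

```python
import math

def fitts_movement_time(distance_mm, width_mm, a=0.10, b=0.15):
    """Fitts' law: movement time (s) = a + b * log2(2 * distance / width).

    The intercept `a` and slope `b` are illustrative values, not fitted data.
    """
    index_of_difficulty = math.log2(2 * distance_mm / width_mm)
    return a + b * index_of_difficulty

# A big (easy) target versus a small (difficult) one at the same distance.
print(f"easy target:      {fitts_movement_time(200, 50):.3f} s")
print(f"difficult target: {fitts_movement_time(200, 10):.3f} s")
```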

So, taking a closer look at actual movements can also tell us a fair bit about the underlying planning processes. In my case, “looking closer” meant recording hand and foot movements using infrared lights, a procedure called motion capture. Essentially the same method is used to create the characters in films like Avatar and The Hobbit, but rather than making fancy films I used the trajectories to extract kinematic measures like velocity and acceleration. Again, it turned out that hands and feet have more in common than it may seem at first sight. And it makes sense – as we evolved from quadrupeds (i.e., mammals walking on all fours) to bipeds (walking on two feet), the neural pathways that used to control locomotion on all fours likely evolved into the system now controlling skilled hand movements [9].
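As an illustration of how velocity and acceleration fall out of a recorded trajectory, here is a toy Python sketch using a simulated one-dimensional reach. The sampling rate, trajectory and units are invented, and this is not the project’s actual motion-capture analysis.

```python
import numpy as np

# Hypothetical motion-capture data: marker position (mm) sampled at 200 Hz
# during a smooth 300 mm reach lasting one second.
fs = 200
t = np.arange(0, 1, 1 / fs)
position = 300 * (1 - np.cos(np.pi * t)) / 2

# Kinematic measures are obtained by differentiating the trajectory.
velocity = np.gradient(position, 1 / fs)        # mm/s
acceleration = np.gradient(velocity, 1 / fs)    # mm/s^2

print(f"peak velocity:     {velocity.max():.0f} mm/s")
print(f"peak acceleration: {acceleration.max():.0f} mm/s^2")
```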

What’s most fascinating to me is the incredible speed and flexibility with which all of this happens. We hardly ever give a thought to the seemingly simple actions we perform every minute (and it’s useful not to, otherwise we’d probably stand rooted to the spot). Our brain is able to take in a vast amount of information – visual, auditory, somatosensory – filter it effectively, and generate motor commands within milliseconds. And we haven’t even found out a fraction of how it all works. Or, to use a famous quote [10]: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

[1] Batista, A. (2002). Inner space: Reference frames. Current Biology, 12(11), R380-R383.

[2] Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389-443.

[3] Connolly, J. D., Andersen, R. A., & Goodale, M. A. (2003). FMRI evidence for a ‘parietal reach region’ in the human brain. Experimental Brain Research, 153(2), 140-145.

[4] Beurze, S. M., de Lange, F. P., Toni, I., & Medendorp, W. P. (2009). Spatial and effector processing in the human parietofrontal network for reaches and saccades. Journal of Neurophysiology, 101(6), 3053-3062.

[5] Heed, T., Beurze, S. M., Toni, I., Röder, B., & Medendorp, W. P. (2011). Functional rather than effector-specific organization of human posterior parietal cortex. The Journal of Neuroscience, 31(8), 3066-3076.

[6] Van Der Werf, J., Jensen, O., Fries, P., & Medendorp, W. P. (2010). Neuronal synchronization in human posterior parietal cortex during reach planning. Journal of Neuroscience, 30(4), 1402-1412.

[7] Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381.

[8] Bertucco, M., Cesari, P., & Latash, M. L. (2013). Fitts’ law in early postural adjustments. Neuroscience, 231, 61-69.

[9] Georgopoulos, A. P., & Grillner, S. (1989). Visuomotor coordination in reaching and locomotion. Science, 245(4923), 1209-1210.

[10] Pugh, E. M., quoted in Pugh, G. E. (1977). The Biological Origin of Human Values.