The mysterious lives of chimaera sharks & the effects of deep sea fishing


by Melissa C. Marquez.


A rhinochimaera

“You’re not what I expected when you said you were a shark scientist.” Gee, thanks. I can’t tell you how many times I’ve heard that I don’t live up to someone’s preconceived mental image of what I should look like as a “shark scientist.” It doesn’t change the fact that I’m a marine biologist though, and that I am very passionate about my field.

I recently wrapped up my Master's in Marine Biology, focusing on “Habitat use throughout a Chondrichthyan’s life.” Chondrichthyans (class Chondrichthyes) are sharks, skates, rays, and chimaeras. Today, there are more than 500 known species of sharks and about 500 known species of rays, with many more being discovered every year.

Over the last few decades, much effort has been devoted towards evaluating and reducing bycatch (the part of a fishery’s catch that is made up of non-target species) in marine fisheries. There has been a particular focus on quantifying the risk to Chondrichthyans, primarily because of their high vulnerability to overfishing. My study focused on five species of deep sea chimaeras (not the mythical Greek ones, but the just-as-mysterious real animal) found in New Zealand waters:

• Callorhynchus milii (elephant fish),

• Hydrolagus novaezealandiae (dark ghost shark),

• Hydrolagus bemisi (pale ghost shark),

• Harriotta raleighana (Pacific longnose chimaera),

• Rhinochimaera pacifica (Pacific spookfish).


These species were chosen because they cover a large depth range (7 m – 1306 m) and had been noted as abundant despite extensive fisheries operating in their presumed habitats; they were also of special interest to the Deepwater Group (who funded the scholarship for my MSc).

Although there is no set definition of what constitutes the “deep sea,” it is conventionally regarded as >200 m depth and beyond the continental shelf break (Thistle, 2003); in this zone, a number of species are considered to have low productivity, making them highly vulnerable targets of commercial fishing (FAO, 2009). Deep sea fisheries have become increasingly economically important over the past few years as numerous commercial fisheries become overexploited (Koslow et al., 2000; Clark et al., 2007; Pitcher et al., 2010). Major commercial fisheries exist for deep sea species such as orange roughy (Hoplostethus atlanticus), oreos (several species of the family Oreosomatidae), cardinalfish, grenadiers (such as Coryphaenoides rupestris) and alfonsino (Beryx splendens). Many of these deep sea fisheries have not been sustainable (Clark, 2009; Pitcher et al., 2010; Norse et al., 2012), with most of the stocks having undergone substantial declines.


Deep sea fishing can also cause environmental harm (Koslow et al., 2001; Hall-Spencer et al., 2002; Waller et al., 2007; Althaus et al., 2009; Clark and Rowden, 2009). Deep sea fisheries use various types of gear that can lead to lasting scars: bottom otter trawls, bottom longlines, deep midwater trawls, sink/anchor gillnets, pots and traps, and more. While none of this gear is used solely in deep sea fisheries, all of it catches animals indiscriminately and can also damage important habitats (such as centuries-old deep sea coral). In fact, orange roughy trawling scars on soft-sediment areas were still visible five years after all fishing stopped in certain areas off New Zealand (Clark et al., 2010a).

Risk assessment involves evaluating the distributional overlap of the fish with the fisheries, where fish distribution is influenced by habitat use. For sharks, that risk assessment includes a lot of variables: the sheer number of shark species (approximately 112 have been recorded from New Zealand waters) with many different lifestyles, differences in the market value of different body parts (like meat, oil, fins, cartilage), which body parts are utilised for each species (for example, some sharks have both their fins and meat utilised but not their oil; some just have their fins taken), and how sharks are identified once on the market (Fisheries Agency of Japan, 1999; Vannuccini, 1999; Yeung et al. 2000; Froese and Pauly, 2002; Clarke and Mosqueira, 2002).

In order to carry out a risk assessment, you have to know your study animals pretty well. It should come as no surprise that little is known about the different life history stages of chimaeras, so I did the next best thing and looked at Chondrichthyans in general. My literature review synthesized over 300 published observations of habitat use across different life history stages; from there, I used New Zealand research vessel catch data (provided by NIWA, the National Institute of Water and Atmospheric Research) and separated the records by species, sex, size, and maturity (when available). I then dove into the deep end of a computer language called “R,” which is used for statistical computing and graphics. Using R, I searched the catch compositions for the signature of each life history stage (for example, looking for smaller, immature fish of both sexes and few or no adults when searching for a nursery ground).
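To give a flavour of that filtering step, here is a minimal sketch (in Python rather than R, and with made-up records; the field names, stations, and adult-fraction threshold are invented for illustration and are not the thesis’s actual criteria):

```python
# Toy catch records: each fish tagged with the survey station it came from.
records = [
    {"station": "A", "length_cm": 18, "mature": False},
    {"station": "A", "length_cm": 22, "mature": False},
    {"station": "A", "length_cm": 20, "mature": False},
    {"station": "A", "length_cm": 55, "mature": True},
    {"station": "B", "length_cm": 60, "mature": True},
    {"station": "B", "length_cm": 58, "mature": True},
    {"station": "B", "length_cm": 25, "mature": False},
]

def looks_like_nursery(catch, max_adult_fraction=0.25):
    """Flag a catch dominated by immature fish, with few or no adults."""
    adults = sum(1 for fish in catch if fish["mature"])
    return adults / len(catch) <= max_adult_fraction

# Group the records by station, then test each station against the criterion.
by_station = {}
for fish in records:
    by_station.setdefault(fish["station"], []).append(fish)

candidates = [s for s, catch in by_station.items() if looks_like_nursery(catch)]
print(candidates)  # ['A'] — mostly immature fish, so a candidate nursery ground
```

The real analysis works the same way in spirit: define the expected catch composition for a life history stage up front, then ask which locations in the survey data match it.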

Our approach in this thesis differs in that we first developed hypotheses for the characteristics of different kinds of habitat use, rather than “data mining” for patterns; it therefore takes a structured, scientific approach to determining shark habitats. Our results showed that some life history stages and habitats could be identified for certain species, whereas others could not.

Pupping ground criteria were met for Callorhynchus milii (elephant fish), Hydrolagus novaezealandiae (dark ghost shark), and Hydrolagus bemisi (pale ghost shark); nursery ground criteria were met for Callorhynchus milii; mating ground criteria were met for Callorhynchus milii, Hydrolagus novaezealandiae, Hydrolagus bemisi, and Harriotta raleighana (Pacific longnose chimaera); lek-like mating criteria were met for Hydrolagus novaezealandiae. Note: In lek-like mating, males perform feats of physical endurance to impress females, and the female chooses her mate.


Ghost shark

These complex—and barely understood—deep sea ecosystems can be overwhelmed by the fishing technologies that rip through them. Like sharks, many deep sea animals lead a K-selected lifestyle, meaning they take a long time to reach sexual maturity and, once they are sexually active, give birth to few young after a long gestation period. This lifestyle makes these creatures especially vulnerable, since they cannot repopulate quickly if overfished.

In order to manage the environmental impact of deep sea fisheries, scientists, policymakers and stakeholders have to identify ways to help re-establish delicate biological functions after the damage is done. Recovery—defined as the return to the conditions that existed before damage by fishing activities—is not a concept unique to deep sea communities; its pace depends on site-specific factors that are often poorly understood and difficult to estimate. Little is known about the biological histories and structures of the deep sea, so rates of recovery may be much slower than in shallow environments.

Management of the seas, especially the deep sea, lags behind that of land and of the continental shelf, but a number of protection measures are already being put in place. These actions include, but are not limited to,

• regulating fishing methods and gear types,

• specifying the depth at which one can fish,

• limiting the volume of catch and of bycatch,

• applying move-on rules, and

• closing areas of particular importance.

Modifications to trawl gear and the way it is used have made these usually heavy tools less destructive (Mounsey and Prado, 1997; Valdemarsen et al. 2007; Rose et al. 2010; Skaar and Vold 2010). Fishery closures are becoming more common, with large parts of EEZs (exclusive economic zones) closed to bottom trawling (e.g. New Zealand, North Atlantic, Gulf of Alaska, Bering Sea, USA waters, Azores) (Hourigan, 2009; Morato et al. 2010); the effectiveness of these closures is yet to be established.

And while this approach to fisheries management, dubbed the “ecosystem approach,” is widely advocated, it does not help every deep sea animal or structure. Sessile organisms (those that cannot move) are still in danger of being destroyed. As such, ecosystem-based marine spatial planning and management may be the most effective fisheries management strategy for protecting vulnerable deep sea critters (Clark and Dunn, 2012; Schlacher et al. 2014). This strategy can include marine protected areas (MPAs) that restrict fishing in specific locations, as well as other management tools, such as zoning or spatial user rights, which shape the distribution of fishing effort more effectively. Using spatial management measures well requires new models and data, and such measures will always have limitations, given how little data on the deep sea exist and how hard this environment is to reach.

So what does it all mean for my thesis? Well, for one thing, there is growing acknowledgement that these unique ecosystems require special protection. And, as any scientist knows, there are still many unanswered questions about just how important this environment is (especially certain structures).


A juvenile Elephantfish, Callorhinchus milii. Source: Rudie H. Kuiter / Aquatic Photographics

On a more shark-related note, not all life-history stage habitats were found for my chimaeras. This may be because they lie outside the coverage of the data set (and likely also of commercial fisheries), or because they do not actually exist for some Chondrichthyans. That cliffhanger is research for another day, I suppose…

This project could not have been done without the endless support of my family and friends; those who have supported me since day one of my marine biology adventures. They’re the ones who stick up for me whenever I hear, “You’re not what I expected when you said you were a shark scientist.” I am not really sure what the stereotype of a shark scientist is supposed to be; thankfully, I grew up somewhere you accept and judge people by who they are and what they do. I see this as a challenge, though: it sets the stage for me to show that the mind of a shark scientist can come in all kinds of packages.

As a final note, I’d like to thank the New Zealand Seafood Scholarship, the Deepwater Group, and the researchers from the National Institute of Water and Atmospheric Research (NIWA) who provided funding, insight and expertise that greatly assisted the research. The challenge of venturing into complex theories is that not everyone will agree with every interpretation or conclusion of any research, but it is a basis for discussion, which can only be good for all.




  • Thistle, D. 2003. The deep-sea floor: an overview. Ecosystems of the Deep Oceans. Ecosystems of the World, 28.
  • FAO. 2009. Management of Deep-Sea Fisheries in the High Seas. FAO, Rome, Italy.
  • Koslow, J. A., Boehlert, G. W., Gordon, J. D. M., Haedrich, R. L., Lorance, P., and Parin, N. 2000. Continental slope and deep-sea fisheries: implications for a fragile ecosystem. ICES Journal of Marine Science, 57: 548–557.
  • Clark, M. R., and Koslow, J. A. 2007. Impacts of fisheries on seamounts. In Seamounts: Ecology, Fisheries and Conservation, pp. 413–441. Ed. by T. J. Pitcher, T. Morato, P. J. B. Hart, M. R. Clark, N. Haggen, and R. Santos. Blackwell, Oxford.
  • Pitcher, T. J., Clark, M. R., Morato, T., and Watson, R. 2010. Seamount fisheries: do they have a future? Oceanography, 23: 134–144.
  • Clark, M. R. 2009. Deep-sea seamount fisheries: a review of global status and future prospects. Latin American Journal of Aquatic Research, 37: 501–512.
  • Norse, E. A., Brooke, S., Cheung, W. W. L., Clark, M. R., Ekeland, L., Froese, R., Gjerde, K. M., et al. 2012. Sustainability of deep-sea fisheries. Marine Policy, 36: 307–320.
  • Koslow, J. A., Gowlett-Holmes, K., Lowry, J. K., O’Hara, T., Poore, G. C. B., and Williams, A. 2001. Seamount benthic macrofauna off southern Tasmania: community structure and impacts of trawling. Marine Ecology Progress Series, 213: 111–125.
  • Hall-Spencer, J., Allain, V., and Fossa, J. H. 2002. Trawling damage to Northeast Atlantic ancient coral reefs. Proceedings of the Royal Society of London Series B: Biological Sciences, 269: 507–511.
  • Waller, R., Watling, L., Auster, P., and Shank, T. 2007. Anthropogenic impacts on the corner rise seamounts, north-west Atlantic Ocean. Journal of the Marine Biological Association of the United Kingdom, 87: 1075–1076.
  • Althaus, F., Williams, A., Schlacher, T. A., Kloser, R. K., Green, M. A., Barker, B. A., Bax, N. J., et al. 2009. Impacts of bottom trawling on deep-coral ecosystems of seamounts are long-lasting. Marine Ecology Progress Series, 397: 279–294.
  • Clark, M. R., and Rowden, A. A. 2009. Effect of deep water trawling on the macro-invertebrate assemblages of seamounts on the Chatham Rise, New Zealand. Deep Sea Research I, 56: 1540–1554.
  • Clark, M. R., Bowden, D. A., Baird, S. J., and Stewart, R. 2010a. Effects of fishing on the benthic biodiversity of seamounts of the “Graveyard” complex, northern Chatham Rise. New Zealand Aquatic Environment and Biodiversity Report, 46: 1–40.
  • Fisheries Agency of Japan. 1999. Characterization of morphology of shark fin products: a guide of the identification of shark fin caught by tuna longline fishery. Global Guardian Trust, Tokyo.
  • Vannuccini, S. 1999. Shark utilization, marketing and trade. Fisheries Technical Paper 389. Food and Agriculture Organization, Rome.
  • Yeung, W. S., Lam, C. C., and Zhao, P. Y. 2000. The complete book of dried seafood and foodstuffs. Wan Li Book Company Limited, Hong Kong (in Chinese).
  • Froese, R., and Pauly, D. (Eds). 2002. FishBase database. FishBase, Kiel, Germany. Available from (accessed April 2016).
  • Clarke, S. and Mosqueira, I. 2002. A preliminary assessment of European participation in the shark fin trade. Pages 65–72 in M.Vacchi, G.La Mesa, F.Serena, and B.Séret, editors. Proceedings of the 4th European elasmobranch association meeting. Société Française d’Ichtyologie, Paris.
  • Mounsey, R. P., and Prado, J. 1997. Eco-friendly demersal fish trawling systems. Fishery Technology, 34: 1–6.
  • Valdemarsen, J. W., Jorgensen, T., and Engas, A. 2007. Options to mitigate bottom habitat impact of dragged gears. FAO Fisheries Technical Paper, 29.
  • Rose, C. S., Gauvin, J. R., and Hammond, C. F. 2010. Effective herding of flatfish by cables with minimal seafloor contact. Fishery Bulletin, 108: 136–144.
  • Skaar, K. L., and Vold, A. 2010. New trawl gear with reduced bottom contact. Marine Research News, 2: 1–2.
  • Hourigan, T. F. 2009. Managing fishery impacts on deep-water coral ecosystems of the USA: emerging best practices. Marine Ecology Progress Series, 397: 333–340.
  • Morato, T., Pitcher, T. J., Clark, M. R., Menezes, G., Tempera, F., Porteiro, F., Giacomello, E., et al. 2010. Can we protect seamounts for research? A call for conservation. Oceanography, 23: 190–199.
  • Clark, M. R., and Dunn, M. R. 2012. Spatial management of deep-sea seamount fisheries: balancing sustainable exploitation and habitat conservation. Environmental Conservation, 39: 204–214.
  • Schlacher, T. A., Baco, A. R., Rowden, A. A., O’Hara, T. D., Clark, M. R., Kelley, C., and Dower, J. F. 2014. Seamount benthos in a cobalt-rich crust region of the central Pacific: Conservation challenges for future seabed mining. Diversity and Distributions, 20: 491–502.

Time to think about visual neuroscience

by Poppy Sharp, PhD candidate at the Center for Mind/Brain Sciences, University of Trento.

All is not as it seems

We all delight in discovering that what we see isn’t always the truth. Think optical illusions: as a kid I loved finding the hidden images in Magic Eye stereogram pictures. Maybe you remember a surprising moment when you realised you can’t always trust your eyes. Here’s a quick example. In the image below, cover your left eye and stare at the cross, then slowly move closer towards the screen. At some point, instead of seeing what’s really there, you’ll see a continuous black line. This happens when the WAB logo falls on a small patch of your retina where the nerve fibres leave the eye in a bundle; consequently this patch has no light receptors – a blind spot. When the logo is in your blind spot, your visual system fills in the gap using the available information. Since there are lines on either side, the assumption is made that the line continues through the blind spot.

Illusions reveal that our perception of the world results from the brain building our visual experiences, making best guesses as to what’s really out there. Most of the time you don’t notice, because the visual system has been shaped over millions of years of evolution, honed by your lifetime of perceptual experience, and is pretty good at what it does.

WAB vision

For vision scientists, illusions can provide clues about the way the visual system builds our experiences. We refer to our visual experience of something as a ‘percept’, and use the term ‘stimulus’ for the thing which prompted that percept. The stimulus could be something as simple as a flash of light, or more complex like a human face. Vision science is all about carefully designing experiments so we can tease apart the relationship between the physical stimulus out in the world and our percept of it. In this way, we learn about the ongoing processes in the brain which allow us to do everything from recognising objects and people, to judging the trajectory of a moving ball so we can catch it.

We can get insight into what people perceived by measuring their behavioural responses. Take a simple experiment: we show people an arrow indicating whether to pay attention to the left or the right side of the screen; then one or two quick flashes of light appear on one side, and they press a button to indicate how many flashes they saw. There are several behavioural measures we could record here. Did the cue help them tell the difference between one and two flashes more accurately? Did the cue allow them to respond more quickly? Were they more confident in their response? These are all behavioural measures. In addition, we can also look at another type of measure: brain activity. Recording brain activity allows unique insights into how our experiences of the world are put together, and lets us investigate exciting new questions about the mind and brain.
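To make those behavioural measures concrete, here is a minimal sketch of how accuracy and mean reaction time might be compared between cued and uncued trials. The trial data and field names are entirely made up for illustration; real experiments involve hundreds of trials and more careful statistics.

```python
# Hypothetical trials: was the cue valid, was the answer correct, reaction time in ms
trials = [
    {"cue_valid": True,  "correct": True,  "rt_ms": 420},
    {"cue_valid": True,  "correct": True,  "rt_ms": 450},
    {"cue_valid": True,  "correct": False, "rt_ms": 510},
    {"cue_valid": False, "correct": True,  "rt_ms": 560},
    {"cue_valid": False, "correct": False, "rt_ms": 600},
    {"cue_valid": False, "correct": False, "rt_ms": 590},
]

def summarise(subset):
    """Return (accuracy, mean reaction time) for a set of trials."""
    accuracy = sum(t["correct"] for t in subset) / len(subset)
    mean_rt = sum(t["rt_ms"] for t in subset) / len(subset)
    return accuracy, mean_rt

valid = summarise([t for t in trials if t["cue_valid"]])
invalid = summarise([t for t in trials if not t["cue_valid"]])
print(valid, invalid)  # in this toy data, cued trials are more accurate and faster
```

Comparing the two summaries is the behavioural question in miniature: did attention (the cue) improve performance?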

Rhythms of the brain

Your brain is a complex network of cells using electrochemical signals to communicate with one another. We can take a peek at your brain waves by measuring the magnetic fields associated with the electrical activity of your brain. These magnetic fields are very small, so to record them we need a machine called an MEG scanner (magnetoencephalography) which has many extremely sensitive sensors called SQUIDs (superconducting quantum interference devices). The scanner somewhat resembles a dryer for ladies getting their blue rinse done, but differs in that it’s filled with liquid helium and costs about three million euros.

A single cell firing off an electrical signal would have too small a magnetic field to be detected, but since cells tend to fire together as groups, we can measure these patterns of activity in the MEG signal. Then we look for differences in the patterns of activity under different experimental conditions, in order to reveal what’s going on in the brain during different cognitive processes. For example, in our simple experiment from before with a cue and flashes of light, we would likely find differences in brain activity when these flashes occur at an expected location as compared to an unexpected one.

One particularly fascinating way we can characterise patterns of brain activity is in terms of the rhythms of the brain. Brain activity is an ongoing symphony of multiple groups of cells firing in concert. Some groups fire together more often (i.e. at high frequency), whereas others may also fire together in a synchronised way, but less often (low frequency). These different patterns of brain waves, generated by cells forming different groups and firing at various frequencies, are vital for many important processes, including visual perception.
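As a toy illustration of what “rhythms at different frequencies” means in a recorded signal, the sketch below measures how much power a signal carries at two frequencies. The signal here is synthetic (a strong slow rhythm plus a weaker fast one), and real MEG analysis uses dedicated toolboxes rather than this bare-bones Fourier sum; it is only meant to show the idea.

```python
import math

def band_power(signal, fs, freq):
    """Power of `signal` (sampled at `fs` Hz) at `freq` Hz, via a direct Fourier sum."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (re * re + im * im) / n ** 2

fs = 250                                  # samples per second
t = [i / fs for i in range(fs * 2)]       # two seconds of "recording"
# Synthetic signal: a strong 10 Hz rhythm plus a weaker 40 Hz rhythm
signal = [math.sin(2 * math.pi * 10 * ti) + 0.3 * math.sin(2 * math.pi * 40 * ti)
          for ti in t]

slow = band_power(signal, fs, 10)
fast = band_power(signal, fs, 40)
print(slow > fast)  # True — the slow rhythm carries more power
```

Comparing such power estimates across experimental conditions is, in very rough outline, how differences in brain rhythms are quantified.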

What I’m working on

For as many hours of the day as your eyes are open, a flood of visual information is continuously streaming into your brain. I’m interested in how the visual system makes sense of all that information, and prioritises some things over others. Like many researchers, we show simple stimuli in a controlled setting in order to ask questions about fundamental low-level visual processes. We then hope that our insights generalise to more natural processing in the busy and changeable visual environment of the ‘real world’. My focus is on temporal processing. Temporal processing can refer to a lot of things, but as far as my projects go, we mean how you deal with stimuli occurring very close together in time (tens of milliseconds apart). I’m investigating how this is influenced by expectations, so in my experiments we manipulate expectations about where in space stimuli will be, and also about when they will appear. This is achieved using simple visual cues to direct your attention to, for example, a certain area of the screen.

When stimuli rapidly follow one another in time, sometimes it’s important to parse them into separate percepts, whereas other times it’s more appropriate to integrate them together. There’s always a tradeoff between the precision and stability of the percepts built by the visual system. The right balance between splitting stimuli into separate percepts and blending them into a combined percept depends on the situation and what you’re trying to achieve at that moment.

Let’s illustrate some aspects of this idea about parsing versus integrating stimuli with a story, out in the woods at night. If some flashes of light come in quick succession from the undergrowth, this could be the moonlight reflecting off the eyes of a moving predator. In this case, your visual system needs to integrate these stimuli into a percept of the predator moving through space. But a similar set of several stimuli flashing up from the darkness could also be multiple predators next to each other, in which case it’s vital that you parse the incoming information and perceive them separately. Current circumstances and goals determine the mode of temporal processing that is most appropriate.

I’m investigating how expectations about where stimuli will be can influence your ability to either parse them into separate percepts or to form an integrated percept. Through characterising how expectations influence these two fundamental but opposing temporal processes, we hope to gain insights not only into the processes themselves, but also into the mechanisms of expectation in the visual system. By combining behavioural measures with measures of brain activity (collected using the MEG scanner), we are working towards new accounts of the dynamics of temporal processing and factors which influence it. In this way, we better our understanding of the visual system’s impressive capabilities in building our vital visual experiences from the lively stream of information entering our eyes.

Women are literally boring….

By: Laurie Winkless

Tunnels, that is. All over the world, Tunnel Boring Machines (or TBMs) are chewing their way through the packed subterranean network of your nearest city. But something you might not know is that they’re all given women’s names. Naming a machine after a human isn’t that weird, right? Many of us have named our cars after all, but it goes a bit deeper for TBMs. According to tunnelling tradition, a TBM cannot start work until it is officially named. But exactly where we got the tradition of naming them after women remains a bit of a mystery.

Some sources suggest that it comes from the 16th century, when miners, armourers, and artillerymen prayed to Saint Barbara. Legend has it that Barbara’s father had locked her in a windowless tower when he found out about her conversion to Christianity. Later, a flash of lightning struck him dead, and since then, all trades associated with darkness and the use of explosives have recognised Barbara as their patron saint. Today’s tunnel engineers see themselves as fitting that description, and so give TBMs women’s names in Barbara’s honour. Others suggest that the tradition comes from the link between miners and ship-builders – their physical strength and similar skills often saw men switch between trades as the need arose. Boats have long been given the pronoun ‘she’ (again for reasons unknown), so perhaps using women’s names for tunnelling machines started there?

Regardless of its beginnings, this tradition is carried out throughout the world today, as a sign of good luck for the project ahead. And, perhaps surprisingly in our increasingly secular world, most tunnelling projects still erect a shrine to Saint Barbara at the tunnel entrance.

I am a massive fan of TBMs. Here I am looking very excited in a TBM-bored tunnel under the streets of London. If I lived my life again, I think I’d be a tunnelling engineer. (Credit: Laurie Winkless)

Anyway, before we meet some of the First Ladies of the Underground, let’s have a quick look at how they work. First off, TBMs are huge. Bertha, the largest TBM in the world, is currently working her way under Seattle. She has a diameter of 17.5 m, is 99 m long, and weighs over 6,000 tonnes. If we measure her in units of ‘double-decker buses’, she’s as tall as four parked on top of one another, as long as eight parked nose-to-tail, and weighs as much as 467 of them. So it’s no surprise that she’s usually referred to as ‘Big Bertha’.
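For the curious, the bus conversion is just division. The double-decker dimensions below are rough assumed figures (chosen to be plausible for a London bus and to reproduce the numbers above), not official specs:

```python
# Assumed double-decker bus dimensions (illustrative, not official figures)
BUS_HEIGHT_M = 4.4
BUS_LENGTH_M = 12.4
BUS_WEIGHT_T = 12.85

bertha = {"diameter_m": 17.5, "length_m": 99, "weight_t": 6000}

buses_tall = round(bertha["diameter_m"] / BUS_HEIGHT_M)
buses_long = round(bertha["length_m"] / BUS_LENGTH_M)
buses_heavy = round(bertha["weight_t"] / BUS_WEIGHT_T)
print(buses_tall, buses_long, buses_heavy)  # 4 8 467
```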

So what do TBMs like Bertha do with all that…girth? In their simplest form, TBMs are cylinder-shaped machines that can munch their way through almost any rock type. As I mentioned in my book, Science and the City, TBMs are generally referred to as ‘moles’, but I prefer to think of them as earthworms. Worms eat, push forward and expel whatever is left over, and while there are lots of different types of TBM, they pretty much all do those same three things.

Image credit: Crossrail

At the front, TBMs have a circular face covered in incredibly hard teeth made from a material called tungsten carbide. As the cutter-head rotates, it breaks up the rock in front of it. This excavated material is swallowed through an opening in the face (some would call it a mouth) and it is carried inside the body of the TBM using a rotating conveyor belt. There, it is mixed with various additives (rather like saliva or stomach acid in some animals) that turn the rock into something with the consistency, if not the minty-freshness, of toothpaste. After digestion, this goo is expelled out of the back of the TBM, and it travels along a conveyor belt, until it reaches a processing facility above ground. There, the goo is filtered and treated, with much of it reused in other building projects.

Because of their shape, TBMs produce smooth tunnel walls, which can then be lined using curved segments of concrete. TBMs manage this part of the process too – many metres behind the cutter-head, large robotic suction arms called erectors (stop giggling) pick up and place the concrete panels, to form a complete ring. As the TBM moves forward, more and more of these rings are put into place, until the tunnel is fully clad. In this way, cities across the globe can produce fully-lined tunnels at the rather impressive rate of 100 m per week.

Enough background. Time to meet some of the TBMs boldly going where no machine-named-after-a-woman has gone before.

London – Ada, Phyllis, Victoria, Elizabeth, Mary, Sophia, Jessica and Ellie

Crossrail is Europe’s biggest engineering project. Since 2009, they’ve constructed two brand-new, 21 km-long tunnels across London, running east-west. To do this, they used eight TBMs, and as tradition dictates, each was given a woman’s name, selected by members of the public. The first six machines were named after historical London figures, whilst the final two were named after ‘modern day heroes’. Because two TBMs excavate parallel tunnels at the same time, they’re also named in pairs.

Image credit: Crossrail

Mary and Sophia: These two excavated Crossrail’s new Thames Tunnel, between Plumstead and North Woolwich. They were named after the wives of Isambard and Marc Brunel, the famous engineers who constructed London’s first Thames Tunnel over 150 years ago. The women were a lot faster than their hubbies though – the original tunnel took 16 years to construct. This one was completed in just eight months.

Victoria and Elizabeth: Can you guess which women from history these TBMs were named after?! Yep, Queenie #1 and #2. In the citation, the reason given was that “Victoria was monarch in the first age of great railway engineering projects and Elizabeth is the monarch at the advent of this great age.” Victoria and Elizabeth excavated the tunnels that run between Canning Town and Farringdon, finishing the job in May 2015. As an aside, the Crossrail route itself will appear on tube maps as ‘The Elizabeth Line’, which is disappointingly predictable. I was rooting for ‘The Brunel Line’ myself, but hey.

Ada and Phyllis: These may be my favourites – named after the world’s first computer scientist, Ada Lovelace, and Phyllis Pearsall, who single-handedly created the London A-Z. Lovelace was a woman before her time – without her work, Charles Babbage and his ‘analytical engine’ would have been nothing more than a rich man and his hobby. Pearsall, on the other hand, got lost on the way to a party in 1935, and decided the maps were inadequate. She walked a total of 3,000 miles to compile the first comprehensive street map of the city. Their Crossrail reincarnations drove west from Farringdon station, laying the groundwork for the second stage of the project.

Jessica and Ellie: These names were selected by primary school children from East London, and they come from heptathlete Jessica Ennis-Hill and swimmer Ellie Simmonds, who won gold medals at the 2012 Olympics and Paralympics held in the city. Like their human counterparts, these TBMs were hard-working, each excavating two sections of Crossrail’s route.

London has two brand-new TBMs too, which will be working on the extension to the tube’s Northern Line – the line I spent almost all of my 13 years in London living on. Like Crossrail’s Jessica and Ellie, the names of the newbies – each weighing in at 650 tonnes (or 50 double-decker buses) – were selected by schoolchildren. They drew inspiration from pioneering women in aviation. One is named Amy, after Amy Johnson, the first female pilot to fly solo from Britain to Australia. And the second is Helen, named after the first British astronaut, Helen Sharman.

Seattle – Big Bertha

What more can I say about Bertha? Well, she was named after one of Seattle’s early mayors. In fact, Bertha K. Landes was the city’s first and, so far, only female mayor, and she’s still widely regarded as one of the best they ever had. She fought against police corruption and dangerous drivers, and advocated for municipal ownership of the Seattle City Light and street railways. In 2013, Bertha-the-TBM started her long journey across the city, excavating a multilevel road tunnel to replace the Alaskan Way Viaduct. But just six months into the project, Bertha ground to a halt. Investigations showed that some of Bertha’s cutting teeth had been severely damaged by a large steel pipe embedded in the ground that hadn’t shown up on surveys. Over the next two years (yes, really), construction engineers dug a recovery pit so that they could access the machine’s cutter-head and partially replace it. Bertha resumed tunnel boring in late December 2015. As I type, she’s paused again because of some misalignment, but this stoppage is expected to be temporary. Poor Bertha.

Image credit: Washington State Department of Transportation

Auckland – Alice

Since moving to New Zealand in December, I’ve had a bit of a rail-infrastructure-shaped gap in my life. Thankfully, Kiwis are also fans of TBMs, but they tend to use them for road tunnels. The latest one to finish her work is Alice – a 3200 tonne (246 buses) TBM that spent the last two years carving a path between Auckland’s major transport routes. Alice’s tunnel connects State Highway 16 and State Highway 20, and once it opens in April/May 2017, it will complete the city’s ring road. Having recently spent more than an hour in Auckland traffic heading to the airport, I can attest to how much the road is needed! Since finishing her tour of duty, Alice has gone to a farm where she can roam free amongst all of the other TBMs… Oh, if only that were true. In reality, the largest sections of the machine are being shipped back to her German manufacturer. There, her components will be used to build another TBM. So it’s not been a bad life, I guess.

San Francisco – Mom Chung

Mom Chung is another TBM that has already done her job and is now ‘in retirement’. She is named after Dr. Margaret Chung, the first American-born female Chinese physician, who practiced medicine in the heart of San Francisco’s Chinatown. During World War II, she took lots of American servicemen under her wing, earning her the nickname ‘Mom’. Legend has it that when one of her ‘sons’ became a congressman, he filed the legislation to create a female branch of the Navy, in response to pressure from Mom, who was a firm supporter of women in the military. Mom Chung-the-TBM built the southbound central subway tunnel in San Francisco, and even had a Twitter account for a while.

Of course, actual, real-life women work alongside (and inside) these machines. As more women are attracted into engineering, tunnelling is no longer solely a male pursuit. Women still make up a small percentage (around 11% of the UK construction sector, for example), but those numbers are slowly growing. So no matter which way you look at it, women are literally boring. Tunnelling is awesome.

*** You can follow Laurie on Twitter @laurie_winkless. She also wants to say thank you to Dr Jess Wade for inspiring this article. If you love science and very cool doodles, you can also follow Jess on Twitter – she’s @jesswade


Deep Time Diversity: Decoding 375 Million Years of Life on Land

By: Emma Dunne (@emmadnn)

Across the world today we can see a tremendous amount of biodiversity. Animals occupy every corner of the globe, from the lush rainforests at the equator to the vast icy expanses at the poles and the plethora of grasslands, deserts, and forests in between. Nature is outstanding in its variation of animal forms; animals have mastered flight, can tolerate extreme environments, demonstrate complex behaviours, and some can even use tools. But exactly how life on land became so diverse remains largely uncertain.



Chameleons are a distinctive group of reptiles which contains many different species that vary greatly in colour. Image: Pixabay.

Life has been around for an extremely long time – 3.8 billion years to be exact. Now, that’s a very long time indeed, but for the first 3.795 or so billion years life was microscopic. It wasn’t until 542 million years ago that animals became a little more complex – during the ‘Cambrian Explosion’ when most major groups, such as arthropods, first evolved. To put things into perspective, wherever you are right now stick both of your arms out straight to the side (don’t be shy!). The very tip of your left index finger represents the present day, and the tip of your right index finger represents the point about 542 million years in the past. Moving from right to left, the first fish appear somewhere in the middle of your right forearm just after the Cambrian Explosion. Plants emerged on land around 425 million years ago, a little closer to your right elbow. It wasn’t until the point just before your right shoulder that vertebrates first ventured onto land, beginning the process of evolving into the beasts we are all familiar with today. At the point in the middle of your body, the continents were all squashed together in a landmass known as Pangaea, while reptiles, such as the sailbacked Dimetrodon, ruled the hot and arid lands around the equator. Dinosaurs first appear somewhere on your left shoulder (about 240 million years ago), followed very closely by the first mammals. Dinosaurs are wiped out just before we reach your left wrist (66 million years ago), paving the way for mammals to begin ruling the land. And now to make you really feel like a big fish in a small pond: Humans did not appear until the very tip of your left index finger, occupying a slice of your makeshift timescale no thicker than your fingernail. So, our species really hasn’t been around for long at all!
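The arm-span analogy above is easy to play with yourself. Here is a minimal Python sketch of the same arithmetic, assuming a 180 cm fingertip-to-fingertip span (an invented round number, not from the article):

```python
# Map "millions of years ago" (Mya) onto an outstretched arm span, as in the
# analogy above: right fingertip = 542 Mya (Cambrian Explosion), left fingertip = today.
ARM_SPAN_CM = 180   # assumed span, fingertip to fingertip
TIMELINE_MYA = 542  # start of the Cambrian Explosion

def position_from_right_fingertip(mya):
    """Distance (cm) from the right fingertip for an event `mya` million years ago."""
    if not 0 <= mya <= TIMELINE_MYA:
        raise ValueError("event outside the Cambrian-to-present timeline")
    return ARM_SPAN_CM * (TIMELINE_MYA - mya) / TIMELINE_MYA

# How much time does a fingernail-width (~0.5 cm) at the far left cover?
print(round(0.5 / ARM_SPAN_CM * TIMELINE_MYA, 1), "million years")  # prints: 1.5 million years
```

So on this scale, dinosaurs (about 240 Mya) really do sit at roughly shoulder height, and all of human history fits inside that final fingernail.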


Dimetrodon grandis, an extinct reptile that lived 295-272 million years ago during the Permian period in the wetlands of the supercontinent Euramerica. Illustration: Scott Hartman.


With all of these different animals evolving and going extinct at different points throughout Earth’s history, biodiversity has fluctuated, with increases in diversity punctuated by significant decreases known as extinction events, some more severe than others.

Over the last 50 years palaeobiologists have been trying to quantify exactly how significant these rises and falls in diversity have been using computational methods.

Typically, these analyses involve tallying the number of fossil families for specific time intervals and comparing the totals between neighbouring intervals. Previous studies using this method estimate that diversity on land has risen exponentially, or continued to rise faster and faster over time. A number of reasons have been given for this pattern, including the availability of suitable niches and favourable climatic conditions allowing species to thrive and diversify further.
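As a rough illustration of that tallying approach (with entirely invented families and intervals, not real data), the count for a time bin is just the number of families whose known fossil range overlaps it:

```python
# Toy sketch of the classic counting method: a family "counts" in every
# interval that overlaps its known range of fossil appearances.
fossil_ranges = {  # family name -> (first appearance Mya, last appearance Mya)
    "FamilyA": (375, 300),
    "FamilyB": (350, 66),
    "FamilyC": (250, 0),
    "FamilyD": (200, 150),
}
intervals = [(375, 300), (300, 200), (200, 100), (100, 0)]  # (older, younger) bounds

def families_in_interval(older, younger):
    # A family is counted if its known range overlaps the interval at all.
    return sum(1 for first, last in fossil_ranges.values()
               if first >= younger and last <= older)

counts = [families_in_interval(o, y) for o, y in intervals]
print(counts)  # prints: [2, 4, 3, 2]
```

Comparing neighbouring totals in `counts` is exactly the kind of rise-and-fall curve those earlier studies produced; the sampling biases discussed below are what make the raw totals untrustworthy.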

Sounds simple, right? Not quite…


The currently accepted pattern of changes in diversity on land, constructed using counts of fossil tetrapod (four-limbed vertebrate) families through time. This pattern shows an “exponential rise” in diversity, with more and more families appearing on land as time goes on. From Sahney et al. (2010) Biol. Lett. (The numbers 1-3 mark the end-Permian, end-Triassic, and Cretaceous/Paleogene boundary mass extinctions.)

The problem is that the fossil record is inherently biased. When you think of a fossil, I could almost be certain that you picture a skeleton in a piece of rock. And that’s not wrong! Hard parts, such as bones, shells, and teeth, are much easier to preserve than soft squishy bits – bias number one. Luckily for vertebrate palaeontologists like myself, we don’t usually run into this issue, as our study subjects have bones. But we do unfortunately encounter other biases. Some groups of animals contain many more individuals than others, and are therefore more likely to leave fossils behind (think huge herds of wildebeest vs. a pride of lions). Similarly, different habitats support more diversity than others (for example, the Siberian tundra vs. the African savannah). These ‘biological factors’ come into play before the fossilisation process even begins!


Groups of animals that exist in large numbers, such as wildebeest or antelope, are much more likely to leave behind some fossils for us to find than animals that don’t exist in such large numbers, such as lions. These biological factors affect the fossilisation potential of an organism waaay before the geological processes kick in!

The chances of an animal becoming a fossil are very slim indeed. Usually, after an animal dies its body rots away or is devoured by predators and scavengers, never to be seen again. But sometimes conditions are just right, and if the body is buried quickly by mud or sand, rock can begin to form and the remains can be fossilised. As we look back further in time our picture of the past gets a little fuzzier, as older rocks get overlain by younger rocks and mashed up by geological forces such as earthquakes and erosion. Fossils also only occur in sedimentary rocks (if you think back to your high school geography classes, you might remember that there are three types of rock: igneous, metamorphic, and sedimentary!), and sedimentary rocks are not found uniformly across the globe. So even finding a fossil is an extremely rare occurrence!

Human biases permeate all scientific disciplines, and palaeontology is no exception.

Sometimes it is easy to stumble across a large ‘mass grave’ containing hundreds of fossils, and sometimes these sites can be in very sunny, very beautiful countries worth visiting. Other times fossils have been found in isolation in areas where conditions are harsh, such as the important transitional fossil Acanthostega found in eastern Greenland. So, who’s up for a fun expedition to the wilds of Siberia in search of reptile fossils in the dead of winter? What, no? Yeah, me neither.

All of these factors (biological, geological, and human in origin) contribute to what are known as ‘sampling biases’, or biases that influence the amount and type of fossil data we have available for us to study.


An exquisitely preserved full body fossil of the extinct amphibian Phlegethontia longissima from the Mazon Creek fossil beds in Illinois, USA. Finds like this little fella are very rare indeed. Specimen housed at the Burpee Museum.

With these sampling biases stacked against us, it seems unwise to use simple counts of fossils to illuminate important patterns of diversity through time. This is where my research comes in. We are currently building a shiny new dataset within the publicly accessible Paleobiology Database. With this dataset, we are able to apply more sophisticated statistical methods to our analyses and rigorously test the patterns of diversity change on land over the last 375 million years.

My research will allow palaeobiologists to answer the question: are we able to identify genuine patterns of diversity change, or are we simply viewing changes in the number of fossils available to study through time?

So, with so many millions of years to get through, where’s the best place to start? Why, at the beginning, of course! My current work centres on the interval of geological time when the first vertebrates appeared on land and began to diversify over the following 100 million years. Given that the rocks containing these fossils are very old and poorly surveyed, our ability to identify genuine diversity patterns is significantly distorted. However, the story does begin to improve as we move into the next 100 million years, where we begin to see the fossils reflecting the true patterns of diversity.


Map of the world from the Paleobiology Database showing the locations across the world where tetrapod fossils have been found, from the time they first appeared approximately 375 million years ago right up to the present day. You can create maps such as this for yourself on the Paleobiology Database website!

My research has just begun to scratch the surface of decoding the diversity of life on land, and there’s still a long way to go! Studies such as ours are becoming increasingly relevant today as we try to anticipate the effects of the current biodiversity crisis happening across the world. Many animals worldwide are currently under threat of extinction, and if this pattern continues we may well find ourselves facing the terrifying prospect of a sixth major mass extinction.

Research into past extinction events can determine how ecosystems and animal communities responded in the aftermath of dramatic decreases in diversity, and I hope that my research looking into the geological past will give us some hope for the future.



What can the brain learn from itself? Neurofeedback for understanding brain function.

By: Dr. Kathy L. Ruddy

STEM editor: Francesca Farina

The human brain has a remarkable capacity to learn from feedback. During daily life as we interact with our environment the brain processes the consequences of our actions, and uses this ‘feedback’ in order to update its stored representations or ‘blueprints’ for how to perform certain behaviours optimally. This learning-by-feedback process occurs regardless of whether we are consciously aware of it or not.

The more interesting implication of this process is that the brain can also ‘learn from itself’, forming the basis of the ‘neurofeedback’ phenomenon.

Basically, if we stick an electrode on the head to record the brain’s electrical rhythms (or ‘waves’), the brain can learn to change the rhythm simply by watching feedback displayed on a computer screen. Because we know that the presence of particular types of brain rhythms can be beneficial or detrimental depending on the context and the task being performed, the ability to change them volitionally may have useful applications for enhancing human performance and treating pathological patterns of brain activity.

In recent years neurofeedback has, however, earned itself a bad reputation in scientific circles. This is mainly due to the premature commercialisation of the technique, which is now being ‘sold’ as a treatment for clinical disorders – for which the research evidence is currently still lacking – and even for home use to alleviate symptoms of stress, migraine, depression, anxiety, and essentially any other complaint you can think of! The problem with all of this is that we, as scientists, understand very little about the brain rhythms in the first place: where do they come from? What do they mean? Are they simply a by-product of other ongoing brain processes, or does the rhythm itself set the ‘state’ of a particular brain region, enhancing or inhibiting its processing capabilities?

In my own research, I am currently working towards bridging this gap, by trying to make the connection between fundamental brain mechanisms, behaviours, and their associated electrical rhythms or brain ‘states’.

By training people to put their brain into different ‘states’, we were – for the first time – able to glimpse how brain rhythms directly influence these states in humans. We focused on the motor cortex, the part of the brain that controls movement, because there is a vast ongoing debate in the literature concerning whether changing the state of this region has implications for movement rehabilitation following stroke or other brain injury. Some argue that if the motor cortex is in a more ‘excitable’ state, traditional stroke rehabilitation therapies have enhanced effectiveness, compared to when the same region is more ‘inhibited’. Brain stimulation directly targeting the motor cortex has been used in the past in an attempt to achieve this more plastic, excitable state, but with mixed success and small effects that have proven difficult to reproduce.

In our investigation we used brain stimulation in a non-traditional way to achieve robust bidirectional changes in the state of the motor cortex. Transcranial magnetic stimulation (TMS) can be used to measure the excitability (state) of the motor system. By applying a magnetic pulse to the skull over the exact location in the brain that controls the finger, a response can be measured in the finger muscles that is referred to as a motor-evoked potential (MEP). The size of the MEP tells us how excitable the system is. We developed a form of neurofeedback training where the size of each MEP was displayed to participants on screen, and they were rewarded for either large or small MEPs by positive auditory feedback and a dollar symbol. This type of neurofeedback mobilizes learning mechanisms in the brain, as participants develop mental strategies and observe the consequences of their thought processes upon the state of their motor system. Over a period of 5 days, participants were able to make their MEPs significantly bigger or smaller by changing the excitatory/inhibitory state of the motor cortex.
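The trial-by-trial reward logic can be sketched in a few lines of Python. This is a hypothetical illustration, not the study’s actual software; the baseline value, feedback strings, and trial simulation are all invented:

```python
import random

# Sketch of threshold-based MEP neurofeedback: each trial's MEP amplitude is
# compared with a baseline, and the participant gets positive feedback when
# training "up" and the MEP exceeds it, or training "down" and it falls below it.
random.seed(1)
BASELINE_MV = 1.0  # assumed baseline MEP amplitude in millivolts

def give_feedback(mep_mv, direction="up"):
    rewarded = mep_mv > BASELINE_MV if direction == "up" else mep_mv < BASELINE_MV
    return "$ ding!" if rewarded else "try again"

# Simulate a short block of "up"-training trials with amplitudes around baseline.
trials = [random.gauss(BASELINE_MV, 0.3) for _ in range(10)]
rewards = sum(give_feedback(t, "up") == "$ ding!" for t in trials)
print(f"{rewards}/10 trials rewarded")
```

Over days of training, the idea is that participants discover mental strategies that shift the distribution of their MEPs relative to that baseline.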

Our next question was: how exactly is this change of state achieved in the brain? Are electrical brain rhythms changing in the motor cortex to mediate the changing brain state? Using this new tool to change brain state experimentally, we asked participants to return for one final training session, this time while we recorded their brain rhythms (using EEG) during the TMS-based neurofeedback. This revealed that when the motor cortex was more excitable, there was a significant local increase in high-frequency gamma brainwaves (30-50 Hz). By contrast, higher alpha waves (8-14 Hz) were associated with a more ‘inhibited’ brain state, but were not as influential in setting the excitability of the motor cortex as the gamma waves.
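For readers curious how band-specific power like this is typically estimated, here is a small self-contained sketch using a synthetic signal. The sampling rate and amplitudes are invented for illustration; real EEG pipelines use more careful spectral estimation (e.g. Welch’s method) plus artifact rejection:

```python
import numpy as np

# Estimate alpha (8-14 Hz) and gamma (30-50 Hz) power in an EEG-like signal.
fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)   # 4 s of signal
rng = np.random.default_rng(0)
# Synthetic "EEG": a strong 10 Hz alpha rhythm, a weaker 40 Hz gamma rhythm, plus noise.
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 40 * t)
       + 0.3 * rng.standard_normal(t.size))

def band_power(x, fs, lo, hi):
    """Mean power spectral density between lo and hi Hz (simple periodogram)."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

alpha = band_power(eeg, fs, 8, 14)
gamma = band_power(eeg, fs, 30, 50)
print(f"alpha/gamma power ratio: {alpha / gamma:.1f}")  # alpha dominates in this toy signal
```

The study’s actual analysis is more sophisticated, but the core idea is the same: compare how much signal energy falls in each frequency band, and relate that to the measured motor-cortex state.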

The implications of these findings are twofold. Firstly, having a tool to robustly change the excitatory/inhibitory balance of the motor cortex gives us experimental control over this process, and thus opens several doors for new fundamental scientific research into the neural mechanisms that determine the state of the motor system. Secondly, this approach may have future clinical potential, as a non-invasive and non-pharmacological way to ‘prime’ the motor cortex in advance of movement rehabilitation therapy, by putting the brain in a state that is more receptive to re-learning motor skills. As the training is straightforward, pain-free and enjoyable for the participant, we believe that this approach may pave the way for a new wave of research using neurofeedback in place of traditional electrical brain stimulation, as a scientific tool and an adjunct to commonly used stroke rehabilitation practices.


Women and the Forensic Thriller

By: Elena Avanzas Álvarez

Every time I tell someone I am pursuing a PhD in the Humanities, it is clear to me that they do not think I am in my right mind. Then I tell them that I am doing this with no fixed income or scholarship to support it, and I can see fear in their faces. But my favourite reaction comes when I tell them I am writing a thesis about forensic crime fiction: ‘Why do you write your thesis about trash/airport/commercial literature?’ And every time I tell them that there is more to crime fiction than CSI. There is even more to CSI!!! And here is why:

Crime fiction has been – along with romance – one of the most popular literary subgenres since the 19th century. People are addicted to crime, especially if it comes from a book, as it appears to be a shallow but safe entertainment.

As Umberto Eco said, fiction offers readers different forests in which to get lost: No immediate or physical harm comes from reading a crime novel, and if it ever gets too much, we can always close the book and start another one, or turn the TV on, or simply go for a walk to clear our minds. It is that simple. However, despite our historical preference as a society for crime stories, we are quick to dismiss them as low-quality cultural products. Think about the contradiction between CSI’s audience ratings during its 15-year run (2000 – 2015) and how the show is perceived by most people. Where does this tension come from?

Historically, crime fiction has been regarded as corrupting literature, contagious stories that could turn readers into depraved human beings. Despite this belief, writers like Sir Arthur Conan Doyle and Agatha Christie are well-known even to modern audiences. They were big back then, and they still are. The passing of time has turned them into classics, but what do we have to say about the men – and especially the women – writing crime fiction nowadays? This is the reason I chose to focus my PhD thesis on contemporary authors: They are writing as we prepare our meals, go to school, go to work, or simply have a bath. Many of them are making a living from their writing, and some of them have changed the way in which we define ‘detective fiction’ in the 21st century. It is our duty, as well as a privilege, to enjoy their work, but also to support it in order to keep the arts going and evolve as a society.

If artists have it difficult, imagine women artists. We still live in a patriarchal society where the roles of child-bearing and caring are primarily associated with the women in the family, leaving them – us! – less time to develop our careers and passions than our male counterparts. This is why I have also chosen to focus my research on women writers, especially since crime fiction has always been considered a masculine genre that, nonetheless, has had some of the most successful female writers of the past century. Agatha Christie (1890 – 1976), Elizabeth Sanxay Holding (1889 – 1955), Patricia Highsmith (1921 – 1995), Margaret Millar (1915 – 1994), Liza Cody (1944 – ), Eleanor Taylor Bland (1944 – ), Ruth Rendell (1930 – 2015), P. D. James (1920 – 2014), and Sue Grafton (1940 – ) are some of the most well-known, but their works can be considered classics, even though some of them are still writing nowadays. So, instead of researching more about the past, I decided that it would be worth researching the type of crime fiction that has made the detecting process a complicated and exciting combination of science, technology, and brains. That is, I chose to focus on forensic crime fiction, because many of us cannot understand the detecting process without forensic science.

It all started in 1990, when Patricia Cornwell published the first novel in the Kay Scarpetta series. Postmortem (1990) tells the story of Dr. Kay Scarpetta, Chief Medical Examiner of the State of Virginia, as she investigates the serial rapes and killings of Richmond’s young professional women.

The novel is historically relevant as it inscribes DNA profiling in literature. Back then, recent forensic developments were considered yet another form of hocus-pocus, and Scarpetta has to fight for DNA testing to be performed by the laboratory.

However, the novel’s strength comes from the main character herself, who inscribes the struggles of 1990s feminism, as her fight against a male-dominated police department that doubts her abilities as a doctor takes centre stage. After the success of Postmortem, Patricia Cornwell has written 25 Kay Scarpetta novels – the latest one published in 2016 – in which the middle-aged female forensic doctor has faced the difficulties typical of her gender, job, and situation as the main character in one of the most successful crime fiction series in America.

Cornwell’s success quickly inspired other female writers to dip into forensic thriller territory, Kathy Reichs being the most remarkable of them in the 1990s with her debut novel Déjà Dead (1997).


If her name does not ring a bell, the television adaptation of her novels surely does. Author of the Temperance Brennan series, Reichs has seen her fictional alter-ego transformed into one of the most beloved television characters in Bones (2005 – 2017). Even though Cornwell has a remarkable knowledge of forensic science, and she keeps herself constantly updated on the latest developments in the field, Reichs has the advantage of writing about what she knows best: forensic anthropology. Like Brennan, Reichs is one of the best forensic anthropologists in the USA, as well as a remarkable scientist who has been working on humanitarian causes for the last 40 years. With the Brennan series, she has inscribed a very specific field of study in popular fiction, and she has offered women all over the world the opportunity to discover forensic anthropology as a field of study: If she can see it, she can be it.

From the rise of the forensic thriller in the 1990s until the present day, the introduction of female forensic doctors into contemporary popular fiction (‘fiction’ here understood as any text in any format, television shows included) has become a trend. And we, as an audience, love it. If you think of any crime fiction television show that you enjoy, it is very likely to star a female forensic doctor. Some of them feature these doctors as secondary characters, such as Castle (2009 – 2016) and CSI: NY (2004 – 2013). But there have also been productions that focus on a female forensic doctor who also does some detecting work. Think of Crossing Jordan (2001 – 2007), Body of Proof (2011 – 2013), or even Bones.


Crossing Jordan

All these women have something in common, and that is their ability to transform detection into a whole new process by including the latest advances in science and technology. Furthermore, they are qualified experts in reading bodies. If the corpse is the raison d’être of the crime narrative, forensic doctors are the ultimate sleuths, as their medical and scientific knowledge allows them to read a body and produce a narrative of the victim’s lived experience.

Crime fiction may be commercial. A crime novel may also be the best choice to keep your attention during a flight, or while you wait for the train back home. But crime novels have so many layers that they allow for both light and in-depth reading.

It is up to us to choose whether to focus on the thrilling, page-turning quality of the text and dismiss it – why do we still equate easy with bad? – or to see the social prejudices, tensions, and developments that build the story. In any case, something is clear: We like crime fiction. We read detective fiction. And we should study it.



Monkeys, happiness, and winning debates

By: Lauren Robinson


Monkeys you say? Tell me more.
Jane Goodall once asked me, “Was it you, was it you who put a monkey in the loo?!” If you’re wondering, no it was not. Thankfully she was referring to a poster rather than an actual monkey. Yet, I take it as a point of pride to have been asked, and to be working in a field where I regularly get close enough to monkeys to have been slapped by one (truthfully it’s more than that, but I’ve lost count). It was my fault; I was observing the monkeys, and how dare that require looking at them. Primatology, the study of nonhuman primates (monkeys, chimpanzees, gorillas, etc.), is not for the faint of heart or slow of reflex. It’s a field I fell in love with (I mean, look at the baby Sulawesi macaque on the right, it has a heart-shaped bum!) during my Masters dissertation studying Japanese macaques (see: photo above of suckling infant).
There are a lot of different things about primates that I could study (having anecdotally and painfully observed their speed), and the area of primatology that I am most interested in is primate welfare. What do I mean when I say “welfare”? Well, I use a very broad definition and define welfare as the mental and physical health of an animal. In order to study animal welfare, researchers such as myself use methods that cross between the fields of animal behaviour, psychology, and physiology, among others. We observe animals for unusual behaviours, assess them for increased stress levels, and look for signs of injury and illness. Animal welfare science is a growing field and, with pioneers such as Marian Dawkins (Dawkins, 1980) and Temple Grandin, it is one with multiple strong and well-known female scientists to look up to.

Enough of that, let’s talk about me.

My research focuses on the individual animal, which is why I’m currently in a psychology department studying individual differences in animal personality. I take the approach that an animal’s welfare is an individual experience, and we need to understand the individual differences associated with it, specifically personality. Most of us have a general idea of what personality is, especially when asked to list the traits we love or hate about other people. Over the last couple of decades it has become more accepted to talk about animals having personality as well (Gosling, 2001). It’s rare that someone describes their dog as “consistently approaching unfamiliar people and animals in a nonaggressive manner”. Instead, they say their dog is friendly and sociable. In the case of my dog Juneau (left), we describe her as eccentric and too clever for her own good. While some scientists may be on the fence about animal personality, my experience has been that the public isn’t; they get it, and they believe that animals have it. In order to understand primate welfare I look for the personality differences that influence it, which is the focus of my research. I want to know if certain personality traits make animals more likely to do well in captivity, in the same way that people with certain personality traits do better in life. For example, more extraverted and sociable people tend to be healthier and happier (Costa & McCrae, 1980; Deary, Weiss, & Batty, 2010).

I started as a PhD student at the University of Edinburgh in 2013, working with Dr Alex Weiss. Alex and I have different scientific backgrounds and, naturally, we disagree on some things. Key among the disagreements we’ve had over the years is the difference between welfare and happiness in animals. Alex felt that if an animal had everything it needed in captivity (safety, food, companionship, good physical health) then it had high welfare. He noted that even when animals have all these things they can be unhappy, which to him meant that happiness and welfare did not necessarily go together for animals. Alex based this on the observation that some people appear to have everything they could want for (money, friends, shelter) but aren’t happy. I felt differently. As I said earlier, I take the approach that an animal’s welfare is an individual experience. Therefore, if the animal appears to have everything it needs but is still unhappy then, by definition, that animal has reduced welfare. How to find out who is right, though? To the Batcave! Yeah, sadly not. Instead it was off to Google Scholar to research and come up with a way of testing my hypothesis that primate happiness and welfare were one and the same.

What I found was a great article by Franklin McMillan (2005), who says that there are five main things that influence an animal’s welfare: mental stimulation, physical health, stress, social relationships, and control of the physical and social environment. When psychologists look at human happiness they typically use questionnaires (Sandvik, Diener, & Seidlitz, 1993), and there is a questionnaire to measure primate happiness (King & Landau, 2003), but animal welfare scientists don’t typically use questionnaires, as there are concerns about the accuracy of ratings. This hasn’t been well studied, though, so I took McMillan’s five things and created a questionnaire for staff familiar with animals to fill out. To test whether it worked, I took my welfare questionnaire and the primate happiness questionnaire and sent them out to zoos and research facilities.


Well, did you win the debate or not?

Currently, I’m working on finishing my PhD (send whisky for my woes) and have used the questionnaires to study welfare and happiness in three species: brown capuchins, chimpanzees, and rhesus macaques. The first thing I found was that staff familiar with the animals I studied were really good at rating animal welfare. They agreed to the same degree that people do when they rate their friends’ and family members’ personalities. The next thing I found, much to my own happiness, was that welfare and happiness really are one and the same in those three species (I won!). Three species and some pretty compelling results (Robinson et al., 2016; in review; in prep) were convincing enough to get my supervisor to rethink his opinion on happiness and welfare. Did you catch that? The PhD student actually won one! Sure, Alex has taught me a billion things compared to this one thing I taught him, but I will take it.
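As a toy illustration of what “agreed to the same degree” means in practice (with invented scores, not the study’s data), rater agreement is often summarised by correlating two raters’ scores across the same animals:

```python
# Pearson correlation between two raters' welfare scores, from scratch.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical welfare scores (1-7 scale) given by two keepers to the same six animals.
rater_a = [6, 4, 5, 2, 7, 3]
rater_b = [5, 4, 6, 3, 7, 2]
print(f"agreement r = {pearson_r(rater_a, rater_b):.2f}")  # prints: agreement r = 0.89
```

A value near 1 means the raters rank the animals almost identically; published studies typically use more robust statistics (such as intraclass correlations across many raters), but the intuition is the same.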

So, what about personality and welfare? Personality does influence primate welfare, similar to what we see in people. Animals with certain personality traits have higher happiness and welfare. The brown capuchins that were more sociable, assertive, attentive, and more emotionally stable were those that had higher happiness and welfare. For chimpanzees, it seems to be about extraversion and emotional stability. For rhesus macaques, it’s all about confidence; those with more confident personalities had higher welfare and happiness. It’s my hope that now that we know more about welfare, happiness, and personality, we can use this information to improve the lives of animals. This could be done by using the questionnaire as another tool for measuring animal welfare, or by trying to provide more care for animals with personality traits that tend to be related to unhappiness.

Upon reflection…

While my research results are better than I could have hoped for, the best part of this research was the experiences I gained along the way. As I get to the end of my PhD, and this post, I’m starting to put my thesis together and I’m all about reflecting on my past three years (when I’m not panicking about the next three). I’ve gotten to study three species of primates, worked in zoos and research facilities (many of you will have thoughts on animals in research; I get that, but don’t have room to get into that topic without a separate post), and collaborated with tons of amazing researchers. All of that is fantastic but let’s be honest, the monkeys are the best part.

You may be wondering what monkeys are like. I’ve worked directly with over 100 macaques and there is no doubt in my mind that each one is an individual with a very different personality. Some are funny, some are playful, some are grumpy, and plenty are aggressive (learned that the hard way). While I hope that I’ve piqued your interest in primates, their amazing personalities, and their welfare, I would be remiss if I didn’t state that primates are not pets (see resources below). I know I’ve spoken of my passion for working with primates, but only in a professional manner and environment, and I never treat them as less than what they are: wild animals. Primates are far too clever and socially complex to be kept as pets. Anyone that tells you otherwise is flat out wrong. No exceptions to the rule, no anecdotes, no to primates as pets.

With that warning out of the way, I will finish by acknowledging that while there are a lot of words to describe what I do (science, animal welfare, primatology), the one that always stands out to me is ‘privileged’. Working with primates is a privilege. Studying and working to improve their welfare is the best way I know to show my appreciation of that privilege.

If you’re interested in learning more about primate welfare, there are some public engagement resources that I’m a big fan of:

NC3Rs macaque page

Online tour of German Primate Center

Why monkeys shouldn’t be pets

Animal welfare legislation resources

Costa, P. T., & McCrae, R. R. (1980). Influence of extraversion and neuroticism on subjective well-being: happy and unhappy people. Journal of Personality and Social Psychology, 38(4), 668–678.

Dawkins, M. S. (1980). Animal suffering: The science of animal welfare. New York: Chapman and Hall.

Deary, I. J., Weiss, A., & Batty, G. D. (2010). Intelligence and Personality as Predictors of Illness and Death: How Researchers in Differential Psychology and Chronic Disease Epidemiology Are Collaborating to Understand and Address Health Inequalities. Psychological Science in the Public Interest.

Gosling, S. D. (2001). From mice to men: What can we learn about personality from animal research? Psychological Bulletin, 127(1), 45–86.

King, J. E., & Landau, V. I. (2003). Can chimpanzee (Pan troglodytes) happiness be estimated by human raters? Journal of Research in Personality, 37(1), 1–15.

McMillan, F. (2005). Mental wellness: The concept of quality of life in animals. In Mental Health and Well-Being in Animals.

Robinson, L. M., Waran, N. K., Leach, M. C., Morton, F. B., Paukner, A., Lonsdorf, E., Handel, I., Wilson, V. A. D., Brosnan, S., & Weiss, A. (2016). Happiness is positive welfare in brown capuchins (Sapajus apella). Applied Animal Behaviour Science, 181, 145-151.

Robinson, L. M., Altschul, D., Wallace, E. K., Ubeda, Y., Machanda, Z., Slocombe, K. E., Llorente, M., Leach, M. C., Waran, N. K., & Weiss, A. (In press). Chimpanzees with positive welfare are happier, extraverted, and emotionally stable. Applied Animal Behaviour Science. 10.1016/j.applanim.2017.02.008.

Robinson, L. M., Capitanio, J. P., Leach, M. C., Waran, N. K., & Weiss, A. (In prep). The influence of personality on rhesus macaque health, welfare, and happiness.

Sandvik, E., Diener, E., & Seidlitz, L. (1993). Subjective Well-Being – the Convergence and Stability of Self-Report and Non-Self-Report Measures. Journal of Personality, 61(3), 318–342.


Improving future asthma care

Flyer and advert for “Potter’s Asthma Cure” (image L0040548)

5.4 million people in the UK have asthma, and every ten seconds someone in the UK has a potentially life-threatening asthma attack. On average, three people a day die from an asthma attack in the UK – in 2014 (the most recent data available), 1,216 people died from asthma. Many of these deaths are preventable, and taking asthma medication as prescribed is an important factor in preventing them (Asthma UK, 2017). Yet many people don’t stick to their asthma medication routines. Kathy Hetherington writes about her research into a new method of asthma treatment which is significantly reducing the risks associated with severe asthma.

My PhD investigates patients’ responses to inhaled steroids using novel monitoring technology. I have spent the past year coordinating this project throughout the UK within the Refractory Asthma Stratification Programme-UK (RASP-UK). I work alongside Professor Liam Heaney and Professor Judy Bradley at Queen’s University, and Professor Richard Costello at the Royal College of Surgeons in Ireland. As a young researcher in Northern Ireland, I am excited in the knowledge that my PhD has the potential to improve future asthma care.

The Problem

Many asthmatics do not use their inhalers correctly. As a result, they don’t receive their prescribed dosage of inhaled steroid. Within Queen’s University Belfast and the Belfast City Hospital, we have developed and implemented a new method of observing and monitoring how patients use their inhalers. This approach is significantly reducing the risks associated with severe asthma.

In RASP-UK severe asthma centres we record Fractional exhaled Nitric Oxide (FeNO), a measure of lung inflammation. An elevated FeNO is a predictor of worsening asthma symptoms or even an asthma attack. Those who continue to have an elevated FeNO are usually considered high-risk patients who need daily oral steroids alongside their inhalers. This elevated FeNO could be due to steroid resistance, or to patients not continuing to use their inhaler (known as non-adherence). Determining inhaled steroid response in a difficult asthma population is a major problem in a clinical setting.

The Intervention

Within RASP-UK, we have established and further validated a clinical test using daily FeNO measurements (using a Niox Vero machine – Figure 2) alongside some additional inhaled steroid. The remote monitoring technology we use alongside this test is called an INCA™ (INhaled Compliance Aid) device. The INCA™ (Figure 1) was developed by Professor Richard Costello in conjunction with Vitalograph and is designed to work with the diskus inhaler. The INCA™ device records a time and date when the microphone inside it is activated, and records a sound file of the inhaler being used; these sound files can then be transferred to a computer. The sound files are then uploaded onto a server via a data compression utility programme, where they are analysed by an automated and validated sound analysis algorithm. This combination allows us to create a remote assessment of inhaled steroid response and thus identify non-adherence to inhalers. We then communicate this information to the patients to try and improve their adherence to their inhaled treatment.
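To give a feel for the numbers behind a suppression test, here is a minimal sketch that expresses daily FeNO readings as percentage change from a baseline, the presentation used in Figure 3. The readings, baseline, and the suppression cut-off below are invented for illustration; they are not the RASP-UK criteria.

```python
# Illustrative sketch of FeNO "suppression" arithmetic: a sustained fall in
# FeNO during monitored inhaled steroid use suggests prior non-adherence.
# All values here are hypothetical.

baseline_feno = 80.0  # parts per billion, hypothetical baseline reading
daily_feno = [78.0, 70.0, 55.0, 44.0, 38.0, 35.0, 33.0]  # one week, hypothetical

def pct_change(value: float, baseline: float) -> float:
    """FeNO as percentage change from baseline (negative = suppression)."""
    return (value - baseline) / baseline * 100.0

changes = [pct_change(v, baseline_feno) for v in daily_feno]
suppressed = changes[-1] <= -42.0  # illustrative cut-off, an assumption

print([round(c, 1) for c in changes])
print("suppression:", suppressed)
```

In this toy example the final reading has fallen by well over the assumed threshold, so the test would flag suppression, pointing towards non-adherence rather than steroid resistance.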

With further development, we created a web-based interface (Figure 3) to deploy FeNO suppression testing across the UK through our established RASP-UK Severe Asthma Centres. Here, we examined the utility of FeNO suppression testing to predict inhaled steroid responsiveness after a further 30 days on a normal inhaler. This period of prolonged monitoring provides further feedback on patient inhaler use and technique, using the unique presentation method below, enabling us to identify facilitators and barriers which may be involved in optimising inhaler adherence. We are constantly increasing the precision and user-friendliness of this hardware and software so that the data is easily interpreted and demonstrated to the patient.


Figure 3 Data from the Vitalograph server following upload of one week of FeNO suppression data and INCA™. The Vitalograph server shows activation and usage of both the FeNO machine and the INCA™ device (A) and depicts the FeNO data as percentage change from baseline as originally described (y1-axis – Figure A). The INCA™ device time- and date-stamps the number of inhaler uses (y2-axis – Figure A) and this is shown alongside technique analysis (B). Possible technique errors which can be identified and reported are shown in Graphic 3.


The Future

Though we are only a year into our project, 250 patients in severe asthma centres throughout the UK have carried out FeNO suppression testing. Many have gone on to improve their inhaler usage and asthma control and decrease the inflammation in their lungs. We have presented our UK multi-centre data at conferences all over the world and interest in our project is increasing. In the past 6 months I have had the privilege of being a keynote speaker at Severe Asthma Masterclasses and Specialist Asthma Meetings. This summer I have been invited as a symposium speaker at the European Academy of Allergy & Clinical Immunology in Helsinki, Finland, which will undoubtedly be the highlight of my career to date!

My PhD has given me the opportunity to be able to work with a wide range of fantastic professors, clinicians, patients and co-ordinators. This PhD has convinced me that we can use this unique test and methods of presentation to improve asthma care throughout the world. I can’t express how much this thought excites and drives me; it is with great humility and privilege that I will continue to contribute to this extraordinary field.

This IS ‘proper’ research: Taking on the social science vs. science debate

By: Rosie Smith 

“So why is your research necessary?”

“How do you get funding for research like this?”

These are just two of the many questions that I was asked recently whilst taking part in a competition for PhD researchers at my university. The competition was interdisciplinary and was aimed at showcasing doctoral research at the institution, whilst also providing early career researchers, like myself, a gateway into public engagement. Needless to say the competition was one of the many uncomfortable things I intend to do this year as part of my resolution to be a ‘yes’ woman and challenge myself more.

Finalists were made up of three researchers per faculty (social science, science, arts and humanities), and as a criminologist I quickly found myself gravitating towards the social sciences camp. It was a full day event in which we were judged on a multitude of criteria, including originality, impact, accessibility, interdisciplinary scope, and importance. I use the word ‘importance’ hesitantly, as it’s a term that causes particular anxiety when I consider my own research. My work explores the concept of ‘spectacular justice’ and the way the mass media makes the criminal justice system visible and public. I explore this concept by analysing how high profile criminal cases are represented in media archives from the 1800s to 2016.



And whilst I thoroughly enjoy my research, I still often find it difficult to have confidence that my work is ‘important’, and necessary. In part this is because I am self-funding my research, and at times I find it difficult to have confidence in my work when understandings of ‘good’ research are so closely bound to notions of impact and attracting funding. But it is also in part because of situations like these, when I am forced to contemplate the debate around what constitutes ‘proper’ research.

When I was posed these questions, I admit I was initially shocked and somewhat taken aback by the abruptness with which they were asked. But at the same time, these questions drew on some of the existing anxieties I have as I begin the journey into academia. To me, these questions in some way breach the social conventions on conversation etiquette, not to mention conventions on what is and is not okay to ask a frazzled and distressed PhD student.

To the first, I was honest, and launched into the toils of juggling several part-time jobs alongside trying to develop the aura of a rounded and successful academic.

But it was the question “Why is your research necessary?” that caused me more concern. Looking around the room at the other contestants I began to question whether this question had been asked of the other finalists, in particular the natural, computer, and the physical scientists.

I was transported back to the long debates I had as an undergraduate with my ‘proper’ scientist friends. In these debates I would spend hours defending the position that social science is important and necessary, and that the two disciplines can exist in parallel.

I would passionately defend the position that the relationship between the two does not need to be one of comparison. Admittedly, my efforts to convert them were largely fruitless. And I was often left being endearingly mocked, only to be told that “but it’s not a real science though is it?” And unfortunately this is still a plight I am fighting as I embark through my PhD.

It is as if this debate is a matter of either/or. You are either a social scientist or a scientist, with very little scope to dabble somewhere in the middle. This was only confirmed as the day progressed. I overheard the finalist next to me ask a gentleman, “Are you going to go to Rosie’s stand next?” To which the gentleman replied, “I don’t think so, I don’t like social science, I’m more of a scientist”.


Needless to say I tried my best to convince him of the merits of the dark underbelly of the social sciences, but was left wondering why I had to.

I cannot escape the importance of gender to this debate. Despite being interdisciplinary, the competition finalists were overwhelmingly female, with male colleagues only being represented by the science faculty.

Needless to say there are a large number of male social scientists who contribute greatly to the field, but historically the social sciences have been regarded as a ‘feminine’ discipline.

This is supported by statistics on the relationship between gender and higher education degree choices: in 2016, 17,075 men accepted university offers to study a social science subject in the UK, just over half the figure for women, which totalled 30,860 (UCAS, 2016). And so I interpreted the questions “why is your research necessary?” and “how do you get funding for research like this?” not only as a judgment on the value of my research, but as a value judgment more generally about the credibility of the social sciences as a predominantly female discipline. I couldn’t ignore the feeling that the feminization of the social sciences served as a double mechanism to justify the position of the sciences as superior.

At times I worry that as a social scientist, the rivalry that exists with science, whilst often only in jest or antics, has a direct impact on understandings of what constitutes ‘proper’ research.

And I question the appropriateness of using one set of criteria to judge and compare the value and ‘necessity’ of the two disciplines. In my opinion they are complementary rather than contradictory fields. And we should be striving to broaden our understanding of what constitutes ‘proper’ research. Because although my research does not find a solution to world hunger or fight disease, it does have value – just in its own way.

At the end of the day the judges seemed to recognise some of that value too. When the scores came in, I won! It was one of the proudest moments of my PhD so far, as a social scientist, as an early career researcher, and as a woman. This experience has taught me many lessons, but the most important is to take the victories, whether big or small, when they come around. Equally, I aim to worry a little less about how much impact my research has, or how much funding I attract (or not), and concentrate on enjoying my PhD and remembering that whilst not earth-shattering, my research is still necessary. All research is proper research.






Got your hands full? – How the brain plans actions with different body parts

by Phyllis Mania

STEM editor: Francesca Farina

Imagine you’re carrying a laundry basket in your hand, dutifully pursuing your domestic tasks. You open the door with your knee, press the light switch with your elbow, and pick up a lost sock with your foot. Easy, right? Normally, we perform these kinds of goal-directed movements with our hands. Unsurprisingly, hands are also the most widely studied body part, or so-called effector, in research on action planning. We do know a fair bit about how the brain prepares movements with a hand (not to be confused with movement execution). You see something desirable, say, a chocolate bar, and that image goes from your retina to the visual cortex, which is roughly located at the back of your brain. At the same time, an estimate of where your hand is in space is generated in somatosensory cortex, which is located more frontally. Between these two areas sits an area called posterior parietal cortex (PPC), in an ideal position to bring these two pieces of information – the seen location of the chocolate bar and the felt location of your hand – together (for a detailed description of these so-called coordinate transformations see [1]). From here, the movement plan is sent to primary motor cortex, which directly controls movement execution through the spinal cord. What’s interesting about motor cortex is that it is organised like a map of the body, so the muscles that are next to each other on the “outside” are also controlled by neuronal populations that are next to each other on the “inside”. Put simply, there is a small patch of brain for each body part we have, a phenomenon known as the motor homunculus [2].


Photo of an EEG, by Gabriele Fischer-Mania

As we all know from everyday experience, it is pretty simple to use a body part other than the hand to perform a purposeful action. But the findings from studies investigating movement planning with different effectors are not clear-cut. Usually, the paradigm used in this kind of research works as follows: The participants look at a centrally presented fixation mark and rest their hand in front of the body midline. Next, a dot indicating the movement goal is presented to the left or right of fixation. The colour of the dot tells the participants whether they have to use their hand or their eyes to move towards the dot. Only when the fixation mark disappears are the participants allowed to perform the movement with the desired effector. The delay between the presentation of the goal and the actual movement is important, because muscle activity affects the signal that is measured from the brain (and not in a good way). The subsequent analyses usually focus on this delay period, as the signal emerging throughout is thought to reflect movement preparation. Many studies assessing the activity preceding eye and hand movements have suggested that PPC is organised in an effector-specific manner, with different sub-regions representing different body parts [3]. Other studies report contradicting results, with overlapping activity for hand and eye [4].


EEG photo, as before.

But here’s the thing: We cannot stare at a door until it finally opens itself and I imagine picking up that lost piece of laundry with my eye to be rather uncomfortable. Put more scientifically, hands and eyes are functionally different. Whereas we use our hands to interact with the environment, our eyes are a key player in perception. This is why my supervisor came up with the idea to compare hands and feet, as virtually all goal-directed actions we typically perform using our hands can also be performed with our feet (see, for example, the work of mouth and foot painting artists). Surprisingly, it turned out that the portion of PPC that was previously thought to be exclusively dedicated to hand movement planning showed virtually the same fMRI activation during foot movement planning [5]. That is, the brain does not seem to differentiate between the two limbs in PPC. Wait, the brain? Whereas fMRI is useful to show us where in the brain something is happening, it does not tell us much about what exactly is going on in neuronal populations. Here, the high temporal resolution of EEG allows for a more detailed investigation of brain activity. During my PhD, I used EEG to look at hands and feet from different angles (literally – I looked at a lot of feet). One way to quantify possible effects is to analyse the signal in the frequency domain. Different cognitive functions have been associated with power changes in different frequency bands. Based on a study that found eye and hand movement planning to be encoded in different frequencies [6], my project focused on identifying a similar effect for foot movements.


Source: Pixabay

This is not as straightforward as it might sound, because there are a number of things that need to be controlled for: To make a comparison between the two limbs as valid as possible, movements should start from a similar position and end at the same spot. And to avoid expectancy effects, movements with both limbs should alternate randomly. As you can imagine, it is quite challenging to find a comfortable position to complete this task (most participants did still talk to me after the experiment, though). Another important thing to keep in mind is the fact that foot movements are somewhat more sluggish than hand movements, owing to physical differences between the limbs. This circumstance can be accounted for by performing different types of movements; some easy, some difficult. When the presented movement goal is rather big, it’s easier to hit than when it’s smaller. Unsurprisingly, movements to easy targets are faster than movements to difficult targets, an effect that has long been known for the hand [7] but had not been shown for the foot yet. Even though this effect is obviously observed during movement execution, it has been shown to already arise during movement planning [8].
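The speed–accuracy trade-off described above is Fitts’ law [7]: movement time grows with the “index of difficulty”, which depends on target distance and width. A minimal sketch of its common Shannon formulation follows; the constants a and b are made up here (in real experiments they are fitted per effector from the data).

```python
import math

def fitts_mt(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time (s) via Fitts' law, Shannon formulation.
    a and b are empirically fitted per effector; values here are invented."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A big (easy) target is predicted to be reached faster than a small one:
easy = fitts_mt(distance=0.30, width=0.08)  # large target
hard = fitts_mt(distance=0.30, width=0.02)  # small target, same distance
print(f"easy: {easy:.3f} s  hard: {hard:.3f} s")
```

Halving the target width raises the index of difficulty by roughly one bit, so predicted movement time increases even though the distance is unchanged, which is exactly the easy-versus-difficult manipulation described above.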

So, taking a closer look at actual movements can also tell us a fair bit about the underlying planning processes. In my case, “looking closer” meant recording hand and foot movements using infrared lights, a procedure called motion capture. Basically the same method is used to create the characters in movies like Avatar and the Hobbit, but rather than making fancy films I used the trajectories to extract kinematic measures like velocity and acceleration. Again, it turned out that hands and feet have more in common than it may seem at first sight. And it makes sense – as we evolved from quadrupeds (i.e., mammals walking on all fours) to bipeds (walking on two feet), the neural pathways that used to control locomotion with all fours likely evolved into the system now controlling skilled hand movements [9].
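As a rough illustration of how kinematic measures can be extracted from motion-capture trajectories, here is a finite-difference sketch on a synthetic 1-D trajectory. The 100 Hz sampling rate and the constant-acceleration movement are assumptions for the example, not recorded data.

```python
# Sketch: velocity and acceleration from sampled positions via central
# differences, as one might do with motion-capture trajectories.

dt = 0.01  # sampling interval in seconds (100 Hz, an assumed rate)

# Synthetic 1-D position samples (metres): constant acceleration of 2 m/s^2,
# so position follows 0.5 * a * t^2.
positions = [0.5 * 2.0 * (i * dt) ** 2 for i in range(6)]

# Central differences for the interior samples.
velocity = [(positions[i + 1] - positions[i - 1]) / (2 * dt)
            for i in range(1, len(positions) - 1)]
acceleration = [(velocity[i + 1] - velocity[i - 1]) / (2 * dt)
                for i in range(1, len(velocity) - 1)]

print([round(v, 3) for v in velocity])      # increases linearly, in m/s
print([round(a, 3) for a in acceleration])  # constant, in m/s^2
```

The recovered acceleration comes out at the 2 m/s² built into the synthetic trajectory; with real marker data one would typically low-pass filter the positions first, since differentiation amplifies measurement noise.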

What’s most fascinating to me is the incredible speed and flexibility with which all of this happens. We hardly ever give a thought to the seemingly simple actions we perform every minute (and it’s useful not to, otherwise we’d probably stand rooted to the spot). Our brain is able to take in such a vast amount of information – visually, auditory, somatosensory – filter it effectively and generate motor commands in the range of milliseconds. And we haven’t even found out a fraction of how all of it works. Or to use a famous quote [10]: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

[1] Batista, A. (2002). Inner space: Reference frames. Current Biology, 12(11), R380-R383.

[2] Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389-443.

[3] Connolly, J. D., Andersen, R. A., & Goodale, M. A. (2003). FMRI evidence for a ‘parietal reach region’ in the human brain. Experimental Brain Research, 153(2), 140-145.

[4] Beurze, S. M., Lange, F. P. de, Toni, I., & Medendorp, W. P. (2009). Spatial and Effector Processing in the Human Parietofrontal Network for Reaches and Saccades. Journal of Neurophysiology, 101(6), 3053–3062

[5] Heed, T., Beurze, S. M., Toni, I., Röder, B., & Medendorp, W. P. (2011). Functional rather than effector-specific organization of human posterior parietal cortex. The Journal of Neuroscience, 31(8), 3066-3076.

[6] Van Der Werf, J., Jensen, O., Fries, P., & Medendorp, W. P. (2010). Neuronal synchronization in human posterior parietal cortex during reach planning. Journal of Neuroscience, 30(4), 1402-1412.

[7] Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381.

[8] Bertucco, M., Cesari, P., & Latash, M. L. (2013). Fitts’ Law in early postural adjustments. Neuroscience, 231, 61-69.

[9] Georgopoulos, A. P., & Grillner, S. (1989). Visuomotor coordination in reaching and locomotion. Science, 245(4923), 1209–1210.

[10] Pugh, Emerson M., quoted in George E. Pugh (1977). The Biological Origin of Human Values.