Making mistakes and owning them: How I submitted corrections to published papers and (currently) live to tell the tale

 


by Dr. Lauren Robinson

It’s the nightmare scenario: you look back at an old bit of code and realize you’ve made a mistake and, to make matters worse, the paper has already been published. This year I lived that nightmare scenario. I had shared my code only to discover that a variable that should have been reverse scored (which boils down to multiplying the number by -1) wasn’t. It was a minor oversight I’d made as a first-year PhD student learning new statistics; I hadn’t caught the mistake until now and, worse still, the code had been used in two papers I wrote simultaneously. I considered changing my name and hiding, but as I had a postdoc and my mother claims to like me, I figured it was better to keep my current identity.

‘…the right decisions don’t come without risk….’

When I reached out to the senior author, we knew there was only one solution: we had to redo the statistics and submit corrections. As an early career researcher, I was panicked. What if the results were drastically different? Was a retraction (possibly two) in my future? Fear aside, a mistake had been made, we had to own it, and if we were going to believe in scientific integrity then we had to show ours. It’s been my experience that the most difficult decisions, the ones that I’m truly afraid to make – those are the decisions I know to be right. But the right decisions don’t come without risk, and I can’t pretend that I wasn’t, and don’t continue to be, worried that not everyone would see this as a minor mistake. Science is competitive and the feeling of having to be flawless, particularly at this phase of my career, is a weight. As a woman in science I already have to fight to be taken seriously, to be seen as competent, and I had committed a sin: I had made an honest mistake that had been published, twice. Before I could find out the effects of my mistake on my career, I had to find out its impact on my papers.

‘As a woman in science I already have to fight to be taken seriously, to be seen as competent…’

I somehow survived three painful hours while I waited to finish work at my postdoc and could get back to where I kept the study data. Upon sitting at my desk (liquid courage in hand) I redid the stats, anxious to see the results. Now look, I’m no slouch with numbers, I know what multiplying by -1 does to them, but panic overrode sense in that moment and I needed to see to believe. First paper: flipped the direction of effect on a non-significant variable that remained that way. Okay, fairly minor, just requires that the journal update the tables. Second paper: again, the only thing that changed was the direction of effect, though this variable had been and still was significant, which meant we had to adjust the numbers, a line in the abstract, and three sentences in the results. Not great, but as variables go it hadn’t even rated a mention in the discussion.
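(For the statistically curious, here’s a minimal sketch, with made-up data and hypothetical variable names, of why only the direction of effect changed and nothing else: reverse scoring is just a sign flip, so forgetting it mirrors the correlation but leaves the p-value untouched.)

```python
# Minimal sketch (invented data): reverse scoring here is multiplying by -1,
# so forgetting it flips the sign of an effect but not its significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
raw_score = rng.normal(size=200)                   # variable as coded in the data
outcome = -0.5 * raw_score + rng.normal(size=200)  # outcome tracks the *reversed* score

r_forgot, p_forgot = stats.pearsonr(raw_score, outcome)     # reverse scoring forgotten
r_fixed, p_fixed = stats.pearsonr(-1 * raw_score, outcome)  # reverse scoring applied

print(f"forgotten: r = {r_forgot:+.3f}, p = {p_forgot:.4f}")
print(f"fixed:     r = {r_fixed:+.3f}, p = {p_fixed:.4f}")  # same p, opposite sign
```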

Okay, okay, okay (deep breaths, bit more whisky), this could be so much worse, I told myself. I screwed up but hey, everyone makes mistakes, I was learning something new, I should’ve caught it earlier, but it was caught now. On to the next step: making the corrections, contacting coauthors, and letting the journals know. Time to really live by our ideals. But first! Another moment of panic while I wondered if I had made the same mistake in my two newest papers. Opening code, reading through, and…no, I hadn’t made the mistake again. Somewhere along the way I had clearly learned how to do these statistics correctly, I just hadn’t caught it while working on those first two papers, having copy-pasted the code across them. Good news: I am in fact capable of doing things correctly.

‘I had lived my nightmare and it felt, at least in this moment…completely survivable…’

Writing the email to my coauthors wasn’t something I was particularly looking forward to. “Oh hey fellow researchers that I respect and admire, I screwed up and am going to let the journals and the world know. PS, please don’t think less of me and hate me. Okay, thanks.” While that’s not what I wrote, that’s what it felt like. An admission of imperfection, shame, guilt, a desire to live under a rock. However, I’ve been blessed with caring and understanding collaborators, each of whom was extremely supportive. Next, I sent an email to the journals explaining the mistake and requesting that corrections be published. Each journal was understanding and helped us write and publish the corrections, and that was it, it was done. I had lived my nightmare and it felt, at least in this moment…completely survivable. I had imagined anxiety and panic and battling my own shame and guilt. This…this was a feeling of stillness that I was not prepared for.

Prior to contacting the journals and writing this blog, I asked myself how much this would hurt my career. Would a small mistake cost me my reputation, respect, and future in the science I’d already sacrificed so much for? Would writing this blog and openly speaking to the fact that I had made a mistake only further the potential damage to my career and respect? Would a single mistake, made at the beginning of my PhD and not since repeated, mean that others wouldn’t trust my science and statistics, or wouldn’t want to work with me? Would I trust my own skills, and more importantly, myself, again? There was so much uncertainty and so little information available on this experience, yet mistakes like this must happen more often than we think; they just go unspoken.

‘…genuine mistakes? We have to make those acceptable to acknowledge, correct, even retract, and speak about, to learn and move on from.’

This, this is the crux of a problem in science: there are unknown consequences to acknowledging and speaking openly about our mistakes and, by failing to do so, we only increase the chance that mistakes go uncorrected. Let’s hold those who commit purposeful scientific misconduct accountable, but genuine mistakes? We have to make those acceptable to acknowledge, correct, even retract, and speak about, to learn and move on from them. Those who don’t learn from their mistakes? Well, they may be doomed to face the consequences. As a note, if we’re going to move towards openness and transparency in science then we need to be particularly careful that those in underrepresented groups aren’t unfairly punished or scrutinized for admitting and speaking about mistakes, as these groups are already under a microscope and face unique and frustrating challenges. We cannot allow openness and transparency to be used as one more excuse for someone to tell us no, not if science is to diversify and progress.

‘What kind of person and scientist do I want to be?’

Of all the questions I asked myself, deciding to write this post came down to one: what kind of person and scientist do I want to be? As an animal welfare scientist, I have long believed in being transparent and open in science; I realized that’s who I am as a person as well. Living by my ideals meant not only correcting my mistake but also talking openly and frankly about it. These choices, challenging as they may have been, are the right ones. To err is human, and luckily for me I have divine friends, mentors, and colleagues who forgive me my mistakes and sins. I believe that we should all be so lucky and that mistakes should be openly and transparently discussed. For now, I live to science another day and look forward to the challenges, mistakes (which I intend to catch prior to publication), and learning that come with it.

For those interested in working with me (imperfections and all) when my current postdoc ends this January, feel free to get in touch via ResearchGate (https://www.researchgate.net/profile/Lauren_Robinson7) or Twitter (https://twitter.com/Laurenmrobin).

Links to published corrections:

http://psycnet.apa.org/buy/2016-39633-001

https://www.sciencedirect.com/science/article/pii/S016815911830193X

Read about Lauren’s fascinating research (with lots of monkey photos!) into animal welfare and animal behaviour here.


Shedding Light on the Dark Universe

By Dr Alexandra Amon, Stanford University

 


It would be easy to imagine that the Dark Universe was a malevolent force in the latest Star Wars movie, its leaders the enemy of the Federation, or that dark energy had some kind of demonic origin. However sinister it may sound, the dark side is entirely innocent and, in fact, it comprises 95% of our Universe.

To put this in perspective: Earth is an almost infinitesimal speck in the cosmos. It orbits the Sun, one of billions of stars swirling around and bound together to form our galaxy, the Milky Way. Moreover, there are billions of galaxies in our Universe, each boasting its own hoard of stars and planets! Observational cosmology tells us that these structures, which are made of particles whose physics we understand, only constitute about 5% of everything in the Universe. The rest is dark matter and dark energy.

Dark matter is a special type of matter that neither emits nor interacts with light, but plays an important role in the story of our Universe. More than three quarters of the mass in our Milky Way galaxy (and other galaxies) is the invisible dark matter, rather than the stars and the planets. Therefore, the dark matter creates a large gravitational effect and acts as the glue holding our galaxies together.

Dark energy is even more mysterious. It is a form of energy that drives the accelerated expansion of our Universe. That is, our observations reveal that while stars stay tightly bound in galaxies, as cosmic time marches on the galaxies themselves are moving further away from each other, and our best theory holds dark energy responsible. While we can’t see these entities, we infer that they exist from their effect on things we can see.
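For readers who like to see the machinery: the standard textbook way of expressing “dark energy drives accelerated expansion” (my gloss, not spelled out in this post) is the Friedmann acceleration equation for the cosmic scale factor a(t):

```latex
% Friedmann acceleration equation (standard textbook form, added as a gloss):
% \rho is the total energy density, p the pressure, a(t) the scale factor.
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right)
% Dark energy has equation of state p \approx -\rho c^2, which makes the
% bracketed term negative and hence \ddot{a} > 0: the expansion accelerates.
```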

It may sound like cosmologists have the Universe sussed, but there are cracks in our Standard Cosmological Model. While we understand the effect of dark matter in the universe, particle physicists are yet to detect its particle in their giant dark matter net experiments. On the other hand, the best theory for dark energy, as predicted by quantum physics, is starkly wrong. To put it politely, there is much work to be done! It is possible that we are missing something in our theory of gravity - Einstein’s General Relativity - and may need to invoke some new physics in order to solve the dark energy phenomenon. That is, just as Newtonian gravity, which satisfies experiments on Earth, was revolutionised by Einstein’s theory in order to explain measurements in the solar system, perhaps we need another upgrade to explain even larger-scale observations. We focus on observing how dark matter changes over cosmic time, which sheds light on how dark energy evolves and allows us to test gravity on cosmological scales.

Cosmology has a vast toolbox of independent methods to understand the nature of the Dark Universe and to test the laws of gravity. Techniques include measurements of the brightness of supernovae - the explosive ends of binary pairs of unequal mass stars; exquisite observations of the Cosmic Microwave Background - temperature fluctuations across the sky from the light emitted in the very early universe, just 380,000 years after the Big Bang; charting the distant Universe by obtaining precise velocities of and distances to galaxies; and meticulously measuring the shapes of distant galaxies. The latter is called weak gravitational lensing.

Weak gravitational lensing

As we observe a distant galaxy, we collect its light in our telescopes after it has journeyed across the Universe. According to General Relativity, dark matter, like any massive structure, warps the very fabric of the Universe, space-time, as depicted by the grid in the image below. The path that the light travels along, indicated by an arrow, also gets bent with the space-time and, as such, the image of the galaxy that we capture appears distorted. The presence of dark matter or massive structures along the line of sight has the effect of lensing the galaxy - making it appear more elliptical in our images and inducing a coherent alignment among nearby galaxies.


A depiction of weak gravitational lensing. As light from distant galaxies travels towards us, it passes by massive structures of dark matter, shown here as grey spheres. Dark matter’s gravity curves the local space-time as well as the path that the light follows. This curvature distorts the images of the background galaxies that we then observe, with the amount of distortion depending on the distribution of dark matter along the light path. By measuring this distortion, we can infer the size and location of invisible massive structures (dotted circles). Image credit: APS/Alan Stonebraker; galaxy images from STScI/AURA, NASA, ESA, and the Hubble Heritage Team.

The stronger the average galaxy ellipticity is in a patch of sky, the more dark matter there is in that region of the Universe, assuming galaxies are, in reality, randomly oriented. Therefore, the induced ellipticity of the galaxies is a faint signature of dark matter inscribed across the Universe. If we can measure this alignment to extreme precision, and combine it with the equations of General Relativity, we can infer the location and properties of the matter - both visible and dark - between us and the galaxies. By mapping the evolution of the dark-matter structures over cosmic history and documenting the accelerating expansion of space and time, we learn about dark energy.
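A toy numerical sketch of this averaging trick (all numbers invented; real analyses are far more careful) shows why the signal is recoverable at all, and why so many galaxies are needed:

```python
# Toy illustration: intrinsic galaxy shapes are random, so the mean
# ellipticity over a patch of sky approaches the coherent ~1% lensing shear
# as the sample grows. The "true" shear here is an assumed input.
import numpy as np

rng = np.random.default_rng(7)
n_gal = 1_000_000
true_shear = 0.01 + 0.005j                 # assumed lensing distortion in this patch

# Spin-2 ellipticities: random orientation angles, typical amplitude |e| ~ 0.3
phi = rng.uniform(0, np.pi, size=n_gal)
intrinsic = 0.3 * np.exp(2j * phi)
observed = intrinsic + true_shear           # weak-lensing (linear) approximation

print("estimated shear:", observed.mean())
print("shape noise on the mean:", 0.3 / np.sqrt(n_gal))  # why millions of galaxies
```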

I work as part of a European team, called the Kilo-Degree Survey, imaging a 5% chunk of the sky, a few hundred times the size of the full moon. We have measured the positions and shapes of tens of millions of galaxies, as they were when the universe was (at most) half its current age. While this sounds wildly impressive, we are only now seeing the tip of the iceberg of what is required to truly understand our Universe. That is because, while gravitational lensing is a powerful cosmological technique, it is extremely technologically challenging. The typical distortion induced by dark matter as a galaxy’s light travels through the universe is only enough to alter the shape of that galaxy by less than 1%. As the lensing effect is weak, in order to detect it we need to analyse the images of millions of galaxies. This entails a data challenge, necessitating rapid processing of petabytes of data.

A scientific hurdle arises because the weak lensing distortions are significantly smaller than the distortions that arise in the last moments of the light’s journey. Due to the effect of the Earth’s atmosphere and our imperfect telescopes and detectors, instead of measuring the shapes of galaxies in images that are beautifully resolved like the Hubble Space Telescope image below, in large lensing surveys galaxies can appear as fuzzy blobs that only span a few pixels. Just to up the ante, these terrestrial effects change between and throughout the night’s observations as the wind, temperature and weather vary, even in the exquisite conditions of the mountaintops of the Atacama Desert, Chile, where lensing data is often collected. In order to isolate the dark matter signature, the nuisance distortions are modelled to extremely high precision and then inverted, allowing an accurate recovery of the cosmological signal. Further complications arise from the physics of the galaxies themselves: they have an intrinsic ellipticity and dynamical processes that we do not perfectly understand, but must also factor into our calculations.
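As a deliberately crude illustration of “model the nuisance, then invert it” (my own first-order sketch, not the survey’s actual pipeline): stars are point sources, so their measured ellipticities trace the instrumental and atmospheric blur; one can fit a smooth field to them and subtract it at each galaxy’s position.

```python
# Crude first-order PSF correction sketch (not a real lensing pipeline):
# fit a plane to the stellar ellipticity field and subtract it from galaxies.
import numpy as np

def psf_corrected(e_gal, xy_gal, e_star, xy_star):
    """Subtract a plane-fit PSF ellipticity field from galaxy ellipticities.

    e_* arrays hold the two ellipticity components as columns, shape (n, 2).
    """
    design = np.column_stack([np.ones(len(xy_star)), xy_star])   # [1, x, y]
    coeffs, *_ = np.linalg.lstsq(design, e_star, rcond=None)     # fit per component
    design_gal = np.column_stack([np.ones(len(xy_gal)), xy_gal])
    return e_gal - design_gal @ coeffs                           # invert the model

# Purely illustrative usage with fake positions and a smooth fake PSF pattern:
rng = np.random.default_rng(0)
xy_star, xy_gal = rng.uniform(0, 1, (500, 2)), rng.uniform(0, 1, (10_000, 2))
e_star = 0.02 + 0.01 * xy_star
e_gal = rng.normal(0, 0.3, (10_000, 2)) + 0.02 + 0.01 * xy_gal
cleaned = psf_corrected(e_gal, xy_gal, e_star, xy_star)
print(cleaned.mean(axis=0))   # PSF pattern removed, shape noise remains
```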


Hubble Space Telescope image of a cluster of galaxies called Abell 1689. The larger yellow galaxies are members of this massive galaxy cluster, bound within a dense clump of dark matter that gravitationally distorts the space and time around the cluster. The small blue objects are galaxies behind the cluster, whose light paths have become bent as they journey towards Earth, passing by the cluster. Gravitational lensing produces the giant curved blue arcs that you can see surrounding Abell 1689 - the distorted images of the distant galaxies. The five blue dots with rainbow crosses are just stars in our own Milky Way Galaxy. Image credit: NASA/ESA/STScI.

 

The Kilo-Degree Survey and similar American and Japanese experiments act as stepping stones and a training ground for an epic coming decade for observational cosmologists. We are at the dawn of several major international projects that will survey the sky to greater depths and resolution than ever before. The Large Synoptic Survey Telescope will image the entire Southern sky every few nights, building the deepest and largest map of our cosmos; the Euclid satellite will survey the sky from space, eradicating the worry of Earth’s atmosphere; and the Dark Energy Spectroscopic Instrument will deliver extremely precise locations and velocities of over 30 million galaxies. I look forward to helping these projects map the distant Universe, trace the evolution of dark matter and dark energy from 10 billion years ago to the present day and, in doing so, bring us closer to fathoming the other 95% of our Universe: the dark side.

It is a humbling field that asks what the Universe is made of and how its structure evolved for the formation of galaxies and our existence. In our insignificant snippet in the grand story of the Universe, it is remarkable that technology allows us to observe objects at distances beyond our comprehension and that our diverse range of measurements even vaguely fit a consistent model.

How I Changed from Science to Technology

by Azahara Fernández Guizán


I was never a kid that was sure about what professional career I wanted when I grew up. And this has been a good thing for me, because it has let me experience many different fields, and led me to where I am today.

I was born in the north of Spain, in a mining zone of Asturias. My father was a coal miner and my mother a housewife. I attended a local school and a local high school. My grandmother says I was an unusual kid, preferring to be bought a book rather than a box of sweets. I also started learning English when I was 6 years old, and spent my free time reading historical novels and biographies.

I enjoyed visiting museums and monuments, and I used to search for information in my town’s library before going on an excursion. I loved to write stories and tales, and had always obtained high marks in school, which led my teachers to suggest that I study medicine. But I always changed my mind –  from architecture, to journalism or even dentistry, depending on the book I was reading or the museum I’d just visited.

At that age, only one thing was clear: I wanted to be an independent and strong woman like the ones that inspired me. I hadn’t seen many role models during my primary education, but one teacher told us about Marie Curie. At the library, I discovered Rita Levi-Montalcini and the Brontë sisters.

 

SECONDARY STUDIES

During the last year of high school I was a mess, and the pressure was high because I had to make a decision. All I had were doubts.

In Spain at that time, after finishing the last year of secondary education, students who wanted to continue to a degree had to take a general exam, the PAU. You could choose the subjects you wanted to be tested on and, after the exams took place, you were given a mark calculated from your secondary school marks and your PAU results. According to this mark, you could register for certain degrees.

At that point, I decided to take more exams than necessary on the PAU in order to have more options in different types of degree, for example, science, engineering, or languages… But the worst moment of my student life came, and I had to decide.

I had two options in my mind: a Software Engineering degree and a Biology degree. I must admit that at that time I only knew the engineering stereotypes, and I never liked video games or anything related to hardware, so I decided that a Biology degree would suit me better.

BIOLOGY DEGREE AND NEUROSCIENCE MASTERS

During my degree, I decided that plants and animals were not my passion, but I loved Microbiology, Genetics, Immunology and Neuroscience. I discovered more female role models, researchers who really inspired me, whose lives were incredible to me. I worked hard during my degree and travelled a lot during the summers, thanks to some scholarships that I was awarded (I spent one month in Lowestoft, another in Dublin, and another one in Toronto), and started learning German.


Azahara in the lab

During the second year of my biology degree, I decided that I would become a scientist, and started to look for a professor who would let me gain some experience in their laboratory.

During my penultimate year, I started working in a Neuroscience laboratory, studying the 3D pattern of eye degeneration in C3H/He rd/rd mice. After finishing my degree, I decided to enrol in a Masters in Neuroscience and Behavioural Biology in Seville. During this masters, I worked in another Neuroscience laboratory doing electrophysiological studies, trying to understand how information is transformed in the cerebellar-hippocampal circuit and how this mechanism could allow us to learn and memorise.

This was a period of my life when I worked a lot of hours; the experiments were very intense, and I had the opportunity to meet important scientists from all over the world. I also had a peer from physics who analysed all our data and developed specific programmes in Matlab, which impressed me profoundly.

IMMUNOLOGY PHD

After this period, I continued working in Science, but I decided to start my PhD on Immunology, back in Asturias.

I worked in a laboratory in which, thanks to my friends in the lab, every day was special. We worked hard studying different types of tumours and testing different molecules, but also had the time to share confidences and laughs. After three years, I earned a PhD in Immunology and, as it was the normal thing to do, I started looking for a post-doc position.

Rather than feeling happy or enthusiastic about the future, I found myself upset and demotivated. I really didn’t want to carry on being a scientist. A huge sensation of failure invaded me, but as J.K. Rowling said, “It is impossible to live without failing at something, unless you live so cautiously that you might as well not have lived at all. In which case, you’ve failed by default”.

I want to specify that I don’t consider my PhD a waste of time – it has given me, apart from scientific publications, many important aptitudes and abilities, such as teamwork, analysis, problem solving, leadership, organisation skills, effective work habits, and better written and oral communication.

BECOMING A SOFTWARE DEVELOPER

As you might imagine, this was a hard moment of my life. I was unemployed, and doubtful about my professional career – just as I had been after high school.

Thanks to my husband, who supported me while I changed careers, I decided to give software development a try. As I didn’t have the necessary money or time to start a new degree, I signed up for a professional course in applications software development. The first days were difficult, since all the other students were young and I didn’t feel at ease.

But as I learned languages such as HTML, CSS, JavaScript and Java, I also took part, with good results, in some software competitions, which allowed me to gain confidence.


In 2015 I started working as a software developer with .Net MVC, a framework that I hadn’t studied during my course, but I had the necessary basics to learn it quickly and become part of a team. For me, one of the most marvellous things about software development is that it is built on teamwork.

I also discovered that there are a lot of people working in this field who love to exchange knowledge, and I regularly go to events and meetups. I have also recently started giving talks and workshops, some of them technical, with the aim of promoting the presence of women in technology.


Women and girls need to be encouraged to discover what software development really is. The software industry needs them. Software can be better, but only if it is developed by diverse teams with different opinions, backgrounds, and knowledge.

The mysterious lives of chimaera sharks & the effects of deep sea fishing

MY DEEP SEA MSC RESEARCH AND WHY DEEP SEA FISHERIES OVERSIGHT IS NEEDED

by Melissa C. Marquez.


A rhinochimaera

“You’re not what I expected when you said you were a shark scientist.” Gee, thanks. I can’t tell you how many times I’ve heard that I don’t live up to someone’s preconceived mental image of what I should look like as a “shark scientist.” It doesn’t change the fact that I’m a marine biologist though, and that I am very passionate about my field.

I recently wrapped up my Masters in Marine Biology, focusing on “Habitat use throughout a Chondrichthyan’s life.” Chondrichthyans (class Chondrichthyes) are sharks, skates, rays, and chimaeras. Today, there are more than 500 species of sharks and about 500 species of rays known, with many more being discovered every year.

Over the last few decades, much effort has been devoted towards evaluating and reducing bycatch (the part of a fishery’s catch that is made up of non-target species) in marine fisheries. There has been a particular focus on quantifying the risk to Chondrichthyans, primarily because of their high vulnerability to overfishing. My study focused on five species of deep sea chimaeras (not the mythical Greek ones, but the just-as-mysterious real animal) found in New Zealand waters:

• Callorhynchus milii (elephant fish),

• Hydrolagus novaezealandiae (dark ghost shark),

• Hydrolagus bemisi (pale ghost shark),

• Harriotta raleighana (Pacific longnose chimaera),

• Rhinochimaera pacifica (Pacific spookfish).

 

These species were chosen because they cover a large range of depths (7 m – 1306 m) and had been noted as abundant despite extensive fisheries in their presumed habitats; they were also of special interest to the Deepwater Group (which funded the scholarship for my MSc).

Although there is no set definition of what constitutes the “deep sea,” it is conventionally regarded as >200 m depth and beyond the continental shelf break (Thistle, 2003); in this zone, a number of species are considered to have low productivity, making them highly vulnerable targets of commercial fishing (FAO, 2009). Deep sea fisheries have become increasingly economically important over the past few years as numerous commercial fisheries have become overexploited (Koslow et al., 2000; Clark et al., 2007; Pitcher et al., 2010). Major commercial fisheries exist for deep sea species such as orange roughy (Hoplostethus atlanticus), oreos (several species of the family Oreosomatidae), cardinalfish, grenadiers (such as Coryphaenoides rupestris) and alfonsino (Beryx splendens). Many of these deep sea fisheries have not been sustainable (Clark, 2009; Pitcher et al., 2010; Norse et al., 2012), with most of the stocks having undergone substantial declines.


Deep sea fishing can also cause environmental harm (Koslow et al., 2001; Hall-Spencer et al., 2002; Waller et al., 2007; Althaus et al., 2009; Clark and Rowden, 2009). Deep sea fisheries use various types of gear that can lead to lasting scars: bottom otter trawls, bottom longlines, deep midwater trawls, sink/anchor gillnets, pots and traps, and more. While none of this gear is used solely in deep sea fisheries, all of it catches animals indiscriminately and can also damage important habitats (such as centuries-old deep sea coral). In fact, orange roughy trawling scars on soft-sediment areas were still visible five years after all fishing stopped in certain areas off New Zealand (Clark et al., 2010a).

Risk assessment involves evaluating the distributional overlap of the fish with the fisheries, where fish distribution is influenced by habitat use. For sharks, that risk assessment includes a lot of variables: the number of shark species (approximately 112 species of sharks have been recorded from New Zealand waters) with many different lifestyles, differences in the market value of different body parts (like meat, oil, fins, cartilage), which body parts are utilised for each species (for example, some sharks have both their fins and meat utilised but not their oil; some just have their fins taken), and how to identify sharks once on the market (Fisheries Agency of Japan, 1999; Vannuccini, 1999; Yeung et al., 2000; Froese and Pauly, 2002; Clarke and Mosqueira, 2002).

In order to carry out a risk assessment, you have to know your study animals pretty well. It should come as no surprise that little is known about the different life history stages of chimaeras, so I did the next best thing and looked at Chondrichthyans in general. My literature review synthesized over 300 published observations of habitat use for these different life history stages; from there, I used New Zealand research vessel catch data (provided by NIWA, the National Institute of Water and Atmospheric Research) and separated it by species, sex, size, and maturity (when available). I then dove into the deep end of a computer language called “R,” which is used for statistical computing and graphics. Using R, I searched the catch compositions for the signature of the life history stage I was looking for (for example, smaller, immature fish of both sexes and little to no adults when searching for a nursery ground).
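For a flavour of what that filtering looks like in practice, here is a rough Python/pandas analogue of the kind of R filtering described above; the column names, thresholds, and data are hypothetical stand-ins, not the actual NIWA records.

```python
# Illustrative analogue of the life-history filtering (invented toy data):
# flag tows dominated by small, immature fish - a nursery-ground signature.
import pandas as pd

catch = pd.DataFrame({
    "tow_id":    [1, 1, 1, 2, 2, 2],
    "species":   ["elephant fish"] * 6,
    "sex":       ["F", "M", "F", "F", "M", "M"],
    "length_cm": [18, 21, 19, 62, 70, 66],
    "maturity":  ["immature"] * 3 + ["mature"] * 3,
})

def nursery_candidates(df, species, size_cutoff_cm):
    """Return tows where almost all fish are small and immature."""
    sub = df[df["species"] == species]
    by_tow = sub.groupby("tow_id").agg(
        prop_immature=("maturity", lambda m: (m == "immature").mean()),
        prop_small=("length_cm", lambda l: (l < size_cutoff_cm).mean()),
        n_fish=("length_cm", "size"),
    )
    return by_tow[(by_tow["prop_immature"] > 0.9) & (by_tow["prop_small"] > 0.9)]

print(nursery_candidates(catch, "elephant fish", 30))   # tow 1 qualifies
```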

The way we went about this thesis differs in that we first developed hypotheses about the characteristics of different habitat use, rather than “data mining” for patterns, and it therefore has a structured and scientific approach to determining shark habitats. Our results showed that some life history stages and habitats could be identified for certain species, whereas others could not.

Pupping ground criteria were met for Callorhynchus milii (elephant fish), Hydrolagus novaezealandiae (dark ghost shark), and Hydrolagus bemisi (pale ghost shark); nursery ground criteria were met for Callorhynchus milii (elephant fish); mating ground criteria were met for Callorhynchus milii (elephant fish), Hydrolagus novaezealandiae (dark ghost shark), Hydrolagus bemisi (pale ghost shark), and Harriotta raleighana (Pacific longnose chimaera); lek-like mating criteria were met for Hydrolagus novaezealandiae (dark ghost shark). Note: lek-like mating is where males perform feats of physical endurance to impress females, and the female chooses her mate.


Ghost shark

These complex—and barely understood—deep sea ecosystems can be overwhelmed by the fishing technologies that rip through them. Like sharks, many deep sea animals live a K-selected lifestyle, meaning that they take a long time to reach sexual maturity and, once they are sexually active, they give birth to few young after a long gestation period. This lifestyle makes these creatures especially vulnerable, since they cannot repopulate quickly if overfished.

In order to manage the environmental impact of deep sea fisheries, scientists, policymakers and stakeholders have to identify ways to help re-establish delicate biological functions after the impacts made by deep sea fisheries. Recovery, defined as the return to the conditions that existed before damage by fishing activities, is not a concept unique to deep sea communities, and its pace depends on site-specific factors that are often poorly understood and difficult to estimate. Little is known about the biological histories and structures of the deep sea, and therefore rates of recovery may be much slower than in shallow environments.

Management of the seas, especially the deep sea, lags behind that of land and of the continental shelf, but there are a number of protection measures already being put in place. These actions include, but are not limited to,

• regulating fishing methods and gear types,

• specifying the depth at which one can fish,

• limiting the volume of bycatch and the volume of catch,

• move-on rules, and

• closing areas of particular importance.

Modifications to trawl gear and how it is used have made these usually heavy tools less destructive (Mounsey and Prado, 1997; Valdemarsen et al., 2007; Rose et al., 2010; Skaar and Vold, 2010). Fishery closures are becoming more common, with large parts of EEZs (exclusive economic zones) being closed to bottom trawling (e.g. New Zealand, North Atlantic, Gulf of Alaska, Bering Sea, USA waters, Azores) (Hourigan, 2009; Morato et al., 2010); the effectiveness of these closures is yet to be established.

And while this approach to fisheries management, dubbed the “ecosystem approach,” is widely advocated, it does not help every deep sea animal or structure. Those that cannot move (sessile organisms) are still in danger of being destroyed. As such, ecosystem-based marine spatial planning and management may be the most effective fisheries management strategy for protecting vulnerable deep sea critters (Clark and Dunn, 2012; Schlacher et al., 2014). This strategy can include marine protected areas (MPAs) to restrict fishing in specific locations, and other management tools, such as zoning or spatial user rights, which affect the distribution of fishing effort in a more targeted manner. Using spatial management measures effectively requires new models and data, and such measures will always have their limitations given how little data on the deep sea exists and how hard this particular environment is to get to.

So what does it all mean in regards to my thesis? Well, for one thing, there is a growing acknowledgement that these unique ecosystems require special protection. And as any scientist knows, there are still many unanswered questions about just how important this environment is (especially certain structures).


A juvenile Elephantfish, Callorhinchus milii. Source: Rudie H. Kuiter / Aquatic Photographics

On a more shark-related note, not all life-history stage habitats were found for my chimaeras, and this may be because they lie outside the coverage of the data set (and likely also of commercial fisheries), or because they do not actually exist for some Chondrichthyans. That cliffhanger is research for another day, I suppose…

This project could not have been done without the endless support of my family and friends, who have supported me since day one of my marine biology adventures. They’re the ones who stick up for me whenever I hear, “You’re not what I expected when you said you were a shark scientist.” I am not really sure what the stereotype of a shark scientist is supposed to be; thankfully, I grew up being taught to accept and judge people by who they are and what they do. I see this as a challenge, as it sets the stage for me to show that the mind of a shark scientist can come in all kinds of packages.

As a final note, I’d like to thank the New Zealand Seafood Scholarship, the Deepwater Group, as well as researchers from National Institute of Water and Atmospheric Research (NIWA) who provided funding, insight and expertise that greatly assisted the research. The challenge of venturing into complex theories is that not all agree with all of the interpretations/conclusions of any research, but it is a basis for having a discussion, which can only be good for all.

 

 

References:

  • Thistle, D. 2003. The deep-sea floor: an overview. In Ecosystems of the Deep Oceans. Ecosystems of the World, 28.
  • FAO. 2009. Management of Deep-Sea Fisheries in the High Seas. FAO, Rome, Italy.
  • Koslow, J. A., Boehlert, G. W., Gordon, J. D. M., Haedrich, R. L., Lorance, P., and Parin, N. 2000. Continental slope and deep-sea fisheries: implications for a fragile ecosystem. ICES Journal of Marine Science, 57: 548–557.
  • Clark, M. R., and Koslow, J. A. 2007. Impacts of fisheries on seamounts. In Seamounts: Ecology, Fisheries and Conservation, pp. 413–441. Ed. by T. J. Pitcher, T. Morato, P. J. B. Hart, M. R. Clark, N. Haggen, and R. Santos. Blackwell, Oxford.
  • Pitcher, T. J., Clark, M. R., Morato, T., and Watson, R. 2010. Seamount fisheries: do they have a future? Oceanography, 23: 134–144.
  • Clark, M. R. 2009. Deep-sea seamount fisheries: a review of global status and future prospects. Latin American Journal of Aquatic Research, 37: 501–512.
  • Norse, E. A., Brooke, S., Cheung, W. W. L., Clark, M. R., Ekeland, L., Froese, R., Gjerde, K. M., et al. 2012. Sustainability of deep-sea fisheries. Marine Policy, 36: 307–320.
  • Koslow, J. A., Gowlett-Holmes, K., Lowry, J. K., O’Hara, T., Poore, G. C. B., and Williams, A. 2001. Seamount benthic macrofauna off southern Tasmania: community structure and impacts of trawling. Marine Ecology Progress Series, 213: 111–125.
  • Hall-Spencer, J., Allain, V., and Fossa, J. H. 2002. Trawling damage to Northeast Atlantic ancient coral reefs. Proceedings of the Royal Society of London Series B: Biological Sciences, 269: 507–511.
  • Waller, R., Watling, L., Auster, P., and Shank, T. 2007. Anthropogenic impacts on the corner rise seamounts, north-west Atlantic Ocean. Journal of the Marine Biological Association of the United Kingdom, 87: 1075–1076.
  • Althaus, F., Williams, A., Schlacher, T. A., Kloser, R. K., Green, M. A., Barker, B. A., Bax, N. J., et al. 2009. Impacts of bottom trawling on deep-coral ecosystems of seamounts are long-lasting. Marine Ecology Progress Series, 397: 279–294.
  • Clark, M. R., and Rowden, A. A. 2009. Effect of deep water trawling on the macro-invertebrate assemblages of seamounts on the Chatham Rise, New Zealand. Deep Sea Research I, 56: 1540–1554.
  • Clark, M. R., Bowden, D. A., Baird, S. J., and Stewart, R. 2010a. Effects of fishing on the benthic biodiversity of seamounts of the “Graveyard” complex, northern Chatham Rise. New Zealand Aquatic Environment and Biodiversity Report, 46: 1–40.
  • Fisheries Agency of Japan. 1999. Characterization of morphology of shark fin products: a guide of the identification of shark fin caught by tuna longline fishery. Global Guardian Trust, Tokyo.
  • Vannuccini, S. 1999. Shark utilization, marketing and trade. Fisheries Technical Paper 389. Food and Agriculture Organization, Rome.
  • Yeung, W. S., Lam, C. C., and Zhao, P. Y. 2000. The complete book of dried seafood and foodstuffs. Wan Li Book Company Limited, Hong Kong (in Chinese).
  • Froese, R., and Pauly, D. (Eds.). 2002. FishBase database. FishBase, Kiel, Germany. Available from http://www.fishbase.org (accessed April 2016).
  • Clarke, S., and Mosqueira, I. 2002. A preliminary assessment of European participation in the shark fin trade. Pages 65–72 in M. Vacchi, G. La Mesa, F. Serena, and B. Séret, editors. Proceedings of the 4th European elasmobranch association meeting. Société Française d’Ichtyologie, Paris.
  • Mounsey, R. P., and Prado, J. 1997. Eco-friendly demersal fish trawling systems. Fishery Technology, 34: 1–6.
  • Valdemarsen, J. W., Jorgensen, T., and Engas, A. 2007. Options to mitigate bottom habitat impact of dragged gears. FAO Fisheries Technical Paper, 29.
  • Rose, C. S., Gauvin, J. R., and Hammond, C. F. 2010. Effective herding of flatfish by cables with minimal seafloor contact. Fishery Bulletin, 108: 136–144.
  • Skaar, K. L., and Vold, A. 2010. New trawl gear with reduced bottom contact. Marine Research News, 2: 1–2.
  • Hourigan, T. F. 2009. Managing fishery impacts on deep-water coral ecosystems of the USA: emerging best practices. Marine Ecology Progress Series, 397: 333–340.
  • Morato, T., Pitcher, T. J., Clark, M. R., Menezes, G., Tempera, F., Porteiro, F., Giacomello, E., et al. 2010. Can we protect seamounts for research? A call for conservation. Oceanography, 23: 190–199.
  • Clark, M. R., and Dunn, M. R. 2012. Spatial management of deep-sea seamount fisheries: balancing sustainable exploitation and habitat conservation. Environmental Conservation, 39: 204–214.
  • Schlacher, T. A., Baco, A. R., Rowden, A. A., O’Hara, T. D., Clark, M. R., Kelley, C., and Dower, J. F. 2014. Seamount benthos in a cobalt-rich crust region of the central Pacific: Conservation challenges for future seabed mining. Diversity and Distributions, 20: 491–502.

Time to think about visual neuroscience

by Poppy Sharp, PhD candidate at the Center for Mind/Brain Sciences, University of Trento.

All is not as it seems

We all delight in discovering that what we see isn’t always the truth. Think optical illusions: as a kid I loved finding the hidden images in Magic Eye stereogram pictures. Maybe you remember a surprising moment when you realised you can’t always trust your eyes. Here’s a quick example. In the image below, cover your left eye and stare at the cross, then slowly move closer towards the screen. At some point, instead of seeing what’s really there, you’ll see a continuous black line. This happens when the WAB logo falls on a small patch of your retina where the nerve fibres leave the eye in a bundle; consequently, this patch has no light receptors – a blind spot. When the logo is in your blind spot, your visual system fills in the gap using the available information. Since there are lines on either side, the assumption is made that the line continues through the blind spot.

Illusions reveal that our perception of the world results from the brain building our visual experiences, using best guesses as to what’s really out there. Most of the time you don’t notice, because the visual system has been shaped over years of evolution, then honed by your lifetime of perceptual experience, and is pretty good at what it does.

WAB vision

For vision scientists, illusions can provide clues about the way the visual system builds our experiences. We refer to our visual experience of something as a ‘percept’, and use the term ‘stimulus’ for the thing which prompted that percept. The stimulus could be something as simple as a flash of light, or more complex like a human face. Vision science is all about carefully designing experiments so we can tease apart the relationship between the physical stimulus out in the world and our percept of it. In this way, we learn about the ongoing processes in the brain which allow us to do everything from recognising objects and people, to judging the trajectory of a moving ball so we can catch it.

We can get insight into what people perceived by measuring their behavioural responses. Take a simple experiment: we show people an arrow to indicate whether to pay attention to the left or the right side of the screen; then either one or two flashes of light appear quickly on one side, and they have to press a button to indicate how many flashes they saw. There are several behavioural measures we could record here. Did the cue help them be more accurate at telling the difference between one and two flashes? Did the cue allow them to respond more quickly? Were they more confident in their response? These are all behavioural measures. In addition, we can also look at another type of measure: their brain activity. Recording brain activity allows unique insights into how our experiences of the world are put together, and investigation of exciting new questions about the mind and brain.
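As a concrete (toy) illustration, computing those behavioural measures from trial data is straightforward; everything below, from column names to numbers, is invented for the example.

```python
# Toy sketch: summarising accuracy, response time, and confidence by cue
# condition. All values are invented, not real experimental data.
import pandas as pd

trials = pd.DataFrame({
    "cue_valid":  [True, True, True, False, False, False],
    "correct":    [True, True, False, True, False, False],
    "rt_ms":      [405, 433, 512, 498, 541, 560],
    "confidence": [4, 5, 2, 3, 2, 1],          # e.g. a 1-5 rating
})

print(trials.groupby("cue_valid").agg(
    accuracy=("correct", "mean"),
    median_rt=("rt_ms", "median"),
    mean_confidence=("confidence", "mean"),
))
```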

Rhythms of the brain

Your brain is a complex network of cells using electrochemical signals to communicate with one another. We can take a peek at your brain waves by measuring the magnetic fields associated with the electrical activity of your brain. These magnetic fields are very small, so to record them we need a machine called an MEG scanner (magnetoencephalography) which has many extremely sensitive sensors called SQUIDs (superconducting quantum interference devices). The scanner somewhat resembles a dryer for ladies getting their blue rinse done, but differs in that it’s filled with liquid helium and costs about three million euros.

A single cell firing off an electrical signal would have too small a magnetic field to be detected, but since cells tend to fire together as groups, we can measure these patterns of activity in the MEG signal. Then we look for differences in the patterns of activity under different experimental conditions, in order to reveal what’s going on in the brain during different cognitive processes. For example, in our simple experiment from before with a cue and flashes of light, we would likely find differences in brain activity when these flashes occur at an expected location as compared to an unexpected one.

One particularly fascinating way we can characterise patterns of brain activity is in terms of the rhythms of the brain. Brain activity is an ongoing symphony of multiple groups of cells firing in concert. Some groups fire together more often (i.e. at high frequency), whereas others may also be firing together in a synchronised way, but firing less often (low frequency). These different patterns of brain waves, generated by cells forming different groups and firing at various frequencies, are vital for many important processes, including visual perception.
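For the curious, here is a small sketch of how “power in different frequency bands” is typically quantified from a recorded signal; the band boundaries are conventional choices and the signal is simulated, not real MEG data.

```python
# Sketch: band power from a simulated signal via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 1000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz rhythm + noise

freqs, psd = welch(sig, fs=fs, nperseg=2048)  # power spectral density

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    sel = (freqs >= lo) & (freqs < hi)
    print(name, np.trapz(psd[sel], freqs[sel]))  # integrated band power
```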

What I’m working on

For as many hours of the day as your eyes are open, a flood of visual information is continuously streaming into your brain. I’m interested in how the visual system makes sense of all that information, and prioritises some things over others. Like many researchers, we show simple stimuli in a controlled setting, in order to ask questions about fundamental low-level visual processes. We then hope that our insights generalise to more natural processing in the busy and changeable visual environment of the ‘real world’. My focus is on temporal processing. Temporal processing can refer to a lot of things, but as far as my projects go, we mean how you deal with stimuli occurring very close together in time (tens of milliseconds apart). I’m investigating how this is influenced by expectations, so in my experiments we manipulate expectations about where in space stimuli will be, and also about when they will appear. This is achieved using simple visual cues to direct your attention to, for example, a certain area of the screen.

When stimuli rapidly follow one another in time, sometimes it’s important to parse them into separate percepts, whereas other times it’s more appropriate to integrate them together. There’s always a tradeoff between the precision and stability of the percepts built by the visual system. The right balance between splitting up stimuli into separate percepts as opposed to blending them into a combined percept depends on the situation and what you’re trying to achieve at that moment.

Let’s illustrate some aspects of this idea about parsing versus integrating stimuli with a story, out in the woods at night. If some flashes of light come in quick succession from the undergrowth, this could be the moonlight reflecting off the eyes of a moving predator. In this case, your visual system needs to integrate these stimuli into a percept of the predator moving through space. But a similar set of several stimuli flashing up from the darkness could also be multiple predators next to each other, in which case it’s vital that you parse the incoming information and perceive them separately. Current circumstances and goals determine the mode of temporal processing that is most appropriate.

I’m investigating how expectations about where stimuli will be can influence your ability to either parse them into separate percepts or to form an integrated percept. Through characterising how expectations influence these two fundamental but opposing temporal processes, we hope to gain insights not only into the processes themselves, but also into the mechanisms of expectation in the visual system. By combining behavioural measures with measures of brain activity (collected using the MEG scanner), we are working towards new accounts of the dynamics of temporal processing and factors which influence it. In this way, we better our understanding of the visual system’s impressive capabilities in building our vital visual experiences from the lively stream of information entering our eyes.

This IS ‘proper’ research: Taking on the social science vs. science debate

By: Rosie Smith 

“So why is your research necessary?”

“How do you get funding for research like this?”

These are just two of the many questions that I was asked recently whilst taking part in a competition for PhD researchers at my university. The competition was interdisciplinary and was aimed at showcasing doctoral research at the institution, whilst also providing early career researchers, like myself, a gateway into public engagement. Needless to say the competition was one of the many uncomfortable things I intend to do this year as part of my resolution to be a ‘yes’ woman and challenge myself more.

Finalists were made up of three researchers per faculty (social science, science, arts and humanities), and as a criminologist I quickly found myself gravitating towards the social sciences camp. It was a full day event in which we were judged on a multitude of criteria, ranging from originality and impact to accessibility, interdisciplinary scope, and importance. I use the word ‘importance’ hesitantly, as it’s a term that causes particular anxiety when I consider my own research. My work explores the concept of ‘spectacular justice’ and the way the mass media makes the criminal justice system visible and public. I explore this concept by analysing how high profile criminal cases are represented in media archives from the 1800s to 2016.

 


And whilst I thoroughly enjoy my research, I still often find it difficult to have confidence that my work is ‘important’, and necessary. In part this is because I am self-funding my research, and at times I find it difficult to have confidence in my work when understandings of ‘good’ research are so closely bound to notions of impact and attracting funding. But it is also in part because of situations like these, when I am forced to contemplate the debate around what constitutes ‘proper’ research.

When I was posed these questions, I admit, I was initially shocked and somewhat taken aback by the abruptness with which they were asked. But at the same time these questions drew on some of the existing anxieties I have as I begin the journey into academia. To me, these questions in some way breach the social conventions of conversation etiquette, not to mention conventions on what is and is not okay to ask a frazzled and distressed PhD student.

To the first, I was honest, and launched into the toils of juggling several part-time jobs alongside trying to develop the aura of a rounded and successful academic.

But it was the question “Why is your research necessary?” that caused me more concern. Looking around the room at the other contestants, I began to wonder whether this question had been asked of the other finalists, in particular the natural, computer, and physical scientists.

I was transported back to the long debates I had as an undergraduate with my ‘proper’ scientist friends. In these debates I would spend hours defending the position that social science is important and necessary, and that the two disciplines can exist in parallel.

I would passionately defend the position that the relationship between the two does not need to be one of comparison. Admittedly, my efforts to convert them were largely fruitless. And I was often left being endearingly mocked, only to be told, “but it’s not a real science though, is it?” And unfortunately this is still a battle I am fighting as I make my way through my PhD.

It is as if this debate is a matter of either/or. You are either a social scientist or a scientist, with very little scope to dabble somewhere in the middle. This was only confirmed as the day progressed. I overheard the finalist next to me ask a gentleman, “Are you going to go to Rosie’s stand next?” To which the gentleman replied, “I don’t think so, I don’t like social science, I’m more of a scientist”.


Needless to say I tried my best to convince him of the merits of the dark underbelly of the social sciences, but was left wondering why I had to.

I cannot escape the importance of gender to this debate. Despite being interdisciplinary, the competition finalists were overwhelmingly female, with male colleagues only being represented by the science faculty.

Needless to say there are a large number of male social scientists who contribute greatly to the field, but historically the social sciences have been regarded as a ‘feminine’ discipline.

This is supported by statistics on the relationship between gender and higher education degree choices: in 2016, 17,075 men accepted university offers to study a social science subject in the UK, just over half the figure for women, which totalled 30,860 (UCAS, 2016). And so I interpreted the questions “why is your research necessary?” and “how do you get funding for research like this?” not only as a judgment on the value of my research, but as a value judgment more generally about the credibility of the social sciences as a predominantly female discipline. I couldn’t ignore the feeling that the feminization of the social sciences served as a double mechanism to justify the position of the sciences as superior.

At times I worry that as a social scientist, the rivalry that exists with science, whilst often only in jest or antics, has a direct impact on understandings of what constitutes ‘proper’ research.

And I question the appropriateness of using one set of criteria to judge and compare the value and ‘necessity’ of the two disciplines. In my opinion they are complementary rather than contradictory fields. And we should be striving to broaden our understanding of what constitutes ‘proper’ research. Because although my research does not find a solution to world hunger or fight disease, it does have value - just in its own way.

At the end of the day the judges seemed to recognise some of that value too. When the scores came in, I won! It was one of the proudest moments of my PhD so far, as a social scientist, as an early career researcher, and as a woman. This experience has taught me many lessons, but the most important is to take the victories, whether big or small, when they come around. Equally I aim to worry a little less about how much impact my research has, or how much funding I attract (or not) and concentrate on enjoying my PhD and remembering that whilst not earth-shattering, my research is still necessary. All research is proper research.


Got your hands full? – How the brain plans actions with different body parts

by Phyllis Mania

STEM editor: Francesca Farina

Imagine you’re carrying a laundry basket in your hand, dutifully pursuing your domestic tasks. You open the door with your knee, press the light switch with your elbow, and pick up a lost sock with your foot. Easy, right? Normally, we perform these kinds of goal-directed movements with our hands. Unsurprisingly, hands are also the most widely studied body part, or so-called effector, in research on action planning.

We do know a fair bit about how the brain prepares movements with a hand (not to be confused with movement execution). You see something desirable, say, a chocolate bar, and that image goes from your retina to the visual cortex, which is roughly located at the back of your brain. At the same time, an estimate of where your hand is in space is generated in somatosensory cortex, which is located more frontally. Between these two areas sits an area called posterior parietal cortex (PPC), in an ideal position to bring these two pieces of information – the seen location of the chocolate bar and the felt location of your hand – together (for a detailed description of these so-called coordinate transformations see [1]). From here, the movement plan is sent to primary motor cortex, which directly controls movement execution through the spinal cord.

What’s interesting about motor cortex is that it is organised like a map of the body, so the muscles that are next to each other on the “outside” are also controlled by neuronal populations that are next to each other on the “inside”. Put simply, there is a small patch of brain for each body part we have, a phenomenon known as the motor homunculus [2].


Photo of an EEG, by Gabriele Fischer-Mania

As we all know from everyday experience, it is pretty simple to use a body part other than the hand to perform a purposeful action. But the findings from studies investigating movement planning with different effectors are not clear-cut. Usually, the paradigm used in this kind of research works as follows: participants look at a centrally presented fixation mark and rest their hand in front of the body midline. Next, a dot indicating the movement goal is presented to the left or right of fixation. The colour of the dot tells the participants whether they have to use their hand or their eyes to move towards the dot. Only when the fixation mark disappears are the participants allowed to perform the movement with the instructed effector. The delay between the presentation of the goal and the actual movement is important, because muscle activity affects the signal that is measured from the brain (and not in a good way). The subsequent analyses usually focus on this delay period, as the signal emerging throughout it is thought to reflect movement preparation. Many studies assessing the activity preceding eye and hand movements have suggested that PPC is organised in an effector-specific manner, with different sub-regions representing different body parts [3]. Other studies report contradictory results, with overlapping activity for hand and eye [4].


EEG photo, as before.

But here’s the thing: we cannot stare at a door until it finally opens itself, and I imagine picking up that lost piece of laundry with my eye would be rather uncomfortable. Put more scientifically, hands and eyes are functionally different. Whereas we use our hands to interact with the environment, our eyes are a key player in perception. This is why my supervisor came up with the idea of comparing hands and feet, as virtually all goal-directed actions we typically perform with our hands can also be performed with our feet (e.g., see http://www.mfpa.uk for mouth and foot painting artists). Surprisingly, it turned out that the portion of PPC previously thought to be exclusively dedicated to hand movement planning showed virtually the same fMRI activation during foot movement planning [5]. That is, the brain does not seem to differentiate between the two limbs in PPC. Wait, the brain? Whereas fMRI is useful for showing us where in the brain something is happening, it does not tell us much about what exactly is going on in neuronal populations. Here, the high temporal resolution of EEG allows for a more detailed investigation of brain activity. During my PhD, I used EEG to look at hands and feet from different angles (literally – I looked at a lot of feet). One way to quantify possible effects is to analyse the signal in the frequency domain. Different cognitive functions have been associated with power changes in different frequency bands. Based on a study that found eye and hand movement planning to be encoded in different frequencies [6], my project focused on identifying a similar effect for foot movements.
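
To give a flavour of what such a frequency-domain analysis involves, here is a minimal Python sketch that estimates band power in a delay-period epoch using Welch’s method. The signal is synthetic, and the sampling rate and band limits are assumptions for illustration; this is not the actual analysis pipeline from my PhD.

```python
import numpy as np
from scipy.signal import welch

# Synthetic one-channel "EEG": a 10 Hz (alpha) oscillation plus noise.
fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1.5, 1 / fs)              # a 1.5 s delay period
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Estimate the power spectrum with Welch's method.
freqs, psd = welch(epoch, fs=fs, nperseg=fs)

# Average power in canonical frequency bands.
bands = {"alpha (8-12 Hz)": (8, 12), "beta (13-30 Hz)": (13, 30),
         "gamma (30-80 Hz)": (30, 80)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    print(name, psd[mask].mean())
```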


Source: Pixabay

This is not as straightforward as it might sound, because there are a number of things that need to be controlled for: to make a comparison between the two limbs as valid as possible, movements should start from a similar position and end at the same spot. And to avoid expectancy effects, movements with both limbs should alternate randomly. As you can imagine, it is quite challenging to find a comfortable position to complete this task (most participants did still talk to me after the experiment, though). Another important thing to keep in mind is that foot movements are somewhat more sluggish than hand movements, owing to physical differences between the limbs. This can be accounted for by including different types of movements: some easy, some difficult. When the presented movement goal is rather big, it is easier to hit than when it is smaller. Unsurprisingly, movements to easy targets are faster than movements to difficult targets, an effect that has long been known for the hand [7] but had not yet been shown for the foot. Even though this effect is obviously observed during movement execution, it has been shown to arise already during movement planning [8].
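
This speed-accuracy trade-off is what Fitts’ law [7] describes: movement time grows with the index of difficulty, log2(2D/W), where D is the distance to the target and W is its width. A small sketch, with illustrative constants rather than values fitted to any real data:

```python
import math

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    """Fitts' index of difficulty in bits: log2(2D / W) [7]."""
    return math.log2(2 * distance_mm / width_mm)

def predicted_movement_time(distance_mm, width_mm, a=0.2, b=0.1):
    """Fitts' law: MT = a + b * ID. The intercept a (in s) and slope b
    (in s/bit) are made-up illustrative values."""
    return a + b * index_of_difficulty(distance_mm, width_mm)

# A big, close target is "easy"; a small, distant one is "difficult".
print(predicted_movement_time(100, 40))   # easy target -> shorter movement time
print(predicted_movement_time(300, 10))   # difficult target -> longer movement time
```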

So, taking a closer look at actual movements can also tell us a fair bit about the underlying planning processes. In my case, “looking closer” meant recording hand and foot movements using infrared lights, a procedure called motion capture. Essentially the same method is used to create the characters in movies like Avatar and The Hobbit, but rather than making fancy films I used the trajectories to extract kinematic measures like velocity and acceleration. Again, it turned out that hands and feet have more in common than it may seem at first sight. And it makes sense – as we evolved from quadrupeds (i.e., mammals walking on all fours) to bipeds (walking on two feet), the neural pathways that used to control locomotion with all fours likely evolved into the system now controlling skilled hand movements [9].
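
As a toy version of this kind of kinematic analysis, the sketch below numerically differentiates a synthetic reach trajectory to recover velocity and acceleration. The sampling rate, movement amplitude and the minimum-jerk trajectory model are all assumptions for illustration, not my recording setup.

```python
import numpy as np

# Synthetic fingertip trajectory sampled at 200 Hz, as a motion-capture
# system might record it: a 1 s, 300 mm minimum-jerk reach (a common
# model of smooth point-to-point movements).
fs = 200
t = np.arange(fs + 1) / fs
amplitude_mm = 300.0
position = amplitude_mm * (10 * t**3 - 15 * t**4 + 6 * t**5)

# Differentiate numerically to get velocity and acceleration.
velocity = np.gradient(position, 1 / fs)
acceleration = np.gradient(velocity, 1 / fs)

print("peak velocity (mm/s):", velocity.max())
print("time to peak velocity (s):", t[velocity.argmax()])
print("peak acceleration (mm/s^2):", acceleration.max())
```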

What’s most fascinating to me is the incredible speed and flexibility with which all of this happens. We hardly ever give a thought to the seemingly simple actions we perform every minute (and it’s useful not to, otherwise we’d probably stand rooted to the spot). Our brain is able to take in a vast amount of information – visual, auditory, somatosensory – filter it effectively, and generate motor commands in the range of milliseconds. And we haven’t even found out a fraction of how all of it works. Or to use a famous quote [10]: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

[1] Batista, A. (2002). Inner space: Reference frames. Current Biology, 12(11), R380-R383.

[2] Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389-443.

[3] Connolly, J. D., Andersen, R. A., & Goodale, M. A. (2003). FMRI evidence for a ‘parietal reach region’ in the human brain. Experimental Brain Research, 153(2), 140-145.

[4] Beurze, S. M., de Lange, F. P., Toni, I., & Medendorp, W. P. (2009). Spatial and effector processing in the human parietofrontal network for reaches and saccades. Journal of Neurophysiology, 101(6), 3053-3062.

[5] Heed, T., Beurze, S. M., Toni, I., Röder, B., & Medendorp, W. P. (2011). Functional rather than effector-specific organization of human posterior parietal cortex. The Journal of Neuroscience, 31(8), 3066-3076.

[6] Van Der Werf, J., Jensen, O., Fries, P., & Medendorp, W. P. (2010). Neuronal synchronization in human posterior parietal cortex during reach planning. Journal of Neuroscience, 30(4), 1402-1412.

[7] Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381.

[8] Bertucco, M., Cesari, P., & Latash, M. L. (2013). Fitts’ law in early postural adjustments. Neuroscience, 231, 61-69.

[9] Georgopoulos, A. P., & Grillner, S. (1989). Visuomotor coordination in reaching and locomotion. Science, 245(4923), 1209-1210.

[10] Pugh, Emerson M., quoted in Pugh, G. E. (1977). The Biological Origin of Human Values.


Space weather – predicting the future

by Aoife McCloskey

Early Weather Prediction

Weather is a topic that has fascinated humans for centuries and, from the earliest civilisations to the present day, we have been trying to predict it. In the beginning, by watching the appearance of clouds or observing recurring astronomical events, humans were able to better predict seasonal changes and weather patterns. This was, of course, motivated by practical concerns such as agriculture or knowing the best conditions for travel, but it also stemmed from the innate human desire to better understand the world around us.

Weather prediction has come a long way from its primordial beginnings, and with the exponential growth of technological capabilities in the past century we are now able to model conditions in the Earth’s atmosphere with unprecedented precision. Until the late 1800s, however, we were blissfully unaware that weather is not confined solely to our planet, but also exists in space.

Weather in Space

Weather, in this context, refers to the changing conditions in the Solar System, which can affect not only our planet but other planets too. But what is the source of this weather in space? The answer is the biggest object in our solar system: the Sun. Our humble, middle-aged star is the reason we are here at all, and it has been our reliable source of energy for the past 4.6 billion years.

However, the Sun is not as stable or dependable as we perceive it to be. It is in fact a very dynamic object, made up of extremely hot ionised gas (also known as plasma). Just like the Earth, the Sun generates its own magnetic field, albeit on a much larger scale than our planet’s. This combination of strong magnetic fields, and the fact that the Sun is not a solid body, leads to the build-up of energy and, consequently, to energy release. This energy release is what is known as a solar flare: simply put, an explosion in the atmosphere of the Sun that produces extremely high-energy radiation and spits out particles that can travel at near-light speed into the surrounding interplanetary space.

The Sun: Friend or Foe?

Sounds dangerous, right? Well yes, if you were an astronaut floating around in space, beyond the protection of the Earth, you would find yourself in a very undesirable position if a solar flare were to happen at the same time. For us here on Earth, the story is a bit different when it comes to being hit with the by-products of a solar flare. As I said earlier, our planet produces its very own magnetic field, similar to that of a bar magnet. If you studied science at secondary school, you may recall the iron filings and magnet experiment. Well, that’s pretty much what our magnetic field looks like, and luckily for us it acts as a protective shield against the high-energy particles that come hurtling our way on a regular basis from the Sun. One of the most well-known phenomena caused by the Sun is actually the Aurora Borealis, i.e., the northern lights (or the southern lights, depending on the hemisphere you live in).


Picture of the Aurora Borealis, taken during Aoife’s trip to Iceland in January 2016.

This phenomenon has been happening for millennia, yet until recent centuries we didn’t really understand why. What we know now is that the aurorae are caused by high-energy particles from the Sun colliding with our magnetic field, spiralling along the field lines and hitting our atmosphere at the north and south magnetic poles. While the aurorae are a favourable effect of space weather, as they are astonishingly beautiful to watch and photograph, there are unfortunately some negative effects too. These effects here on Earth range from satellite damage (GPS in particular), to radio communication blackouts, to the more extreme case of electrical grid failure.

My PhD – Space Weather Forecasting

So, how do we predict when there is an event on the Sun that could have negative impacts here on Earth? Science, of course! In particular, in the area of solar physics there has been an increasing focus on understanding the physical processes that lead to space weather phenomena, and on finding the best methods to predict when something such as a solar flare might occur.

It is well known that one should not view the Sun directly with the naked eye, so traditionally the image of the Sun was projected onto pieces of paper. Using this method, one of the first features observed on the Sun was the large, dark spots now known as sunspots. These fascinated astronomers for quite some time, and an extensive record of sunspots has been kept since the early 1800s. Sunspots were initially traced by hand, on a daily basis, until photographic plates were invented and this practice became redundant. After many decades of recording these spots, a pattern emerged corresponding to a roughly 11-year cycle, in which the number of spots increases to a maximum and then gradually decreases again. This 11-year cycle was shown to be correlated with the level of solar activity; in other words, the number of solar flares and the energy they release follow the same pattern.
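
To see how such a cycle can be pulled out of a long record, here is a small sketch that recovers a roughly 11-year period from a monthly sunspot-number series using a simple periodogram. The series below is synthetic; real monthly records are publicly available (e.g. from the SILSO World Data Center).

```python
import numpy as np

# Synthetic monthly sunspot numbers: counts fluctuating around an
# 11-year (132-month) cycle, over 200 years of observations.
months = np.arange(12 * 200)
cycle = 110 * (1 + np.sin(2 * np.pi * months / (11 * 12))) / 2
rng = np.random.default_rng(1)
sunspots = rng.poisson(cycle)

# Periodogram of the de-meaned series: the strongest peak marks the
# dominant period.
spectrum = np.abs(np.fft.rfft(sunspots - sunspots.mean())) ** 2
freqs = np.fft.rfftfreq(months.size, d=1.0)   # in cycles per month
dominant = freqs[spectrum[1:].argmax() + 1]   # skip the zero-frequency bin
print("dominant period: %.1f years" % (1 / dominant / 12))
```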


Sunspot drawing by Richard Carrington, 1 September 1859

Leading on from this, it is clear that a relationship exists between sunspots and solar flares, so logically sunspots are the place to start when trying to forecast. My PhD project focuses on sunspots and how they evolve to produce flares. For a long time, sunspots have been classified according to their appearance. One of the most famous classification schemes was developed by Patrick McIntosh and has been widely used by the community to group sunspots by their size, symmetry and compactness (how closely packed the spots are) [1]. Generally, the biggest, baddest and ugliest groups of sunspots produce the most energetic, and potentially hazardous, flares. Our most recent work has studied data from past solar cycles (1988-2010), looking at how the evolution of these sunspot groups relates to the flares they produce [2]. I found that groups that increase in size produce more flares than those that decrease in size. This had been postulated before, and it helps to answer an open question in the community as to whether sunspots produce more flares when they grow or when they decay. Using these results, I am now implementing a new way to predict the likelihood of a sunspot group producing flares, and the magnitude of those flares.
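
A simple, widely used way to turn such historical flaring rates into a forecast is to assume flares occur as a Poisson process, so an average rate translates into the probability of at least one flare in a given window. The rates below are invented for illustration; in [2] the rates are derived from decades of recorded flares for each McIntosh classification and evolution step.

```python
import math

# Hypothetical average flaring rates (flares per day) for illustrative
# sunspot-group categories; real values would come from flare catalogues.
avg_flares_per_day = {
    "small, decaying group": 0.05,
    "small, growing group": 0.15,
    "large, complex, growing group": 1.20,
}

def flare_probability(rate_per_day: float, window_days: float = 1.0) -> float:
    """Poisson probability of at least one flare in the window:
    P = 1 - exp(-rate * window)."""
    return 1.0 - math.exp(-rate_per_day * window_days)

for group, rate in avg_flares_per_day.items():
    print(f"{group}: {flare_probability(rate):.0%} chance of a flare in 24 h")
```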


Space weather is a topic that is now, more than ever, of great importance to our technology-dependent society. That is not to say that there will definitely be a catastrophic event in the near future, but it is certainly a potential hazard that needs to be addressed on a global scale. In recent years there has been significant investment in space weather prediction, with countries such as the UK and the U.S. establishing dedicated space weather forecasting services. Here in Ireland, our research group at Trinity College has been working on improving the understanding and prediction of space weather for the past ten years. I hope that, in the near future, space weather forecasting will reach the same level of importance as the daily weather forecast, but for now – watch this space.

  1. McIntosh, P. S. (1990), ‘The Classification of Sunspots’, Solar Physics, 125, 251-267.
  2. McCloskey, A. E., Gallagher, P. T., & Bloomfield, D. S. (2016), ‘Flaring Rates and the Evolution of Sunspot Group McIntosh Classifications’, Solar Physics, 291, 1711-1738.

Who’s left holding the baby now? Assisted Reproductive Technologies and Irish Law

by Sarah Pryor

The rapid development of genetic technologies in the past decade, and the expansion in their usability, is both a cause for celebration and a cause for concern.

Law and policy makers must act responsibly in creating and implementing legal tools that integrate these technological advances smoothly into society, in order to mitigate any negative impact from their existence and use.

The question asked here is: do assisted reproductive technologies challenge the traditional concepts of parenthood generally, and motherhood specifically, and what impact does this have on Irish law and society?

Quite simply put, the answer is yes, these emerging technologies do challenge traditional familial concepts and norms. What impact this has on Irish law and society is a far more complicated question.

Ethical concerns

Reproduction is becoming increasingly medicalised, geneticised and commercialised. This has the potential to diminish the human condition and damage the human population.[1] In a time of scientific, social and legal change, periods of uncertainty are inevitable. It is under these conditions of uncertainty that identity and ethics must be debated, and boundaries established, to ensure that advances in assisted reproduction do not harm the broader population.

The ethical concerns surrounding the increased medicalisation of human reproduction range widely.[2]

The most challenging element of reproductive technologies is that the issues being debated are deeply personal and sensitive. No two experiences are the same, which makes it difficult to establish a standard of practice, or a legally and ethically balanced acceptance of the use of these procedures. These difficulties are inherent to any discussion of human reproduction.

Assisted Human Reproduction in Ireland

Assisted Human Reproduction (AHR) was not formally recognised as an area in need of governmental oversight until 2000, when the Commission for Assisted Human Reproduction, herein referred to as ‘the Commission’, was established and the need for comprehensive, stand-alone legislation in this area was recognised.[3]

The Commission and its subsequent report were welcomed as a move towards the recognition of a set of newly emerging social norms in Ireland, both in terms of medicine and reproductive technologies and in terms of the shift away from the traditional nuclear family towards new familial norms. However, following the publication of the 2005 report, little was done to proactively implement its recommendations.[4]

Political conversation centres on the disappointment that questions surrounding AHR services and their use must be addressed through judicial channels, because no legislation is in place that would remove the need to turn to the Irish court system for answers.[5]

The lack of legislation in this area means that the only guidance for medical practitioners comes from the Irish Medical Council’s “Guide to Professional Conduct and Ethics for registered medical practitioners”.[6] Several cases in recent years have been brought to the High Court and Supreme Court to navigate the maze this legal vacuum leaves patients struggling through.[7] These cases, as recently as 2014, have highlighted the necessity for legislation in the area in order to protect all parties involved.

The role of religion

It is important to recognise the cultural history of Ireland and the social and political role the Catholic Church played for many years. Older Irish generations were reared in a country in which contraception was illegal and women did not work once they were married, as their societal role was in the home. Newly emerging technologies, such as surrogacy, further challenge these traditional values.

There is an unfortunate pattern of political and religious control over a woman’s right to reproduce and the conditions in which it is ‘right’ for a woman to have a baby. For a long time in Ireland, there was no real separation of church and State. The ramifications of this have rippled throughout Irish history and up to the present day – no more so than in the area of the reproductive rights of women.

Parallels with the Repeal the 8th campaign 

Although distinctly different from the abortion debate and the argument for the repeal of the 8th amendment, certain parallels can be drawn in how the government has responded to calls from various groups to provide guidance in the area of assisted reproduction, and how these calls have been largely brushed aside. On the introduction of the Children and Family Relationships Act 2015, Minister for Justice & Equality Frances Fitzgerald removed any reference to surrogacy because it was too large an issue to be merely a feature of a more generalised bill. There is, then, some indication that positive movements are being made in this area – the question is when they will actually be formulated into real, working policies, laws and protocols.

ARTs and the Marriage Equality referendum

Until 2015, marriage in Ireland was exclusively available for heterosexual couples. The 34th Amendment of the Irish Constitution changed this, effectively providing for a more equal society in which traditional Irish values towards marriage were replaced with a more accepting stance, something which was voted for by the Irish public through a referendum.[8]

The gravity of such a change in Irish society has implications beyond just marriage. Laws regarding areas such as adoption were relevant only to the married couple and, within that context, this meant only heterosexual couples. Irish family law was written with the traditional ‘mother, father and children’ family in mind. It is fair to say that family dynamics have changed significantly, and the movement away from traditional concepts of family is increasing. With the passing of the Marriage Referendum, marriage in the context of law and society has taken on a new meaning, and the symbolic nature of this recognition of a new familial norm is plain to see. The Irish electorate voted for this, and public consultations on Assisted Reproductive Technologies (ARTs) have illustrated the support of the Irish people for ARTs, and for legislation regulating their use – and yet, still there is none.

ARTs are used by heterosexual and homosexual couples alike. The Children and Family Relationships Act 2015 made movements towards acknowledging new familial norms in Ireland and was a welcome symbol of the future of Irish society as increasingly liberal and accepting. Although many pressing issues, such as surrogacy, are not addressed within the Act, the support for the enactment of new measures regarding familial relationships is a deeply reassuring acknowledgement of the changing, evolving nature of Irish society and its views towards non-traditional family units. While this is to be welcomed, it simply doesn’t go far enough.

The role of the mother

One area that has not been addressed in any significant way is the greatly changed role of the mother.

Mater semper certa est – the mother is always certain. This is the basis on which Irish family law operates and it is this historical, unshakeable concept that is being shaken to its core by the emergence of ARTs.

Traditional concepts of motherhood are defined solely through the process of gestation.[9] A birth mother, in the context of Irish law, is the legal mother.[10] This has remained a point of contention in the Irish courts, demonstrated in the 2014 Supreme Court case addressing the rights of a genetic mother to her children, to whom she did not give birth. Denham CJ addressed the ‘lacuna’ in Irish law, emphasising the responsibilities of the Oireachtas, in saying that:

“Any law on surrogacy affects the status and rights of persons, especially those of the children; it creates complex relationships, and has a deep social content. It is, thus, quintessentially a matter for the Oireachtas.”

Chief Justice Denham further stated that:

“There is a lacuna in the law as to certain rights, especially those of the children born in such circumstances. Such lacuna should be addressed in legislation and not by this Court. There is clearly merit in the legislature addressing this lacuna, and providing for retrospective situations of surrogacy.”[11]

The emergence of ARTs as common practice, particularly regarding egg and sperm donation, surrogacy and embryo donation, has created a new concept of parenthood, and more specifically motherhood.

There are deeply divided views over who exactly is the legal mother and who is the social mother, what rights each participant has, and who is responsible for the donor-conceived or surrogate-born child.

Whilst some of these issues, such as the right of a donor-conceived child to information about their donor, were addressed in both the Commission Report and the 2013 RCSI Report, neither delves deeply into the implications of such medical processes for concepts of motherhood and parenthood.

Three fragmented concepts of motherhood now exist: social, gestational and genetic.[12] Although there are established ideologies of parental pluralism within society regarding adoption, the situation of a child born through the use of ARTs is fundamentally different from an adoption agreement, which is accounted for in Irish law.

Feminist views on ARTs

Feminist views on the emergence of assisted reproductive technologies differ greatly. Some argue against ARTs as methods of increased control over women’s reproduction through commercialisation and the reinforcement of pro-natalist ideologies.[13] Others argue in favour of ARTs, stating that their development allows women more freedom over their reproductive choices and enables a woman to bear children independently of another person and at a time that suits her; an example of this being the use of IVF by a woman at a later stage in her life.[14]

These complexities exist before even considering the social and legal role of parents in same sex relationships – what relevance does the role of the mother have for a gay couple? What relevance does the role of a father have for a lesbian couple? Does the increasing norm of homosexual couples having children via surrogate mitigate any need for these socially constructed familial roles and highlight the irrelevance of these roles in modern society? The same questions can be asked of a single man or woman seeking to have a child via surrogate – should a person only have a child if they are in a committed relationship? Surely not, as single parents currently exist in Ireland, have done so for some time, and are raising their children without objection from society or the state.

‘The law can no longer function for its purpose’

Regardless of where one’s stance lies on the emergence of these technologies, it is undeniably clear that their use is challenging normative views and practices of parenthood. The traditional, socially established norms are shifting from what was once a quite linear and nuclear view. ARTs allow for those who previously could not have genetically linked children to do so via medical treatments. It is in this way that the situation under current Irish law is exacerbated, and the law can no longer function for its purpose.

Something needs to be done, so that whoever wants to be, can be left holding the baby!

[1] Sarah Franklin and Celia Roberts, Born and Made: An Ethnography of Preimplantation Genetic Diagnosis (Princeton University Press 2006).

[2] Sirpa Soini and others, ‘The Interface between Assisted Reproductive Technologies and Genetics: Technical, Social, Ethical and Legal Issues’ (2006) 14 European Journal of Human Genetics.

[3] David J Walsh and others, ‘Irish Public Opinion on Assisted Human Reproduction Services: Contemporary Assessments from a National Sample’.

[4] Deirdre Madden, ‘Delays over Surrogacy Has Led to Needless Suffering for Families’ Irish Independent (2013) <https://www.nexis.com/auth/bridge.do?rand=0.4949951547474648> accessed 25 June 2016.

[5] Roche v. Roche [2009]; see also MR & DR v. An tArd Chlaraitheoir [2014].

[6] David J Walsh and others, ‘Irish Public Opinion on Assisted Human Reproduction Services: Contemporary Assessments from a National Sample’.

[7] See Roche v. Roche [2009]; see also MR & DR v. An tArd Chlaraitheoir [2014].

[8] 34th amendment of the Constitution (Marriage Equality) Act 2015.

[9] Andrea E Stumpf, ‘Redefining Mother: A Legal Matrix for New Reproductive Technologies’ (1986) 96 The Yale Law Journal 187 <http://www.jstor.org/stable/pdf/796440.pdf?_=1471277905944> accessed 16 June 2016.

[10] See MR and DR v An tArd-Chláraitheoir & ors [2014] IESC 60 [S.C. No. 263 of 2013].

[11] Ibid, para 113, para 116.

[12] SA Hammons, ‘Assisted Reproductive Technologies: Changing Conceptions of Motherhood?’ (2008) 23 Affilia 270 <http://claradoc.gpa.free.fr/doc/254.pdf> accessed 4 August 2016.

[13] SA Hammons, ‘Assisted Reproductive Technologies: Changing Conceptions of Motherhood?’ (2008) 23 Affilia 270 <http://claradoc.gpa.free.fr/doc/254.pdf> accessed 4 August 2016. See also, Gimenez, 1991, p.337

[14] See, Bennett, 2003 and Firestone, 1971

Detecting Parkinson’s Disease with your mobile phone

by Reham Badaway, in collaboration with Dr. Max Little.

So, what if I told you that in your pocket right now, you have a device that may be able to detect the symptoms of a brain disease called Parkinson’s much earlier than doctors can? I’ll give you a minute to empty out the contents of your pockets. Have you guessed what it is? It’s your smartphone! Not only can your trusty smartphone keep you in touch with family and friends, or help you look busy at a party where you know no one, it can also detect the very early symptoms of a debilitating disease. One more reason to love your smartphone!

What is Parkinson’s disease?

So, what is Parkinson’s disease (PD)? PD is a brain disease that significantly restricts movement. Its symptoms include slowness of movement, trembling of the hands and legs, resistance of the muscles to movement, and loss of balance. All of these movement problems are extremely debilitating and affect the quality of life of those diagnosed with the disease. Unfortunately, it is only in the late stages of the disease, i.e. when the symptoms are extremely apparent, that doctors can confidently diagnose PD. There is currently no cure. Detecting the disease early on could help us find a cure, or find medicines that slow down disease progression. Thus, methods that can detect PD in its early stages, before doctors themselves can, are pivotal.

Smartphone sensing

So, how can we go about detecting the disease early on in a non-invasive, cheap and easily accessible manner? Well, we believe that smartphones are the solution. Smartphones come equipped with a large variety of sensors to enhance your experience (Fig 1). Over the last few years, abnormal characteristics in the walking patterns of individuals with PD have been successfully detected using a smartphone sensor known as an accelerometer. Accelerometers can detect movement with high precision at very low cost, making them perfect for wide-scale application.


Fig 1: Sensors, satellites and radio frequency in Smartphones

Detecting Parkinson’s disease before symptoms arise

Interestingly, using sensors similar to those found in smartphones, subtle movement problems have been reported in individuals at high risk of developing PD, specifically when they are given a difficult activity such as walking while counting backwards. Individuals at risk of developing the disease are those expected to develop it later in life, due to, say, a genetic mutation, but who have not yet developed the key symptoms required for a PD diagnosis. The presence of subtle movement problems in high-risk individuals indicates that the symptoms of PD exist, subtly, in the early stages of disease progression. Unfortunately, these movement problems are so subtle that neither at-risk individuals nor doctors can detect them – so we must go looking for them. It is crucial that we can screen individuals for these subtle movement problems if we are to detect the disease in its early stages. The ability of smartphone sensors to detect the subtle movement problems of early-stage PD has not yet been investigated. Using smartphones as a screening tool for detecting PD early on would mean a more widely accessible and cost-effective screening method.

Our solution to the problem

We aim to distinguish individuals at risk of developing PD from risk-free individuals by analysing their walking patterns, measured using a smartphone accelerometer.

How does it work?

So, how would it work? Users download a smartphone app, which instructs them to place their smartphone in their pocket and walk in a straight line for 30 seconds. During these 30 seconds, the smartphone’s accelerometer records the user’s walking pattern (Fig 2).


Fig 2: Smartphone records user walking
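
To give a flavour of what can be extracted from such a recording, the sketch below estimates step cadence from 30 seconds of synthetic accelerometer data using the signal’s autocorrelation. The sampling rate and signal model are assumptions for illustration, not the project’s actual feature set.

```python
import numpy as np

# Synthetic 30 s accelerometer trace: a rhythmic walking component at
# 1.8 steps per second plus sensor noise, sampled at 100 Hz (assumed).
fs = 100
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
accel = np.sin(2 * np.pi * 1.8 * t) + 0.3 * rng.standard_normal(t.size)

# Walking is rhythmic, so the signal correlates with itself at a lag of
# one step period; the first strong autocorrelation peak reveals it.
x = accel - accel.mean()
autocorr = np.correlate(x, x, mode="full")[x.size - 1:]
min_lag = int(0.3 * fs)                     # ignore lags shorter than 0.3 s
step_period = (autocorr[min_lag:].argmax() + min_lag) / fs
print("estimated cadence: %.2f steps/s" % (1 / step_period))
```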

The data collected from the accelerometer are then downloaded onto a computer so we can examine an individual’s walking pattern for subtle movement problems. However, to ensure that any subtle movement problems we observe are due to PD, we aim to simulate the user’s walking pattern by modelling the underlying mechanisms that occur in the brain during PD. If the simulated walking pattern matches the walking pattern collected from the user’s smartphone (Fig 3), we can look back at our model of the basal ganglia (BG) – an area of the brain often associated with PD – to see if it is predictive of PD.

Fig 3: Matching the simulated walking pattern to the user’s recorded walking pattern

If it is predictive of PD, and we observe subtle movement problems in the user’s walking pattern, we can classify the individual as being at risk of developing PD. Thus, an individual’s health status will be based on a plausible link between their physical and biological characteristics. In cases where the biological and physical evidence do not stack up – for example, when we observe subtle movement problems in an individual’s walking pattern but the information drawn from the BG model does not indicate PD – we can dismiss the results in order to prevent a misdiagnosis. A misdiagnosis can have a significant impact on an individual’s health and psychology. Thus, it is pivotal that the methods we build allow us to identify scenarios in which the model is not capable of accurately predicting an individual’s health status, something many current techniques in the field lack.

To simulate the user’s walking pattern, we aim to mathematically model the BG and use its output as input to another mathematical model of the mechanics of human walking. The BG model has many free variables. To find the values of these variables such that the model simulates the user’s walking pattern, we will use a statistical technique known as Approximate Bayesian Computation (ABC). ABC works by running many simulations of the BG model until it produces a walking pattern that closely matches the user’s.
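
Here is a minimal sketch of ABC rejection sampling on a deliberately toy ‘gait model’ with a single parameter. The real BG and walking models are far richer; the prior, summary statistic and tolerance below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the BG-plus-walking model: one parameter controls the
# variability of the simulated step times (in seconds).
def simulate_gait(variability, n_steps=50):
    return rng.normal(loc=0.55, scale=variability, size=n_steps)

def summary(step_times):
    return np.std(step_times)   # summary statistic: step-time variability

observed = simulate_gait(variability=0.05)   # stand-in for a smartphone recording
obs_stat = summary(observed)

# ABC rejection sampling: draw a parameter from the prior, simulate,
# and keep the draw only if the simulation resembles the observation.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 0.2)            # prior over the variability parameter
    if abs(summary(simulate_gait(theta)) - obs_stat) < 0.005:   # tolerance
        accepted.append(theta)

print("accepted draws:", len(accepted))
print("posterior mean variability: %.3f" % np.mean(accepted))
```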

Ultimately our approach aims to provide insight into an individual’s brain deterioration through their walking pattern, measured using smartphone accelerometers, in order to know how their health is changing.

Benefits

As well as distinguishing those at risk of developing PD from healthy individuals, our approach provides the following benefits:

  • Providing insight into how the disease affects movement both before and after diagnosis.
  • Identifying disease severity in order to decide on the right dosage of medication for patients.
  • Tracking the effect of drugs on symptom severity for PD patients and those at risk.

Application

Apple recently launched ResearchKit, an open-source framework for building smartphone applications that monitor an individual’s health. Companies such as Apple are realising the potential of smartphones to screen for diseases. The ability to monitor patients long-term, in a non-invasive manner, through smartphones is promising, and can provide a more accurate picture of an individual’s health.

Advances in smartphone sensing are likely to have a substantial impact on many areas of our lives. However, how far can we go with monitoring people without jeopardising their privacy? How do we prevent the leakage of sensitive information collected from millions of people? The rapid evolution of sensor-enabled smartphones presents innovative opportunities for mobile sensing research, but it comes with many challenges that need to be addressed.