Researching through Recovery: Embarking on a PhD post-brain surgery

By Sinead Matson, B.A., H.Dip. Montessori, M.Ed.


Anyone who has had the misfortune to undergo a craniotomy should do a PhD. Seriously. It makes sense. Both paths have similar hurdles: Imposter syndrome – check! Struggle with writing – check! Trouble expressing your thoughts – check! Extreme tiredness – check, check! It’s physiotherapy, but for your brain.

I joke of course, because each person’s individual recovery is different, but doing a PhD has personally given me the space to recover from a craniotomy while still actively working on my career and passion. I was always going to embark on a doctoral degree, but in October 2014 (ten weeks after my second child was born) I had four successive tonic-clonic seizures, which led to the discovery of a large meningioma (brain tumour) and its removal four days later. When I woke up from surgery I couldn’t move the right-hand side of my body except for raising my arm slightly; my speech and thought processes were affected too. Of course, I panicked, but the physiotherapist was on hand to tell me that while the brain had forgotten how to talk to the muscle, the muscle never forgets. I instantly relaxed. “Muscle memory! I’ve got this,” I thought to myself – forever the Montessori teacher.

Nobody tells you that recovering from brain surgery is exhausting, so exhausting. Every day I had to relearn things I had previously known. Every single sense is heightened and a ten-minute walk around the supermarket is a sensory overload. However, I never questioned the fact that I would start college the following September; in fact, it drove me to do my physio and get physically better. I even applied for a competitive scholarship and won it. I can never explain enough how much of a boost that was to my self-esteem. There is nothing like brain surgery to make you question your identity and your cognitive skills in a profession that values thinking, research, articulating new ideas, and writing. It is like an attack on your very being.


When I started, I could not have been more accommodated by the Education Department at Maynooth University, but in a manner which was subtle and encouraging whilst still pushing me to do a little bit more. My supervisor struck a delicate balance between being supportive and encouraging me to look a little further and read more. I never felt mollycoddled or out of my depth (well… no more than the average PhD student).

Of course, there are challenges. Aren’t there always? It can be frustrating (not to mention embarrassing) when you cannot process a conversation as quickly as it is happening at meetings, conferences, or seminars; the same goes for when you answer a question but know the words you are saying do not match what you are trying to articulate. Submitting a piece of writing to anyone, anywhere, is the most vulnerable thing that you can experience, especially when your language centre has been affected and you know your grammar and phrasing might not always be up to par. Transitions flummox me, particularly verbal transitions like the start of a presentation, introducing and thanking a guest speaker, taking on the position of chairing a symposium, and day-to-day greetings. I lose all words, forget etiquette, and generally stammer. I forever find myself answering questions or reliving scenarios from the day in the shower!

So, what’s different between my experience and any other doctoral student’s, you ask? Well, I’m not sure. I see my fellow students all have the same worries and vulnerabilities. We have all discussed our feelings of imposter syndrome at various points thus far, our excitement and disbelief when our work is accepted for presentation or publication, and our utter distress at not being able to articulate what we really wanted to say in front of a visiting professor. I do know this: it used to be easier; I used to do it better; I never had problems with writing or verbal transitions before; it is harder for me now. But (BUT) I now have a whole team of people who share my feelings and frustrations. I now have a community who champion my successes and comfort me with their own tales when I have bad days. I now feel less isolated and more normal. They allow me… no… they push me to do more, to believe I could travel to India alone to research; not to let epilepsy or fear hold me back; to believe that I could negotiate the research process on the ground with preschool children and their parents and not get overwhelmed. They have read papers and assignments for me before I submit them and they expect the same of me. They simultaneously allow me room to vent (and take the lift when I’m too tired to walk) and they push me to be more adventurous with my reading and theory – to take risks I may never have taken.

All-in-all, I cannot think of a better way to recover from brain surgery and all it entails than the absolute privilege of completing a PhD. It gives me a space – a safe space – to recover in. The research process itself has helped me learn who I am again, what I stand for, and what I believe. It has pushed me so far outside of my comfort zone in a way that I’m not sure I would have done otherwise but I am positive is vital to my full recovery. It has exercised my own personal cognitive abilities, reasoning skills, verbal and written expression so much more than any therapy could have, and it has given me, not a cheerleading team, but a community of researchers who are on the same journey – in a way.

I’m not saying it’s for everyone – no two recoveries are the same. However, I wish there was (and I did search for) someone who could have told me before the surgery, but particularly while I was in recovery, that life doesn’t have to stop. That it is not only possible to research while in recovery from brain surgery, but that it can also have a transformative effect on your life and your sense of identity; that it will push you outside of every comfort zone you’ve ever had, and it will be exhilarating.





Got your hands full? – How the brain plans actions with different body parts

by Phyllis Mania

STEM editor: Francesca Farina

Imagine you’re carrying a laundry basket in your hand, dutifully pursuing your domestic tasks. You open the door with your knee, press the light switch with your elbow, and pick up a lost sock with your foot. Easy, right? Normally, we perform these kinds of goal-directed movements with our hands. Unsurprisingly, hands are also the most widely studied body part, or so-called effector, in research on action planning. We do know a fair bit about how the brain prepares movements with a hand (not to be confused with movement execution). You see something desirable, say, a chocolate bar, and that image goes from your retina to the visual cortex, which is roughly located at the back of your brain. At the same time, an estimate of where your hand is in space is generated in somatosensory cortex, which is located more frontally. Between these two areas sits an area called posterior parietal cortex (PPC), in an ideal position to bring these two pieces of information – the seen location of the chocolate bar and the felt location of your hand – together (for a detailed description of these so-called coordinate transformations see [1]). From here, the movement plan is sent to primary motor cortex, which directly controls movement execution through the spinal cord. What’s interesting about motor cortex is that it is organised like a map of the body, so the muscles that are next to each other on the “outside” are also controlled by neuronal populations that are next to each other on the “inside”. Put simply, there is a small patch of brain for each body part we have, a phenomenon known as the motor homunculus [2].
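To make the idea of combining the “seen” and “felt” locations a little more concrete, here is a deliberately over-simplified sketch (my own illustration, not the model in [1]): treat the target and the hand as 2-D positions, express both in the same reference frame, and the required movement falls out as a vector difference.

```python
import numpy as np

# Deliberately over-simplified: everything is a 2-D position in centimetres.
gaze_direction = np.array([10.0, 0.0])   # where the eyes are pointing, relative to the body
target_re_eye = np.array([5.0, 20.0])    # the chocolate bar, relative to where you are looking
hand_re_body = np.array([-15.0, 30.0])   # the felt hand position, relative to the body

# Bring the target into body-centred coordinates, then express it relative to the hand.
target_re_body = gaze_direction + target_re_eye
movement_vector = target_re_body - hand_re_body   # where the hand has to go
print(movement_vector)                            # [ 30. -10.]
```

The real transformations are of course far richer (three dimensions, eye and head rotations, uncertainty), which is part of why PPC’s position between the visual and somatosensory areas is thought to matter.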


Photo of an EEG, by Gabriele Fischer-Mania

As we all know from everyday experience, it is pretty simple to use a body part other than the hand to perform a purposeful action. But the findings from studies investigating movement planning with different effectors are not clear-cut. Usually, the paradigm used in this kind of research works as follows: The participants look at a centrally presented fixation mark and rest their hand in front of the body midline. Next, a dot indicating the movement goal is presented to the left or right of fixation. The colour of the dot tells the participants whether they have to use their hand or their eyes to move towards the dot. Only when the fixation mark disappears are the participants allowed to perform the movement with the cued effector. The delay between the presentation of the goal and the actual movement is important, because muscle activity affects the signal that is measured from the brain (and not in a good way). The subsequent analyses usually focus on this delay period, as the signal emerging throughout it is thought to reflect movement preparation. Many studies assessing the activity preceding eye and hand movements have suggested that PPC is organised in an effector-specific manner, with different sub-regions representing different body parts [3]. Other studies report contradicting results, with overlapping activity for hand and eye [4].
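For readers who like to see the structure spelled out, here is a minimal sketch of what a single trial in such a delayed-movement paradigm could look like. The timings and the colour-to-effector mapping are invented for illustration; they are not the parameters of any of the studies cited here.

```python
import random

# Colour-to-effector mapping and timings are invented for illustration only.
CUE_COLOURS = {"red": "hand", "blue": "eye"}

def make_trial():
    colour = random.choice(list(CUE_COLOURS))
    return {
        "fixation_s": 1.0,                              # central fixation mark appears
        "goal_side": random.choice(["left", "right"]),  # dot to the left or right of fixation
        "cue_colour": colour,                           # colour says which effector to use
        "effector": CUE_COLOURS[colour],
        "delay_s": random.uniform(1.0, 1.5),            # analysed period: movement preparation
        "go_signal": "fixation offset",                 # only then may the movement be made
    }

trials = [make_trial() for _ in range(100)]             # randomly interleaved trials
```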


EEG photo, as before.

But here’s the thing: We cannot stare at a door until it finally opens itself, and I imagine picking up that lost piece of laundry with my eye to be rather uncomfortable. Put more scientifically, hands and eyes are functionally different. Whereas we use our hands to interact with the environment, our eyes are a key player in perception. This is why my supervisor came up with the idea to compare hands and feet, as virtually all goal-directed actions we typically perform using our hands can also be performed with our feet (e.g., see the work of mouth and foot painting artists). Surprisingly, it turned out that the portion of PPC that was previously thought to be exclusively dedicated to hand movement planning showed virtually the same fMRI activation during foot movement planning [5]. That is, the brain does not seem to differentiate between the two limbs in PPC. Wait, the brain? Whereas fMRI is useful to show us where in the brain something is happening, it does not tell us much about what exactly is going on in neuronal populations. Here, the high temporal resolution of EEG allows for a more detailed investigation of brain activity. During my PhD, I used EEG to look at hands and feet from different angles (literally – I looked at a lot of feet). One way to quantify possible effects is to analyse the signal in the frequency domain. Different cognitive functions have been associated with power changes in different frequency bands. Based on a study that found eye and hand movement planning to be encoded in different frequencies [6], my project focused on identifying a similar effect for foot movements.
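As a rough illustration of what “analysing the signal in the frequency domain” can mean in practice, the sketch below estimates power in one band (here beta, 13–30 Hz) from a single delay-period epoch. The sampling rate, band limits and the random stand-in data are placeholders, not the settings of my experiments.

```python
import numpy as np
from scipy.signal import welch

fs = 500                                  # sampling rate in Hz (placeholder)
delay_epoch = np.random.randn(2 * fs)     # 2 s of one EEG channel, stand-in for real data

# Power spectral density of the delay-period signal
freqs, psd = welch(delay_epoch, fs=fs, nperseg=fs)

def band_power(freqs, psd, low, high):
    """Sum the spectrum between two frequencies (e.g. the beta band)."""
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # approximate integral of the PSD

beta_power = band_power(freqs, psd, 13, 30)   # compare this across hand vs. foot trials
```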


Source: Pixabay

This is not as straightforward as it might sound, because there are a number of things that need to be controlled for: To make a comparison between the two limbs as valid as possible, movements should start from a similar position and end at the same spot. And to avoid expectancy effects, movements with both limbs should alternate randomly. As you can imagine, it is quite challenging to find a comfortable position to complete this task (most participants did still talk to me after the experiment, though). Another important thing to keep in mind is the fact that foot movements are somewhat more sluggish than hand movements, owing to physical differences between the limbs. This can be accounted for by having participants perform different types of movements: some easy, some difficult. When the presented movement goal is rather big, it’s easier to hit than when it’s smaller. Unsurprisingly, movements to easy targets are faster than movements to difficult targets, an effect that has long been known for the hand [7] but had not yet been shown for the foot. Even though this effect is obviously observed during movement execution, it has been shown to already arise during movement planning [8].
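This speed–accuracy trade-off is usually formalised as Fitts’ law [7]: movement time grows linearly with an “index of difficulty” that depends on how far away the target is and how small it is. A small sketch with made-up coefficients:

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits: larger for far away and/or small targets."""
    return math.log2(2 * distance / width)

def movement_time(distance, width, a=0.2, b=0.1):
    """Fitts' law: MT = a + b * ID (the coefficients a and b here are illustrative only)."""
    return a + b * index_of_difficulty(distance, width)

easy = movement_time(distance=20, width=4)   # big target   -> ~0.53 s
hard = movement_time(distance=20, width=1)   # small target -> ~0.73 s
```

With these invented coefficients the big target comes out roughly 0.2 s faster, which is the direction of the effect described above.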

So, taking a closer look at actual movements can also tell us a fair bit about the underlying planning processes. In my case, “looking closer” meant recording hand and foot movements using infrared lights, a procedure called motion capture. Basically the same method is used to create the characters in movies like Avatar and The Hobbit, but rather than making fancy films I used the trajectories to extract kinematic measures like velocity and acceleration. Again, it turned out that hands and feet have more in common than it may seem at first sight. And it makes sense – as we evolved from quadrupeds (i.e., mammals walking on all fours) to bipeds (walking on two feet), the neural pathways that used to control locomotion with all fours likely evolved into the system now controlling skilled hand movements [9].
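As an illustration of how such kinematic measures can be derived from a recorded trajectory, the sketch below numerically differentiates a synthetic one-dimensional reach to obtain velocity and acceleration; real motion-capture data would be three-dimensional, noisier, and usually filtered first.

```python
import numpy as np

fs = 200                                  # assumed motion-capture sampling rate in Hz
t = np.arange(0, 1, 1 / fs)               # one second of movement

# Synthetic 1-D trajectory standing in for a recorded fingertip (or toe) position
position = 0.3 * (1 - np.cos(np.pi * t)) / 2    # a smooth 30 cm reach

velocity = np.gradient(position, 1 / fs)        # first derivative of position
acceleration = np.gradient(velocity, 1 / fs)    # second derivative

peak_velocity = velocity.max()                  # typical kinematic measures
time_to_peak_velocity = t[velocity.argmax()]
```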

What’s most fascinating to me is the incredible speed and flexibility with which all of this happens. We hardly ever give a thought to the seemingly simple actions we perform every minute (and it’s useful not to, otherwise we’d probably stand rooted to the spot). Our brain is able to take in a vast amount of information – visual, auditory, somatosensory – filter it effectively, and generate motor commands in the range of milliseconds. And we haven’t even found out a fraction of how all of it works. Or to use a famous quote [10]: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

[1] Batista, A. (2002). Inner space: Reference frames. Current Biology, 12(11), R380-R383.

[2] Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389-443.

[3] Connolly, J. D., Andersen, R. A., & Goodale, M. A. (2003). FMRI evidence for a ‘parietal reach region’ in the human brain. Experimental Brain Research, 153(2), 140-145.

[4] Beurze, S. M., de Lange, F. P., Toni, I., & Medendorp, W. P. (2009). Spatial and Effector Processing in the Human Parietofrontal Network for Reaches and Saccades. Journal of Neurophysiology, 101(6), 3053-3062.

[5] Heed, T., Beurze, S. M., Toni, I., Röder, B., & Medendorp, W. P. (2011). Functional rather than effector-specific organization of human posterior parietal cortex. The Journal of Neuroscience, 31(8), 3066-3076.

[6] Van Der Werf, J., Jensen, O., Fries, P., & Medendorp, W. P. (2010). Neuronal synchronization in human posterior parietal cortex during reach planning. Journal of Neuroscience, 30(4), 1402-1412.

[7] Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381.

[8] Bertucco, M., Cesari, P., & Latash, M. L. (2013). Fitts’ Law in early postural adjustments. Neuroscience, 231, 61-69.

[9] Georgopoulos, A. P., & Grillner, S. (1989). Visuomotor coordination in reaching and locomotion. Science, 245(4923), 1209-1210.

[10] Pugh, Edward M., quoted in Pugh, G. E. (1977). The Biological Origin of Human Values.


Sitting in the dark: the importance of light in theatre

I’ve spent a lot of the past year sitting in the dark – literally. For people who work in theatre, this may come as no surprise. In the eight years I spent working full-time as a lighting assistant/production electrician, I could quite easily go for three or four days in a row without seeing any sunlight. I’ve often thought it odd that the people who “create” light for live performance, people who use light as their primary creative medium, spend so much time in the dark. If you’re unfamiliar with the theatre production process, here’s a (very brief and very simplified!) rundown:
In most regional and London producing theatres, work on a production begins about four to six months prior to the first preview. This can be significantly longer on larger shows, particularly those in the West End. About a week before the first preview, the cast, director, and design team move into the theatre space itself to start technical rehearsals. By this stage, the set has been built, costumes made, lights and speakers rigged, etc. The technical rehearsal is the start of what is called the production week (also known as “hell week” in some American theatres on account of the long days). Technical rehearsals are the only time the entire company is together in the performance space, and they are – as the name suggests – focused primarily on the technical and design elements of a production. Technical rehearsals are often very “stop and start” as cues, scene changes, costume changes, etc. are run multiple times until all parties are comfortable. Once the whole production has been worked through in this manner, it is followed by a dress rehearsal (often two or three, plus notes sessions) before the first public performance.

The lighting designer

For a lighting designer, the first day of technical rehearsals is often the most difficult. All of the lighting designer’s pre-production research, the conversations they have had with the designer, director and theatre’s head of lighting, and the plans they have drawn and had implemented by the theatre’s lighting department converge on this day, and there is enormous pressure on the lighting designer to “get it right” – funding situations in most UK theatres are such that time, money and resources are at a premium and at this point there is not enough of any of those to start over or make significant changes. This pressure is compounded by the fact that lighting is the sole visual design element that can only be created in the performance space. During the pre-production period, set designers produce a scale modelbox, alongside technical drawings, sketches and storyboards, and costume designers may use artistic drawings in conjunction with fabric swatches, for example, to help articulate their process and creative ideas. For both set and costume design, the actual product is built over several weeks and can be seen as a work-in-progress during this time. Moreover, the materials of set and costume design are tangible and the work can be observed, commented on, tweaked and refined outside and, crucially, before entering the actual performance space. Similar comparisons and tools do not exist for lighting designers. Computer visualisation software may be used; however, these programs rarely provide the detail needed to fully explain, describe or develop the potential of light outside a performance space.
In addition, these days tend to involve the most negotiation and adjustment as creative teams (especially the lighting designer) learn to navigate the “language” and “grammar” of a production, while also refining the spoken language and grammar they use to articulate it. It is this process that my research focuses on. How do lighting designers use language to articulate ideas about light and lighting, a material and a process that is largely intangible? How do they additionally use language to exercise agency and exert influence in situations of creative collaboration?

My research

To answer these questions, I sit in the dark, behind the lighting designer, armed with two recording devices. One of these records the ambient conversation, usually between the director or designer and the lighting designer. The other records the conversation on “cans” (UK theatre slang for the headsets worn by all members of the design and technical teams to facilitate conversation without having to resort to shouting backstage!).
The darkness provides an ideal environment for conducting my fieldwork. Even though I am acting as an “overt insider” (Merton, 1972; Greene, 2014), the darkness makes it possible for me to fade into the background and remain largely unnoticed by the people I am observing – which is simultaneously useful and disconcerting. There is something anonymising about the dark, but it can also be quite liberating. There’s plenty of interesting research on audience behaviour and fascinating studies on people’s behaviour generally in the dark — but for now, I’ll just say what an illuminating (see what I did there?) experience sitting in the dark has been!
Greene, M.J. 2014. On the inside looking in: methodological insights and challenges in conducting qualitative insider research. The Qualitative Report, 19 (How To Article 15), pp. 1–13.
Merton, R.K. 1972. Insiders and outsiders: a chapter in the sociology of knowledge. American Journal of Sociology, 78(1), pp. 9–47.

Space weather – predicting the future

by Aoife McCloskey

Early Weather Prediction

Weather is a topic that humans have been fascinated by for centuries and, from the earliest civilisations to the present day, we have been trying to predict it. In the beginning, by noting the appearance of clouds or observing recurring astronomical events, humans were able to better predict seasonal changes and weather patterns. This was, of course, motivated by practical concerns such as agriculture or knowing when conditions were best for travel, but it also stemmed from the innate human desire to develop a better understanding of the world around us.

Weather prediction has come a long way from its primordial beginnings, and with the exponential growth of technological capabilities in the past century we are now able to model conditions in the Earth’s atmosphere with unprecedented precision. However, until the late 1800s, we had been blissfully unaware that weather is not confined solely to our planet, but also exists in space.

Weather in Space

Weather, in this context, refers to the changing conditions in the Solar System and can affect not only our planet, but other solar system planets too. But what is the source of this weather in space? The answer is the biggest object in our solar system: the Sun. Our humble, middle-aged star is the reason we are here in the first place and has been our reliable source of energy for the past 4.6 billion years.

However, the Sun is not as stable or dependable as we perceive it to be. The Sun is in fact a very dynamic object, made up of extremely hot ionised gas (also known as plasma). Just like the Earth, the Sun generates its own magnetic field, albeit on a much larger scale than our planet. This combination of strong magnetic fields, and the fact that the Sun is not a solid body, leads to the build-up of energy and, consequently, energy release. This energy release is what is known as a solar flare: simply put, an explosion in the atmosphere of the Sun that produces extremely high-energy radiation and spits out particles that can travel at near-light speeds into the surrounding interplanetary space.

The Sun: Friend or Foe?

Sounds dangerous, right? Well yes, if you were an astronaut floating around in space, beyond the protection of the Earth, you would find yourself in a very undesirable position if a solar flare were to happen at the same time. For us here on Earth, the story is a bit different when it comes to being hit with the by-products of a solar flare. As I said earlier, our planet produces its very own magnetic field, similar to that of a bar magnet. For those who chose to study science at secondary school, I’m sure you may recall the iron filings and magnet experiment. Well, that’s pretty much what our magnetic field looks like, and luckily for us it acts as a protective shield against the high-energy particles that come hurtling our way on a regular basis from the Sun. One of the most well-known phenomena caused by the Sun is actually the Aurora Borealis, i.e., the northern lights (or the southern lights, depending on which hemisphere you live in).


Picture of the Aurora Borealis, taken during Aoife’s trip to Iceland in January 2016.

This phenomenon has been happening for millennia, yet until recent centuries we didn’t really understand why. What we know now is that the aurorae are caused by high-energy particles from the Sun colliding with our magnetic field, spiralling along the field lines and making contact with our atmosphere at the north and south magnetic poles. While the aurorae are a favourable effect of space weather, as they are astonishingly beautiful to watch and photograph, there are unfortunately some negative effects too. These effects here on Earth range from satellite damage (GPS in particular), to radio communication blackouts, to the more extreme case of electrical grid failure.

My PhD – Space Weather Forecasting

So, how do we predict when there is an event on the Sun that could have negative impacts here on Earth? Science, of course! In particular, in the area of Solar Physics there has been increasing focus on understanding the physical processes that lead to space weather phenomena and trying to find the best methods to predict when something such as a solar flare might occur.

It is well known that one should not view the Sun directly with the naked eye, so traditionally the image of the Sun was projected onto pieces of paper. Using this method, some of the first features observed on the Sun were large, dark spots that are now known as sunspots. These fascinated astronomers for quite some time, and an extensive record of sunspots has been kept since the early 1800s. The sunspots were initially traced by hand, on a daily basis, until photographic plates were invented and this practice became redundant. After many decades of recording these spots a pattern appeared to emerge, corresponding to a roughly 11-year cycle, where the number of spots would increase to a maximum and then gradually decrease again. It was shown that this 11-year cycle was correlated with the level of solar activity; in other words, the number of solar flares and the amount of energy they release also follow this pattern.


Sunspot drawing by Richard Carrington, 01 September 1859

Leading on from this, it is clear that there exists a relationship between sunspots and solar flares, so logically they are the place to start when trying to forecast. My PhD project focuses on sunspots and how they evolve to produce flares. For a long time, sunspots have been classified according to their appearance. One of the most famous classification schemes was developed by Patrick McIntosh and has been widely used by the community to group sunspots by their size, symmetry and compactness (how closely packed the spots are) [1]. Generally, the biggest, baddest and ugliest groups of sunspots produce the most energetic, and potentially hazardous, flares. Our most recent work has been studying data from past solar cycles (1988-2010) and looking at how the evolution of these sunspot groups relates to the flares they produce [2]. I found that groups that increase in size produce more flares than those that decrease in size. This had been postulated before, and it helps to answer an open question in the community as to whether sunspots produce more flares when they increase in size (grow) or when they decrease in size (decay). Using these results, I am now implementing a new way to predict the likelihood that a sunspot group will produce flares, and the magnitude of those flares.
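The paper itself should be consulted for the actual forecasting method, but purely as an illustration of how a historical flare rate can be turned into a forecast, one common assumption is that flares occur as a Poisson process, so the probability of seeing at least one flare in a given window follows directly from the average rate observed for that type of sunspot group. The rates below are invented.

```python
import math

def flare_probability(mean_flares_per_day, window_days=1.0):
    """P(at least one flare in the window), assuming flares follow a Poisson process
    with the historically observed mean rate for that sunspot classification."""
    return 1.0 - math.exp(-mean_flares_per_day * window_days)

# Invented average M-class flare rates for a growing vs. a decaying sunspot group
print(flare_probability(0.8))   # growing group  -> ~0.55
print(flare_probability(0.3))   # decaying group -> ~0.26
```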


Space weather is a topic that is now, more than ever, of great importance to our technology-dependent society. That is not to say that there will definitely be a catastrophic event in the near future, but it is certainly a potential hazard that needs to be addressed on a global scale. In recent years there has been significant investment in space weather prediction, with countries such as the UK and the U.S. both establishing dedicated space weather forecasting services. Here in Ireland, our research group at Trinity College has been working on improving the understanding and prediction of space weather for the past ten years. I hope that, in the near future, space weather forecasting will reach the same level of importance as the daily weather forecast, but for now – watch this space.

  1. McIntosh, Patrick S. (1990). ‘The Classification of Sunspots’. Solar Physics, pp. 251-267.
  2. McCloskey, Aoife (2016). ‘Flaring Rates and the Evolution of Sunspot Group McIntosh Classifications’. Solar Physics, pp. 1711-1738.


Who’s left holding the baby now? Assisted Reproductive Technologies and Irish Law

by Sarah Pryor

The rapid development and expanding use of genetic technologies in the past decade is both a cause for celebration and a cause for concern.

There is an onus on law- and policy-makers to act responsibly in creating and implementing legal tools that aid the smooth operation and integration of these technological advances into society, in order to mitigate any negative impact from the existence and use of technologies in this growing area.

The question asked here is: do assisted reproductive technologies challenge the traditional concepts of parenthood generally, and motherhood specifically, and what impact does this have on Irish law and society?

Quite simply put, the answer is yes, these emerging technologies do challenge traditional familial concepts and norms. The answer as to what impact this has on Irish law and society is considerably more complicated.

Ethical concerns

Reproduction is becoming increasingly medicalised, geneticised and commercialised. This has the potential to diminish the human condition and damage the human population.[1] In a time of scientific, social and legal change it is inevitable that there will be periods of uncertainty. It is under these conditions of uncertainty that identity and ethics must be debated, and boundaries must be established, to ensure that the broader population does not suffer negative consequences from the advances being made in the area of assisted reproduction.

The ethical concerns surrounding the increased medicalisation of human reproduction range greatly.[2]

The most challenging element of reproductive technologies is that the issues being debated are deeply personal and sensitive: no two experiences are the same, which makes it difficult to establish a standard of practice, or a legally and ethically balanced acceptance of the use of these procedures. These difficulties are inherent to any discussion surrounding human reproduction.

Assisted Human Reproduction in Ireland

Assisted Human Reproduction (AHR) was not formally recognised as an area in need of governmental oversight until the year 2000, when the Commission for Assisted Human Reproduction, herein referred to as ‘the Commission’, was established and the need for comprehensive, stand-alone legislation in this area was recognised.[3]

The Commission and its subsequent report were welcomed as a move towards the recognition of a set of newly emerging social norms in Ireland, both in terms of medicine and reproductive technologies and in terms of the traditional nuclear family and the growth towards new familial norms. However, following the publication of the 2005 report, little was done to proactively implement the recommendations it set out.[4]

Political conversation centres on the disappointment that questions surrounding AHR services and their use must be addressed through judicial channels, and that there is no legislation in place to remove the need to turn to the Irish court system for answers.[5]

The lack of legislation in this area means that the only guidance available to medical practitioners comes from the Irish Medical Council’s “Guide to Professional Conduct and Ethics for Registered Medical Practitioners”.[6] Several cases in recent years have been brought to the High Court and Supreme Court in order to navigate the maze that this legal vacuum leaves patients struggling through.[7] These cases, as recently as 2014, have highlighted the necessity for legislation in the area in order to protect all parties involved.

The role of religion

It is important to recognise the cultural history of Ireland and the importance of the social and political role of the Catholic Church for many years. Older Irish generations were reared in a country in which contraception was illegal and women did not work once they were married as their societal role was in the home. Newly emerging technologies, such as surrogacy, further challenge these traditional values.

There is an unfortunate pattern of political and religious control over a woman’s right to reproduce and over the conditions in which it is ‘right’ for a woman to have a baby. For a long time in Ireland, there was no real separation of church and State. The ramifications of this have rippled throughout Irish history and up to the present day – nowhere more so than in the area of the reproductive rights of women.

Parallels with the Repeal the 8th campaign 

Although distinctly different from the abortion debate and the argument for the repeal of the 8th amendment, certain parallels can be drawn in how the government has responded to calls from various groups to provide guidance in the area of assisted reproduction, and how these calls have been largely brushed aside. On the introduction of the Children and Family Relationships Act 2015, the Minister for Justice & Equality, Frances Fitzgerald, removed any reference to surrogacy because it was too large an issue to be merely a feature of a more generalised bill. There is, then, some indication that positive movements are being made in this area – the question is when they will actually be formulated into real, working policies, laws and protocols.

ARTs and the Marriage Equality referendum

Until 2015, marriage in Ireland was exclusively available for heterosexual couples. The 34th Amendment of the Irish Constitution changed this, effectively providing for a more equal society in which traditional Irish values towards marriage were replaced with a more accepting stance, something which was voted for by the Irish public through a referendum.[8]

The gravity of such a change in Irish society has implications beyond just marriage. Laws regarding areas such as adoption were relevant only to the married couple and, within that context, this meant only heterosexual couples. Irish family law was written with the traditional ‘mother, father and children’ family in mind. It is fair to say that family dynamics have changed significantly, and the movement away from traditional concepts of family is increasing. With the passing of the Marriage Referendum, marriage in the context of law and society has taken on a new meaning, and the symbolic nature of this recognition of a new familial norm is plain to see. The Irish electorate voted for this, and public consultations on Assisted Reproductive Technologies (ARTs) have illustrated the support of the Irish people for ARTs, and for legislation regulating their use – and yet, still there is none.

ARTs are used by heterosexual and homosexual couples alike. The Children and Family Relationships Act 2015 has made movements towards acknowledging new familial norms in Ireland and was a welcome symbol of the future of Irish society as increasingly liberal and accepting. Although many pressing issues, such as surrogacy, are not addressed within the Act, the support for the enactment of new measures regarding familial relationships is a deeply reassuring acknowledgement of the changing, evolving nature of Irish society and its views towards non-traditional family units. While this is to be welcomed, it simply doesn’t go far enough.

The role of the mother

One area that has not been addressed in any significant way is the greatly changed role of the mother.

Mater semper certa est – the mother is always certain. This is the basis on which Irish family law operates and it is this historical, unshakeable concept that is being shaken to its core by the emergence of ARTs.

Traditional concepts of motherhood are defined solely through the process of gestation.[9] A birth mother, in the context of Irish law, is the legal mother.[10] This has remained a point of contention in the Irish courts, demonstrated in the 2014 Supreme Court case addressing the rights of a genetic mother to children to whom she did not give birth, who were carried by a surrogate. Denham CJ addressed the ‘lacuna’ in Irish law, emphasising the responsibilities of the Oireachtas, in saying that:

“Any law on surrogacy affects the status and rights of persons, especially those of the children; it creates complex relationships, and has a deep social content. It is, thus, quintessentially a matter for the Oireachtas.”

Chief Justice Denham further stated that:

“There is a lacuna in the law as to certain rights, especially those of the children born in such circumstances. Such lacuna should be addressed in legislation and not by this Court. There is clearly merit in the legislature addressing this lacuna, and providing for retrospective situations of surrogacy.”[11]

The emergence of ARTs as common practice, particularly regarding egg and sperm donation, surrogacy and embryo donation, has created a new concept of parenthood, and more specifically motherhood.

There are deeply divided views over who exactly is the legal mother and who is the social mother, over the rights that each participant has, and over who is responsible for the donor-conceived or surrogate-born child.

Whilst some of these issues, such as the right of a donor-conceived child to information about their donor, were addressed in both the Commission Report and the 2013 RCSI Report, neither delves deeply into the implications of such medical processes for concepts of motherhood and parenthood.

Three fragmented concepts of motherhood now exist: social, gestational and genetic.[12] Although there are established ideologies of parental pluralism within society regarding adoption, the nature of the situation in which a child is born through the use of ARTs is fundamentally different from an adoption agreement, which is accounted for in Irish law.

Feminist views on ARTs

Feminist views on the emergence of assisted reproductive technologies differ greatly. Arguments are made opposing ARTs as methods of increased control over women’s reproduction through commercialisation and the reinforcement of pro-natalist ideologies.[13] Others argue in favour of ARTs, stating that their development allows women more freedom over their reproductive choices and enables them to bear children independently of another person and at a time that suits them; an example of this being the use of IVF by a woman at a later stage in her life.[14]

These complexities exist before even considering the social and legal role of parents in same-sex relationships – what relevance does the role of the mother have for a gay couple? What relevance does the role of a father have for a lesbian couple? Does the increasing norm of homosexual couples having children via a surrogate remove any need for these socially constructed familial roles and highlight their irrelevance in modern society? The same questions can be asked of a single man or woman seeking to have a child via a surrogate – should a person only have a child if they are in a committed relationship? Surely not, as single parents currently exist in Ireland, have done so for some time, and are raising their children without objection from society or the state.

‘The law can no longer function for its purpose’

Regardless of where one’s stance lies on the emergence of these technologies, it is undeniably clear that their use is challenging normative views and practices of parenthood. The traditional, socially established norms are shifting from what was once a quite linear and nuclear view. ARTs allow for those who previously could not have genetically linked children to do so via medical treatments. It is in this way that the situation under current Irish law is exacerbated, and the law can no longer function for its purpose.

Something needs to be done, so that whoever wants to be, can be left holding the baby!

[1] Sarah Franklin and Celia Roberts, Born and Made: An Ethnography of Preimplantation Genetic Diagnosis (Princeton University Press 2006).

[2] Sirpa Soini and others, ‘The Interface between Assisted Reproductive Technologies and Genetics: Technical, Social, Ethical and Legal Issues’ (2006) 14 European Journal of Human Genetics.

[3] David J Walsh and others, ‘Irish Public Opinion on Assisted Human Reproduction Services: Contemporary Assessments from a National Sample’.

[4] Deirdre Madden, ‘Delays over Surrogacy Has Led to Needless Suffering for Families’, Irish Independent (2013), accessed 25 June 2016.

[5] Roche v. Roche 2009

See also, MR & DR v. An tArd Chlaraitheoir 2014

[6] David J Walsh and others, ‘Irish Public Opinion on Assisted Human Reproduction Services: Contemporary Assessments from a National Sample’.

[7] See Roche v. Roche 2009. See also MR & DR V. An tArd Chlaraitheoir 2014

[8] 34th amendment of the Constitution (Marriage Equality) Act 2015.

[9] Andrea E Stumpf, ‘Redefining Mother: A Legal Matrix for New Reproductive Technologies’ (1986) 96 The Yale Law Journal 187, accessed 16 June 2016.

[10] See MR and DR v An tArd-Chláraitheoir & ors [2014] IESC 60 [S.C. no. 263 of 2013].

[11] Ibid, para 113, para 116.

[12] SA Hammons, ‘Assisted Reproductive Technologies: Changing Conceptions of Motherhood?’ (2008) 23 Affilia 270, accessed 4 August 2016.

[13] SA Hammons, ‘Assisted Reproductive Technologies: Changing Conceptions of Motherhood?’ (2008) 23 Affilia 270, accessed 4 August 2016. See also Gimenez, 1991, p. 337.

[14] See, Bennett, 2003 and Firestone, 1971


CHILD SOLDIERS: Where are the girls?  Kids, guns and the Patriarchy

By Marie Penicaut

Much has been written lately about African child soldiers.[1] We, in the West, are all familiar with the image of an eight- or ten-year-old boy, holding an AK-47 too big for him, in a pseudo-military uniform, his eyes crying for help. We see him in newspapers and on television. We hear his horrifying story in documentaries, interviews, and sometimes self-written memoirs. Since Blood Diamond[2], we also see him in fiction films, poignant and stereotypical representations of these kids’ tragic lives that we too readily take for granted. And, as Nigerian author Chimamanda Ngozi Adichie wonderfully puts it in an inspiring TED Talk, “the problem with stereotypes is not that they are untrue, but that they are incomplete. They make the single story become the only story”.[3]



The ‘typical’ child soldier

But where are the girls in all of that? Why don’t we see pictures of little girls carrying AK-47s? Why is there virtually no girl – not a single one – in Netflix’s critically acclaimed Beasts of No Nation[4], when many studies have shown that girls constitute up to 40% of all child soldiers in some African contexts? Why are they so often completely ignored by academic literature, governments, international organisations and NGOs alike?



Agu’s all-boys unit marching towards combat. Screenshot from Beasts of No Nation.

The answer should not come as a surprise. Once again, the Patriarchy strikes: society puts us in two clear-cut categories, where according to our biological sex – male or female[5] – we are expected to behave in a certain way. Girls will naturally be peaceful, pacifist, and passive; boys will be inherently violent, aggressive, and impulsive. Hence the common belief that on one side, ‘girls don’t fight’, while on the other, ‘boys will be boys’ – which inevitably leads to the idea that war is the realm of men, and of men alone.

No wonder, then, that girl child soldiers are invisible, even when confronted with evidence that 10 to 30% of child soldiers worldwide are female, and 30 to 40% in recent African conflicts.[6]

When – and if – they are mentioned, it is only as simple camp followers. As the ‘good little women’ they are, they cook, do the laundry and take care of the youngest. But in reality, many receive military training and fight just like the boys.[7] During the Mozambican War of Independence (1964-74), which pitted FRELIMO (the Mozambique Liberation Front) against the Portuguese government, the rebels had mixed and female-only military units where girls and young women fought for the liberation of their country.[8] War was an opportunity for them to escape their gender roles. They were treated just the same as men. But once the country became independent in 1975, it was not long before they were sent back to the kitchen, and the crucial role they played was progressively forgotten.



Johnny Mad Dog or the stereotypical child soldier narrative

We should not underestimate the power of the media and of pop culture. They both represent and influence the way we make sense of the world. The first thing I did when I started researching child soldiering in Africa (for my master’s dissertation) was to try to find as many fiction films and documentaries on the topic as I could. Before entering the more nuanced and detailed academic discussion, I wanted to have the exact same perception of the phenomenon as everyone else.

I was shocked when I watched Johnny Mad Dog[9], the ultraviolent and ultra-clichéd adaptation of the eponymous novel by Emmanuel Dongala[10]. It tells the story of Johnny, abducted by rebels at the age of 9 and now 15, in yet another unnamed African country torn by a senseless conflict – the Western discourse on African child soldiers is also profoundly racist: most movies are entirely decontextualised, as if the story could take place anywhere on the continent, negating the vast diversity of its 54 countries and the complex reasons that lead to armed conflict.

In the book, there are two narrators: Johnny and Laokolé, a strong and smart girl who makes her way through a world of violence and chaos. But Sauvaire completely silences her to put Johnny at the centre of the story. She becomes a character of secondary importance. Even worse: while in the book she cold-bloodedly plans to kill Johnny, and does it, as she knows he intends to rape and kill her, the film ends on her indecision over whether to shoot him in self-defence. Her originally strong agency is simply erased.

Dongala’s resistant discourse is violated and distorted to conform to the expectations of a public for which violence is the monopoly of males.



Johnny Mad Dog’s last image: Laokolé pointing a gun towards Johnny, breathing heavily, undecided.


Girl soldiers, the “ultimate victim[s] in need of rescue”[11]

If you are active on social media, there is a good chance that you have heard of the Kony2012[12] phenomenon. The 30-minute video posted on YouTube by Invisible Children, an NGO founded by three American filmmakers, was created with the aim of fighting the child soldiering the three had “discovered” in Uganda. The viral video – which gained 100 million views in less than a week – sums up pretty well all the stereotypes about child combatants. It also illustrates the difference in treatment between girls and boys in the global discourse: “the girls are turned into sex slaves, and the boys into child soldiers”. Things are simple. Girls do all the chores and are sex slaves. Boys are forced to fight and to commit atrocities. Girls don’t fight and boys don’t get raped. Even more than their male counterparts, girls are voiceless victims in need of rescue by the West.



Kony and his ‘army of children’. Source: Screenshot of Kony2012

Many girls and women are victims of sexual violence, especially in the climate of conflict and instability that has affected a number of African countries in the past decades. But stories of rape and abuse too often eclipse other stories of bravery, resilience and survival.

Even more than boys, girls are denied any agency, any voice; they are denied the possibility to speak out and tell their story as they experienced it and not as we want to hear it.

In some contexts, becoming a soldier can be empowering for them. They can gain power, and a surrogate family where they had none, and escape their traditional gender roles.[13] Their experience is too often reduced to the sexual violence they may or may not have undergone. In virtually every documentary I have watched for my dissertation project, girls are interviewed solely to talk about their experience of sexual violence, and often asked to provide gruesome details to satisfy the journalist’s, and the public’s, morbid curiosity.

It is not the first and certainly not the last time that women have been misunderstood and misrepresented because of sexist stereotypes. But the tragedy lies in the consequences this has on the ground, for real girls who have served weeks, months, and sometimes years in militias. Because ‘girls don’t fight’, many disarmament, demobilisation and reintegration programmes[14] exclude them. Only 5% benefit from them.[15] And when they do, their special needs are rarely addressed: no female clothing in the aid packages, no tampons or pads, no reproductive healthcare, etc. Skills training and camp activities are often biased towards males – learning masonry, carpentry, mechanics, etc.[16] When going back to civilian life, because they are labelled as sexual victims, they are affected by a stigma of sexual activity. Whether real or not, this stigma leads to social exclusion. Many girls hide their rebel lives from their family and community and decide not to register for demobilisation because they are too afraid of the consequences – of being seen as monsters, as dangerous rebels, as ‘bush wives’[17] that can no longer marry.

More than anything else, girl child soldiers are victims of the Patriarchy. In the West, which ignores and silences them; and in their own societies that stigmatise and exclude them both as rebels and as trespassers of their gender roles. The child soldier phenomenon is a complex one. Its gender dimension is only one aspect of the issue, but one that deserves much more attention than it gets now.

Movies like Beasts of No Nation, Blood Diamond and Johnny Mad Dog, with large audiences and good reviews, are missed opportunities to challenge a simplistic, essentialist and dangerous understanding of child soldiers.

They perpetuate many harmful ideas and are representative of the status quo on the place of women in war: none.  “Just as these films were made mostly by whites and thus show a white bias, so were they made mostly by men and show a male bias.”[18]



[1] Understood as “any person below 18 years of age who is or who has been recruited or used by an armed force or armed group in any capacity, including but not limited to children, boys and girls, used as fighters, cooks, porters, messengers, spies, or for sexual purposes” (The Paris Principles, 2007).

[2] Blood Diamond, 2006. Directed by Edward Zwick.

[3] Available at:

[4] Beasts of No Nation, 2015. Directed by Cary J. Fukunaga.

[5] Many do not identify with these two categories.

[6] Denov, 2010, p. 13.

[7] Keairns, 2002, p. 13; Annan et al., 2009, p. 9.

[8] West, 2005.

[9] Johnny Mad Dog, 2008. Directed by Jean-Sébastien Sauvaire.

[10] Dongala, E. (2002) Johnny Chien Méchant. Paris: Le Serpent à Plumes.

[11] Macdonald, 2008, p. 136.

[12] Available at:

[13] Valder, 2014, p. 44.

[14] UN-led child-specific programmes whose goal is to facilitate their return to civilian life. NGOs often intervene and collaborate at different steps of the process (UNDDR Resource Centre).

[15] Taylor-Jones, 2016, p. 185.

[16] Coulter, 2009, p. 64.

[17] Girls and women forced to ‘marry’ within the rebel group.

[18] Cameron, 1994, p. 188.

Maths: the same in every country?

by Rose Cook, PhD candidate at the Institute of Education, University College London.

Think women aren’t good at maths? Depends on where you’re a woman. 


(We never miss a chance to quote Mean Girls here at Women Are Boring)

Do you know the difference between Celsius and Fahrenheit? Can you interpret information from line graphs in news articles? Calculate how many wind turbines would be needed to produce a certain amount of energy (given the relevant information)?

These may seem like basic tasks, but if you are a woman living in the UK, Germany or Norway, the chances are you would struggle with them more than a comparable man. If you live in Poland, however, you might even outperform a male counterpart.
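To make concrete the kind of everyday calculation the survey has in mind, here is a tiny worked example; all of the numbers are invented.

```python
import math

# Celsius to Fahrenheit: multiply by 9/5 and add 32
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

print(celsius_to_fahrenheit(20))                 # 68.0 degrees Fahrenheit

# Wind turbines needed: required output divided by one turbine's output, rounded up
required_mw = 50        # invented demand
mw_per_turbine = 3.5    # invented turbine capacity
print(math.ceil(required_mw / mw_per_turbine))   # 15 turbines
```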

Why this variation in skills, and why does it appear in some countries and not others?

For some, these findings, from the 2011 international survey of adult skills run by the OECD, will confirm their existing beliefs. In spite of women being more academically successful than men, the perception that ‘women can’t do maths’ is widely held. A recent experiment [1] showed that both genders believe this to be true: both male and female subjects were more likely to select men to perform a mathematical task that, objectively, both genders fulfil equally well. In her successful book ‘The Female Brain’, Louann Brizendine argued that women are ‘hard wired’ for communication and emotional connection, while men’s brains are oriented towards achievement, solitary work and analytical pursuits.

Another camp of social scientists argue that such narratives misrepresent the facts.  Janet Shibley Hyde and colleagues insist that, at least in the United States, men and women’s cognitive abilities are characterised by similarity rather than difference. Reviewing findings across many studies of gender differences on standardised mathematics tests, these authors found that ‘even for difficult items requiring substantial depth of knowledge, gender differences were still quite small’[2].

The fact that gender differences show up on an international survey of numeracy skills is a puzzling addition to an already contentious picture. Of course, not all maths tests are created equal. The difference may in some way reflect the way the survey conceptualises skills. Distinct from mathematical ability, applied numeracy skills are described as:

‘the ability to use, apply, interpret, and communicate mathematical information and ideas’.[3]

Crucially, individuals who are ‘numerate’ should be able to apply these abilities to situations in everyday life. Perhaps these ‘everyday’ maths skills are more biased by gender than the measures used in other studies?

Numeracy: the ‘new literacy’

I argue that we should take these gender differences seriously. More and more, jobs now require numeracy skills, both to perform basic tasks and to support ICT skills. Outside work, numeracy skills are increasingly required to make sense of the world around us. They help us to grasp concepts such as interest rates and inflation, which help us to deal with money. Moreover, according to the British Academy,

‘the ability to understand and interpret data is an essential feature of life in the 21st century: vital for the economy, for our society and for us as individuals. The ubiquity of statistics makes it vital that citizens, scientists and policy makers are fluent with numbers’.

The importance of numeracy has been recognised recently in the UK with the establishment of an All-Party Parliamentary Group for Maths and Numeracy, the National Numeracy charity, and initiatives such as Citizen Maths.

International variation

Particularly curious is the large variation across countries in the size of the gender difference. Figure 1, below, shows that, among adults aged between 16 and 65, the male advantage in applied numeracy skills is particularly large in Germany, the Netherlands and Norway, while it is virtually non-existent in Poland and Slovakia. The graph shows raw differences in average skill scores; although gaps reduce somewhat when controlling for age, family and immigration background and education, they remain.

Figure 1: Mean numeracy skills by gender, International Survey of Adult Skills, 2012


Source: Author’s calculations using data from the OECD Survey of Adult Skills (PIAAC). Survey and replicate weights are applied. Numeracy scores range from zero to 500. For more information on the survey, please see:
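For the curious, the calculation behind the figure is conceptually simple: a weighted average of numeracy scores per country and gender. The sketch below shows the idea; the column names are hypothetical and the proper PIAAC replicate-weight variance estimation is omitted.

```python
import pandas as pd

# Column names are hypothetical; PIAAC replicate-weight variance estimation is omitted.
piaac = pd.read_csv("piaac_numeracy.csv")   # one row per respondent

def weighted_mean(group):
    return (group["numeracy_score"] * group["final_weight"]).sum() / group["final_weight"].sum()

means = piaac.groupby(["country", "gender"]).apply(weighted_mean).unstack("gender")
means["male_minus_female"] = means["Male"] - means["Female"]
print(means.sort_values("male_minus_female", ascending=False))
```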

Any genetic component is unlikely to vary internationally [4], suggesting a substantial role for cultural, institutional or economic factors that vary across countries.

My PhD study

Given that the survey tests adults who have many experiences behind them, isolating the causes of gender differences and cross-country variation is far from simple. We are socialised into gendered preferences, motivations and skills from our earliest years [5]. We go on to make gendered choices in our educational lives, our careers and our leisure activities. All of these life domains contribute to the skills we end up with in adulthood. To some, a choice-based explanation is unproblematic; determining one’s own destiny is a core value in many contemporary societies. However, this side-steps the question of where preferences come from. Skill differences in adulthood may well reflect individuals’ choices; however, the choices themselves are likely to be influenced by a complex mixture of cultural, educational, economic and institutional factors, which vary in their salience across countries.

In my PhD study, I focus on education and labour market explanations. A key task for my research is disentangling why gender differences in numeracy skills are relatively large in countries typically considered ‘gender egalitarian’. For example, Scandinavian countries consistently top the rankings of the World Economic Forum’s Global Gender Gap Report, and are held up as bastions of gender equality. Yet Norway, Sweden and Denmark show among the largest gender differences in adults’ applied numeracy skills. Poland, Slovakia and Spain are not known for being particularly progressive on gender equality, yet they show among the smallest differences.

School and skills

One possibility is that gender differences arise from what girls and boys are exposed to while they are at school. Despite a similar basic structure, education systems across the world differ in the extent to which subjects are optional or compulsory. For example, in the UK mathematics was not compulsory in upper secondary education until recently, whereas in other countries it has long been so. Where numerate subjects are not compulsory they may be less valued, and this may create more scope for gender to affect subject and career choices. There is also wide variation in the types of mathematics learning boys and girls are exposed to across countries, as well as between schools and classes within countries.

Work and skills

Another possibility is that differences in skills are related to the types of jobs that women and men pursue once they leave education. In the majority of countries in the study, occupational segregation is still widespread in spite of women’s superior performance in education, and is partly to blame for the continuing gender pay gap. Gender occupational segregation is particularly rife in Scandinavian countries, although this has been improving in recent years [6]. Countries with strong gender segregation in jobs promote norms about which careers are appropriate and accessible for men and women, and this is likely to drive the early choices that contribute to skills in adulthood. In contrast, in some countries gender segregation of jobs is less pronounced, which may set more egalitarian norms for skill development. Moreover, given the link between more demanding, highly skilled jobs and skill development in adulthood, concentration in lower paid, more routine jobs could limit the extent to which women are able to gain skills at work. In some countries’ labour markets, women may perceive weaker incentives than their male counterparts to develop mathematical skills, preferring more typically ‘feminine’ ones, such as communication and literacy skills.

In my view, skills gaps are among the hurdles we need to overcome in order to attain full economic equality between men and women. Using international comparisons, my research aims to locate gender differences in applied numeracy skills within a broader institutional context. This is important both to correct the assumption that differences are ‘fundamental’ or ‘natural’, and to design effectively targeted policies to equalise skills. I use a variety of quantitative techniques that isolate factors associated with gender differences at both the individual and country levels. This should broaden the discussion beyond the common focus on encouraging girls to make gender ‘atypical’ choices in education, which neglects both males and the broader social context in which skill differences develop. Moreover, while there is a large amount of research on gender and education, skills inequalities among adults are less often addressed, yet they affect adults’ lives in profound ways [7]. I hope to show some of the ways in which skill differences among adults are not fixed by early experiences and biology, but malleable according to social context.
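To give a concrete flavour of what isolating factors at both levels can mean in practice, here is a minimal sketch of one such technique, a multilevel (mixed-effects) regression with individuals nested in countries. The data file, variable names and the particular country-level covariate are illustrative assumptions, not my actual model specification, and survey weights are ignored for simplicity.

# Minimal sketch: random intercept per country, with an individual-level gender
# effect allowed to vary with an (assumed) country-level covariate.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract: one row per respondent, with a country-level measure of
# occupational segregation already merged in.
df = pd.read_csv("piaac_extract.csv")

model = smf.mixedlm(
    "numeracy ~ female * occupational_segregation + age + education_years",
    data=df,
    groups=df["country"],   # random intercept captures unobserved country differences
)
result = model.fit()
print(result.summary())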


[1] Reuben, E., Sapienza, P. and Zingales, L. (2014) ‘How stereotypes impair women’s careers in science.’ Proceedings of the National Academy of Sciences, 111 (12), 4403–4408.

[2] Hyde, J. S., et al. (2008) ‘Gender similarities characterize math performance.’ Science, 321 (5888), 494–495 (p. 495).

[3] OECD (2013) PIAAC Numeracy: A Conceptual Framework (p. 20). Paris: OECD.

[4] Penner, A. M. (2008) ‘Gender differences in extreme mathematical achievement: An international perspective on biological, social, and societal factors.’ American Journal of Sociology, 114 (supplement), S138–S170.

[5] Maccoby, E. E. and D’Andrade, R. G. (1966) The Development of Sex Differences. Stanford: Stanford University Press.

[6] Bettio, F. and Verashchagina, A. (2009) Gender Segregation in the Labour Market: Root Causes, Implications and Policy Responses in the EU. Brussels: European Commission.

[7] Carpentieri, J. C., Lister, J. and Frumkin, L. (2010) Adult Numeracy: A Review of Research. London: NRDC.

Detecting Parkinson’s Disease with your mobile phone


by Reham Badaway, in collaboration with Dr. Max Little.

So, what if I told you that in your pocket right now you have a device that may be able to detect the symptoms of a brain disease called Parkinson’s much earlier than doctors can? I’ll give you a minute to empty out the contents of your pockets. Have you guessed what it is? It’s your smartphone! Not only can your trusty smartphone keep you in touch with family and friends, or help you look busy at a party where you don’t know anyone, it may also be able to detect the very early symptoms of a debilitating disease. One more reason to love your smartphone!

What is Parkinson’s disease?

So, what is Parkinson’s disease (PD)? PD is a brain disease which significantly restricts movement. Its symptoms include slowness of movement, trembling of the hands and legs, resistance of the muscles to movement (rigidity), and loss of balance. All of these movement problems are extremely debilitating and affect the quality of life of those diagnosed with the disease. Unfortunately, it is only in the late stages of the disease, when the symptoms are extremely apparent, that doctors can confidently detect PD. There is currently no cure. Detecting the disease early on can help us find a cure, or find medicines that aim to slow down disease progression. Thus, methods that can detect PD in its early stages, before doctors themselves can diagnose it, are pivotal.

Smartphone sensing

So, how can we go about detecting the disease early on in a non-invasive, cheap and easily accessible manner? Well, we believe that smartphones are the solution. Smartphones come equipped with a large variety of sensors to enhance the user’s experience (Fig 1). Over the last few years, abnormal characteristics in the walking pattern of individuals with PD have been successfully detected using a smartphone sensor known as an accelerometer. Accelerometers can detect movement with high precision at very low cost, making them well suited to wide-scale application.


Fig 1: Sensors, satellites and radio frequency in Smartphones

Detecting Parkinson’s disease before symptoms arise

Interestingly, using sensors similar to those found in smartphones, subtle movement problems have been reported in individuals at high risk of developing PD, specifically when they are given a difficult activity to do, such as walking while counting backwards. Individuals at risk of developing the disease are those who are expected to develop it later in life, due to, say, a genetic mutation, but who have not yet developed the key symptoms required for a PD diagnosis. The presence of subtle movement problems in these individuals indicates that the symptoms of PD exist, in subtle form, in the early stages of disease progression. Unfortunately, these movement problems are so subtle that neither at-risk individuals nor doctors can detect them, so we must go looking for them. It is crucial that we can screen individuals for these subtle movement problems if we are to detect the disease in its early stages. The ability of smartphone sensors to detect the subtle movement problems of early-stage PD has not yet been investigated. Using smartphones as a screening tool for detecting PD early on would mean a more widely accessible and cost-effective screening method.

Our solution to the problem

We aim to distinguish individuals at risk of developing PD from risk-free individuals by analysing their walking pattern measured using a smartphone accelerometer.

How does it work?

So, how would it work? Users download a smartphone app, in which they are instructed to place their smartphone in their pocket and walk in a straight line for 30 seconds. During these 30 seconds, a smartphone accelerometer records the user’s walking pattern (Fig 2).


Fig 2: Smartphone records user walking

The data collected from the accelerometer are then downloaded onto a computer so we can examine an individual’s walking pattern for subtle movement problems. However, to ensure that any subtle movement problems we observe are due to PD, we aim to simulate the user’s walking pattern by modelling the underlying mechanisms that occur in the brain during PD. If the simulated walking pattern matches the walking pattern collected from the user’s smartphone (Fig 3), we can look back at our model of the basal ganglia (BG), an area of the brain often associated with PD, to see if it is predictive of PD.
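For a flavour of what examining the walking pattern can involve, here is a minimal sketch in Python of loading a 30-second accelerometer trace and computing a simple gait-regularity feature; the file name, column names and sampling rate are illustrative assumptions, not our actual pipeline.

# Minimal sketch: load a 30-second accelerometer recording and compute a
# simple gait-regularity feature from the autocorrelation of the signal.
import numpy as np
import pandas as pd

def step_regularity(csv_path, fs=100):
    """Estimate gait regularity; fs is the assumed sampling rate in Hz."""
    data = pd.read_csv(csv_path)                      # assumed columns: x, y, z (m/s^2)
    mag = np.sqrt(data["x"]**2 + data["y"]**2 + data["z"]**2)
    mag = mag - mag.mean()                            # remove gravity/offset
    ac = np.correlate(mag, mag, mode="full")[len(mag) - 1:]
    ac /= ac[0]                                       # normalise so lag 0 equals 1
    # Strongest repeat between 0.4 s and 2 s (a plausible stride time).
    lo, hi = int(0.4 * fs), int(2.0 * fs)
    return ac[lo:hi].max()                            # closer to 1 means a more regular gait

# Example: regularity = step_regularity("walk_30s.csv")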




If it is predictive of PD, and we observe subtle movement problems in the user’s walking pattern, we can classify the individual as being at risk of developing PD. Thus, an individual’s health status will be based on a plausible link between their physical and biological characteristics. In cases in which the biological and physical evidence do not stack up, for example when we observe subtle movement problems in an individual’s walking pattern but the information drawn from the BG model does not indicate PD, we can dismiss the results in order to prevent a misdiagnosis. A misdiagnosis can have a significant impact on an individual’s health and psychological wellbeing. Thus, it is pivotal that the methods we build allow us to identify scenarios in which the model is not capable of accurately predicting an individual’s health status, a capability that many current techniques in the field lack.

To simulate the user’s walking pattern, we aim to mathematically model the BG and use its output as input into another mathematical model of the mechanics of human walking. The BG model has many parameters that must be set for it to work. To find values for these parameters such that the model simulates the user’s walking pattern, we will use a statistical technique known as Approximate Bayesian Computation (ABC). ABC works by running many simulations of the BG model with different parameter values and keeping those that produce a walking pattern closely matching the user’s.
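As a very rough illustration of the idea (not our actual BG or walking model), a rejection-ABC loop might look like the following sketch; the stand-in simulator, summary statistics, prior ranges and tolerance are all placeholders for illustration.

# Minimal sketch of rejection ABC: keep parameter draws whose simulated walk
# is close to the observed walk, measured via simple summary statistics.
import numpy as np

rng = np.random.default_rng(0)

def simulate_walk(params, n_samples=3000):
    """Stand-in for the BG + walking-mechanics model: returns a simulated
    acceleration trace for a given parameter vector."""
    freq, amp, noise = params
    t = np.arange(n_samples) / 100.0                  # assume 100 Hz sampling
    return amp * np.sin(2 * np.pi * freq * t) + noise * rng.standard_normal(n_samples)

def distance(sim, obs):
    """Compare summary statistics of simulated and observed walks."""
    summaries = lambda x: np.array([x.std(), np.abs(np.diff(x)).mean()])
    return np.linalg.norm(summaries(sim) - summaries(obs))

def rejection_abc(observed, n_draws=10000, tolerance=0.05):
    accepted = []
    for _ in range(n_draws):
        # Draw candidate parameters from (assumed) uniform priors.
        candidate = rng.uniform(low=[0.5, 0.1, 0.0], high=[3.0, 2.0, 0.5])
        if distance(simulate_walk(candidate), observed) < tolerance:
            accepted.append(candidate)
    return np.array(accepted)   # approximate posterior sample of model parameters

# Example: posterior = rejection_abc(observed_walk)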

Ultimately, our approach aims to provide insight into how an individual’s brain is deteriorating, and thus how their health is changing, through their walking pattern measured using smartphone accelerometers.


As well as distinguishing those at risk of developing PD from healthy individuals, our approach provides the following benefits:

  • Providing insight into how the disease affects movement both before and after diagnosis.
  • Identifying disease severity in order to decide on the right dosage of medication for patients.
  • Tracking the effect of drugs on symptom severity for PD patients and those at risk.


Apple recently launched ResearchKit, an open-source framework for building smartphone apps that monitor an individual’s health. Companies such as Apple are realising the potential of smartphones to screen for diseases. The ability to monitor patients long-term, in a non-invasive manner, through smartphones is promising, and can provide a more accurate picture of an individual’s health.

Advances in smartphone sensing are likely to have a substantial impact on many areas of our lives. However, how far can we go with monitoring people without jeopardising their privacy? How do we prevent the leakage of sensitive information collected from millions of people? The continuing evolution of sensor-enabled smartphones presents innovative opportunities for mobile sensing research, but it comes with many challenges that need to be addressed.

The wonders of kelp, and why we need to save it.

‘Deforestation of the Sea: A closer look at valuable kelp forests in shallow seas around Britain’ by Jess Fisher.

 ‘I can only compare these great aquatic forests… with the terrestrial ones in the intertropical regions. Yet if in any country a forest was destroyed, I do not believe nearly so many species of animals would perish as would here, from the destruction of the kelp’

Charles Darwin (1834) Tierra del Fuego, Chile

Kelp forests: the rainforests of the ocean

A few weeks ago, I settled happily into Finding Dory on a Saturday night. Towards the end, the little blue fish drifts through the giant kelp forests, devoid of life, and sadly proclaims ‘…there’s nothing here but kelp!’. Having studied this oceanic plant, I can confirm that this is 100% scientifically incorrect: well done Pixar.

Kelp forests actually have around the same levels of biodiversity as a tropical rainforest. But why should you care?

Because kelp can do everything: it’s home to hundreds of thousands of marine species, it can be used as a fertiliser and a biofuel, and it can be extracted for use in products like make-up and toothpaste, amongst many other uses. In 1908, the Japanese biochemist Professor Ikeda isolated monosodium glutamate (or MSG, one of the things that makes Asian food so great) from kelp. Who knew science could be so delicious?!

Why is kelp disappearing?

Unfortunately, kelp is reported to be disappearing. This is mostly because climate change is making the oceans uninhabitable for some species, but also because more people are harvesting kelp from the wild. Lots of people are even beginning to call it a superfood. While its rapid growth rate (up to half a metre per day in some species) suggests that harvesting kelp should not really be a problem, conservation scientists are worried that the marine life living in kelp forests will take quite a bit longer to return. Britain is especially important for kelp (because of its variation in habitats and rocky shores), which is why I started working on a project testing novel monitoring methods for kelp, so we can measure what is actually happening.

How our project works

Kayaking into the open ocean near Plymouth, we fought through choppy waves into a prevailing wind, whilst I continually splashed cold seawater from my paddle onto my kayak partner, who was sitting behind me! Lots of kelp lives in the subtidal zone (beneath the sea surface even at low tide), so the plan was to beam sonar onto the seabed from a kayak, look at the trace that the sonar gives back, and then use a GoPro camera to visually verify our assumptions about which patterns in the trace denoted kelp. For example:


This was one of four kayak trips the team made to test the method. Amongst some other objectives, the main aim is to ask whether sonar can be used to monitor kelp at a Britain-wide scale. The findings will be given to our funder, The Crown Estate, which manages development on much of the British coastline (The Crown Estate belongs to the reigning monarch in right of the Crown, though it is managed independently). They would like to eventually create guidelines for sustainably harvesting wild kelp, so that this valuable seaweed resource (and its associated flora and fauna) will be available for generations to come. Some kelp snapshots from the seabed:

Counting the cost of losing kelp forests

Kelp forests are reported to be worth billions of pounds. In the northeast Atlantic, young lobsters live in the kelp and are eventually caught by a lobster fishery worth £30 million on its own. Is it worth keeping? Certainly. Is it worth monitoring in case of declines? Definitely.

 Inspired? Check out the Big Seaweed Search, Capturing Our Coast, and Floating Forests for some citizen science kelp-focussed initiatives. You can also read about the project on ZSL Wild Science.


Death and Me

By: Dr. Ruth Penfold-Mounce, Lecturer in Criminology, University of York, UK.

During my criminology PhD research into the relationship between celebrity and crime at the University of Leeds, some 10 years ago, I came across an interesting story. It entailed the relocation of the mummified arm of the murderer George Carpenter. Dr Charles Kindersley had retained the arm after dissection in 1813 and kept it in his home as a souvenir; it was donated in 1938 to the police museum in Marlborough, before being passed on to the National Funeral Museum, London, in 2005. I was fascinated by this macabre, tourist-like act by a doctor, and on returning home to my husband that night (and much to his bemusement) I burst out with: ‘Darling, there’s a mummified arm in Wiltshire!’

This marked the beginning of my scholarly love affair with death and culture.

Death and Culture

Being a cultural criminologist based in a sociology department, with research interests spanning crime, popular culture, celebrity and death, is an unusual combination. It has its advantages, such as being able to draw on those combined interests to film with the BBC’s Hairy Bikers: in 2015 I talked them through the 1966 murder of George Cornell by the Kray Twins in the Blind Beggar pub in the East End of London (as pictured below).

I also discovered just how hard it is to walk, talk and hold crime scene photos at the same time. It turns out that filming for television is more difficult than I anticipated.

However, as an interdisciplinary scholar I face some unique challenges. I have to constantly work at making sure I do not disappear between the boundaries of disciplines. I battle with being not criminological enough for criminology journals, yet too crime-based for sociology journals, and too rooted in popular culture for death studies journals. Thank goodness for journals such as Mortality, which welcome engagement with death from a variety of disciplinary approaches.


Dr. Penfold-Mounce featured with the BBC’s Hairy Bikers

I have had to work hard to establish a death and culture scholarly community, drawing like-minded scholars together through various events, including one-day symposiums such as Negotiating Morbid Spaces (2014) and Marginal Death Research: Doing Edgework (2015). I even ran a three-day international conference, Death and Culture (2016), where 90 scholars from over 15 different disciplines came together to talk about death from a cultural perspective. The result has been that I no longer feel so isolated; a strong death network has been formed, it is growing, and it has connected researchers across the globe.

Gazing on Death and the Dead

A driving force of my work in death and culture is my passion to stop people thinking that death is taboo.

Death is actually ever present, ranging from Disney movies (pretty much every Disney character has dead parents: think Bambi, Frozen, The Lion King) to executions filmed in Syria and posted on YouTube. We see more graphic death than ever before. The big barrier that seems to make people think death is taboo is that much of what we see is mediated. In other words, seeing death on television or in film (i.e. mediated death) gives us a softening lens through which to engage with it; popular culture makes seeing death more palatable and even normal. As such, it would seem that it is OK to watch death and see inside the violated human body (CSI autopsies are a great illustration of this), but we are less comfortable chatting about it in personal terms in general conversation. As you can imagine, I do not share this restraint. Instead, I work hard at being open about death and making the dead visible. I want to attract people’s attention and get them thinking and talking about death and the dead.

Conveniently for me, death has been particularly evident in 2016; in fact, 2016 has been a very productive year for my research. We have witnessed an unanticipated boom in deaths amongst the famous, including:

  • singer David Bowie
  • actor Alan Rickman
  • radio and television presenter Terry Wogan
  • magician Paul Daniels
  • comedians Victoria Wood and Ronnie Corbett
  • musician Prince
  • entertainer and ventriloquist Keith Harris
  • boxer Muhammad Ali
  • actor Gene Wilder

Whilst a common response has been grief, amazement or general outcry, my response is: ‘That’s perfect for my research’.

This peak in celebrity deaths led me to become interested in the posthumous careers of the famous dead, and I’ve written for the Death and the Maiden blog about how lucrative being dead can be, using a case study of Marilyn Monroe. It would seem that being dead can be a successful career move for many celebrities. My enthusiasm for the famous dead, particularly recent deaths, has provoked responses of concern at my apparent glee at the death of another human.

Please do not interpret my enthusiasm for this topic as macabre, or as dismissive of the loss of these individuals or of those suffering a loss. Instead, my enthusiasm is rooted in exploring death within our culture and how the famous dead help a wide audience engage with mortality.

Since I began researching celebrity and death, it has become clear that the famous dead can have value, not just in economic terms but also as cultural symbols through which to explore fears about life ending. The celebrity dead demonstrate that an individual can have a life in death, and not just a life after death. In my book ‘Death, the Dead and Popular Culture’ (with Palgrave Macmillan, due out in 2017), I examine not only the value of the famous dead but also the entertainment that the dead in popular culture can contribute to society, through the Undead (zombies and vampires) and authentic corpses (models or live actors who play the dead in a non-fantasy setting). Consuming death and the dead is commonplace and everywhere, and it provides a safe arena in which to explore cultural fears about mortality.

So what is next for me and death?

Well, so far in 2016 I have hung out by Dick Turpin’s grave for The York Press to discuss the famous dead and tourism, desperately trying not to smile for the camera or rattle the beer cans around my ankles. I have also been interviewed by Radio 4 about violence against the female dead in television drama.


Dr. Penfold-Mounce at Dick Turpin’s grave.

I have run a workshop on the famous dead at the Before I Die Festival in York and made plans to run an interactive session for the public on ‘Spectacular Justice’ at the York Festival of Ideas in June 2017. I have also taken on more fabulous doctoral students, many of whom are focusing on death in relation to popular culture or crime. So I think I will just go and finish writing ‘A Corpse for Christmas’, a lecture I am giving at St Barts Pathology Museum this Christmas, and then get working on my new book with Palgrave Macmillan, ‘Death, the Dead and Popular Culture’. After all, I can rest when I am dead.


Images from ‘A Corpse for Christmas’, the topic of one of Dr. Penfold-Mounce’s upcoming lectures.