Tackling the ethical approval process

Forms over function: Ethics, ethnography and the NHS

by Carol Robinson

At times last year I forgot that I was doing a PhD. It’s not that I was having a wild time as a student. No, by 9am every day I turned up to the office my department has kindly provided, settled down at my desk and worked solidly until some point after 5pm. Some of that time I’d be keeping on top of email, or attending departmental meetings, but mostly, I was working. Twitter didn’t distract me, I had an organised weekly list of things to do that I worked through, and things were progressing nicely, thank you.

So why did I forget that I was working towards a PhD? Because for most of that time everything I did was aimed at getting ethical approval for my research. So it was almost a shock to look up and remember that wasn’t really my goal. My goal is to do the PhD research, to contribute to human knowledge and understanding, and to do it in a way that improves people’s lives. For a while however, compiling what became 91 pages of ethics forms plus supporting documents and all the bureaucracy that goes with that completely eclipsed the research.

I always knew I’d need to get ethical approval for my work. What I didn’t appreciate was how time-consuming, frustrating and complicated this would be. I used to listen to other people’s stories of wrestling with the UK’s Integrated Research Application System, or with the NHS Health Research Authority’s byzantine processes, and think either that they were exaggerating for effect or that perhaps their project wasn’t, well, good enough. I’d had approval from the prison service for England and Wales for two previous research projects; how hard could it be? I now apologise whole-heartedly that these thoughts even crossed my mind.

I did make life harder for myself by wanting to research dying prisoners, thus requiring both health service and prison service approval, as well as that of my University. The prison service process was fairly straightforward and familiar. The real trouble was with the NHS processes, and with the relationship between the three bodies. What kept me going for several weeks, as I tried to untangle the mass of acronyms and synonyms involved, was the thought that I was gaining useful experience. At the end of all this, I thought, I’ll be able to put on my CV that I understand the process, know how to fill in the form and could liaise with a health Research Ethics Committee. Not true. The process is so capricious that all such an entry in my CV would prove is that I once had the mental fortitude to see an application through to its conclusion.

Although my colleagues will tell you I sighed out loud quite a bit, I did make it through the time when an overnight update to the IRAS website hived my answers off into two separate forms, one of which I couldn’t see. I didn’t scream when I discovered just before submission that this should be changed back to one form. I stayed cheerful as my participant information sheet, carefully written to suit people not that keen on reading, expanded to yet another page with all the extra information I was asked to include. I only muttered a modest amount when asked to add the (to the participants) totally meaningless IRAS reference number to it. I maintained my outward equilibrium whilst I confirmed I would not be doing things I’d never thought of (wearing clerical dress was my favourite such request, closely followed by audio-recording outside of interviews). But I confess my heart did sink when someone I was relying on to understand what should happen next said this would be a learning process for them too.

Being a reasonable person, I did appreciate that part of the difficulty was that I was having to fit getting approval for sociological research into a process intended for clinical trials. The mismatch only seems to be partly recognised by the bodies responsible. So, whilst there’s a protocol template to complete aimed at qualitative research, I still had to say I wasn’t using ionising radiation or using human tissue samples. And whilst there are ways to amend the project once it’s been approved, there’s no appreciation that good sociological research is often iterative. Instead, there’s the assumption that you will know all possible scenarios in advance. With this comes an assumed relationship to the research participants; they are to be the subjects, not the co-creators of research knowledge. There is no scope for an understanding of ethical research that deviates from a generic (clinical) ideal, and consequently, the best of a discipline’s specific characteristics and of its newer research methodologies can be lost. I say newer, but in practice even my chosen ethnographic methodologies, though well established, sit uncomfortably with the process of getting ethical approval from a health research authority.

There was a tendency in the guidelines provided to use language in unexpected ways. Have you ever had that experience of all the words making sense individually, but being incomprehensible when put together? I found myself trying to draft emails to effectively ask “so if ‘host organisation’ doesn’t mean ‘the organisation hosting the research’, what does it mean?” I struggled, along with my supervisors, and it turned out, the ethics committee staff, to understand what the REC had wanted when it asked whether I had an ‘honorary contract’. Later on, the REC asked if the scientific validity of the study had been confirmed independently of the academic supervisors, giving as an example of how this might be achieved “a University PhD review process”. None of us, not my academic supervisors, not the university ‘sponsor’ (which, I’d discovered along the way, was also needed), knew what this meant. We were stumped, and resorted to gently approaching a professor elsewhere to see if they could provide such an independent scientific review, and quickly. In the end, this was not needed: all that was meant was whether the University’s ethics committee would be looking at it. Yes, of course.

There were funny moments too. Having had my application reviewed by a Research Ethics Committee that met in Essex, I then discovered how similarly I pronounce ‘Ethics’ and ‘Essex’, on the phone, to a poor, kindly person trying to understand which ethics committee had looked at it. Eventually, I said, “the one that met in Chelmsford” and we moved on. Having three ethics committees look at your work is not fun. As things are, it’s inevitable for research such as this, but unsurprisingly their expectations are not always compatible. The prison service doesn’t want any contact details for external people, such as academic supervisors, included on Participant Information Sheets; the NHS expects this. The University wants email addresses only; prisoners don’t have email. The NHS REC regarded the notices that prison governors would issue to let prisoners and staff know about the research as ‘posters’ that the REC should scrutinise, so needed the final text agreed before I could get their approval – 6 months in advance of the governor issuing the text. Prison governors are incredibly busy people, so I am indebted to them for having calmly accepted this.

There is, outwardly, plenty of advice available on NHS websites. Much of it is out of date, hard to find, or impossible to understand. There are flow charts describing a parallel world, ‘start here’ guides buried beyond discovery, and directories that are out of date. Lovely, kind and supportive staff within the NHS R&D offices or working with RECs do their best, but if your project is unusual, there are things they can’t be expected to know, such as the fact that only a limited number of health RECs will look at prison applications, something I learned when it was nearly too late.

I’m not alone in this. In my struggle to understand the process, I came across numerous articles by academics similarly venting their frustrations, including one that fairly calmly reflected on the problems, before revealing that their own project had spent the entire initial research budget trying to get permissions for research. Wiser people before me have also found that processes designed for quantitative-based medical interventions and clinical trials cannot adjust to the needs of qualitative research. And yet not much seems to have changed. My gripes may seem small, but behind them is a bigger issue, that of the imbalance of power between researchers and research ethics committees and the lack of accountability of the people, some experts, some lay people, appointed to make such important decisions.

So now I have all the ethical approvals I need, 10 months after I first started filling in the forms, I’m remembering fondly why I’m here. It comes in flashes: the possibility of time to open that new book I’ve been eyeing up, something on the news that reminds me of the relevance of my research interests, a chance conversation with a colleague. Best of all was a recent conversation with a senior manager at one of the prisons I’ll be visiting for fieldwork. We’d not spoken before, but within minutes she’d reminded me why I’m doing this, why it matters that I’ve survived all these hurdles. Out there are people who are doing their best in tough circumstances, and good quality research may just be able to help them. I’m looking forward to getting on with it.

Not anonymous enough? Research data and issues of anonymity.

by Carol Robinson, doctoral researcher, University of York.

Recently, I settled down to enjoy an article by one of my favourite academic writers. It was everything I’d hoped it would be: well written, thought provoking and interesting. It took a new approach to its subject and had a campaigning edge that I sympathised with. And then, towards the end of it, I realised that I knew one of the people who had participated in the study being reported. Not that I knew them in terms of recognising a type, but that I actually knew them. My first response was one of disappointment. I want my academic heroes to be flawless. My next thought was along the lines of ‘will anyone else know them?’ followed quickly by the question ‘does it matter?’

A quick search on-line resulted in a Wikipedia page that confirmed other people would be able to identify the participant if they wished. The academic had not revealed their interviewee’s age or location, but from the context it was clear that they were referring to a member of a small group and once more specific information was given, anyone with a curious mind and an internet connection could produce a name. From my knowledge of the individual, further details in the article then confirmed what I had found. Anyone else would be able to identify them, even if they lacked my certainty.

So, does it matter? The article probably won’t be widely read, even in academia, and it’s therefore doubtful that anyone else will do the searching to put a name to this participant. It’s possible the participant wouldn’t mind if they were named, although the author gave no indication that they’d consented to this. The encounter that was described didn’t include anything particularly controversial or personally revealing. If they read it, the person might not like some of the ways they were portrayed but there was no obvious information that could be used against them. But shouldn’t the participant have been assured of anonymity regardless?

Anonymity is one of the things I have to think about in my own research, which is around deaths in prison, two subjects with particular sensitivities. It is one of the hallmarks of ethical conduct, together with confidentiality and informed consent, necessary not least because twentieth century history has too many examples of exploitation and damage occurring in the name of ‘research’. Anonymization arguably has a value in its own right. Attempting anonymization, even if we secretly admit we may fail, is a way of preserving the idea of academic integrity, of seeking to avoid the exploitation of other people’s generosity that would taint our work. It is evidence of academic rigour. This links back to my initial disappointment that an experienced academic had made a mistake. If the anonymization was ineffectual, were there other aspects of this article that were in some way dubious?

Demonstrating that we have followed the conventions of academic research, whether by correctly referencing our sources or by using recognised methodologies, is part of staking our claim to be academics. It shows a respect for the traditions of our particular discipline, and in the case of techniques such as anonymization, establishes our research as ethically valid. And if ethical validity is lost, it is arguable that other forms of credibility are lost too.

Research ethics committees usually insist on anonymity and confidentiality for people participating in any research, especially vulnerable participants, as a way of protecting them. It is assumed that some harm or loss may befall an individual if their identity is known, if the stories and experiences they share and which become the researcher’s data are in some way linked back to them as a person living in the real world, beyond the study report or academic article. Sometimes, as in my own research, this is associated with taboo subjects or criminal activity, where there may be very real consequences if anonymity is not maintained.

In seeking ethical approval for research involving prisoners, deemed to be vulnerable because of their incarcerated status, I am encouraged to think through how I will record and store my data in a way that protects their identity. The specific threat is rarely stated. Although it may be poor practice, is failing to anonymise a person really putting them at risk of harm? In many cases, there is perhaps no direct link between a possible failure to anonymise effectively and a harmful consequence for the participant; the information revealed has to have the potential to be used in a way that would confer harm. However, there is often a simple presumption that all people participating in research should be protected, which ignores the question of whether harm is likely to follow from identification.

In all aspects of our lives, most of us share personal information continually. We willingly offer up personal information all the time, giving our names, addresses and even bank account details to near strangers, trusting without evidence that they will be used for the purpose we intend. We share our views in conversations that can be overheard by others and via on-line discussions with unknown interlocutors. We post pictures on social media, link them to others without their consent, and live surrounded by cameras. Why do we persist in thinking we can anonymise research participants?

Researchers may use pseudonyms, but often a participant’s gender, age, nationality, race or class are pertinent to the research and so cannot be hidden. We can limit access to some findings, but that poses its own ethical dilemmas. And when the research needs to focus on participants from a small group, as in the case of the article I was reading, anonymization becomes so much harder to achieve.

I have experienced this in my own research. Last year, I interviewed uniformed prison staff with experience of working with terminally ill prisoners, in a prison where there were few female officers. The interviews gave really useful insights into the work prison officers perform with dying prisoners but I was painfully aware that the female interviewees may be identifiable by other staff in the prison, despite my best efforts at anonymization, simply because they belonged to such a small group. Even with a wider pool of participants, in a tight-knit world such as a prison anonymization is hard to maintain. Surely we should not abandon useful research because it involves a small group or close-knit communities?

Indeed, should we even try to anonymise our research participants? Most of the time I would say yes, but there are times when far from protecting our participants, doing so actually risks inflicting harm. As researchers, we promise anonymity to ethics committees on behalf of other people, who may not wish for it. Very often, participants may have offered to help the researcher because they too care about the issue that is driving the research and want to have an impact on the situation. They may want to have their voices heard, and by extension, themselves credited. When we anonymise them, we keep their voices, but hide their faces. For vulnerable participants in particular, this is potentially a misuse of power. It is a way for the researcher to exert their positional power and claim control. Nicely anonymised, our participants may not even be able to spot themselves in our final reports and presentations. They can’t see how they are represented, and so they can’t hold us to account. There are ways round this, involving them in the production of the final report, but in my discipline at least, few researchers seem to opt for these approaches.

Lastly, I found myself thinking ‘what does one do if one spots that an academic has not sufficiently anonymised their data?’. It is not easy to be certain what responsibility we have when we spot something problematic with someone else’s work. In the case of the article I read, the peer reviewers had been content with the text, the editorial board satisfied and the article is now published. The damage, if there was any, is done and, in an age of on-line journal access, probably cannot be undone.

I asked colleagues, and was struck by two responses in particular, widely divergent but both from science faculties. One, coming from a discipline where the professional accountability of practitioners is paramount, felt strongly that I should contact either the author directly to alert them to the problem, or the journal anonymously to suggest they review their procedures. From another department, a colleague suggested I keep quiet, and not draw attention to the problem or myself. For them, raising the matter with the author would only make things worse. Each response of course reflected the culture and values of the particular academic discipline. In some academic disciplines, where the use of human participants is rare, the question of the quality of participant anonymization may rarely come up. But for many disciplines, including my own, where the involvement of human participants is so often essential to a research project, this is an issue that can occur at any time. Do we as academics have a collective responsibility to revisit anonymization?

 

Emotion Rules in Feminist Book Reviews: An Inroad to Improving Feminist Relationships

By: Lisa Kalayji

Swimming through the endless tidal wave of demoralising political think pieces and scholarly jibber-jabber in my mostly academic Twitter feed, I came upon an account called ‘ShitMyReviewersSay’, which features the cruelly scathing comments that anonymous peer reviewers write about the hopefully-to-be-published academic journal articles of their colleagues. The account’s handle? @YourPaperSucks.

Its purpose, other than to give us an opportunity to chuckle at what, under different circumstances, makes us want to either cry or set a university building ablaze, is to highlight the absurd magnitude of the viciousness that peer reviewers will direct at their colleagues when given a chance to do so anonymously.

It’s cathartic to have a laugh at this sort of thing, but when it doesn’t come in the form of a satirical Twitter account, our reaction is a lot different. ‘What the hell?!’ we wonder incredulously. ‘Couldn’t you express your criticism in a less ruthless and petty way? What good does it do you to ruin someone’s day and treat their carefully nurtured brainchild of a paper like garbage?’

ShitMyReviewersSay reminded me of the book reviews in Trouble and Strife, the radical feminist magazine I’m doing my PhD research with.

Trouble and Strife published a fair number of book reviews – feminists write a lot of books! – and over the course of my research I’ve found that there’s a great deal we can learn about a group of people, be they academics, radical feminists, or any other group, from the way they review each other’s writing.

My research is about emotion culture: the system of rules and social norms that prevail in a society or social group which affect how people feel emotionally and how they express those emotions. Book reviews contain a treasure trove of clues about the emotion culture of the social group that the reviews come from, but in order to see those clues, you need to know some of the things sociologists have learned over the last few decades about how emotions work.

Emotions are relational

As the term ‘relational’ suggests, emotions come up in relationships between people. Because psychology dominates the popular lexicon we use to talk about and make sense of emotions, we tend to think of emotions as states which exist inside of us, are linked to our neurochemistry and our personal histories, and are mostly governed by things like innate human needs for social bonding. All of those things are partially true, but what the sociological study of emotions has revealed is that emotions are actually relational.

Why we feel the way we do in any given situation is constituted by our relationships to the people and things around us and what we understand those things to be and mean.

There isn’t anything in our genetic code that makes us get annoyed when a friend we’re supposed to meet for lunch shows up half an hour late (though our biology is necessary for us to be able to experience feelings), and the feeling of annoyance isn’t something inside of us that emanates outward through the things we say or do (though we do express emotions in that way). We’re annoyed at someone (that’s the relation), and the reason for that annoyance is what we think the lateness signifies. We’re busy people! Don’t they think we have better things to do than sit around waiting? We have to be back at work soon – now we’re going to have to rush through lunch! Our awareness that our friend knows that it’s considered rude to keep someone waiting and that it’s an inconvenience to us is what makes us annoyed – their indifference to our needs and to the agreed conventions of how keeping a lunch date with someone works creates our feeling. Likewise, though, if we found out that they’d been delayed because a stranger attacked them on the street and nearly broke their jaw, our annoyance would quickly give way to concern – what their lateness showed about our relationship to them would have changed, and with it, our feelings about it.

Emotions are subject to rules

Much like there are social rules about how we’re supposed to behave in different sorts of situations, there are also rules about how we’re supposed to feel and how we’re supposed to express feelings. If an adult is audibly crying at, say, a fancy restaurant or a business meeting, that would seem inappropriate, and probably make everyone around them quite uncomfortable. If they were at a funeral, however, that would be considered normal and appropriate, and no one would be bothered.

Even if feelings aren’t expressed, there are rules about how we’re supposed to feel.

If, for example, you’re a bit off your game at work because your sister died last week and you’re in grief, and while not actually admonishing you for it, you get the sense that your boss is annoyed with you for not being your sharpest self right now, you might get upset or angry at them. When someone is in grief, we expect others to respond with compassion, even if that grief peripherally causes some inconvenience to others – it’s a violation of the social norms of compassion and empathy to get annoyed at someone for grieving, even if the annoyance is mostly hidden and not openly expressed. The rules are also different depending on what the characteristics of the people involved are. If that person crying in the restaurant is an infant, while people might still not be pleased about the noise, it wouldn’t make them feel awkward and uncomfortable, because we consider it normal behaviour for babies to cry regardless of time or place.

These are all some general aspects of how emotions in social life work in ordinary social situations. What my research is about, though, is the specifically political dimension of emotions in social life.

Social norms about emotions are deeply political, even in most seemingly innocuous daily interactions like those I described above. Rules about who is allowed to feel or express what feelings towards whom divide along a lot more political lines than the differences between adults and children. Anger is generally considered more appropriate in men than in women (and in women is more likely to be characterised as histrionics or emotional instability), and vulnerability more appropriate in women than in men (with men’s abilities to be ‘proper’ men called into question if they cry, especially in public). Rules about emotions are also racialised – even very slight expressions of anger from black men are interpreted as very threatening because black men are culturally conceived of as inherently threatening, while much stronger expressions of anger from white men (or women) are regarded as less threatening and are more likely to be considered justified. Our prevailing cultural conceptions about what characteristics different kinds of people innately have give rise to specific, and often strictly socially enforced, rules about who can feel what and how their feelings can be expressed.

Emotions in feminist book reviews

Feminists do a lot of writing, and a lot about how emotions work in feminism can be learned from examining the books, magazines, pamphlets, manifestos, and websites they write. I’m researching radical feminism, a specific type of feminism (there are a lot of them) which emerged during the ‘second wave’ of the Women’s Liberation Movement in the late 1960s, and continues today. From 1983 to 2002, a radical feminist collective in the UK published a magazine called Trouble and Strife, and a lot of radical feminist political thought from that period can be found there.

Because feminist politics is so substantially borne out through reading and writing, one of the central strategies that feminists use to think through politics is reading and debating one another’s writing. For that reason, unsurprisingly, Trouble and Strife published quite a few book reviews, wherein contributing authors to the magazine reviewed books authored by other feminists. By comparing these reviews, and the responses to them that readers communicated to the magazine through letters to the editors, we can see radical feminist emotional politics in action.

What I’ve found is that the emotion rules in radical feminism are different for relationships between radical feminists than they are when dealing with someone outside that political community. When dealing with fellow radical feminists, they’re more considerate of one another’s feelings, express their criticisms more hesitantly and gently, and are more appreciative of the aspects of the work that they agree with. On the rare occasion that someone breaks this rule and is harshly critical of someone within the radical feminist community, there’s a strong backlash, with others writing letters to the magazine to express strong objections to those criticisms having been published, and some questioning the political identity of the magazine as a whole in light of their decision to publish exacting reviews.

This will ring true for many feminists who currently engage in online activism, who are familiar with the more receptive audiences within their own political communities, and harsher (and sometimes outright vitriolic) criticism from feminists who have a fundamentally different set of political values.

This has profound implications for the future of feminism: if feminists who disagree on crucial political issues are more willing to upset one another, and less desirous of understanding where others are coming from, then we’re likely to see a continuation of the entrenched infighting that has plagued feminism for decades. I’m not suggesting here that we should return to the ‘happy sisterhood’ of yesteryear (which, as many feminists have pointed out, never actually existed). What I do want to highlight, though, is that if we want to understand why conflicts between feminists get so heated and can be so divisive, understanding the emotion rules which give shape to feminists’ relationships with each other is a crucial piece of the puzzle.

Once we become more aware of these rules and how our own feelings are shaped by them, we can act to change them, and while this won’t solve all of feminism’s problems, it can go a long way toward generating more fruitful dialogues between feminists who belong to different political communities.

This strategy can be extended to other social movements as well, and it has rarely been a matter of more urgency than it is right now for social movements to be able to prevent the breakdown of their political projects due to irreconcilable conflicts from within their communities. During the currently ongoing period of rapid and disorientating social and political change, understanding the emotion rules of social movements can help us to ensure that efforts to enact positive social change are successful, and examining the way we speak to, speak of, and write about one another is one tool we can use for making sense of our emotion cultures.

You can find all issues of Trouble and Strife on their website at troubleandstrife.org.

Researching through Recovery: Embarking on a PhD post-brain surgery

By Sinead Matson, B.A., H.Dip. Montessori, M.Ed.

Anyone who has had the misfortune to undergo a craniotomy should do a PhD. Seriously. It makes sense. Both paths have similar hurdles: Imposter syndrome – check! Struggle with writing – check! Trouble expressing your thoughts – check! Extreme tiredness – check, check! It’s physiotherapy, but for your brain.

I joke of course, because each person’s individual recovery is different, but doing a PhD has personally given me the space to recover from a craniotomy while still actively working on my career and passion. I was always going to embark on a doctoral degree but in October 2014 (ten weeks after my second child was born) I had four successive tonic-clonic seizures which ultimately led to the discovery and removal of a large meningioma (brain tumour) four days later. When I woke up from surgery I couldn’t move the right-hand side of my body except for raising my arm slightly; my speech and thought processes were affected too. Of course, I panicked, but the physiotherapist was on hand to tell me that while the brain had forgotten how to talk to the muscle – the muscle never forgets. I instantly relaxed, “muscle memory! I’ve got this” I thought to myself – forever the Montessori teacher.

Nobody tells you that recovering from brain surgery is exhausting, so exhausting. Every day I had to relearn things I had previously known. Every single sense is heightened and a ten-minute walk around the supermarket is a sensory overload. However, I never questioned the fact that I would start college the following September; in fact, it drove me to do my physio and get physically better. I even applied for a competitive scholarship and won it. I can never explain enough how much of a boost that was to my self-esteem. There is nothing like brain surgery to make you question your identity and your cognitive skills in a profession that values thinking, research, articulating new ideas, and writing. It is like an attack on your very being.

When I started, I could not have been more accommodated by the Education department at Maynooth University, and in a manner which was subtle and encouraging whilst still pushing me to do a little bit more. My supervisor struck a delicate balance between being supportive and always encouraging me to look a little further and read more. I never felt mollycoddled or out of my depth (well… no more than the average PhD student).

Of course, there are challenges. Aren’t there always? It can be frustrating (not to mention embarrassing) when you cannot process a conversation as quickly as it is happening at meetings, conferences, or seminars; it’s the same when you answer a question but know the words you are saying are not matching what you are trying to articulate. Submitting a piece of writing to anyone, anywhere, is the most vulnerable thing that you can experience, especially when your language centre has been affected and you know your grammar and phrasing might not always be up to par. Transitions flummox me, particularly verbal transitions like the start of a presentation, introducing and thanking a guest speaker, taking on the position of chairing a symposium, and day-to-day greetings. I lose all words, forget etiquette, and generally stammer. I forever find myself answering questions or reliving scenarios from the day in the shower!

So, what’s different between my experience and any other doctoral student’s, you ask? Well, I’m not sure. I see my fellow students all have the same worries and vulnerabilities. We have all discussed our feelings of imposter syndrome at various points thus far, our excitement and disbelief when our work is accepted for presentation or publication, and our utter distress at not being able to articulate what we really wanted to say in front of a visiting professor. I do know this: it used to be easier; I used to do it better; I never had problems with writing or verbal transitions before; it is harder for me now. But (BUT) I now have a whole team of people who share my feelings and frustrations. I now have a community who champion my successes and comfort me with their own tales when I have bad days. I now feel less isolated and more normal. They allow me…no…they push me to do more, to believe I could travel to India alone to research; not to let epilepsy or fear hold me back; to believe that I could negotiate the research process on the ground with preschool children and their parents and not get overwhelmed. They have read papers and assignments for me before I submit them and they expect the same of me. They simultaneously allow me room to vent (and take the lift when I’m too tired to walk) and they push me to be more adventurous with my reading and theory – to take risks I may never have taken.

All-in-all, I cannot think of a better way to recover from brain surgery and all it entails than the absolute privilege of completing a PhD. It gives me a space – a safe space – to recover in. The research process itself has helped me learn who I am again, what I stand for, and what I believe. It has pushed me so far outside of my comfort zone in a way that I’m not sure I would have done otherwise but I am positive is vital to my full recovery. It has exercised my own personal cognitive abilities, reasoning skills, verbal and written expression so much more than any therapy could have, and it has given me, not a cheerleading team, but a community of researchers who are on the same journey – in a way.

I’m not saying it’s for everyone – no two recoveries are the same. However, I wish there had been (and I did search for) someone who could have told me before the surgery, but particularly while I was in recovery, that life doesn’t have to stop. That it is not only possible to research while in recovery from brain surgery, but that it can also have a transformative effect on your life and your sense of identity; that it will push you outside of every comfort zone you’ve ever had, and it will be exhilarating.


 

The view from here: fighting disillusionment as an American expatriate

by Cindy Withjack


You spend all your time talking, not working. You are an expatriate, see? You hang around cafés. –Ernest Hemingway, The Sun Also Rises

 

I was wearing an Esmeralda crewneck sweatshirt the first time I heard someone say the President should be ashamed of himself. I was either reading or spinning around in circles, and I liked Esmeralda best because she looked most like me. There were at least three adults, perched like gargoyles on the couch edge, and they, along with a sizeable portion of America, were all at once captivated and scandalized; the 42nd President of the United States had brought shame upon all our kettle black homes. I had yet to understand the difference between peaches and impeachment, and in twenty years’ time I would be an expatriate.

I was an expat before America changed hands, before Bernie Sanders was officially out of the running, before Hillary Clinton was deemed a ‘nasty woman.’ America felt very far away during my Master’s program in England, where I was writing a short story collection and finalizing PhD applications, still trying to decide if it was weird to put milk in my tea. In the postgraduate pub or university café, I was often asked how I was allowing this to happen—‘this’ being the rise of Donald Trump—and I responded, with my significantly less charming accent, that I held much less clout than they assumed. And yet, it was unnerving how guilty I felt, how relieved, to be so far away from America. I busied myself with PhD applications that asked me to demonstrate my intentions: my plan to contribute something new and significant to academia and why. This portion of the applications felt timely; in my case wanting to contribute something significant meant being present, from afar, in the matters of America. While the critical and creative aspects of my proposed novel materialized, I returned again and again to that awareness of guilty relief, which did not add to my work as much as it hindered it.

During my Master’s program, in spite of American news and Brexit, I produced a sizeable portfolio of more than twenty short stories. This output created in my mind, alongside minor paranoia, an almost mystical idea of how my novel would come together. Compared to the struggles I had faced in my life to date, I felt confident in my ability to go into any PhD program with squared shoulders. There was, I believed, a surge in Intersectional Feminism, morality, and accountability. In my belief that I would change the world, I assumed the world was changing with me. Not so quietly, there was a disconnect forming, a disillusionment that would burrow its way into my studies and my writing.

I watched Donald Trump be elected the 45th President of The United States on five screens. Receiving the news this way, five different times, each one on a slight delay with varying accents and facial expressions, was both remarkable and necessary; my brain wanted to understand absolutely, without cushion or crutch, despite the disappointment that followed. America, the grassy place my immigrant parents felt was best, had let down so many of us in just a few hours. As a devoted academic I wanted precise control over the way my brain absorbed and processed the information, which meant having an early morning Q&A with myself: How did we get here? (We were always here.) Who let this happen? (We did.) What happens next? (Go to sleep.) Still, the idea of this particular President dictating what happened next with my freedom, my body, and my future was unfathomable.

My Master’s program had recently ended; I had decided on a PhD program, but it was still several months away. I was appreciative that I had nowhere to be, no deadline, no expectations. I allowed myself time to wallow and stayed inside for 24 hours after the election, wondering how long I could go without disclosing my nationality so as to avoid being forced into discussing what had just occurred, finally leaving only to pick up a pizza. Mumbling as few words as possible while paying, I gave myself away.

            ‘Where are you from?’ asked a man to my right.

            ‘Is it that noticeable?’ I stalled.

            ‘You’re definitely American.’

            I sighed, feeling both embarrassed and defensive.

            ‘What a huge mistake,’ he said. ‘How could you let that happen?’

Here I considered laughing, but truthfully I cannot remember how I actually responded. I was sleep deprived and hungry, and in hindsight, I can only imagine all the best possible retorts forming one giant metaphorical middle finger.

What followed were several months of cycling between social media overload and social media blackout before I returned my attention to books, which I had distractedly cast aside, and, for the first time in my life, found no comfort there. The abundance of news easily became overwhelming despite my feeling that remaining informed was a requirement. Wouldn’t it be negligent and irresponsible to distance myself from the news, both good and bad, and to potentially find myself ignorant about the state of the world? The anxiety of activism—attempting to quell my resentment by becoming more involved, sharing important articles, and signing petitions felt at times like two steps forward followed by one very long backslide—left me exhausted and unfocused. Fighting disillusionment proved difficult following Donald Trump’s first week in office, and I went into day one of my PhD program feeling completely derailed.

Roughly two months into Donald Trump’s presidency, and a rough two months it has been indeed, I still feel derailed, but I am listening to Purple Rain on repeat. I am writing less but reading more, and since my Master’s graduation I have been skeptical of the idea that I can contribute something of real significance during such a tumultuous time; those twenty short stories seem so very long ago. It is in our nature, people like to generalize about writers, to be self-deprecating and melodramatic, and I totally agree. Writing as a profession is hard all on its own; add to that a complete upheaval of the things a writer holds dear—freedom of speech, reproductive rights, racial justice, issues of immigration, LGBTQ rights—and things get a bit more complicated. However, ‘[t]his is precisely the time when artists go to work,’ Toni Morrison’s words try to remind me. ‘There is no time for despair, no place for self-pity, no need for silence, no room for fear. We speak, we write, we do language. That is how civilizations heal.’ The year is only just beginning, so there is still time for me to latch onto Morrison’s words and follow through. I have no immediate plans to return to America, and as my program is the same length as one presidential term, I have at least four years to read, spin around in circles, and write a novel. It only took a year for me to genuinely enjoy black tea. A lot can happen in four years.

 

Improving future asthma care

Flyer and advert for “Potter’s Asthma Cure”

Some 5.4 million people in the UK have asthma, and every ten seconds someone in the UK has a potentially life-threatening asthma attack. On average, three people a day die from an asthma attack in the UK – in 2014 (the most recent data available), 1,216 people died from asthma. Many of these deaths are preventable, and continued use of asthma medication is an important factor in this (Asthma UK, 2017). But many people don’t stick to their asthma medication routines. Kathy Hetherington writes about her research into a new method of asthma treatment which is significantly reducing the risks associated with severe asthma.

My PhD investigates patients’ responses to inhaled steroids using novel monitoring technology. I have spent the past year coordinating this project throughout the UK, within the Refractory Asthma Stratification Programme-UK (RASP-UK). I work alongside Professor Liam Heaney and Professor Judy Bradley at Queen’s University, and Professor Richard Costello at the Royal College of Surgeons in Ireland. As a young researcher in Northern Ireland, I am excited in the knowledge that my PhD has the potential to improve future asthma care.

The Problem

Many asthmatics do not use their inhalers correctly. As a result, they don’t receive their prescribed dosage of inhaled steroid. Within Queen’s University Belfast and the Belfast City Hospital, we have developed and implemented a new method of observing and monitoring how patients use their inhalers. This approach is significantly reducing the risks associated with severe asthma.

In RASP-UK severe asthma centres we record Fractional exhaled Nitric Oxide (FeNO), which is a measure of lung inflammation. An elevated FeNO is a predictor of worsening asthma symptoms or even an asthma attack. Those who continue to have an elevated FeNO are usually considered high-risk patients who need daily oral steroids alongside their inhalers. This elevated FeNO could be due to steroid resistance, or not continuing to use their inhaler (this is known as non-adherence). Determining inhaled steroid response in a difficult asthma population is a major problem in a clinical setting.

The Intervention

Within RASP-UK, we have established and further validated a clinical test using daily FeNO measurements (using a Niox Vero machine – Figure 2) alongside some additional inhaled steroid. The remote monitoring technology we use alongside this test is called an INCA™ (INhaled Compliance Aid) device. The INCA™ (Figure 1) was developed by Professor Richard Costello in conjunction with Vitalograph and is designed to work with the diskus inhaler. The INCA™ device records the time and date whenever the microphone inside it is activated, along with a sound file of the inhaler being used; these sound files can then be transferred to a computer. The sound files are then uploaded onto a server via a data compression utility programme, where they are analysed by an automated and validated sound analysis algorithm. This combination allows us to create a remote assessment of inhaled steroid response and thus identify non-adherence to inhalers. We then communicate this information to the patients to try to improve their adherence to their inhaled treatment.
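To give a flavour of how time- and date-stamped device records can translate into an adherence figure, here is a minimal, hypothetical sketch. It counts at most the prescribed number of actuations per calendar day against a twice-daily regimen, so extra puffs on one day cannot make up for a missed day. The function name, counting rule, and data are all illustrative assumptions, not the actual RASP-UK or Vitalograph analysis.

```python
from datetime import datetime, timedelta

def adherence_rate(actuation_times, start, days, doses_per_day=2):
    """Fraction of prescribed doses actually taken, counting at most
    doses_per_day actuations per calendar day."""
    taken_per_day = {}
    for t in actuation_times:
        day = t.date()
        taken_per_day[day] = taken_per_day.get(day, 0) + 1
    taken = 0
    for d in range(days):
        day = (start + timedelta(days=d)).date()
        # Cap the credited doses at the prescribed daily amount
        taken += min(taken_per_day.get(day, 0), doses_per_day)
    return taken / (days * doses_per_day)

# A week of device records: twice daily, except two fully missed days
start = datetime(2017, 3, 1, 8, 0)
records = [start + timedelta(days=d, hours=h)
           for d in range(7) if d not in (2, 5)
           for h in (0, 12)]
print(round(adherence_rate(records, start, days=7), 2))  # 0.71
```

In practice the server-side algorithm also analyses the sound files for technique errors, which a timestamp count alone cannot capture.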

With further development, we created a web-based interface (Figure 3) to deploy FeNO suppression testing across the UK through our established RASP-UK Severe Asthma Centres. Here, we examined the utility of FeNO suppression testing to predict inhaled steroid responsiveness after a further 30 days on a normal inhaler. This period of prolonged monitoring provides further feedback on patient inhaler use and technique, using the unique presentation method below, enabling us to identify facilitators and barriers which may be involved in optimising inhaler adherence. We are constantly increasing the precision and user-friendliness of this hardware and software so that the data is easily interpreted and demonstrated to the patient.


Figure 3: Data from the Vitalograph server following upload of one week of FeNO suppression and INCA™ data. The Vitalograph server shows activation and usage of both the FeNO machine and the INCA™ device (A) and depicts the FeNO data as percentage change from baseline as originally described (y1-axis, Figure A). The INCA™ device time- and date-stamps the number of inhaler uses (y2-axis, Figure A) and this is shown alongside technique analysis (B). Possible technique errors which can be identified and reported are shown in Graphic 3.


The Future

Though we are only a year into our project, 250 patients in severe asthma centres throughout the UK have carried out FeNO suppression testing. Many have gone on to improve their inhaler usage and asthma control and decrease the inflammation in their lungs. We have presented our UK multi-centre data at conferences all over the world and interest in our project is increasing. In the past 6 months I have had the privilege of being a keynote speaker at Severe Asthma Masterclasses and Specialist Asthma Meetings. This summer I have been invited as a symposium speaker at the European Academy of Allergy & Clinical Immunology in Helsinki, Finland, which will undoubtedly be the highlight of my career to date!

My PhD has given me the opportunity to be able to work with a wide range of fantastic professors, clinicians, patients and co-ordinators. This PhD has convinced me that we can use this unique test and methods of presentation to improve asthma care throughout the world. I can’t express how much this thought excites and drives me; it is with great humility and privilege that I will continue to contribute to this extraordinary field.

“Dr. Kearney or: How I Learned to Stop Worrying and Love Impostor Syndrome”

by Eve Kearney


I was at a family gathering recently, when as I was stuffing my face with free, home-cooked food, an aunt approached me and said the words that all research students dread: “How’s being back at school going?” Apart from making it sound like I’m back wearing a uniform and taking my Junior Cert again, that question makes me stifle a sigh of despair.  I only started my PhD in English in September, and am still struggling to define what my actual research project will be on, so condensing it to a party-friendly sound bite is definitely not on my radar at the moment, nor is answering the follow-up question that always comes: “And what are you going to do with that?”  In short, Aunt Jen, I don’t know how my research is going, and I sure don’t know what I’m going to do in four years with another diploma in my hand and a few more letters after my name.

The past few months have shown me that despite what I was preparing myself for, a PhD is hard.  Sure, it’s not as hard as being a real doctor and saving lives, or starting a family, or moving to a brand new country like so many of my friends are doing right now, but compared to a BA, or even a Masters, it is hard.  Gone are the days of going to class and having your ideas validated, or being graded, or even being able to discuss ideas with your friends – if I want to discuss contemporary masculinities, my fellow PhD friends will want to talk about the Victorian bestseller, or medieval syntax discrepancies.  My supervisor has been nothing but helpful and supportive, but every time I re-read an email draft, making sure it hits the right tone of humour and intelligence, I internally cringe as I hit send, fearing that I’m being too needy or bothering her with my questions – after all, I am a strong, independent researcher who don’t need no hand-holding…right?

My whole academic career, I knew I wanted to do a PhD – I knew that coming up with original ideas and contributing to my field was for me, and even after I took a year out after my Masters, moving to Canada and starting a new life, the decision to come back to Dublin to work with some incredible people was never difficult. I have been encouraged by countless members of the department that my research ideas are good, and heck, I got As through all of my undergrad, and yet, to this day I’m still not convinced that my thesis is worth dedicating four years of my life to.  Impostor Syndrome is a very real part of academia, and a study as early as 1978 showed that it’s more likely to affect high-achieving females than any other group[1].  Even writing that last sentence made me pause: am I a high-achieving female?  Impostor Syndrome tells me that I’m not, and it tells me that I’ve only gotten this far through luck, or charm, or by fooling everyone around me. Likewise, comparing myself to everyone in the department is a trap that I often fall into.  It seems that every day, someone is getting a grant, or having a paper published, or jetting off to an exciting conference, while I sit at my desk and try to put together an abstract so that I can keep up.  It’s a real struggle to remember that I am good at what I do, that my research matters, is original, and will benefit those who read it in the future.  It feels boastful to say that, but it’s the truth, and I shouldn’t be doing a PhD if I didn’t actually believe it.  I’m only in the third month of my research – papers and conferences will come, and hopefully the feeling of success will come with them.

Wait.  If a PhD is so hard and terrible, why am I even sticking with it? Why do I get out of bed every morning and put in the 9 – 5 on campus?  Because if something is hard, it’s worth doing.  And because I really do love every moment of it. Before I started in September, I pictured the next four years of my life as drinking martinis in the staff bar and using fancy words in conversations with other research students.  While it’s turned out that I’m not actually allowed in the staff bar, and I mispronounce most of the words other people around me are using, it’s turned out better than I imagined.  That feeling you get when everything you’ve been thinking about for weeks just clicks, and suddenly you’re typing a couple of thousand words of inspired greatness, is unparalleled, even if it turns out that you end up deleting most of it the next day!  The community of similarly terrified individuals I’ve found in UCD and beyond has been a constant support to me – sure, we’re all quietly competing for publication and funding, but if I’m ever freaking out about something, there’s a list of people I can talk to or grab a pint with, and I know I’m on a lot of lists, too.  The challenge of self-discipline and self-motivation is something I’m finding most difficult, but again, when something goes right and everything makes sense, all the wailing and gnashing of teeth suddenly seems worth it.  And the most important thing I’ve learned so far is that drinking on a weeknight or during the afternoon isn’t irresponsible – it’s “networking”!

I was actually “networking” with one of my friends a couple of weeks ago, an amazing researcher in Trinity working on parasites, and we were lamenting how none of our research was going the way we had hoped.  For me, that’s not being motivated enough; for my friend, it’s none of her experiments going as planned – I definitely have it easy compared to a science PhD!  There was a pause in the conversation, and as I looked around, the thought hit me.  “You know what?” I announced. “To everyone else, the fact that we’re doing a PhD is pretty impressive.  Maybe we just need to be impressed with ourselves?”  We laughed and had another pint, but that idea has stuck with me since.  To answer your question, school is going great, Aunt Jen.  And when I’m finished in four years, I don’t know what I’ll do.  But I know I’ll be impressed with myself.

Maybe.

[1] http://www.paulineroseclance.com/pdf/ip_high_achieving_women.pdf


Got your hands full? – How the brain plans actions with different body parts

by Phyllis Mania

STEM editor: Francesca Farina

Imagine you’re carrying a laundry basket in your hand, dutifully pursuing your domestic tasks. You open the door with your knee, press the light switch with your elbow, and pick up a lost sock with your foot. Easy, right? Normally, we perform these kinds of goal-directed movements with our hands. Unsurprisingly, hands are also the most widely studied body part, or so-called effector, in research on action planning. We do know a fair bit about how the brain prepares movements with a hand (not to be confused with movement execution). You see something desirable, say, a chocolate bar, and that image goes from your retina to the visual cortex, which is roughly located at the back of your brain. At the same time, an estimate of where your hand is in space is generated in somatosensory cortex, which is located more frontally. Between these two areas sits an area called posterior parietal cortex (PPC), in an ideal position to bring these two pieces of information – the seen location of the chocolate bar and the felt location of your hand – together (for a detailed description of these so-called coordinate transformations see [1]). From here, the movement plan is sent to primary motor cortex, which directly controls movement execution through the spinal cord. What’s interesting about motor cortex is that it is organised like a map of the body, so the muscles that are next to each other on the “outside” are also controlled by neuronal populations that are next to each other on the “inside”. Put simply, there is a small patch of brain for each body part we have, a phenomenon known as the motor homunculus [2].


Photo of an EEG, by Gabriele Fischer-Mania

As we all know from everyday experience, it is pretty simple to use a body part other than the hand to perform a purposeful action. But the findings from studies investigating movement planning with different effectors are not clear-cut. Usually, the paradigm used in this kind of research works as follows: The participants look at a centrally presented fixation mark and rest their hand in front of the body midline. Next, a dot indicating the movement goal is presented to the left or right of fixation. The colour of the dot tells the participants whether they have to use their hand or their eyes to move towards the dot. Only when the fixation mark disappears are the participants allowed to perform the movement with the instructed effector. The delay between the presentation of the goal and the actual movement is important, because muscle activity affects the signal that is measured from the brain (and not in a good way). The subsequent analyses usually focus on this delay period, as the signal emerging throughout is thought to reflect movement preparation. Many studies assessing the activity preceding eye and hand movements have suggested that PPC is organised in an effector-specific manner, with different sub-regions representing different body parts [3]. Other studies report contradicting results, with overlapping activity for hand and eye [4].


EEG photo, as before.

But here’s the thing: We cannot stare at a door until it finally opens itself and I imagine picking up that lost piece of laundry with my eye to be rather uncomfortable. Put more scientifically, hands and eyes are functionally different. Whereas we use our hands to interact with the environment, our eyes are a key player in perception. This is why my supervisor came up with the idea to compare hands and feet, as virtually all goal-directed actions we typically perform using our hands can also be performed with our feet (e.g., see http://www.mfpa.uk for mouth and foot painting artists). Surprisingly, it turned out that the portion of PPC that was previously thought to be exclusively dedicated to hand movement planning showed virtually the same fMRI activation during foot movement planning [5]. That is, the brain does not seem to differentiate between the two limbs in PPC. Wait, the brain? Whereas fMRI is useful to show us where in the brain something is happening, it does not tell us much about what exactly is going on in neuronal populations. Here, the high temporal resolution of EEG allows for a more detailed investigation of brain activity. During my PhD, I used EEG to look at hands and feet from different angles (literally – I looked at a lot of feet). One way to quantify possible effects is to analyse the signal in the frequency domain. Different cognitive functions have been associated with power changes in different frequency bands. Based on a study that found eye and hand movement planning to be encoded in different frequencies [6], my project focused on identifying a similar effect for foot movements.
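As a rough illustration of what “analysing the signal in the frequency domain” involves, the sketch below computes power in the alpha (8–12 Hz) and beta (15–30 Hz) bands of a toy signal using a plain discrete Fourier transform. This is a minimal stand-in for real EEG analysis (which would use proper spectral estimators on multi-channel recordings); the band definitions and all names are illustrative assumptions, not the analysis used in the studies above.

```python
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Total power in the [lo, hi] Hz band, via a plain DFT."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n  # frequency of DFT bin k in Hz
        if lo <= freq <= hi:
            # DFT coefficient at bin k
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            power += abs(coef) ** 2
    return power

fs = 100  # sampling rate in Hz
# One second of a 10 Hz oscillation, standing in for alpha-band activity
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]

alpha = band_power(signal, fs, 8, 12)   # captures the 10 Hz peak
beta = band_power(signal, fs, 15, 30)   # little energy expected here
print(alpha > beta)  # True
```

Comparing how such band power changes during the delay period, for hand versus foot trials, is the kind of contrast a frequency-domain analysis makes possible.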


Source: Pixabay

This is not as straightforward as it might sound, because there are a number of things that need to be controlled for: To make a comparison between the two limbs as valid as possible, movements should start from a similar position and end at the same spot. And to avoid expectancy effects, movements with both limbs should alternate randomly. As you can imagine, it is quite challenging to find a comfortable position to complete this task (most participants did still talk to me after the experiment, though). Another important thing to keep in mind is the fact that foot movements are somewhat more sluggish than hand movements, owing to physical differences between the limbs. This circumstance can be accounted for by performing different types of movements: some easy, some difficult. When the presented movement goal is rather big, it’s easier to hit than when it’s smaller. Unsurprisingly, movements to easy targets are faster than movements to difficult targets, an effect that has long been known for the hand [7] but had not been shown for the foot yet. Even though this effect is obviously observed during movement execution, it has been shown to already arise during movement planning [8].
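The speed/accuracy trade-off described above is usually formalised as Fitts’ law [7]: movement time grows linearly with the “index of difficulty”, log2(2D/W), where D is the distance to the target and W is its width. The sketch below is a minimal illustration of that relationship; the coefficients a and b are hypothetical placeholders, not values fitted to any hand or foot data.

```python
import math

def index_of_difficulty(distance, width):
    # Fitts' index of difficulty in bits: larger when targets are
    # farther away or smaller
    return math.log2(2 * distance / width)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    # Linear Fitts' law model MT = a + b * ID; a (seconds) and
    # b (seconds per bit) are hypothetical regression coefficients
    return a + b * index_of_difficulty(distance, width)

# A big, easy target vs. a small, difficult one at the same distance
easy = predicted_movement_time(distance=30, width=10)
hard = predicted_movement_time(distance=30, width=2)
print(easy < hard)  # the easy target is predicted to be hit faster
```

In an experiment, fitting a and b separately for hand and foot movements is one way to compare how each limb trades speed against accuracy.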

So, taking a closer look at actual movements can also tell us a fair bit about the underlying planning processes. In my case, “looking closer” meant recording hand and foot movements using infrared lights, a procedure called motion capture. Basically the same method is used to create the characters in movies like Avatar and the Hobbit, but rather than making fancy films I used the trajectories to extract kinematic measures like velocity and acceleration. Again, it turned out that hands and feet have more in common than it may seem at first sight. And it makes sense – as we evolved from quadrupeds (i.e., mammals walking on all fours) to bipeds (walking on two feet), the neural pathways that used to control locomotion with all fours likely evolved into the system now controlling skilled hand movements [9].

What’s most fascinating to me is the incredible speed and flexibility with which all of this happens. We hardly ever give a thought to the seemingly simple actions we perform every minute (and it’s useful not to, otherwise we’d probably stand rooted to the spot). Our brain is able to take in such a vast amount of information – visual, auditory, somatosensory – filter it effectively and generate motor commands in the range of milliseconds. And we haven’t even found out a fraction of how all of it works. Or to use a famous quote [10]: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

[1] Batista, A. (2002). Inner space: Reference frames. Current Biology, 12(11), R380-R383.

[2] Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389-443.

[3] Connolly, J. D., Andersen, R. A., & Goodale, M. A. (2003). FMRI evidence for a ‘parietal reach region’ in the human brain. Experimental Brain Research, 153(2), 140-145.

[4] Beurze, S. M., de Lange, F. P., Toni, I., & Medendorp, W. P. (2009). Spatial and effector processing in the human parietofrontal network for reaches and saccades. Journal of Neurophysiology, 101(6), 3053-3062.

[5] Heed, T., Beurze, S. M., Toni, I., Röder, B., & Medendorp, W. P. (2011). Functional rather than effector-specific organization of human posterior parietal cortex. The Journal of Neuroscience, 31(8), 3066-3076.

[6] Van Der Werf, J., Jensen, O., Fries, P., & Medendorp, W. P. (2010). Neuronal synchronization in human posterior parietal cortex during reach planning. Journal of Neuroscience, 30(4), 1402-1412.

[7] Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381.

[8] Bertucco, M., Cesari, P., & Latash, M. L. (2013). Fitts’ Law in early postural adjustments. Neuroscience, 231, 61-69.

[9] Georgopoulos, A. P., & Grillner, S. (1989). Visuomotor coordination in reaching and locomotion. Science, 245(4923), 1209-1210.

[10] Pugh, Edward M., quoted in George Pugh (1977). The Biological Origin of Human Values.

 

Sitting in the dark: the importance of light in theatre

I’ve spent a lot of the past year sitting in the dark – literally. For people who work in theatre, this may come as no surprise. In the eight years I spent working full-time as a lighting assistant/production electrician, I could quite easily go for three or four days in a row without seeing any sunlight. I’ve often thought it odd that the people who “create” light for live performance, people who use light as their primary creative medium, spend so much time in the dark. If you’re unfamiliar with the theatre production process, here’s a (very brief and very simplified!) rundown:
In most regional and London producing theatres, work on a production begins about four to six months prior to the first preview. This can be significantly longer on larger shows, particularly those in the West End. About a week before the first preview, the cast, director, and design team move into the theatre space itself to start technical rehearsals. By this stage, the set has been built, costumes made, lights and speakers rigged, etc. The technical rehearsal is the start of what is called the production week (also known as “hell week” in some American theatres on account of the long days). Technical rehearsals are the only time the entire company is together in the performance space, and they are – as the name suggests – focused primarily on the technical and design elements of a production. Technical rehearsals are often very “stop and start” as cues, scene changes, costume changes, etc. are run multiple times until all parties are comfortable. Once the whole production has been worked through in this manner, a dress rehearsal follows (often two or three, plus notes sessions) before the first public performance.

The lighting designer

For a lighting designer, the first day of technical rehearsals is often the most difficult. All of the lighting designer’s pre-production research, the conversations they have had with the designer, director and theatre’s head of lighting, and the plans they have drawn and had implemented by the theatre’s lighting department converge on this day, and there is enormous pressure on the lighting designer to “get it right” – funding situations in most UK theatres are such that time, money and resources are at a premium and at this point there is not enough of any of those to start over or make significant changes. This pressure is compounded by the fact that lighting is the sole visual design element that can only be created in the performance space. During the pre-production period, set designers produce a scale modelbox, alongside technical drawings, sketches and storyboards, and costume designers may use artistic drawings in conjunction with fabric swatches, for example, to help articulate their process and creative ideas. For both set and costume design, the actual product is built over several weeks and can be seen as a work-in-progress during this time. Moreover, the materials of set and costume design are tangible and the work can be observed, commented on, tweaked and refined outside and, crucially, before entering the actual performance space. Similar comparisons and tools do not exist for lighting designers. Computer visualisation software may be used; however, these programs rarely provide the detail needed to fully explain, describe or develop the potential of light outside a performance space.
In addition, these days tend to involve the most negotiation and adjustment as creative teams (especially the lighting designer) learn to navigate the “language” and “grammar” of a production, while also refining the spoken language and grammar they use to articulate it. It is this process that my research focuses on. How do lighting designers use language to articulate ideas about light and lighting, a material and a process that is largely intangible? How do they additionally use language to exercise agency and exert influence in situations of creative collaboration?

My research

To answer these questions, I sit in the dark, behind the lighting designer, armed with two recording devices. One of these records the ambient conversation, usually between the director or designer and the lighting designer. The other records the conversation on “cans” (UK theatre slang for the headsets worn by all members of the design and technical teams to facilitate conversation without having to resort to shouting backstage!).
The darkness provides an ideal environment for conducting my fieldwork. Even though I am acting as an “overt insider” (Merton, 1972; Greene, 2014), the darkness makes it possible for me to fade into the background and remain largely unnoticed by the people I am observing – which is simultaneously useful and disconcerting. There is something anonymising about the dark, but it can also be quite liberating. There’s plenty of interesting research on audience behaviour and fascinating studies on people’s behaviour generally in the dark — but for now, I’ll just say what an illuminating (see what I did there?) experience sitting in the dark has been!
References:
Greene, M.J. 2014. On the inside looking in: methodological insights and challenges in conducting qualitative insider research. The Qualitative Report. 19(How To Article 15), pp.1–13.
Merton, R.K. 1972. Insiders and outsiders: a chapter in the sociology of knowledge. American Journal of Sociology. 78(1), pp.9–47.

Space weather – predicting the future

by Aoife McCloskey

Early Weather Prediction

Weather is a topic that has fascinated humans for centuries and, from the earliest civilisations to the present day, we have been trying to predict it. In the beginning, by reading the appearance of clouds or observing recurring astronomical events, humans were able to better predict seasonal changes and weather patterns. This was, of course, motivated by practical concerns such as agriculture or knowing when the best conditions to travel were, but it also stemmed from the innate human desire to develop a better understanding of the world around us.

Weather prediction has come a long way from its primordial beginnings, and with the exponential growth of technological capabilities in the past century we are now able to model conditions in the Earth’s atmosphere with unprecedented precision. However, until the late 1800s, we were blissfully unaware that weather is not confined solely to our planet, but also exists in space.

Weather in Space

Weather, in this context, refers to the changing conditions in the Solar System, which can affect not only our planet but other planets in the solar system too. But what is the source of this weather in space? The answer is the biggest object in our solar system: the Sun. Our humble, middle-aged star is the reason we are here at all and has been our reliable source of energy for the past 4.6 billion years.

However, the Sun is not as stable or dependable as we perceive it to be. The Sun is in fact a very dynamic object, made up of extremely high-temperature gas (also known as plasma). Just like the Earth, the Sun generates its own magnetic field, albeit on a much larger scale than our planet. This combination of strong magnetic fields, and the fact that the Sun is not a solid body, leads to the build-up of energy and, consequently, energy release. This energy release is what is known as a solar flare: simply put, an explosion in the atmosphere of the Sun that produces extremely high-energy radiation and spits out particles that can travel at near-light speeds into the surrounding interplanetary space.

The Sun: Friend or Foe?

Sounds dangerous, right? Well yes, if you were an astronaut floating around in space, beyond the protection of the Earth, you would find yourself in a very undesirable position if a solar flare were to happen at the same time. For us here on Earth, the story is a bit different when it comes to being hit with the by-products of a solar flare. As I said earlier, our planet produces its very own magnetic field, similar to that of a bar magnet. For those who chose to study science at secondary school, you may recall the iron filings and magnet experiment. Well, that’s pretty much what our magnetic field looks like, and luckily for us it acts as a protective shield against the high-energy particles that come hurtling our way on a regular basis from the Sun. One of the most well-known phenomena caused by the Sun is the Aurora Borealis, i.e., the northern lights (or the southern lights, depending on which hemisphere you live in).
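For the curious, the bar-magnet analogy can be made a little more quantitative. To a first approximation (a standard textbook idealisation, not part of my research), the Earth’s field is that of a magnetic dipole, whose strength falls off with the cube of distance:

```latex
B(r, \lambda) = \frac{\mu_0 m}{4\pi r^3}\,\sqrt{1 + 3\sin^2 \lambda}
```

where $r$ is the distance from the Earth’s centre, $\lambda$ is the magnetic latitude and $m \approx 8 \times 10^{22}\,\mathrm{A\,m^2}$ is the Earth’s dipole moment. At the surface this gives a field of roughly $30\,\mu\mathrm{T}$ at the equator and about twice that at the poles.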


Picture of the Aurora Borealis, taken during Aoife’s trip to Iceland in January 2016.

This phenomenon has been happening for millennia, yet until recent centuries we didn’t really understand why. What we know now is that the aurorae are caused by high-energy particles from the Sun colliding with our magnetic field, spiralling along the field lines and making contact with our atmosphere at both the north and south magnetic poles. While the aurorae are actually a favourable effect of space weather, as they are astonishingly beautiful to watch and photograph, there are unfortunately some negative effects too. These effects here on Earth range from satellite damage (GPS in particular), to radio communication blackout, to the more extreme case of electrical grid failure. Other effects are illustrated in the image below:

My PhD – Space Weather Forecasting

So, how do we predict when there is an event on the Sun that could have negative impacts here on Earth? Science, of course! In particular, in the area of Solar Physics there has been increasing focus on understanding the physical processes that lead to space weather phenomena and trying to find the best methods to predict when something such as a solar flare might occur.

It is well known that one should not view the Sun directly with the naked eye, so traditionally the image of the Sun was projected onto pieces of paper. Using this method, some of the first features observed on the Sun were large, dark spots that are now known as sunspots. These fascinated astronomers for quite some time, and an extensive record of sunspots has been kept since the early 1800s. Sunspots were initially traced by hand, on a daily basis, until photographic plates were invented and this practice became redundant. After many decades of recording these spots, a pattern emerged: a roughly 11-year cycle in which the number of spots increases to a maximum and then gradually decreases again. This 11-year cycle was shown to be correlated with the level of solar activity; in other words, the number of solar flares and how much energy they release also follow this pattern.
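To make the 11-year pattern concrete, here is a toy sketch (purely illustrative, and not a model from my research; real cycles are asymmetric, with a fast rise and slow decay, and vary in strength) of a smoothed sunspot number rising from near zero at solar minimum to a peak at maximum and back again:

```python
import math

CYCLE_YEARS = 11.0  # approximate length of the solar activity cycle


def idealized_sunspot_number(years_since_minimum: float, peak: float = 150.0) -> float:
    """Toy smoothed sunspot number: near zero at solar minimum, `peak` at maximum.

    Uses sin^2 so the curve is smooth and repeats every CYCLE_YEARS.
    The default peak of 150 is a typical order of magnitude, not a real datum.
    """
    phase = (years_since_minimum % CYCLE_YEARS) / CYCLE_YEARS  # 0 .. 1 through the cycle
    return peak * math.sin(math.pi * phase) ** 2
```

In this caricature the maximum falls exactly halfway through the cycle (5.5 years after minimum), and the curve repeats every 11 years.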

carrington_sspots

Sunspot drawing by Richard Carrington, 01 September 1859

Leading on from this, it is clear that a relationship exists between sunspots and solar flares, so sunspots are the logical place to start when trying to forecast. My PhD project focuses on sunspots and how they evolve to produce flares. For a long time, sunspots have been classified according to their appearance. One of the most famous classification schemes was developed by Patrick McIntosh and has been widely used by the community to group sunspots by their size, symmetry and compactness (how closely packed the spots are) [1]. Generally, the biggest, baddest and ugliest groups of sunspots produce the most energetic, and potentially hazardous, flares. Our most recent work has been studying data from past solar cycles (1988–2010) and looking at how the evolution of these sunspot groups relates to the flares they produce [2]. I found that groups that increase in size produce more flares than those that decrease in size. This had been postulated before, and it helps to answer an open question in the community: do sunspots produce more flares when they increase in size (grow) or when they decrease in size (decay)? Using these results, I am now implementing a new way to predict the likelihood that a sunspot group will produce flares, and the magnitude of those flares.
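Forecasts like this are usually expressed probabilistically. One common approach in the flare-forecasting literature treats flaring as a Poisson process: if sunspot groups of a given class have historically averaged μ flares per day, the probability of at least one flare in the next τ days is 1 − e^(−μτ). A minimal sketch of that idea (the class names are real McIntosh labels, but the rates here are invented for illustration, not taken from any catalogue):

```python
import math

# Illustrative mean flare rates (flares per day) for a few McIntosh
# classes. These numbers are made up for the example; a real forecast
# would derive them from historical flare catalogues.
MEAN_RATES = {
    "Dkc": 1.5,   # large, complex, compact group
    "Cso": 0.2,   # smaller, simpler group
    "Axx": 0.01,  # tiny unipolar spot
}


def flare_probability(mcintosh_class: str, window_days: float = 1.0) -> float:
    """Poisson probability of at least one flare within the forecast window."""
    rate = MEAN_RATES[mcintosh_class]
    return 1.0 - math.exp(-rate * window_days)
```

With these toy rates, a "Dkc" group has a roughly 78% chance of flaring within a day, while an "Axx" spot sits near 1%; the probability always grows with a longer forecast window but never exceeds 1.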

 

Space weather is a topic that is now, more than ever, of great importance to our technology-dependent society. That is not to say that there will definitely be a catastrophic event in the near future, but it is certainly a potential hazard that needs to be addressed on a global scale. In recent years there has been significant investment in space weather prediction, with countries such as the UK and the US establishing dedicated space weather forecasting services. Here in Ireland, our research group at Trinity College has been working on improving the understanding and prediction of space weather for the past ten years. I hope that, in the near future, space weather forecasting will reach the same level of importance as the daily weather forecast, but for now – watch this space.

  1. McIntosh, Patrick S. (1990), ‘The Classification of Sunspots’, Solar Physics, pp. 251–267.
  2. McCloskey, Aoife (2016), ‘Flaring Rates and the Evolution of Sunspot Group McIntosh Classifications’, Solar Physics, pp. 1711–1738.