The Wheat We Eat: Quality or Quantity?

Building better bread: Using genetics to study senescence and nutrient content in wheat.

by Sophie Harrington, John Innes Centre

Wheat provides over 20% of the calories consumed worldwide, the second most of any crop after rice (1). Nearly all of us will eat wheat in one form or another every day—staple foods like bread and pasta as well as our favourite treats, from cake and biscuits to certain types of beer. For many cultures, wheat has been essential for thousands of years; it was originally domesticated around 10,000 years ago. The wheat we eat today is descended from three different kinds of wild grasses, which crossed at different times to produce the wild ancestor of bread wheat (Figure 1)(2). Some of us can take it for granted now that we’ll be able to pop down to the corner shop and pick up a loaf of bread at a moment’s notice, but it took thousands of years of selection by farmers to get to the wheat that we’d recognise today.


Figure 1: Wheat originated from two separate crosses between wild grasses. The first occurred around 400,000 years ago, producing wild emmer. Wild emmer then crossed with a different grass around 10,000 years ago. This final cross produced Triticum aestivum, which would be domesticated into bread wheat by humans. At each cross, the genomes of the wild grasses were combined, resulting in Triticum aestivum containing 3 separate genomes (shown as “AABBDD”, with each letter corresponding to one of the ancestral genomes). Figure courtesy of Dr. Cristobal Uauy.

This process of selection was accelerated in the mid-1900s, during the period called the “Green Revolution.” A combination of research into better breeding techniques and new chemical fertilizers, among other factors, contributed to the substantial increase in yield seen during this period. One critical change involved reducing the height of wheat plants, which allowed more of the energy from photosynthesis to be moved into the grain rather than being stored in the leaves and stems of the plants. The yield increases that came about due to the Green Revolution were essential to keep up with the demands of the growing world population.

Most of the work during the Green Revolution was focused on increasing yield alone, boosting the calories that could be extracted from a single field of wheat. But the benefits of wheat extend far beyond calories. Perhaps surprisingly, wheat provides 25% of the global protein intake (1). Most of us would think of meat or beans as our main sources of protein, but as a staple crop wheat is essential for our protein intake. The nutrients present in the wheat grain, like iron and zinc, are also essential in our diet.

Campaigns to eradicate hunger have had unprecedented success in recent years, and over 89% of the world’s population are able to obtain enough calories for their basic needs (3). Yet increasingly it is the nutrient content of our diets that is driving growing health crises globally. At one extreme, malnutrition, defined as the lack of essential nutrients in a diet that may otherwise have sufficient calories, is one of the leading causes of childhood stunting (3). At the other extreme, obesity in both childhood and adulthood is increasingly common, partly a result of highly calorific food with poor nutritional value becoming so easily available.

Quality Control

During the development of wheat, the period of growth known as “senescence” is critical in regulating the amounts of proteins and nutrients in the developing grain. This is the period when wheat changes from its living, green state to the dead, yellowing state that is so familiar to us at the end of summer. As the leaves die, the molecules in the leaf start to break down, and the elements that make up these molecules are transported from the leaves into the developing grain. At the same time, proteins and carbohydrates are also being remobilised from the leaves and moved to the grain. It’s this movement of nutrients and protein that is essential in establishing the quality of the grain. Different levels of protein determine what the grain can be used for. Bread making requires high-protein flour—this protein makes gluten, which creates the structure of bread. At the bottom end of the scale, lower quality wheat can be used as feed for livestock and poultry. However, while increased quality is desired, historically a trade-off has been seen between wheat quality and yield (Figure 2).


Figure 2: Increasing quality and yield often leads to a trade-off. As senescence moves later, yield tends to increase, while quality (such as protein and nutrient levels) tends to decrease. The reverse is found with earlier senescence. This leads to a balancing act with the timing of senescence—how can you maximise both yield and quality?

My research is focused on understanding how the process of senescence is controlled in wheat in the hope that we can use this knowledge to increase the nutritional quality of wheat grains. I’m particularly interested in studying genes that are involved in regulating senescence. These genes are called transcription factors, and they act as master regulators in the cell. Transcription factors are able to bind to DNA and influence the expression of other genes. Oftentimes, changing how a transcription factor is expressed can have a large impact on many other downstream targets.

Previous work found a specific transcription factor, known as NAM-B1, which promoted the onset of senescence (4). When this transcription factor wasn’t active, senescence in wheat was significantly delayed (Figure 3). This delayed senescence was also correlated with a drop in the nutritional content of the wheat grain. This suggested that the timing of senescence could directly influence the levels of nutrients and proteins in the grain. Notably, grain size was not affected by the change in nutrient content and senescence timing, suggesting that studying the NAM-B1 gene might provide insight into how to break the trade-off between quality and yield.


Figure 3: Reducing the action of NAM-B1 (left) leads to delayed senescence in wheat compared to the wild-type plant (right). Panel from (4).

I’m now trying to identify new transcription factors that also regulate the timing of senescence. One way that we’re approaching this question is to look for proteins that interact with NAM-B1. We know that the NAM-B1 transcription factor is only functional when it is bound to another transcription factor from the same family, the NAC family. This partner might be another copy of NAM-B1 itself, or it could be a different NAC transcription factor entirely. We hypothesised that NAC transcription factors that bind NAM-B1 might also regulate senescence. To study this, we can use different experimental techniques in species as varied as yeast and Nicotiana benthamiana, a relative of tobacco, to look for proteins that can bind to NAM-B1.

Once I’ve identified proteins that bind to NAM-B1, the next question is what these proteins do in the wheat plant. A recently developed resource, the wheat TILLING population, has started to make this process much quicker and easier (5). This is a large set of different lines of wheat that have been mutated with a chemical known as ethyl methanesulfonate (or EMS). This chemical causes specific single-base-pair changes in the DNA sequence. This means that, in at least one of the thousands of different wheat lines, you’re very likely to find a mutation that knocks out the action of your favourite gene. All of the mutated wheat lines in this TILLING population have had their genes sequenced, so all of the mutations in the genes have been identified and catalogued. Now it’s very easy for us to search for mutations in a gene we’re interested in, and we can order the lines we want online.
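To give a flavour of what that search involves, here is a minimal sketch in R of filtering a mutation catalogue for likely knockouts. The table layout, line names and effect labels are all invented for illustration; the real TILLING database has its own fields and web interface.

# Hypothetical catalogue of EMS mutations: one row per mutation, recording
# the wheat line that carries it, the gene hit, and the predicted effect
mutations <- data.frame(
  line   = c("Line0234", "Line1172", "Line0881"),
  gene   = c("NAM-B1", "NAM-B1", "NAM-A1"),
  effect = c("missense", "premature_stop", "splice_site")
)

# Keep only mutations predicted to knock out the gene of interest;
# these are the lines we would then order
knockouts <- subset(mutations,
                    gene == "NAM-B1" & effect %in% c("premature_stop", "splice_site"))
print(knockouts)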

After identifying mutations in the genes I’m interested in, I then need to start making crosses before I can look at the effect. This is because, unlike us, wheat is a polyploid. This means that wheat has three different genomes, a legacy of the way wheat was domesticated from three different wild grasses (Figure 1). One of the big effects of this is that there are usually at least three copies of each gene—one for each genome. So a mutation in one of the three copies may not actually make any difference to the plant, as the other two copies can compensate. As a result, it’s very important to make crosses so that all of the copies of the gene carry mutations. Otherwise it would be very easy to think that a gene isn’t important because a single mutation doesn’t cause any change. This polyploidy is one of the reasons that breeding in wheat has historically been so difficult, as random mutations are unlikely to happen in more than one copy and are thus often obscured—what can be called the “hidden variation” (2).
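The crossing arithmetic shows why this takes time. Assuming the mutations in the three genomes segregate independently (standard Mendelian genetics, simplified here by ignoring linkage and selection), a plant heterozygous for all three mutations produces triple-homozygous offspring only rarely:

# Probability that an F2 plant is homozygous for all three mutations,
# assuming the A, B and D gene copies segregate independently
p_single <- 1/4        # chance one mutation is homozygous in a given F2 plant
p_triple <- p_single^3 # chance all three are homozygous at once
p_triple               # 1/64, i.e. only ~1.6% of F2 plants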

Once you’ve found your candidate genes, identified mutated lines, and made all of your crosses, you’re ready to see if your gene has an effect. I do most of my trials in the greenhouse, so that I can look at my plants on a smaller scale than you would need for the field. By scoring for senescence onset and progression in my mutant plants, I’m able to identify whether my mutants influence the timing of senescence (Figure 4). Senescence timing is a useful proxy because it is quick and cheap to score, and earlier senescence may lead to increased nutrient content. After identifying mutant lines that have an interesting phenotype (in this case, variation in senescence timing), I can directly measure the levels of nutrients such as iron and zinc in the grain. This is an essential final step to see how the variation in senescence timing correlates with the grain nutrient content.
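That final check can be as simple as asking whether senescence timing and nutrient levels move together. A minimal sketch in R, with invented numbers standing in for real measurements:

# Hypothetical data: does earlier senescence onset go with higher grain zinc?
days_to_senescence <- c(38, 41, 45, 47, 52, 55) # days after flowering (invented)
grain_zinc         <- c(58, 55, 49, 47, 41, 38) # mg per kg of grain (invented)

cor.test(days_to_senescence, grain_zinc)        # strength of the correlation
plot(days_to_senescence, grain_zinc,
     xlab = "Senescence onset (days after flowering)",
     ylab = "Grain zinc (mg/kg)")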


Figure 4: Variation in chlorophyll breakdown in mutant plants. The mutant plant on the left has yellow leaves, indicating that the chlorophyll is being broken down much earlier than the wild-type plant on the right. This suggests that certain pathways associated with senescence are being activated earlier in the mutant plant.

Currently in my research, I’m still in the process of scoring my plants for senescence and identifying interesting mutants. Wheat takes quite a long time to grow in the greenhouse—about 4 months from seed to seed—so it takes quite an investment of time to get through the generations needed for crossing. A new technique for wheat growth called, appropriately, “Speed Breeding” is starting to change this (6). By growing wheat under special LED lighting for 22 hours a day, in rooms where the environment is kept constant, we can reduce the time for each generation to between 8 and 10 weeks. This is a significant time saving, and it is particularly powerful for generating new lines from crosses.
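The arithmetic behind that saving is simple. A quick sketch in R, taking roughly 17 weeks to approximate the 4-month glasshouse generation:

# Generations achievable per year: standard glasshouse vs. speed breeding
weeks_glasshouse <- 17       # ~4 months from seed to seed
weeks_speed      <- c(8, 10) # speed breeding range

52 / weeks_glasshouse        # ~3 generations a year
52 / weeks_speed             # ~5 to 6.5 generations a year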

It still remains to be seen whether the proteins that I found to interact with NAM-B1 play a significant role in regulating senescence. There are some promising initial results from the mutants I’ve developed, but it will take another few sets of experiments in the glasshouse and the field before I’m sure we’ve homed in on good candidates. Watch this space!


How I Changed from Science to Technology

by Azahara Fernández Guizán



I was never a kid who was sure about what career I wanted when I grew up. And this has been a good thing for me, because it has let me experience many different fields and led me to where I am today.

I was born in the north of Spain, in a mining zone of Asturias. My father was a coal miner and my mother a housewife. I attended a local school and a local high school. My grandmother says I was an unusual kid, preferring to be bought a book rather than a box of sweets. I also started learning English when I was 6 years old, and spent my free time reading historical novels and biographies.

I enjoyed visiting museums and monuments, and I used to search for information in my town’s library before going on an excursion. I loved to write stories and tales, and I always obtained high marks in school, which led my teachers to suggest that I study medicine. But I kept changing my mind – from architecture to journalism or even dentistry, depending on the book I was reading or the museum I’d just visited.

At that age, only one thing was clear: I wanted to be an independent and strong woman like the ones that inspired me. I hadn’t seen many role models during my primary education, but one teacher told us about Marie Curie. At the library, I discovered Rita Levi-Montalcini and the Brontë sisters.

 

SECONDARY STUDIES

During the last year of high school I was a mess, and the pressure was high because I had to make a decision. All I had were doubts.

In Spain at that time, after finishing the last secondary education course, students who wanted to continue to a degree had to take a general exam, the PAU. You could choose the subjects you wanted to be tested on and, after the exams took place, you were given a mark that took account of your secondary school marks and your PAU results. According to this mark, you could register for certain degrees.

At that point, I decided to take more exams than necessary on the PAU, in order to have more options across different types of degree: science, engineering, or languages… But then the worst moment of my student life came, and I had to decide.

I had two options in mind: a Software Engineering degree and a Biology degree. I must admit that at that time I only knew engineering stereotypes, and I never liked video games or anything related to hardware, so I decided that a Biology degree would suit me better.

BIOLOGY DEGREE AND NEUROSCIENCE MASTERS

During my degree, I decided that plants and animals were not my passion, but I loved Microbiology, Genetics, Immunology and Neuroscience. I discovered more female role models, researchers who really inspired me, whose lives were incredible to me. I worked hard during my degree and travelled a lot during the summers, thanks to some scholarships that I was awarded (I spent one month in Lowestoft, another in Dublin, and another one in Toronto), and started learning German.


Azahara in the lab

During the second year of my biology degree, I decided that I would become a scientist, and started to look for a professor who would let me gain some experience in their laboratory.

During my penultimate year, I started working in a Neuroscience laboratory, studying the 3D pattern of eye degeneration in C3H/He rd/rd mice. After finishing my degree, I decided to enrol in a Masters in Neuroscience and Behavioural Biology in Seville. During this masters, I worked in another Neuroscience laboratory doing electrophysiological studies, trying to understand how information is transformed in the cerebellar-hippocampal circuit and how this mechanism could allow us to learn and memorise.

This was a period of my life when I worked long hours and the experiments were very intense, but I had the opportunity to meet important scientists from all over the world. I also had a physicist colleague who analysed all our data and developed specific programmes in Matlab, which impressed me profoundly.

IMMUNOLOGY PHD

After this period, I continued working in Science, but I decided to start my PhD on Immunology, back in Asturias.

I worked in a laboratory in which, thanks to my friends in the lab, every day was special. We worked hard studying different types of tumours and testing different molecules, but we also had time to share confidences and laughs. After three years, I earned my PhD in Immunology and, as it was the normal thing to do, I started looking for a post-doc position.

Rather than feeling happy or enthusiastic about the future, I found myself upset and demotivated. I really didn’t want to carry on being a scientist. A huge sense of failure washed over me, but as J.K. Rowling said, “It is impossible to live without failing at something, unless you live so cautiously that you might as well not have lived at all – in which case, you fail by default.”

I want to specify that I don’t consider my PhD a waste of time – it has given me, apart from scientific publications, many important aptitudes and abilities, such as teamwork, analysis, problem solving, leadership, organisational skills, effective work habits, and better written and oral communication.

BECOMING A SOFTWARE DEVELOPER

As you might imagine, this was a hard moment of my life. I was unemployed, and doubtful about my professional career – just as I had been after high school.

Thanks to my husband, who supported me while I changed careers, I decided to give software development a try. As I didn’t have the necessary money or time to start a new degree, I signed up for a professional course in applications software development. The first days were difficult, since all the other students were young and I didn’t feel at ease.

But as I learned languages such as HTML, CSS, JavaScript and Java, I also took part, with good results, in some software competitions, which allowed me to gain confidence.


In 2015 I started working as a software developer using .NET MVC, a framework that I hadn’t studied during my course, but I had the necessary basics to learn it quickly and become part of a team. For me, one of the most marvellous things about software development is that it is built on teamwork.

I also discovered that there are a lot of people working in this field who love to exchange knowledge, and I regularly go to events and meetups. I have also recently started giving talks and workshops, some of them technical, with the aim of promoting the presence of women in technology.


Women and girls need to be encouraged to discover what software development really is. The software industry needs them. Software can be better, but only if it is developed by diverse teams with different opinions, backgrounds, and knowledge.

Social Egg Freezing, the Law and Women’s Autonomy: Are We Putting All Our Eggs into One Frozen Basket?


Image from someecards.

by Virginia Novaes Procópio de Araujo, Dublin City University

Lisa is 37 years old and she has just broken up with her long-term boyfriend. She always imagined that this relationship would lead to marriage and children. Lisa is stable and happy in her career. However, she is now worried that if she does not meet someone new, and soon, her biological clock will be merciless with her and she will be left childless. After a visit to a fertility clinic she decides to freeze her eggs, in order to remove the pressure of having to rush into a new relationship. She wants time and is not ready to date again. She wants to raise a child with a committed partner and believes that freezing her eggs will offer her the best chance of ensuring this.

The story of Lisa is fictional, but reflects the current experience of many women who are availing of social egg freezing.

SPERM, EMBRYOS, EGGS AND THE BIRTH OF SOCIAL EGG FREEZING

Sperm has been successfully frozen since the 1950s using a technique called slow-freezing, and embryo freezing has been an established technique since 1992.[1] On the other hand, egg freezing has been considered experimental until very recently. This was mainly due to the fact that eggs contain a higher amount of water than embryos.[2] The slow freezing of eggs results in the formation of ice crystals, which damage the cell and result in lower success rates.[3] Therefore, historically, egg freezing was only accessible to women with cancer or genetic diseases which cause premature infertility, as a small chance to conceive in the future was better than none at all.[4]

The experimental status of egg freezing was lifted in 2012 in Europe[5] and 2013 in the USA[6] due to advances in freezing methods, particularly a process known as vitrification, which involves rapid cooling of the eggs in liquid nitrogen without the formation of ice crystals. This is highly effective for egg freezing. Therefore, egg freezing began to be offered to healthy, fertile women and social egg freezing was born. This is the idea that women freeze their eggs due to lifestyle reasons, which include: to prevent age-related infertility, to postpone motherhood due to their career, to find a suitable partner, to be financially stable, to be psychologically and emotionally ready to become a mother, and to expand their reproductive autonomy.[7]


Eggs cryopreserved in liquid nitrogen. Image from Kinderwunsch & Hormonzentrum Frankfurt 

 

LAW, AUTONOMY AND FEMINIST BIOETHICS

My research looks at social egg freezing in Europe from a legal and feminist bioethical perspective. I am assessing the impact of the law on social egg freezing in Europe, particularly in the United Kingdom and Ireland, to determine whether the law enhances or diminishes women’s reproductive options. For instance, my research has identified that Austria, France and Malta have specific laws prohibiting egg freezing for non-medical reasons,[8] diminishing women’s options in those countries.

In the context of autonomy, traditional liberal Bioethics tends to take an individualistic and self-sufficient approach, disregarding the influence that power relations (“competing social forces”) can have on someone’s autonomy.[9] In a liberal society, freedom is given to the individual to do as they please with their body, as long as they do not cause harm to others.[10] This highlights the rights of the individual and removes the focus from the responsibilities that may arise from that choice, for example towards a child and its well-being.[11]

However, the literature demonstrates that women take their relationships and the power structures that surround them into account when making decisions.[12] For instance, a woman who decides to freeze her eggs is not only thinking about herself, but also about her parents (the future grandparents), her future partner or husband, the health of her future baby (as younger eggs are preferable to avoid chromosomal abnormalities), her finances, her maturity, her employment situation and even society (to increase birth rates in an ageing population). Considering the numerous competing social forces, a woman may feel empowered or oppressed by social egg freezing, and that is why my research adopts a relational autonomy approach from Feminist Bioethics, particularly the theory of self-trust developed by Carolyn McLeod.

Trust is a relational aspect of life involving two people: a patient trusts their doctor on the grounds of an established moral relationship (doctor-patient). Self-trust lacks the second party: when one trusts oneself, one is optimistic that one will act competently and within one’s moral commitments.[13] It is still relational in the sense that it is moulded by the responses of others and by societal norms, as others give truthful and respectful feedback about oneself.[14] Therefore, if a doctor does not give realistic information about the potential risks and likely outcomes of egg freezing, a woman may make poor choices.

Research shows that women of reproductive age are misinformed regarding the cost, process and effectiveness of egg freezing, and that they want to be accurately informed about it.[15] Further, studies[16] demonstrate that residents and health professionals in the area of Obstetrics and Gynaecology lack accurate information about age-related fertility decline, hold conservative opinions, and are reluctant to inform healthy patients about social egg freezing.[17] Medical paternalism could explain this behaviour, and it needs to be remedied urgently.

 

EGG FREEZING – HOW IT WORKS

Women need to be aware that, in order to be frozen, eggs are collected in the same way as for IVF. Women self-inject hormones for approximately 10-14 days to stimulate ovulation and, when the eggs are mature, they are collected surgically under sedation, with small risks of infection and bleeding.[18] Hormone injections are not completely risk-free, and although rare, some women may develop ovarian hyperstimulation syndrome (OHSS),[19] characterised by swollen ovaries, a bloated abdomen, pain, nausea, vomiting and, in severe cases, liver dysfunction and respiratory distress syndrome.[20]


Egg collection and freezing. Image from Clínica Eugin, Barcelona.

Although IVF using thawed eggs is just as successful as using fresh eggs,[21] there are no guarantees that if a woman freezes her eggs, she will definitely have a baby – it just increases her chances.[22] That is simply the reality of fertility treatments, and doctors need to be forthcoming with information. Ideally, women will conceive naturally, having frozen their eggs merely as an ‘insurance policy’ and for peace of mind.[23] The age of the woman impacts the quality of the eggs, and doctors recommend that egg freezing occurs prior to the late thirties.[24] There is considerable emphasis on educating young women on how not to get pregnant; women also need to be educated about their biological ‘clocks’ and the possibilities and limitations of egg freezing.

CAREER AND THE PURSUIT OF ‘MR. RIGHT’ INSTEAD OF ‘MR. RIGHT NOW’

The reasons why women are freezing their eggs also need to be demystified. Baldwin interviewed women who availed of social egg freezing in the UK, the USA and Norway and discovered that they believe that there is a ‘right time’ to become a mother.[25]  This is when, ideally, they are financially secure and in a stable relationship with a man who wishes to raise a child.[26] There has been considerable backlash from the media about social egg freezing, particularly since 2014, when Apple and Facebook offered egg freezing as a benefit for their female employees.[27] It raised concerns that women would be forced into it in order to be considered a ‘team player’ and ascend in their careers, treating motherhood as an inconvenience. However, the main reason why women are freezing their eggs has nothing to do with career advancement, it is actually due to the lack of a suitable partner and to avoid future regret.[28] In fact, one of the women interviewed by Baldwin stated: “I think the media really misrepresent women who have children later. I don’t know a single woman who has put off having babies because of her career, not a single woman I have ever met has that been true for.”[29]

Further, Baldwin and her team coined the term “panic-partnering” to express what future regret meant for the women in the study.[30] This is the fear that they might run out of time and settle for any man, rush into having a child purely to avoid childlessness, and regret this later once the relationship fails.[31] These women also rejected the idea of using a donated egg or having a baby alone with donated sperm, as they wanted the ‘whole package’ – a committed relationship and a father to their genetically-related child.[32] Social egg freezing allows women to ‘buy time’ to find this right partner.

There is ongoing research at the London Women’s Clinic to assess why women are freezing their eggs.[33] Zeynep Gurtin from the University of Cambridge chairs open seminars for single women at the clinic and has identified women similar to those in Baldwin’s research: they are highly educated, in their late thirties and early forties, and are “frustrated by their limited partnering options.”[34] These women want to find ‘Mr. Right’, not ‘Mr. Right Now’. Gurtin affirms: “as women become more and more successful in educational and career terms, they have begun to outnumber similarly qualified men, and will need to adjust their partner expectations, embark on single parenting, embrace childlessness, or put some eggs in a very cold basket.”[35]

I recently attended one of these seminars and found the London Women’s Clinic to be a highly positive environment, with counselling and support groups available for their clients. The open seminars are a good opportunity for women to obtain realistic information in clear terms, without it being a sales pitch. Research from the USA[36] affirms that a considerable number of women regret freezing their eggs, particularly if a low number of eggs are obtained. They also complained about a lack of emotional support and counselling.[37] Therefore, it is crucial that clinics offer counselling both during and after egg freezing to ensure that women have realistic expectations as to what the technology can and cannot do.

 

COSTS

Social egg freezing is not covered by health insurance[38] and is therefore a private procedure, costing between £3,000 and £3,500 in the UK[39] and approximately €3,000 in Ireland.[40] This raises questions of social justice and fairness, as only women with greater financial means can access egg freezing for non-medical reasons. Further research focusing on this issue is necessary.

 

FREEDOM FROM EMBRYO FREEZING AND LEGAL DISPUTES

The success of egg freezing expands women’s reproductive autonomy as it frees them from having to freeze embryos with a partner. In 2007, a British case reached the European Court of Human Rights (ECtHR). In Evans v. United Kingdom, the applicant, Natallie Evans, had ovarian cancer and underwent IVF with her partner to create six embryos to be frozen. When the relationship ended, the ex-partner withdrew his consent for the embryos to be used. Eggs could no longer be extracted from the applicant, so the six embryos were her last opportunity to have a genetic child. The ECtHR discussed whether there was a violation of article 2 (right to life) and article 8 (right to respect for private and family life). It was decided that, since embryos do not have a right to life in the UK, there was no violation of article 2.[41] The Court also found that overruling someone’s withdrawal of consent, even in this exceptional case, would not violate article 8 or exceed the margin of appreciation.[42]

In other words, the ECtHR decided that the ex-partner’s ‘right not to procreate’ overruled the applicant’s ‘right to procreate’, and the embryos had to be discarded. Ms. Evans could have created embryos with donor sperm, avoiding legal disputes. However, as has been demonstrated, women wish to have a partner to raise a child with. The options for women have expanded: if a woman freezes her eggs, it is her sole decision whether to use them for IVF with a partner or a sperm donor, to donate them to another woman, or to give them to research.

 

GAMETE STORAGE AND A CALL TO ACTION

Current technology allows eggs to be frozen indefinitely. In the UK, the Human Fertilisation and Embryology Act determines that gametes can be stored for up to 10 years for non-medical reasons and up to 55 years for medical reasons.[43] This reduces the benefits of social egg freezing. For instance, if a woman freezes her eggs at age 27 to ensure she has the best possible eggs, she will have to use them prior to her 37th birthday. There is no time extension, which could cause a considerable amount of pressure for this woman, who believed she was buying herself extra time.

Kylie Baldwin, one of the most prominent researchers of social egg freezing in the UK, has created a petition to convince the UK Government and Parliament that the law needs to change.[44] Signatures from UK citizens and residents are being collected at this moment, prior to the 27th of October 2018, in order for the petition to be reviewed by the UK Government. This movement is highly important, and I advise all UK citizens and residents to sign it.

In Ireland, the General Scheme of the Assisted Human Reproduction Bill 2017 also adopts this 10-year time limit for non-medical gamete freezing.[45] If the bill remains unaltered when passed as a law it will raise the same issues that are currently being debated in the UK. Perhaps, there is still time for an amendment in the Irish bill.

 

CONCLUSION

Social egg freezing is quite a recent development, and further interdisciplinary research is required to examine its legal, sociological, feminist and economic implications. This is needed in order to gain a complete picture of the technology and the impact it has on women’s lives, relationships and society as a whole. There is a risk that women are gambling with their fertility by ‘putting all their eggs in one basket’. That is why social egg freezing must be approached with caution and with realistic expectations by women, in order to avoid potential disappointment. However, it is an exciting opportunity, and it is quite clear that the rights and freedoms available to women in relation to their reproductive autonomy have expanded significantly in the last century. This is further evidenced by the very recent successful result in Ireland’s referendum to repeal the 8th amendment (a constitutional provision introduced in 1983 that banned abortion except where a woman’s life was at risk).

 

I would like to dedicate this post in memory of Grace McDermott, co-founder of Women Are Boring, who I met at the induction of our PhD programme in 2014 and became friends with. She was a wonderful person and I am happy to have had her in my life. I am sure she would have strong opinions about social egg freezing and we would have had some lively discussions about the current state of it.

[1] Valerie L. Peddie and Siladitya Bhattacharya, ‘Request for “social egg freezing”’ in Khaldoun Sharif and Arri Coomarasamy (eds), Assisted Reproduction Techniques: Challenges and Management Options (Wiley-Blackwell 2012) 160–161

[2] Peddie and Bhattacharya supra n1, 161

[3] ibid 161

[4] Eleonora Porcu, Patrizia Maria Ciotti and Stefano Venturoli, Handbook of Human Oocyte Cryopreservation (Cambridge University Press 2013) 26

[5] ESHRE Task Force on Ethics and Law, Wybo Dondorp et al, ‘Oocyte cryopreservation for age-related fertility loss’ (2012) 27 Human Reproduction 1231

[6] The Practice Committees of the American Society for Reproductive Medicine and the Society for Assisted Reproductive Technology, ‘Mature Oocyte Cryopreservation: A Guideline’ (2013) 99 Fertility and Sterility 37

[7] Imogen Goold and Julian Savulescu, ‘In Favour of Freezing Eggs for Non-Medical Reasons’ (2009) 23 Bioethics 47, 47

[8] The ESHRE Working Group on Oocyte Cryopreservation in Europe, Françoise Shenfield et al, ‘Oocyte and Ovarian Tissue Cryopreservation in European Countries: Statutory Background, Practice, Storage and Use’ (2017) Human Reproduction Open 1, 4

[9] Carolyn McLeod, Self-Trust and Reproductive Autonomy (The MIT Press 2002) 105

[10] Catriona Mackenzie, ‘Conceptions of Autonomy and Conceptions of the Body in Bioethics’ in Jackie Leach Scully, Laurel E. Baldwin-Ragaven and Petya Fitzpatrick (eds), Feminist Bioethics: At the Center, on the Margins (The Johns Hopkins University Press 2010) 72-73

[11] Mackenzie supra n10, 83

[12] Carol Gilligan, In a Different Voice: Psychological Theory and Women’s Development (Harvard University Press 1993) 71; Susan Sherwin, No Longer Patient: Feminist Ethics and Health Care (Temple University Press 1992) 46

[13] McLeod supra n9, 103

[14] ibid 37

[15] J.C. Daniluk and E. Koert, ‘Childless Women’s Beliefs and Knowledge About Oocyte Freezing for Social and Medical Reasons’ (2016) 31 Human Reproduction 2313, 2319

[16] L. Yu et al, ‘Knowledge, Attitudes, and Intentions Toward Fertility Awareness and Oocyte Cryopreservation Among Obstetrics and Gynecology Resident Physicians’ (2016) 31 Human Reproduction 403; Désirée García et al, ‘Poor Knowledge of Age-Related Fertility Decline and Assisted Reproduction Among Healthcare Professionals’ (2017) 34 Reproductive BioMedicine Online 32

[17] Yu et al supra n16, 403; García et al supra n16, 35

[18] ESHRE supra n5, 1233

[19] ibid 1233

[20] Michael M Alper and Bart C Fauser, ‘Ovarian Stimulation Protocols for IVF: is More Better than Less?’ (2017) 34 Reproductive Biomedicine Online 345, 348

[21] Joseph O. Doyle et al, ‘Successful Elective and Medically Indicated Oocyte Vitrification and Warming for Autologous In Vitro Fertilization, with Predicted Birth Probabilities for Fertility Preservation According to Number of Cryopreserved Oocytes and Age at Retrieval’ (2016) 105 Fertility and Sterility 459, 459

[22] Ana Cobo and Juan Antonio García-Velasco, ‘Why All Women Should Freeze their Eggs’ (2016) 28 Current Opinion in Obstetrics and Gynecology 206, 206

[23] Zeynep Gurtin, ‘Why are Women Freezing their Eggs? Because of the Lack of Eligible Men’ (7 July 2017) The Guardian <https://www.theguardian.com/commentisfree/2017/jul/07/egg-freezing-women-30s-40s-lack-of-eligible-men-knights-shining-armour> accessed 26 May 2018

[24] Susie Jacob and Adam Balen, ‘Oocyte Freezing: Reproductive Panacea or False Hope of Family?’ (2018) 79 British Journal of Hospital Medicine 200, 200

[25] Kylie Baldwin, ‘“I Suppose I Think to Myself, That’s the Best Way to Be a Mother”: How Ideologies of Parenthood Shape Women’s Use of Social Egg Freezing Technology’ (2017) 22 Sociological Research Online 1, 5

[26] Baldwin supra n25, 5

[27] Mark Tran, ‘Apple and Facebook offer to freeze eggs for female employees’ The Guardian (15 October 2014) <https://www.theguardian.com/technology/2014/oct/15/apple-facebook-offer-freeze-eggs-female-employees> accessed 24 May 2018

[28] Kylie Baldwin et al, ‘Running Out of Time: Exploring Women’s Motivations for Social Egg Freezing’ (2018) Journal of Psychosomatic Obstetrics & Gynecology 1, 3

[29] Baldwin et al supra n28, 4

[30] ibid 4

[31] Baldwin et al supra n28, 4

[32] ibid 4

[33] Gurtin supra n23

[34] ibid

[35] ibid

[36] Eleni A. Greenwood et al, ‘To Freeze or Not to Freeze: Decision Regret and Satisfaction Following Elective Oocyte Cryopreservation’ (2018) Fertility and Sterility (in press)

[37] Ariana Eunjung Cha, ‘Egg-Freezing Regrets: Half of Women who Undergo the Procedure Have Some Remorse’ (18 May 2018) The Washington Post <https://www.washingtonpost.com/news/to-your-health/wp/2018/05/18/egg-freezing-regrets-half-of-women-who-undergo-the-procedure-have-some-remorse/?utm_term=.46f0ecc0afcf> accessed 27 May 2018

[38] ESHRE supra n8, 4

[39] See, for example, current prices at the London Women’s Clinic in London: https://www.londonwomensclinic.com/about/prices/

[40] See, for example, current prices at Sims IVF in Dublin: http://www.sims.ie/treatments-and-services/prices.883.html

[41] Evans v United Kingdom (2007) 43 EHRR 21, para. 54

[42] Evans v United Kingdom (2007) 43 EHRR 21, para. 60

[43] Benjamin P. Jones et al, ‘The Dawn of a New Ice Age: Social Egg Freezing’ (2018) 97 Acta Obstetricia et Gynecologica Scandinavica 641, 644

[44] Petition to extend the 10-year storage limit on egg freezing <https://petition.parliament.uk/petitions/218313> accessed 27 May 2018

[45] General Scheme of the Assisted Human Reproduction Bill 2017, Head 22, 8 (a)(i)

 

Searching for weather patterns on free-floating worlds

by Johanna Vos, University of Edinburgh


Artist’s conception of a free-floating planet. Image: NASA

At the start of 1995, we knew of only 9 planets – Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. Although we have since lost Pluto, we have now confirmed over 3,700 exoplanets — planets orbiting a star other than our sun. These exoplanets have been discovered by various methods, but the vast majority have been detected via indirect methods — measuring the influence of an exoplanet on its host star. We have also managed to directly image a number of exoplanets. This is the most difficult technique, since most planets are lost in the bright glare of their host star. In addition to these exoplanet companions — exoplanets orbiting a host star — there have been a number of recent discoveries of so-called rogue, or free-floating, planets. These are planetary-mass objects (less than about 13 times the mass of Jupiter) with no host star, wandering the Milky Way alone!

There are currently two theories about the formation of these isolated planets. The first theory suggests that they form in a similar way to a star like our Sun — through the collapse of a massive interstellar cloud composed of molecular gas and dust. Once enough material is compressed at the centre of the cloud, nuclear fusion is ignited in the core, and a star is born. Once nuclear fusion is established, a star will continue to shine for about 10 billion years. However, in the case of our free-floating planets, we think that the core did not accrete enough material to trigger nuclear fusion. These objects can be thought of as ‘failed stars’, and they spend their entire lifetimes cooling down. The other theory proposes that the free-floating planets were ejected from a planetary system. This can happen due to gravitational interactions with other planets within the system or a close encounter with another star. These interactions could fling a planet out of its orbit and leave it free to travel through interstellar space. Most likely, the free-floating planets we have discovered to date formed through both of the routes discussed here, but we have not yet found a way to tell which objects formed which way.

Free-floating planets offer a huge advantage to astronomers studying exoplanets. The population of free-floating planets bears a remarkable resemblance to the small population of directly-imaged planets that we have discovered. The free-floating and companion exoplanets share similar masses, temperatures, ages and sizes, but while the companion exoplanets are extremely hard to image, the isolated planets are much easier, since they do not have a bright host star nearby. New instruments and technologies are currently being developed so that we may study companion exoplanets in detail in the future. In the meantime, the free-floating objects can be studied in exquisite detail and act as useful analogues for the directly-imaged companions, providing clues about what we might expect.

Brightness Modulations Signal Atmospheric Features

While it can take several hours to obtain an image of an exoplanet orbiting its host star, a medium-sized telescope can capture images of a free-floating planet on ~5 minute timescales. We cannot resolve the surface of a planet since it is too far away, but we can make use of the fact that they rotate to try to identify the presence of weather patterns in the planet’s atmosphere. This is done through a technique called ‘photometric variability monitoring’, which basically means measuring the brightness of an object over time. By monitoring the brightness over many hours we can approximate what the upper atmosphere of such an object looks like. The video below shows an artist’s concept of a brown dwarf with atmospheric bands of clouds, thought to resemble the clouds seen on Neptune and the other outer planets in the solar system. The dots on the bottom show the measured brightness of the planet over time, called the lightcurve of the planet.
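As a toy illustration of the technique, the R sketch below simulates a noisy lightcurve with a known rotation period and recovers that period with a brute-force sinusoid fit. Every number here is invented; real analyses use carefully calibrated photometry and more sophisticated period-finding methods.

# Simulate 20 hours of ~5-minute-cadence photometry of a rotating object
set.seed(1)
t    <- seq(0, 20, by = 5/60)              # time in hours
flux <- 1 + 0.05 * sin(2 * pi * t / 8.6) + # 5% modulation with an 8.6 h period
        rnorm(length(t), sd = 0.01)        # measurement noise

# Fit a sinusoid at each trial period and keep the best-fitting one
trial_periods <- seq(2, 15, by = 0.01)     # periods to test, in hours
rss <- sapply(trial_periods, function(P) {
  fit <- lm(flux ~ sin(2 * pi * t / P) + cos(2 * pi * t / P))
  sum(resid(fit)^2)                        # residual scatter at this period
})
trial_periods[which.min(rss)]              # recovers a period close to 8.6 h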


Artist’s conception of a rotating free-floating planet with bands of clouds resembling those seen on Neptune. The dots on the bottom show the measured brightness of the object over time.

PSO J318.5-22: A Cloudy Free-floating Planet

In 2015 I used the New Technology Telescope at La Silla, Chile to observe the free-floating planet PSO J318.5-22, which is situated 80 light-years from Earth, with a temperature of 800°C and a mass 7 times that of Jupiter. This object is unusually red compared to other objects with similar temperatures, and this is thought to be due to the presence of very thick clouds in its atmosphere. Using the images, we could measure the brightness of this object in each frame, and we found that the brightness of this isolated planet changed by up to 10% over the course of 5 hours. Follow-up observations showed that the lightcurve is periodic, repeating itself every ~8.6 hours, indicating that this is the rotational period of the planet. Every 8.6 hours an atmospheric feature, most likely silicate clouds and iron droplets, rotates in and out of view. This was the first detection of weather on a planetary-mass object, and it hinted that these atmospheric features may be common on extrasolar planets.

We then went on to observe PSO J318.5-22 simultaneously using the Hubble Space Telescope and the Spitzer Space Telescope, which allowed us to track the brightness of our target at a variety of different wavelengths with unprecedented accuracy. The new lightcurves revealed that although all of them showed brightness modulations in agreement with an 8.6 hour rotational period, the lightcurves obtained from the Hubble and Spitzer telescopes appeared ‘out of phase’. This means that when the planet appeared at its brightest in the Hubble images, it appeared very faint with the Spitzer Space Telescope, and vice versa. The Hubble and Spitzer telescopes differ in the wavelengths they use — Hubble observations are in the near-infrared while Spitzer probes longer wavelengths in the mid-infrared. Different wavelengths are sensitive to different heights in the atmosphere of the planet — the Hubble telescope sees deep into the planet’s atmosphere while the Spitzer wavelengths only see the highest altitudes. The observed shifts between lightcurves suggest that we are observing different layers of clouds located at different vertical positions in the atmosphere. These types of observations have thus allowed us to explore both the horizontal and vertical cloud structure of PSO J318.5-22, a rogue planet lying 80 light-years away.
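To make ‘out of phase’ concrete, here is a toy R sketch: two simulated lightcurves with the same 8.6 hour period but opposite phase, standing in for the near- and mid-infrared curves, with the offset recovered by cross-correlation. The amplitudes and noise levels are invented.

# Two anti-phase lightcurves sampled every 0.1 h over two rotations
set.seed(2)
t       <- seq(0, 17.2, by = 0.1)
near_ir <- 1 + 0.05 * sin(2 * pi * t / 8.6)      + rnorm(length(t), sd = 0.005)
mid_ir  <- 1 + 0.03 * sin(2 * pi * t / 8.6 + pi) + rnorm(length(t), sd = 0.005)

# The lag of maximum cross-correlation gives the phase offset
cc        <- ccf(near_ir, mid_ir, lag.max = 60, plot = FALSE)
lag_hours <- abs(cc$lag[which.max(cc$acf)]) * 0.1 # convert lag steps to hours
lag_hours / 8.6 * 360                             # offset in degrees: close to 180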

 

Future Exoplanet Companion Studies with JWST

Now that we have developed the technique of photometric variability monitoring, we hope to extend these studies to the directly-imaged exoplanet companions once the James Webb Space Telescope (JWST) launches. Due to be launched in 2020, JWST will revolutionise all fields of astronomy by providing unparalleled sensitivity to astrophysical signals over a wide range of wavelengths. JWST will allow us to extend the variability monitoring discussed above to exoplanet companions, such as the HR8799bcde planets shown below. This system of four planets is so far the only multi-planet system that has been directly imaged. By re-observing the HR8799bcde system over a number of years, astronomers could track the planets’ movement around their host star. The four planets share very similar properties to free-floating planets such as PSO J318.5-22, and so we expect that they will show similar brightness changes over time. Current telescopes cannot obtain images of these planets at the sensitivity and cadence needed to measure photometric variability, but JWST will allow us to carry out these measurements for the first time.


Video showing four exoplanets orbiting their host star. The host star HR8799 harbours four super-Jupiters with periods that range from decades to centuries. Astronomers re-observed this system over a number of years to map out the orbits of these four exoplanets. Video: Jason Wang and Christian Marois.

Toppling the Pillars of Cancer Cell Biology

by Caitrin Crudden, PhD candidate at the Karolinska Institutet, Stockholm, Sweden.


Caitrin in the lab

Picture an expansive galaxy in your head. A vast space with thousands of twinkling dots.
As seconds pass, connections flash from dot to dot – fast enough to disappear before you can even focus on one – generating an intricate, pulsating web.

I’m not a cosmologist. I’m a cancer cell biologist, and I study subcellular signalling. You probably already know that cancer is a disease of uncontrolled cell growth. But cancer cells have not gained an alien skill in order to do so; they use the exact same growth signalling pathways that every other one of your cells uses. In a cancer cell, relatively small tweaks occur in normal signalling pathways, which render them dysfunctional, often hyperactive. The imagery of a galaxy-like, expansive and pulsating web of communication goes only part of the way towards describing the system we are dealing with. Subcellular signalling is vast and mind-numbingly complicated, and after all the decades of molecular biology so far, we are still piecing links together with every additional study.

But a galaxy-like network, much like the task of mapping it, is quite overwhelming and daunting. For simplicity, let’s imagine one signalling pathway in isolation, a bit like a chain of children in the school playground playing a game of whispers. A message is passed from one child to the next down the line, but instead of the usual hilarity of miscommunication, our hypothetical game is pretty exact. An un-fun version of playground whispers, if you will. Much like those children in the playground, in a cell a message is passed from one part of the cell to another by sequential messenger molecules. For example, a message can be sent around the body in the blood in the form of a molecule. This molecular message binds to a receptor that sits on the surface of a cell, poised waiting for this exact signal. The binding of the molecular message flicks the receptor from off to on. An on receptor turns on a nearby molecule, that molecule turns on the next, and so on and so forth, until the message is passed to the nucleus. Here, it tells the cell which genes are to be transcribed, in order to build proteins to accomplish a specific cellular task. In cancer, one or more of these signalling pathways stops working correctly because of a genetic mutation in a messenger molecule. To continue the metaphor, a child in the middle of the chain basically decides to go a bit rogue.


Cell signalling

Let’s take an example. There’s a proliferative signalling pathway called the mitogen-activated protein kinase pathway, or simply MAPK to its friends. In the middle of it is a molecule called Ras. Normally, this pathway fires a nice concise signal in response to a message from somewhere else in the body that tells this cell that it needs to grow and divide into two daughter cells. Maybe, for instance, the human overlord has acquired a pesky paper cut and the cells need to grow to close the wound. In that case, the message binds to a receptor on the cell, a growth factor receptor, which communicates to Ras, and Ras turns on to communicate the signal to the next molecule, which passes it on to the next, and down and down a chain of messenger molecules into the nucleus, which initiates the steps that need to take place for the cell to divide. In this normal, efficient situation, Ras returns to its off state as soon as it has passed its signal on to the next molecule, and in doing so ensures a safe and distinct message is given. A successful game of un-fun playground whispers, and everyone can pat themselves on the back and go about their day.

A common mutational event in cancer is that Ras picks up a genetic mutation that means it becomes stuck in the on position. We call this constitutive activation, which basically just means stuck-in-the-on-position. With Ras constantly on, the signal is continuously fired from it to the next molecule, even in instances when it is inappropriate for the cell to divide. Hence, these cells acquire uncontrolled growth, outgrow their neighbours and can continue to mutate and grow and move and invade and…I think you all know how this story ends.
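You can mimic this un-fun game of whispers in a few lines of R. This is a deliberately crude toy, not a biochemical model, and all the function and argument names are invented; the only point it makes is that a Ras stuck ‘on’ lets the nucleus hear “divide” even when no message ever arrived:

# Toy signalling relay: each messenger fires only if the one before it did
relay <- function(growth_message, ras_stuck_on = FALSE) {
  receptor_on <- growth_message              # receptor flips on when the message binds
  ras_on      <- receptor_on || ras_stuck_on # mutant Ras ignores the receptor
  raf_on      <- ras_on                      # Raf listens to Ras
  nucleus_on  <- raf_on                      # ...and the message reaches the nucleus
  if (nucleus_on) "divide" else "rest"
}

relay(growth_message = TRUE)                       # "divide": normal wound healing
relay(growth_message = FALSE)                      # "rest": no message, no growth
relay(growth_message = FALSE, ras_stuck_on = TRUE) # "divide": mutant Ras fires anyway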

So the answer seems logically simple – turn Ras off, right? However, frustratingly, Ras turns out to be a pretty much un-druggable molecule. Despite huge effort, the 3D surface of the protein doesn’t have pockets in which a potential drug could bind and correct it. However, efforts have been more successful in drugging its next-in-line messenger molecule, Raf. If, in our hypothetical chain of school children playing whispers, there’s one mischievous kid in the middle adding rubbish willy-nilly that didn’t come from anyone before her, the damage is minimized if the next partner in line simply doesn’t pass the nonsense on. Raf inhibitors showed great promise in pre-clinical development, and in clinical trials of metastatic melanoma, a truly horrible, aggressive disease. Things started to look up.

Until – Bam! The drugs stop working. In a patient who initially responded well, the disease comes back – and it’s more aggressive than ever. A heart-breaking yet frustratingly common scenario. The cell is a highly dynamic system with a lot of inter-connected pathways that can flip back and forth when needed, and a cancer cell, because of its unstable genome that is prone to mutations, is even more adaptable. You can put a road block in the signal chain – Ras’s whisper-partner keeps quiet, but cunning Ras simply finds another buddy in the playground to blurt rubbish to, aaand we’re back to square one. As useful to our understanding as chain-schemes are, the network-like galaxy, in all of its sobering complexity, is more realistic. You can start to get an idea of the difficulty of treating this disease.

So, what now? Some of my current work, and that of others, is trying to optimize multi-target approaches. If a cell can circumvent the Raf (or similar) inhibitor road-blocks quite rapidly, we must simultaneously take away its back-up options, in a highly choreographed back-and-forth dance to the death. The idea is that a multi-target network approach, which removes back-door options, minimizes the adaptation of cancer cells to inhibitors and hence drug resistance. The hope is that if we design smart enough multi-target approaches, we might just be able to topple the pillars of survival that these cells rely on.

Max Delbrück, a 20th century geneticist, wrote:

“Any living cell carries within it the experiences of a billion years experimentation by its ancestors. You cannot expect to explain so wise an old bird in a few simple words.”

Nor can you outsmart it with simple strategies.

The mysterious lives of chimaera sharks & the effects of deep sea fishing

MY DEEP SEA MSC RESEARCH AND WHY DEEP SEA FISHERIES OVERSIGHT IS NEEDED

by Melissa C. Marquez.


A rhinochimaera

“You’re not what I expected when you said you were a shark scientist.” Gee, thanks. I can’t tell you how many times I’ve heard that I don’t live up to someone’s preconceived mental image of what I should look like as a “shark scientist.” It doesn’t change the fact that I’m a marine biologist though, and that I am very passionate about my field.

I recently wrapped up my Masters in Marine Biology, focusing on “Habitat use throughout a Chondrichthyan’s life.” Chondrichthyans (class Chondrichthyes) are sharks, skates, rays, and chimaeras. Today, there are more than 500 species of sharks and about 500 species of rays known, with many more being discovered every year.

Over the last few decades, much effort has been devoted towards evaluating and reducing bycatch (the part of a fishery’s catch that is made up of non-target species) in marine fisheries. There has been a particular focus on quantifying the risk to Chondrichthyans, primarily because of their high vulnerability to overfishing. My study focused on five species of deep sea chimaeras (not the mythical Greek ones, but the just-as-mysterious real animal) found in New Zealand waters:

• Callorhynchus milii (elephant fish),

• Hydrolagus novaezealandiae (dark ghost shark),

• Hydrolagus bemisi (pale ghost shark),

• Harriotta raleighana (Pacific longnose chimaera),

• Rhinochimaera pacifica (Pacific spookfish).

 

These species were chosen because they cover a large depth range (7–1,306 m) and had been noted as being abundant despite extensive fisheries in their presumed habitats; they were also of special interest to the Deepwater Group (which funded the scholarship for my MSc).

Although there is no set definition of what constitutes the “deep sea,” it is conventionally regarded as >200 m depth, beyond the continental shelf break (Thistle, 2003); in this zone, a number of species are considered to have low productivity, making them highly vulnerable targets of commercial fishing (FAO, 2009). Deep sea fisheries have become increasingly economically important over the past few years as numerous commercial fisheries have become overexploited (Koslow et al., 2000; Clark et al., 2007; Pitcher et al., 2010). Major commercial fisheries exist for deep sea species such as orange roughy (Hoplostethus atlanticus), oreos (several species of the family Oreosomatidae), cardinalfish, grenadiers (such as Coryphaenoides rupestris) and alfonsino (Beryx splendens). Many of these deep sea fisheries have not been sustainable (Clark, 2009; Pitcher et al., 2010; Norse et al., 2012), with most of the stocks having undergone substantial declines.


Deep sea fishing can also cause environmental harm (Koslow et al., 2001; Hall-Spencer et al., 2002; Waller et al., 2007; Althaus et al., 2009; Clark and Rowden, 2009). Deep sea fisheries use various types of gear that can leave lasting scars: bottom otter trawls, bottom longlines, deep midwater trawls, sink/anchor gillnets, pots and traps, and more. While none of this gear is used solely in deep sea fisheries, all of it catches animals indiscriminately and can also damage important habitats (such as centuries-old deep sea coral). In fact, orange roughy trawling scars on soft-sediment areas were still visible five years after all fishing stopped in certain areas off New Zealand (Clark et al., 2010a).

Risk assessment involves evaluating the distributional overlap of the fish with the fisheries, where fish distribution is influenced by habitat use. For sharks, that risk assessment includes a lot of variables: the sheer number of shark species (approximately 112 have been recorded from New Zealand waters) with many different lifestyles, differences in the market value of different body parts (like meat, oil, fins, cartilage), which body parts are utilised for each species (for example, some sharks have both their fins and meat taken but not their oil; some just have their fins taken), and how to identify sharks once on the market (Fisheries Agency of Japan, 1999; Vannuccini, 1999; Yeung et al. 2000; Froese and Pauly, 2002; Clarke and Mosqueira, 2002).

In order to carry out a risk assessment, you have to know your study animals pretty well. It should come as no surprise that little is known about the different life history stages of chimaeras, so I did the next best thing and looked at Chondrichthyans in general. My literature review synthesized over 300 published observations of habitat use across these different life history stages; from there, I used New Zealand research vessel catch data (provided by NIWA, the National Institute of Water and Atmospheric Research) and separated it by species, sex, size, and maturity (when available). I then dove into the deep end of a computer language called “R,” which is used for statistical computing and graphics. Using R, I searched the catch compositions for the signature of each life history stage (for example, a nursery ground should show mostly smaller, immature fish of both sexes and little to no adults).
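For the curious, here is a minimal sketch in R of the kind of filtering this involved. The file name, column names and thresholds are all invented for illustration; they are not those of the actual NIWA dataset.

```r
# Sketch: flag tows whose catch composition matches nursery ground
# criteria (mostly small, immature fish of both sexes, few adults).
# All names and cut-offs below are hypothetical.
library(dplyr)

catch <- read.csv("trawl_catch.csv")   # one row per fish caught

nursery_candidates <- catch %>%
  filter(species == "Hydrolagus novaezealandiae") %>%
  group_by(tow_id) %>%
  summarise(
    prop_immature = mean(maturity == "immature", na.rm = TRUE),
    prop_small    = mean(length_cm < 40, na.rm = TRUE),  # illustrative size cut-off
    prop_female   = mean(sex == "F", na.rm = TRUE)
  ) %>%
  filter(prop_immature > 0.9, prop_small > 0.9,
         prop_female > 0.3, prop_female < 0.7)           # both sexes present
```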

The way we went about this thesis differs in that we first developed hypotheses about the characteristics of different habitat uses, rather than “data mining” for patterns; it therefore takes a structured, scientific approach to determining shark habitats. Our results showed that some life history stages and habitats could be identified for certain species, whereas others could not.

• Pupping ground criteria were met for Callorhinchus milii (elephant fish), Hydrolagus novaezealandiae (dark ghost shark), and Hydrolagus bemisi (pale ghost shark).

• Nursery ground criteria were met for Callorhinchus milii (elephant fish).

• Mating ground criteria were met for Callorhinchus milii (elephant fish), Hydrolagus novaezealandiae (dark ghost shark), Hydrolagus bemisi (pale ghost shark), and Harriotta raleighana (Pacific longnose chimaera).

• Lek-like mating criteria were met for Hydrolagus novaezealandiae (dark ghost shark). (Lek-like mating is where males perform feats of physical endurance to impress females, and the female chooses her mate.)

A ghost shark (species unknown)

These complex and barely understood deep sea ecosystems can be overwhelmed by the fishing technologies that rip through them. Like sharks, many deep sea animals lead a K-selected lifestyle, meaning that they take a long time to reach sexual maturity and, once they are sexually active, give birth to few young after a long gestation period. This lifestyle makes these creatures especially vulnerable, since they cannot repopulate quickly if overfished.

In order to manage the environmental impact of deep sea fisheries, scientists, policymakers and stakeholders have to identify ways to help re-establish delicate biological functions after those impacts. Recovery, defined as the return to conditions that existed before damage by fishing activities, is not a concept unique to deep sea communities, and it depends on site-specific factors that are often poorly understood and difficult to estimate. Little is known about the biological histories and structures of the deep sea, and rates of recovery there may therefore be much slower than in shallow environments.

Management of the seas, especially the deep sea, lags behind that of land and of the continental shelf, but a number of protection measures are already being put in place. These actions include, but are not limited to,

• regulating fishing methods and gear types,

• specifying the depths at which fishing is allowed,

• limiting the volume of catch and bycatch,

• introducing move-on rules, and

• closing areas of particular importance.

Modifications to trawl gear and the way it is used have made these usually heavy tools less destructive (Mounsey and Prado, 1997; Valdemarsen et al. 2007; Rose et al. 2010; Skaar and Vold 2010). Fishery closures are becoming more common, with large parts of EEZs (exclusive economic zones) being closed to bottom trawling (e.g. New Zealand, North Atlantic, Gulf of Alaska, Bering Sea, USA waters, Azores) (Hourigan, 2009; Morato et al. 2010); the effectiveness of these closures is yet to be established.

And while this “ecosystem approach” to fisheries management is widely advocated, it does not help every deep sea animal or structure. Sessile organisms (those that cannot move) are still in danger of being destroyed. As such, ecosystem-based marine spatial planning and management may be the most effective strategy for protecting vulnerable deep sea critters (Clark and Dunn, 2012; Schlacher et al. 2014). This strategy can include marine protected areas (MPAs) that restrict fishing in specific locations, alongside other management tools, such as zoning or spatial user rights, which redistribute fishing effort more effectively. Using spatial management measures well requires new models and data, and such measures will always have limitations, given how little data on the deep sea exists and how hard this environment is to reach.

So what does it all mean for my thesis? Well, for one thing, there is a growing acknowledgement that these unique ecosystems require special protection. And, as any scientist knows, there are still many unanswered questions about just how important this environment is (especially certain structures).


A juvenile elephantfish, Callorhinchus milii. Source: Rudie H. Kuiter / Aquatic Photographics

On a more shark-related note, not all life history stage habitats were found for my chimaeras. This may be because they lie outside the coverage of the data set (and likely also of commercial fisheries), or because they do not actually exist for some Chondrichthyans. That cliffhanger is research for another day, I suppose…

This project could not have been done without the endless support of my family and friends, who have backed me since day one of my marine biology adventures. They’re the ones who stick up for me whenever I hear, “You’re not what I expected when you said you were a shark scientist.” I am not really sure what the stereotype of a shark scientist is supposed to be; thankfully, I grew up somewhere people are accepted and judged by who they are and what they do. I see remarks like that as a challenge, as they set the stage for me to show that the mind of a shark scientist can come in all kinds of packages.

As a final note, I’d like to thank the New Zealand Seafood Scholarship, the Deepwater Group, and researchers from the National Institute of Water and Atmospheric Research (NIWA), who provided funding, insight and expertise that greatly assisted the research. The challenge of venturing into complex theories is that not everyone will agree with every interpretation or conclusion, but research like this is a basis for discussion, which can only be good for all.


References:

  • Thistle, D. (2003). The deep-sea floor: an overview. Ecosystems of The Deep Oceans. Ecosystems of the World 28.
  • FAO. 2009. Management of Deep-Sea Fisheries in the High Seas. FAO, Rome, Italy.
  • Koslow, J. A., Boehlert, G. W., Gordon, J. D. M., Haedrich, R. L., Lorance, P., and Parin, N. 2000. Continental slope and deep-sea fisheries: implications for a fragile ecosystem. ICES Journal of Marine Science, 57: 548–557.
  • Clark, M. R., and Koslow, J. A. 2007. Impacts of fisheries on seamounts. In Seamounts: Ecology, Fisheries and Conservation, pp. 413–441. Ed. by T. J. Pitcher, T. Morato, P. J. B. Hart, M. R. Clark, N. Haggen, and R. Santos. Blackwell, Oxford.
  • Pitcher, T. J., Clark, M. R., Morato, T., and Watson, R. 2010. Seamount fisheries: do they have a future? Oceanography, 23: 134–144.
  • Clark, M. R. 2009. Deep-sea seamount fisheries: a review of global status and future prospects. Latin American Journal of Aquatic Research, 37: 501–512.
  • Norse, E. A., Brooke, S., Cheung, W. W. L., Clark, M. R., Ekeland, L., Froese, R., Gjerde, K. M., et al. 2012. Sustainability of deep-sea fisheries. Marine Policy, 36: 307–320.
  • Koslow, J. A., Gowlett-Holmes, K., Lowry, J. K., O’Hara, T., Poore, G. C. B., and Williams, A. 2001. Seamount benthic macrofauna off southern Tasmania: community structure and impacts of trawling. Marine Ecology Progress Series, 213: 111–125.
  • Hall-Spencer, J., Allain, V., and Fossa, J. H. 2002. Trawling damage to Northeast Atlantic ancient coral reefs. Proceedings of the Royal Society of London Series B: Biological Sciences, 269: 507–511.
  • Waller, R., Watling, L., Auster, P., and Shank, T. 2007. Anthropogenic impacts on the corner rise seamounts, north-west Atlantic Ocean. Journal of the Marine Biological Association of the United Kingdom, 87: 1075–1076.
  • Althaus, F., Williams, A., Schlacher, T. A., Kloser, R. K., Green, M. A., Barker, B. A., Bax, N. J., et al. 2009. Impacts of bottom trawling on deep-coral ecosystems of seamounts are long-lasting. Marine Ecology Progress Series, 397: 279–294.
  • Clark, M. R., and Rowden, A. A. 2009. Effect of deep water trawling on the macro-invertebrate assemblages of seamounts on the Chatham Rise, New Zealand. Deep Sea Research I, 56: 1540–1554.
  • Clark, M. R., Bowden, D. A., Baird, S. J., and Stewart, R. 2010a. Effects of fishing on the benthic biodiversity of seamounts of the “Graveyard” complex, northern Chatham Rise. New Zealand Aquatic Environment and Biodiversity Report, 46: 1–40.
  • Fisheries Agency of Japan. 1999. Characterization of morphology of shark fin products: a guide of the identification of shark fin caught by tuna longline fishery. Global Guardian Trust, Tokyo.
  • Vannuccini, S. 1999. Shark utilization, marketing and trade. Fisheries Technical Paper 389. Food and Agriculture Organization, Rome.
  • Yeung, W. S., Lam, C. C., and Zhao, P. Y. 2000. The complete book of dried seafood and foodstuffs. Wan Li Book Company Limited, Hong Kong (in Chinese).
  • Froese, R., and Pauly, D. (Eds). 2002. FishBase database. FishBase, Kiel, Germany. Available from http://www.fishbase.org (accessed April 2016).
  • Clarke, S. and Mosqueira, I. 2002. A preliminary assessment of European participation in the shark fin trade. Pages 65–72 in M.Vacchi, G.La Mesa, F.Serena, and B.Séret, editors. Proceedings of the 4th European elasmobranch association meeting. Société Française d’Ichtyologie, Paris.
  • Mounsey, R. P., and Prado, J. 1997. Eco-friendly demersal fish trawling systems. Fishery Technology, 34: 1–6.
  • Valdemarsen, J. W., Jorgensen, T., and Engas, A. 2007. Options to mitigate bottom habitat impact of dragged gears. FAO Fisheries Technical Paper, 29.
  • Rose, C. S., Gauvin, J. R., and Hammond, C. F. 2010. Effective herding of flatfish by cables with minimal seafloor contact. Fishery Bulletin, 108: 136–144.
  • Skaar, K. L., and Vold, A. 2010. New trawl gear with reduced bottom contact. Marine Research News, 2: 1–2.
  • Hourigan, T. F. 2009. Managing fishery impacts on deep-water coral ecosystems of the USA: emerging best practices. Marine Ecology Progress Series, 397: 333–340.
  • Morato, T., Pitcher, T. J., Clark, M. R., Menezes, G., Tempera, F., Porteiro, F., Giacomello, E., et al. 2010. Can we protect seamounts for research? A call for conservation. Oceanography, 23: 190–199.
  • Clark, M. R., and Dunn, M. R. 2012. Spatial management of deep-sea seamount fisheries: balancing sustainable exploitation and habitat conservation. Environmental Conservation, 39: 204–214.
  • Schlacher, T. A., Baco, A. R., Rowden, A. A., O’Hara, T. D., Clark, M. R., Kelley, C., and Dower, J. F. 2014. Seamount benthos in a cobalt-rich crust region of the central Pacific: Conservation challenges for future seabed mining. Diversity and Distributions, 20: 491–502.

Time to think about visual neuroscience

by Poppy Sharp, PhD candidate at the Center for Mind/Brain Sciences, University of Trento.

All is not as it seems

We all delight in discovering that what we see isn’t always the truth. Think optical illusions: as a kid I loved finding the hidden images in Magic Eye stereogram pictures. Maybe you remember a surprising moment when you realised you can’t always trust your eyes. Here’s a quick example. In the image below, cover your left eye and stare at the cross, then slowly move closer towards the screen. At some point, instead of seeing what’s really there, you’ll see a continuous black line. This happens when the WAB logo falls on the small patch of your retina where the nerve fibres leave the eye in a bundle; consequently this patch has no light receptors – a blind spot. When the logo is in your blind spot, your visual system fills in the gap using the available information. Since there are lines on either side, the assumption is made that the line continues through the blind spot.

Illusions reveal that our perception of the world results from the brain building our visual experiences, using best guesses as to what’s really out there. Most of the time you don’t notice, because the visual system has been shaped over millions of years of evolution, honed by your lifetime of perceptual experience, and it is pretty good at what it does.

[Image: the blind spot demonstration – a cross and the WAB logo on a broken horizontal line]

For vision scientists, illusions can provide clues about the way the visual system builds our experiences. We refer to our visual experience of something as a ‘percept’, and use the term ‘stimulus’ for the thing which prompted that percept. The stimulus could be something as simple as a flash of light, or more complex like a human face. Vision science is all about carefully designing experiments so we can tease apart the relationship between the physical stimulus out in the world and our percept of it. In this way, we learn about the ongoing processes in the brain which allow us to do everything from recognising objects and people, to judging the trajectory of a moving ball so we can catch it.

We can get insight into what people perceived by measuring their behavioural responses. Take a simple experiment: we show people an arrow to indicate whether to pay attention to the left or the right side of the screen, then one or two flashes of light appear quickly on one side, and they have to press a button to indicate how many flashes they saw. There are several behavioural measures we could record here. Did the cue help them be more accurate at telling the difference between one or two flashes? Did the cue allow them to respond more quickly? Were they more confident in their response? These are all behavioural measures. We can also look at another type of measure: their brain activity. Recording brain activity allows unique insights into how our experiences of the world are put together, and opens up exciting new questions about the mind and brain.
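To make that concrete, here is a small R sketch of how such behavioural measures might be summarised; the data file and column names are hypothetical, not from a real experiment.

```r
# Sketch: accuracy and reaction time by cue condition (names invented).
library(dplyr)

trials <- read.csv("flash_task.csv")   # one row per trial

trials %>%
  group_by(cue_validity) %>%           # e.g. "valid" vs "invalid" cue
  summarise(
    accuracy = mean(response == n_flashes),          # proportion correct
    mean_rt  = mean(rt_ms[response == n_flashes])    # RT on correct trials only
  )
```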

Rhythms of the brain

Your brain is a complex network of cells using electrochemical signals to communicate with one another. We can take a peek at your brain waves by measuring the magnetic fields associated with the electrical activity of your brain. These magnetic fields are very small, so to record them we need a machine called an MEG scanner (magnetoencephalography) which has many extremely sensitive sensors called SQUIDs (superconducting quantum interference devices). The scanner somewhat resembles a dryer for ladies getting their blue rinse done, but differs in that it’s filled with liquid helium and costs about three million euros.

A single cell firing off an electrical signal would have too small a magnetic field to be detected, but since cells tend to fire together as groups, we can measure these patterns of activity in the MEG signal. Then we look for differences in the patterns of activity under different experimental conditions, in order to reveal what’s going on in the brain during different cognitive processes. For example, in our simple experiment from before with a cue and flashes of light, we would likely find differences in brain activity when these flashes occur at an expected location as compared to an unexpected one.

One particularly fascinating way we can characterise patterns of brain activity is in terms of the rhythms of the brain. Brain activity is an ongoing symphony of multiple groups of cells firing in concert. Some groups fire together more often (i.e. at high frequency), whereas others fire together in a synchronised way but less often (low frequency). These different patterns of brain waves, generated by cells forming different groups and firing at various frequencies, are vital for many important processes, including visual perception.

What I’m working on

For as many hours of the day as your eyes are open, a flood of visual information is continuously streaming into your brain. I’m interested in how the visual system makes sense of all that information, and prioritises some things over others. Like many researchers, we take the approach of showing simple stimuli in a controlled setting, in order to ask questions about fundamental low level visual processes. We then hope that our insights generalise to more natural processing in the busy and changeable visual environment of the ‘real world’. My focus is on temporal processing. Temporal processing can refer to a lot of things, but in my projects it means how you deal with stimuli occurring very close together in time (tens of milliseconds apart). I’m investigating how this is influenced by expectations, so in my experiments we manipulate expectations about where in space stimuli will appear, and also about when they will appear. This is achieved using simple visual cues to direct your attention to, for example, a certain area of the screen.

When stimuli rapidly follow one another in time, sometimes it’s important to parse them into separate percepts, whereas other times it’s more appropriate to integrate them together. There’s always a tradeoff between the precision and stability of the percepts built by the visual system. The right balance between splitting stimuli into separate percepts and blending them into a combined percept depends on the situation and what you’re trying to achieve at that moment.

Let’s illustrate some aspects of this idea about parsing versus integrating stimuli with a story set out in the woods at night. If some flashes of light come in quick succession from the undergrowth, this could be the moonlight reflecting off the eyes of a moving predator. In this case, your visual system needs to integrate these stimuli into a percept of one predator moving through space. But a similar set of flashes from the darkness could also be multiple predators next to each other, in which case it’s vital that you parse the incoming information and perceive them separately. Current circumstances and goals determine the mode of temporal processing that is most appropriate.

I’m investigating how expectations about where stimuli will be can influence your ability to either parse them into separate percepts or to form an integrated percept. Through characterising how expectations influence these two fundamental but opposing temporal processes, we hope to gain insights not only into the processes themselves, but also into the mechanisms of expectation in the visual system. By combining behavioural measures with measures of brain activity (collected using the MEG scanner), we are working towards new accounts of the dynamics of temporal processing and factors which influence it. In this way, we better our understanding of the visual system’s impressive capabilities in building our vital visual experiences from the lively stream of information entering our eyes.

Women are literally boring….

By: Laurie Winkless

Tunnels, that is. All over the world, Tunnel Boring Machines (or TBMs) are chewing their way through the packed subterranean network of your nearest city. But something you might not know is that they’re all given women’s names. Naming a machine after a human isn’t that weird, right? Many of us have named our cars after all, but it goes a bit deeper for TBMs. According to tunnelling tradition, a TBM cannot start work until it is officially named. But exactly where we got the tradition of naming them after women remains a bit of a mystery.

Some sources suggest that it comes from the 16th century, when miners, armourers, and artillerymen prayed to Saint Barbara. Legend has it that Barbara’s father had locked her in a windowless tower when he found out about her conversion to Christianity. Later, a flash of lightning struck him dead, and since then, all trades associated with darkness and the use of explosives have recognised Barbara as their patron saint. Today’s tunnel engineers see themselves as fitting that description, and so give TBMs women’s names in Barbara’s honour. Others suggest that the tradition comes from the link between miners and ship-builders – their physical strength and similar skills often saw men switch between trades as the need arose. Boats have long been given the pronoun ‘she’ (again for reasons unknown), so perhaps using women’s names for tunnelling machines started there?

Regardless of its beginnings, this tradition is carried out throughout the world today, as a sign of good luck for the project ahead. And, perhaps surprisingly in our increasingly secular world, most tunnelling projects still erect a shrine to Saint Barbara at the tunnel entrance.

I am a massive fan of TBMs. Here I am looking very excited in a TBM tunnel under the streets of London. If I lived my life again, I think I’d be a tunnelling engineer. (Credit: Laurie Winkless)

Anyway, before we meet some of the First Ladies of the Underground, let’s have a quick look at how they work. First off, TBMs are huge. Bertha, the largest TBM in the world, is currently working her way under Seattle. She has a diameter of 17.5 m, is 99 m long, and weighs over 6,000 tonnes. If we measure her in units of ‘double decker buses’ – she’s as tall as four parked on top of one another, as long as eight parked nose-to-tail, and weighs as much as 467 of them. So it’s no surprise that she’s usually referred to as ‘Big Bertha’.
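If you fancy checking the bus arithmetic, here is a quick R sketch; the bus dimensions are assumptions for illustration (real double deckers vary).

```r
# Back-of-envelope check of the double decker bus conversions, assuming
# a bus roughly 4.4 m tall, 12.4 m long and 12.85 tonnes.
bertha <- c(height = 17.5, length = 99,   weight = 6000)    # m, m, tonnes
bus    <- c(height = 4.4,  length = 12.4, weight = 12.85)

round(bertha / bus)    # ~4 buses tall, ~8 buses long, ~467 buses heavy
```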

So what do TBMs like Bertha do with all that…girth? In their simplest form, TBMs are cylinder-shaped machines that can munch their way through almost any rock type. As I mentioned in my book, Science and the City, TBMs are generally referred to as ‘moles’, but I prefer to think of them as earthworms. Worms eat, push forward and expel whatever is left over, and while there are lots of different types of TBM, they pretty much all do those same three things.

Image credit: Crossrail

At the front, TBMs have a circular face covered in incredibly hard teeth made from a material called tungsten carbide. As the cutter-head rotates, it breaks up the rock in front of it. This excavated material is swallowed through an opening in the face (some would call it a mouth) and it is carried inside the body of the TBM using a rotating conveyor belt. There, it is mixed with various additives (rather like saliva or stomach acid in some animals) that turn the rock into something with the consistency, if not the minty-freshness, of toothpaste. After digestion, this goo is expelled out of the back of the TBM, and it travels along a conveyor belt, until it reaches a processing facility above ground. There, the goo is filtered and treated, with much of it reused in other building projects.

Because of their shape, TBMs produce smooth tunnel walls, which can then be lined using curved segments of concrete. TBMs manage this part of the process too – many metres behind the cutter-head, large robotic suction arms called erectors (stop giggling) pick up and place the concrete panels, to form a complete ring. As the TBM moves forward, more and more of these rings are put into place, until the tunnel is fully clad. In this way, cities across the globe can produce fully-lined tunnels at the rather impressive rate of 100 m per week.

Enough background. Time to meet some of the TBMs boldly going where no machine-named-after-a-woman has gone before.

London – Ada, Phyllis, Victoria, Elizabeth, Mary, Sophia, Jessica and Ellie

Crossrail is Europe’s biggest engineering project. Since 2009, its engineers have constructed two brand-new, 21 km-long tunnels across London, running east-west. To do this, they used eight TBMs, and as tradition dictates, each was given a woman’s name, selected by members of the public. The first six machines were named after historical London figures, whilst the final two were named after ‘modern day heroes’. Because two TBMs excavate parallel tunnels at the same time, they’re also named in pairs.

Image credit: Crossrail

Mary and Sophia: These two excavated Crossrail’s new Thames Tunnel, between Plumstead and North Woolwich. They were named after the wives of Isambard and Marc Brunel, the famous engineers who constructed London’s first Thames Tunnel over 150 years ago. The women were a lot faster than their hubbies though – the original tunnel took 16 years to construct. This one was completed in just eight months.

Victoria and Elizabeth: Can you guess which women from history these TBMs were named after?! Yep, Queenie #1 and #2. In the citation, the reason given was that “Victoria was monarch in the first age of great railway engineering projects and Elizabeth is the monarch at the advent of this great age.” Victoria and Elizabeth excavated the tunnels that run between Canning Town and Farringdon, finishing the job in May 2015. As an aside, the Crossrail route itself will appear on tube maps as ‘The Elizabeth Line’, which is disappointingly predictable. I was rooting for ‘The Brunel Line’ myself, but hey.

Ada and Phyllis: These may be my favourites – named after the world’s first computer scientist, Ada Lovelace, and Phyllis Pearsall, who single-handedly created the London A-Z. Lovelace was a woman before her time – without her work, Charles Babbage and his ‘analytical engine’ would have been nothing more than a rich man and his hobby. Pearsall, on the other hand, got lost on the way to a party in 1935, and decided the maps were inadequate. She walked a total of 3,000 miles to compile the first comprehensive street map of the city. Their Crossrail reincarnations drove west from Farringdon station, laying the groundwork for the second stage of the project.

Jessica and Ellie: These names were selected by primary school children from East London, and they come from heptathlete Jessica Ennis-Hill and swimmer Ellie Simmonds, who won gold medals at the 2012 Olympics and Paralympics held in the city. Like their human counterparts, these TBMs were hard-working, each excavating two sections of Crossrail’s route.

London has two brand-new TBMs too, which will be working on the extension to the tube’s Northern Line – the line I spent almost all of my 13 years in London living on. Like Crossrail’s Jessica and Ellie, the names of the newbies – each weighing in at 650 tonnes (or 50 double-decker buses) – were selected by schoolchildren. They drew inspiration from pioneering women in aviation. One is named Amy, after Amy Johnson, the first female pilot to fly solo from Britain to Australia. And the second is Helen, named after the first British astronaut, Helen Sharman.

Seattle – Big Bertha

What more can I say about Bertha? Well, she was named after one of Seattle’s early mayors. In fact, Bertha K. Landes was the city’s first and only female mayor…. And she’s still widely regarded as one of the best they ever had. She fought against police corruption and dangerous drivers, and advocated for municipal ownership of the Seattle City Light and street railways. In 2013, Bertha-the-TBM started her long journey across the city, excavating a multilevel road tunnel to replace the Alaskan Way Viaduct. But just six months into the project, Bertha ground to a halt. Investigations showed that some of Bertha’s cutting teeth had been severely damaged by a large steel pipe embedded in the ground that hadn’t shown up on surveys. Over the next two years (yes, really), construction engineers dug a recovery pit, so that they could access the machine’s cutter-head, and partially replace it. Bertha resumed tunnel boring in late December, 2015. As I type, she’s also on a pause because of some misalignment, but this stoppage is expected to be temporary. Poor Bertha.

Image credit: Washington State Department of Transportation

Auckland – Alice

Since moving to New Zealand in December, I’ve had a bit of a rail-infrastructure-shaped gap in my life. Thankfully, Kiwis are also fans of TBMs, but they tend to use them for road tunnels. The latest one to finish her work is Alice – a 3,200 tonne (246 buses) TBM that spent the last two years carving a path between Auckland’s major transport routes. Alice’s tunnel connects State Highway 16 and State Highway 20, and once it opens in April/May 2017, it will complete the city’s ring road. Having recently spent more than an hour in Auckland traffic heading to the airport, I can attest to how much the road is needed! Since finishing her tour of duty, Alice has gone to a farm where she can roam free amongst all of the other TBMs…. Oh, if only that were true. In reality, the largest sections of the machine are being shipped back to her German manufacturer. There, her components will be used to build another TBM. So it’s not been a bad life, I guess.

San Francisco – Mom Chung

Mom Chung is another TBM that has already done her job and is now ‘in retirement’. She is named after Dr. Margaret Chung, the first American-born female Chinese physician, who practiced medicine in the heart of San Francisco’s Chinatown. During World War II, she took lots of American servicemen under her wing, earning her the nickname ‘Mom’. Legend has it that when one of her ‘sons’ became a congressman, he filed the legislation to create a female branch of the Navy, in response to pressure from Mom, who was a firm supporter of women in the military. Mom Chung-the-TBM built the southbound central subway tunnel in San Francisco, and even had a Twitter account for a while.

Of course, actual, real-life women work alongside (and inside) these machines. As more women are attracted to engineering, tunnelling is no longer a solely male pursuit. Women still make up a small percentage of the workforce (around 11% of the UK construction sector, for example), but those numbers are slowly growing. So no matter which way you look at it, women are literally boring. Tunnelling is awesome.

*** You can follow Laurie on Twitter @laurie_winkless. She also wants to say thank you to Dr Jess Wade for inspiring this article. If you love science and very cool doodles, you can also follow Jess on Twitter – she’s @jesswade


Deep Time Diversity: Decoding 375 Million Years of Life on Land

By: Emma Dunne (@emmadnn)

Across the world today we can see a tremendous amount of biodiversity. Animals occupy every corner of the globe, from the lush rainforests at the equator to the vast icy expanses at the poles and the plethora of grasslands, deserts, and forests in between. Nature is outstanding in its variation of animal forms; animals have mastered flight, can tolerate extreme environments, demonstrate complex behaviours, and some can even use tools. But exactly how life on land became so diverse remains largely uncertain.



Chameleons are a distinctive group of reptiles which contains many different species that vary greatly in colour. Image: Pixabay.

Life has been around for an extremely long time – 3.8 billion years, give or take. Now, that’s a very long time indeed, but for the first 3.26 or so billion years life was microscopic. It wasn’t until 542 million years ago that animals became a little more complex – during the ‘Cambrian Explosion’ when most major groups, such as arthropods, first evolved. To put things into perspective, wherever you are right now stick both of your arms out straight to the side (don’t be shy!). The very tip of your left index finger represents the present day, and the tip of your right index finger represents the point about 542 million years in the past. Moving from right to left, the first fish appear somewhere in the middle of your right forearm just after the Cambrian Explosion. Plants emerged on land around 425 million years ago, a little closer to your right elbow. It wasn’t until the point just before your right shoulder that vertebrates first ventured onto land, beginning the process of evolving into the beasts we are all familiar with today. At the point in the middle of your body, the continents were all squashed together in a landmass known as Pangaea, while early reptiles and reptile-like beasts, such as the sail-backed Dimetrodon, ruled the hot and arid lands around the equator. Dinosaurs first appear somewhere on your left shoulder (about 240 million years ago), followed very closely by the first mammals. Dinosaurs are wiped out just before we reach your left wrist (66 million years ago), paving the way for mammals to begin ruling the land. And now to make you really feel like a big fish in a small pond: Humans did not appear until the very tip of your left index finger, occupying a slice of your makeshift timescale no thicker than your fingernail. So, our species really hasn’t been around for long at all!
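If you want to put numbers on that makeshift timescale, here is a quick R sketch; it assumes a 1.8 m arm span and uses the approximate dates mentioned above.

```r
# Where events land along outstretched arms, measured from the right
# fingertip (the Cambrian Explosion, 542 million years ago).
span_m <- 1.8                       # assumed fingertip-to-fingertip span
events <- c(first_fish = 500, land_plants = 425, tetrapods_on_land = 375,
            first_dinosaurs = 240, dinosaurs_extinct = 66, humans = 0.3)

round((542 - events) * span_m / 542 * 100)   # distances in centimetres
```

Run it and humans land at 180 cm: the very tip of the left index finger, just as described.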


Dimetrodon grandis, an extinct sail-backed synapsid (a relative of mammals, often mistaken for a reptile) that lived 295–272 million years ago during the Permian period in the wetlands of the supercontinent Euramerica. Illustration: Scott Hartman (www.skeletaldrawing.com)


With all of these different animals evolving and going extinct at different points throughout Earth’s history, biodiversity has fluctuated, with increases in diversity punctuated by significant decreases known as extinction events, some more severe than others.

Over the last 50 years, palaeobiologists have been using computational methods to try to quantify exactly how significant these rises and falls in diversity have been.

Typically, these analyses involve tallying the number of fossil families in specific time intervals and comparing the totals between neighbouring intervals. Previous studies using this method estimate that diversity on land has risen exponentially; that is, it has continued to rise faster and faster over time. A number of reasons have been proposed for this pattern, including the availability of suitable niches and favourable climatic conditions allowing species to thrive and diversify further.
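In its simplest form, that counting exercise looks something like the R sketch below; the data file and column names are invented for illustration.

```r
# Sketch: tally distinct fossil families per time interval and compare
# neighbouring intervals (intervals assumed already in time order).
library(dplyr)

occ <- read.csv("tetrapod_occurrences.csv")  # one row per fossil occurrence

richness <- occ %>%
  group_by(interval) %>%
  summarise(n_families = n_distinct(family))

diff(richness$n_families)   # change in family counts between intervals
```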

Sounds simple, right? Not quite…


The currently accepted pattern of changes in diversity on land, constructed using counts of fossil tetrapod (four-limbed vertebrate) families through time. This pattern shows an “exponential rise” in diversity, with more and more families appearing on land as time goes on. From Sahney et al. (2010), Biol. Lett. (The numbers 1–3 mark the end-Permian, end-Triassic and Cretaceous/Paleogene boundary mass extinctions.)

The problem is that the fossil record is inherently biased. When you think of a fossil, I can be almost certain that you think of a skeleton in a piece of rock. And that’s not wrong! Hard parts, such as bones, shells, and teeth, are much easier to preserve than soft squishy bits – bias number one. Luckily for vertebrate palaeontologists, like myself, we don’t usually run into this issue, as our study subjects have bones. But we do unfortunately encounter other biases. Some groups of animals contain many more individuals than others, and are therefore more likely to leave fossils behind (think huge herds of wildebeest vs. a pride of lions). Similarly, different habitats support more diversity than others (for example, the Siberian tundra vs. the African savannah). These ‘biological factors’ come into play before the fossilisation process even begins!


Groups of animals that exist in large numbers, such as wildebeest or antelope, are much more likely to leave behind some fossils for us to find than animals that don’t exist in such large numbers, such as lions. These biological factors affect the fossilisation potential of an organism waaay before the geological processes kick in!

The chances of an animal becoming a fossil are very slim indeed. Usually, after an animal dies, its body rots away or is devoured by predators and scavengers, never to be seen again. But sometimes conditions are just right: the body is buried quickly by mud or sand, rock begins to form, and the remains can be fossilised. As we look back further in time, our picture of the past gets a little fuzzier, as older rocks get overlain by younger rocks and mashed up by geological processes such as earthquakes and erosion. Fossils also only occur in sedimentary rocks (if you think back to your high school geography classes, you might remember that there are three types of rock: igneous, metamorphic, and sedimentary!), and sedimentary rocks are not found uniformly across the globe. So even finding a fossil is an extremely rare occurrence!

Human biases permeate all scientific disciplines, and palaeontology is no exception.

Sometimes it is easy to stumble across a large ‘mass grave’ containing hundreds of fossils, and sometimes these sites can be in very sunny, very beautiful countries worth visiting. Other times fossils have been found in isolation in areas where conditions are harsh, such as the important transitional fossil Acanthostega found in eastern Greenland. So, who’s up for a fun expedition to the wilds of Siberia in search of reptile fossils in the dead of winter? What, no? Yeah, me neither.

All of these factors (biological, geological, and human in origin) contribute to what are known as ‘sampling biases’, or biases that influence the amount and type of fossil data we have available for us to study.


An exquisitely preserved full body fossil of the extinct amphibian Phlegethontia longissima from the Mazon Creek fossil beds in Illinois, USA. Finds like this little fella are very rare indeed. Specimen housed at the Burpee Museum.

With these sampling biases stacked against us, it seems unwise to use simple counts of fossils to illuminate important patterns of diversity through time. This is where my research comes in. We are currently building a shiny new dataset within the publicly accessible Paleobiology Database (paleobiodb.org). With this dataset, we are able to apply more sophisticated statistical methods to our analyses and rigorously test the patterns of diversity change on land over the last 375 million years.
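As an aside for anyone who wants to play with the data themselves: the database also has a web data service from which occurrence records can be downloaded directly. The R sketch below shows one plausible query; check the documentation at paleobiodb.org/data1.2 for the exact parameters.

```r
# Sketch: download tetrapod occurrence records from the Paleobiology
# Database data service as CSV (query parameters may need adjusting).
url <- paste0("https://paleobiodb.org/data1.2/occs/list.csv",
              "?base_name=Tetrapoda&interval=Devonian,Permian&show=coords")
occs <- read.csv(url)

nrow(occs)   # how many occurrence records came back
```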

My research will allow palaeobiologists to answer the question: are we able to identify genuine patterns of diversity change, or are we simply viewing changes in the number of fossils available to study through time?

So, with so many millions of years to get through, where’s the best place to start? Why, at the beginning of course! My current work centres on the interval of geological time when the first vertebrates appeared on land and began to diversify, covering the following 100 million years. Given that the rocks containing these fossils are very old and poorly surveyed, our ability to identify genuine diversity patterns there is significantly distorted. However, the story does begin to improve as we move into the next 100 million years, where the fossils begin to reflect the true patterns of diversity.


Map of the world from the Paleobiology Database (paleobiodb.org) showing the locations across the world where tetrapod fossils have been found from the time they first appeared approximately 375 million years ago right up to the present day. You can create maps such as this for yourself at: paleobiodb.org/navigator!

My research has just begun to scratch the surface of decoding the diversity of life on land, and there’s still a long way to go! Studies such as ours are becoming increasingly relevant today as we try to anticipate the effects of the current biodiversity crisis happening across the world. Many animals worldwide are currently under threat of extinction, and if this pattern continues we may well find ourselves facing the terrifying prospect of a sixth major mass extinction.

Research into past extinction events can determine how ecosystems and animal communities responded in the aftermath of dramatic decreases in diversity, and I hope that my research looking into the geological past will give us some hope for the future.

Find out more:

https://www.theguardian.com/environment/2015/jun/21/mass-extinction-science-warning

https://theconversation.com/how-looking-250-million-years-into-the-past-could-save-modern-species-60338


What can the brain learn from itself? Neurofeedback for understanding brain function.

By: Dr. Kathy L. Ruddy

STEM editor: Francesca Farina

The human brain has a remarkable capacity to learn from feedback. During daily life, as we interact with our environment, the brain processes the consequences of our actions and uses this ‘feedback’ to update its stored representations, or ‘blueprints’, for how to perform certain behaviours optimally. This learning-by-feedback process occurs whether or not we are consciously aware of it.

The more interesting implication of this process is that the brain can also ‘learn from itself’, forming the basis of the ‘neurofeedback’ phenomenon.

Basically, if we stick an electrode on the head to record the brain’s electrical rhythms (or ‘waves’), the brain can learn to change the rhythm simply by watching feedback displayed on a computer screen. Because we know that the presence of particular types of brain rhythms can be beneficial or detrimental depending on the context and the task being performed, the ability to volitionally change them may have useful applications for enhancing human performance and treating pathological patterns of brain activity.

In recent years neurofeedback has, however, earned itself a bad reputation in scientific circles. This is mainly due to the premature commercialisation of the technique, which is now being ‘sold’ as a treatment for clinical disorders – for which the research evidence is currently still lacking – and even for home use to alleviate symptoms of stress, migraine, depression, anxiety, and essentially any other complaint you can think of! The problem with all of this is that we, as scientists, understand very little about brain rhythms in the first place: where do they come from? What do they mean? Are they simply a by-product of other ongoing brain processes, or does the rhythm itself set the ‘state’ of a particular brain region, enhancing or inhibiting its processing capabilities?

In my own research, I am currently working towards bridging this gap, by trying to make the connection between fundamental brain mechanisms, behaviours, and their associated electrical rhythms or brain ‘states’.

By training people to put their brain into different ‘states’, we were – for the first time – able to glimpse how brain rhythms directly influence these states in humans. We focused on the motor cortex, the part of the brain that controls movement, because there is a vast ongoing debate in the literature concerning whether changing the state of this region has implications for movement rehabilitation following stroke or other brain injury. Some argue that if the motor cortex is in a more ‘excitable’ state, traditional stroke rehabilitation therapies have enhanced effectiveness, compared to when the same region is more ‘inhibited’. Brain stimulation directly targeting the motor cortex has been used in the past in an attempt to achieve this more plastic, excitable state, but with mixed success and small effects that have proven difficult to reproduce.

In our investigation we used brain stimulation in a non-traditional way to achieve robust bidirectional changes in the state of the motor cortex. Transcranial magnetic stimulation (TMS) can be used to measure the excitability (state) of the motor system. By applying a magnetic pulse to the skull over the exact location in the brain that controls the finger, a response can be measured in finger muscles that is referred to as a motor-evoked potential (MEP). The size of the MEP tells us how excitable the system is. We developed a form of neurofeedback training where the size of each MEP was displayed to participants on screen, and they were rewarded for either large or small MEPs with positive auditory feedback and a dollar symbol. This type of neurofeedback mobilizes learning mechanisms in the brain, as participants develop mental strategies and observe the consequences of their thought processes upon the state of their motor system. Over a period of 5 days, participants were able to make their MEPs significantly bigger or smaller, by changing the excitatory/inhibitory state of the motor cortex.
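To make the reward rule concrete, here is a toy R sketch of threshold-based feedback. The numbers and the median-split rule are purely illustrative, not the actual experimental protocol.

```r
# Toy MEP neurofeedback rule: reward MEPs above (or below) a baseline
# threshold, depending on the training direction.
baseline  <- c(0.8, 1.1, 0.9, 1.3, 1.0)   # baseline MEP amplitudes (mV), made up
threshold <- median(baseline)

give_feedback <- function(mep_mv, target = c("up", "down")) {
  target   <- match.arg(target)
  rewarded <- if (target == "up") mep_mv > threshold else mep_mv < threshold
  message(if (rewarded) "reward tone + $" else "no reward")
  invisible(rewarded)
}

give_feedback(1.4, "up")   # a large MEP in the up-training group earns a reward
```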

Our next question was: how exactly is this change of state achieved in the brain? Are electrical brain rhythms changing in the motor cortex to mediate the changing brain state? Using this new tool to change brain state experimentally, we asked participants to return for one final training session, this time while we recorded their brain rhythms (using EEG) during the TMS-based neurofeedback. This revealed that when the motor cortex was more excitable, there was a significant local increase in high frequency (gamma) brainwaves (30–50 Hz). By contrast, higher alpha waves (8–14 Hz) were associated with a more ‘inhibited’ brain state, but were not as influential in setting the excitability of the motor cortex as the gamma waves.
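For readers curious how band power is pulled out of a recording, here is a minimal R sketch using a simulated signal; the sampling rate and the signal itself are made up for illustration.

```r
# Sketch: estimate alpha (8-14 Hz) and gamma (30-50 Hz) power from a
# simulated EEG trace using base R's spectrum().
fs  <- 250                                   # sampling rate (Hz)
t   <- seq(0, 2, by = 1/fs)
eeg <- sin(2*pi*10*t) + 0.5*sin(2*pi*40*t) + rnorm(length(t), sd = 0.2)

spec <- spectrum(eeg, plot = FALSE)
freq <- spec$freq * fs                       # convert cycles/sample to Hz

c(alpha = mean(spec$spec[freq >= 8  & freq <= 14]),
  gamma = mean(spec$spec[freq >= 30 & freq <= 50]))
```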

The implications of these findings are twofold. Firstly, having a tool to robustly change the excitatory/inhibitory balance of the motor cortex gives us experimental control over this process, and thus opens several doors for new fundamental scientific research into the neural mechanisms that determine the state of the motor system. Secondly, this approach may have future clinical potential, as a non-invasive and non-pharmacological way to ‘prime’ the motor cortex in advance of movement rehabilitation therapy, by putting the brain in a state that is more receptive to re-learning motor skills. As the training is straightforward, pain-free and enjoyable for the participant, we believe that this approach may pave the way for a new wave of research using neurofeedback in place of traditional electrical brain stimulation, as a scientific tool and an adjunct to commonly used stroke rehabilitation practices.