
Survival of the Fittest: A Sullied History of Science

"Darwin’s Finches" by Arne Hendriks is licensed under CC BY 2.0

We’ve reached a point in the pandemic where people are starting to wonder: what would you do in a vaccinated world? Of course, reaching that reality requires that the world continue to take the health risks of coronavirus seriously, and that we continue to be transparent about how vaccines are made, tested, and distributed. There has been a lot of mistrust of the multiple COVID-19 vaccines for a wide range of reasons — from the speed at which resources and knowledge came together, to the historic mistreatment of patients (particularly Black patients) in the name of science.

The hesitancy around the vaccines, as well as the continued refusal of many to wear face masks that are proven to slow the spread of the virus, has led my peers and me to ask: how do we restore trust in science? Scientists often attempt to stay aloof from their social impact in an effort to remain unbiased in their research, but this also results in their findings being twisted to fit other narratives.

A twisted idea I’d like to leave behind with this pandemic is the Social Darwinist idea of “survival of the fittest.” 2020 was the year that I finally learned who coined that phrase. No, it was not Charles Darwin (though he picked up the phrase for later writings), but rather the writer Herbert Spencer. Spencer was a contemporary of Darwin who philosophized on many topics, and while you can read his essays for free on Project Gutenberg… I’d advise that you don’t. Spencer was a white supremacist and eugenicist, and you don’t have to read much of his work for it to be obvious who he considers “civilized men” and who he considers “savages.” Spencer used Darwin’s theories to argue that certain sectors of mankind would naturally dominate others; that it was in our nature to crush others under our heels. Because of his supremacist views, Spencer (deservedly) faded from public consciousness. However, the origins of this phrase still affect how we interpret evolutionary ideas today. And “survival of the fittest” is one of the worst, most pervasive misinterpretations of science that I can think of.

When talking about “fitness” in a scientific context, biologists don’t use the term to mean “are you fit enough to run a 5K?” Instead, you can think of it more like a puzzle piece fitting into the greater image. Is an organism well adapted to its environment? Does it produce offspring that are also well adapted? Then it has high fitness. UC Berkeley’s Understanding Evolution site has a succinct explanation: “The fittest individual is not necessarily the strongest, fastest, or biggest… fitness includes [the] ability to survive, find a mate, produce offspring — and ultimately leave [genes] in the next generation.”

Darwin, in On the Origin of Species, explained the concept of fitness through his theory of natural selection, which is a mechanism for change. If fit organisms produce lots of offspring then, assuming their traits are coded in genes, the next generation will have more individuals that share traits with the fit parents. Gradually, over many generations, those traits — ones that help a creature survive and reproduce — will become more common in the population. Hence, we have evolution.
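
That generational ratchet can be made concrete with a toy simulation (an illustrative sketch of my own, with made-up fitness numbers — not a model from the post or the biology literature):

```python
import random

def next_generation(population, fitness, size=1000):
    """Sample offspring in proportion to parental fitness.

    population: list of trait values ("A" = advantageous, "a" = neutral)
    fitness: dict mapping trait -> relative reproductive success
    """
    weights = [fitness[trait] for trait in population]
    return random.choices(population, weights=weights, k=size)

random.seed(42)
# Start with the helpful trait "A" in only 10% of individuals.
pop = ["A"] * 100 + ["a"] * 900
for generation in range(50):
    pop = next_generation(pop, {"A": 1.05, "a": 1.00})

# After 50 generations, a mere 5% reproductive edge has made
# trait "A" far more common than it started.
print(pop.count("A") / len(pop))
```

Run it with different seeds and the helpful trait still wins out on average, though drift means any single run can wander — which is exactly why evolution is messy rather than a tidy, linear march.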

It is worth reiterating that evolution is not a progression towards smarter, stronger, or faster, nor does evolution progress from the simple to the complex. Similarly, there are no organisms that are “more evolved” than others. Spencer and his contemporaries, however, would have you believe that complexity is the basis for evolution; that we can put trees, turtles, and dogs on a ladder up to the pinnacle of evolution: mankind. 

In reality, I (or any human) am no more or less fit, and no more or less evolved than, say, a slime mold. Both of our lineages have survived until this point because we are matched to the respective habitats we live in. Evolution is fickle, and just because an organism hasn’t changed its shape much in millions of years, like horseshoe crabs, that doesn’t mean the organism is “less evolved.” Evolutionary forces have been acting on horseshoe crabs just like any other species, but these organisms were already so well adapted that “fit” traits were highly prevalent and saw little change. If you need more proof that evolution is way more chaotic than a simple, linear progression, you can turn away from horseshoe crabs (which aren’t actually crabs at all, but chelicerates more closely related to spiders), and look at the evolution of true crustaceans, which have independently evolved into crab-like outer shapes multiple times throughout the history of the planet.

I compared humans and slime molds before because even these “simple” creatures share something remarkable with humans: altruism. In harsh environments, single-celled amoebae of the species Dictyostelium discoideum will gather together and form a multicellular slime mold with a fruiting stalk. The amoebae composing the stalk die as the stalk hardens, but the remaining amoebae can climb it and be carried away on the wind to greener pastures. Such sacrifices for the greater good are common throughout nature.

For humans, the greater good doesn’t always require an act so drastic, but it does ask that we think about our community before ourselves. In fact, caring for each other is what allowed us to survive as a species, and lack of compassion may have been why other Homo species died out. Fossils of early humans have revealed healed bones and physical deformities which would have required others to provide for their survival. In this context, “survival of the fittest” becomes laughable. Early humans didn’t abandon their weak or sick or elderly because community was (and is) crucial.

The skull of Shanidar I, who survived multiple injuries. “Shanidar I skull and skeleton” by Osama Shukir Muhammed Amin FRCP(Glasg) is licensed under CC BY-SA 4.0

With a better understanding of evolution and of early human life, it becomes clear that social Darwinist interpretations were products of a particular historical moment and do not hold up today. Yet because Spencer titled his foundational work First Principles, and because we revere Darwin for the theory of natural selection, we have come to the cultural conclusion that “survival of the fittest” and all its associations are, indeed, principles fundamental to life. But natural selection can be separated from white-knuckle competition as the driving force of life. For example, Robin Wall Kimmerer, a Potawatomi professor at SUNY-ESF, offers this interpretation of natural selection:

“There is no question but that all living beings experience some level of scarcity at various points, and therefore that competition for limited resources, like light or water or soil nitrogen, will occur. But since competition reduces the carrying capacity for all concerned, natural selection favors those who can avoid competition. Oftentimes this is achieved by shifting one’s needs away from whatever is in short supply, as though evolution were suggesting ‘if there’s not enough of what you want, then want something else.’”

Darwin was not the only or earliest writer on the ideas of evolution, any more than Spencer was the voice of authority on its social implications. Yet we’ve let the ideas of these men and their contemporaries dictate public understanding of evolutionary biology and ecology for one hundred and fifty years. In the process, we have concealed the dark history of bigotry by scientific leaders — look no further than the full title of Darwin’s most famous work: On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. This willful ignorance has kept us from examining evolutionary ideas in a harsher light.

Until we face the ugly history of how these ideas came to be, we cannot hope to change how they are used. Science cannot be trusted as a force of social change if it continues to ignore its social impact. However, if we leave behind preconceived notions of what must be, and step back to look at what is, we can upgrade outdated ideas. Just last month, while Texas was being ravaged by an ice storm that led to over 100 deaths, a Texas mayor wrote that it was not the government’s responsibility to provide support and that “only the strong will survive and the weak will [perish].” But fundraising by Representative Ocasio-Cortez and mutual aid efforts throughout the state provided community relief in spite of such beliefs. Rather than insist that only the strongest individuals will triumph, we can come together as communities to support each other. The pandemic, too, reminds us that miraculous things happen when we work together and share resources: in one year’s time, we developed multiple vaccines to a previously unknown virus. Community and compassion are how humans survived in the past, and it’s how we can build a better future.

Biases in STEM: Gender Discrimination Affects UGA Faculty


“I think my career was negatively impacted by being a woman,” said Ellen Neidle, professor of microbiology at the University of Georgia (UGA), who has a doctorate in molecular, developmental, and cellular biology. “I always thought that women could do everything. It was a pretty big shock to me when I first realized that wasn’t true.”

The phenomenon known as the “leaky pipeline” describes the way in which women become underrepresented minorities in STEM fields, according to an article from the Graduate Student Association at the University of Maryland. If water is poured into a pipe with leaks along its length, a small amount of water will emerge at the end. In this metaphor, the pipe represents the path to a career in STEM, and the water represents anyone pursuing such a career.

“More women drop out of the pipeline in going from Ph.D. student to professor than men,” said Cassandra Hall, a new assistant professor of computational astrophysics at UGA, who has a doctorate in astronomy.

The leaky pipeline affects women in various STEM fields differently. “Women in biological and life sciences represent more than half of the students earning Ph.D.’s, yet most fail to achieve tenure track faculty positions in academia,” according to the Graduate Student Association of the University of Maryland.

In other fields, this disequilibrium begins even before the undergraduate level. Fewer women choose math, engineering, and physics as majors in the first place, according to a 2014 study published in the journal Proceedings of the National Academy of Sciences. This may be a self-perpetuating problem, as the low numbers of women going into these fields leaves young women with a lack of role models.

Two common reasons for this phenomenon are that women are more likely to put an emphasis on work-life balance, and that male postdocs are twice as likely to expect their partners to make career sacrifices on their behalf, according to the study.

“I certainly have some friends who are wives who feel like they’ve given up their careers for their families,” Neidle said. 

Dr. Ellen Neidle. Photo by Amy Weir. Used with permission.

Women hold only 37% of all faculty positions and only 28% of tenured faculty positions at UGA, according to UGA’s 2019 Fact Book. Neidle feels that women are underrepresented because UGA does not foster an environment of retention. She has noticed an increase in female graduate students deciding they do not want to pursue a career in academia. “What does that say about the portrait we’re portraying?” she said.

Neidle has been an advocate for supporting women in science at UGA since she began working there in 1994. “I’ve always been a squeaky wheel, and being a squeaky wheel makes you feel more and more ostracized.” She said that being a “squeaky wheel” means she is often vocal about her complaints when she notices a discrepancy with diversity and inclusion. 

At a prestigious scientific conference held at UGA in 2007, Neidle noticed that out of more than 15 world-renowned speakers, only one was a woman. “I complained to the department, and there’s always an excuse,” she said. “And then everybody gangs up and argues against me, whether it’s on these email lists, or just discounting what I have to say.” 

Neidle’s experience of being marginalized by male colleagues is not unique. It is an institutional issue that is perpetuated by underrepresentation of women and reinforced by gender stereotypes. With a lack of role models and support from female colleagues, stereotypes of women being less “intelligent” and “competent”, and being more “emotional”, are easily maintained, according to a 2019 study published in the Journal of Neuroscience.

Hall has also experienced and witnessed gender discrimination in her field. “A lot of the time it’s not malicious, it’s a subconscious bias, but that’s why I think it’s really important to address the issues,” she said.

Hall recognizes that being white and cisgender puts her in a position of privilege. “I’ve been incredibly lucky in many senses because the challenges that I’ve faced have been mostly offhand comments,” she said. “It seems to happen to my colleagues more than it happens to me, but it could be my own bias that I’m not noticing it’s happening to me.”

Hall noted that she is aware of many issues with bullying and sexual harassment in her field, and that she knows of women who have internalized trauma from these situations who have had to seek therapy. She said it is important to have good networking between women within the STEM fields because it can act as a lifeline in these scenarios. She also said, “it helps to have other women to run ideas past for equity and for making things better, as well as to act as a sounding board.”

Hall said she feels that being a gay woman sometimes causes people to trust her judgements more than those of her straight counterparts. “It’s interesting that society has deemed that because I’m slightly more masculine presenting, that I’m somehow slightly more worth listening to.”

Hall was awarded a Winton Exoplanet Fellowship in 2018, which she said felt monumental to her both as a woman and as an open member of the LGBT community. 

After she got the email about the fellowship, one of Hall’s colleagues told her not to “out” herself in her interview. She decided that if Winton has a problem with her being LGBT, then she does not want their money. “I’ve got thick skin, so it’s water off a duck’s back to me,” she said.

Neidle said she loves her career and her interactions with her grad students, and that there are aspects of being in academia at UGA that are “really wonderful”. However, she said there are simply certain issues that need to be addressed. 

Neidle has been involved with several initiatives to get funding from the National Science Foundation (NSF) to support increased diversity. She has been in charge of her departmental seminar series for the last few years, where one of her duties is to select invited speakers. “I can make sure that half of the speakers will be women, and that there will be people of color coming in,” she said. “I’m trying to find places where I can make a difference.”

Neidle said she has learned to pick her places, and to figure out where she can and cannot fight. “There has to be a balance. Some of it is being a squeaky wheel, some of it is fighting for things, and some of it is just quietly changing things you can change,” she said. 

Hall said it bothers her when people assume that things naturally get more “liberal” over time. She said that change requires conscious effort, whether it be for civil rights, women’s rights, or LGBT rights. “Don’t assume that things always get better, because they don’t, they are hard won victories by people who often sacrificed a lot to do so,” she said.

Neidle said, “I think having more diversity brings creativity, it brings in different perspectives. You want to share ideas universally across all boundaries, so it just seems obvious to me.”

Both Neidle and Hall hope to see a future where the demographics of successful people in STEM accurately reflect that of society, and where a person’s success is not impacted by implicit biases. Hall said that being aware of these biases is the first step to eradicating them because if you know they’re there, then you can do something about it.

Main image credit: Quaerens-veritatem licensed under CC BY-SA 4.0 via Wikimedia Commons.

Wall of Destruction: The impact of the US-Mexico border wall on wildlife


Growing up in Arizona, we were told that people could go to jail for damaging a saguaro cactus. Saguaros are a protected symbol of the Southwest. Yet in 2019, videos shot by Kevin Dahl, the Arizona Senior Program manager for the National Parks Conservation Association, recorded bulldozers uprooting saguaro cacti and other desert shrubs at the United States and Mexico border in preparation to build a wall.

Man-made barriers have long impacted their surrounding environments. Most large-scale barriers are erected for national security reasons, with little regard for local wildlife. Currently, a 30-foot-tall steel wall is under construction to act as an impermeable barrier along the nearly 2,000-mile border between the United States and Mexico, and it is destroying everything in its path.

Border Wall I by Russ McSpadden via Flickr is licensed under CC BY-NC 2.0.

How will the US-Mexico border wall impact wildlife?

The plans for the US-Mexico border wall span desert, woodland, grassland, and wetland ecosystems that are rich with biological diversity. One study shows that the wall will transect the habitats of at least 1,506 native terrestrial and freshwater animal and plant species. According to the Center for Biological Diversity, this number includes 93 imperiled species (i.e., endangered, threatened, or under review for protection). 

One of these 93 species is the Río Yaqui fish, which relies on rare desert springs and streams for its habitat. Not only are these water reserves already susceptible to persistent drought and increasing temperatures, a borderlands campaigner for the Center for Biological Diversity states, “there’s good reason to believe that the Yaqui fish’s only US habitat is drying up as a result of tens or hundreds of thousands of gallons of groundwater being pumped to build the border wall.” These freshwater fish species are facing possible extinction as their habitats are sucked dry.

Additionally, all but five of the 93 species have populations on both sides of the US-Mexico border line, meaning a wall will split these endangered populations into even smaller units. One of the most endangered mammals of North America, the Mexican gray wolf, was actually starting to recover its numbers after decades-long binational conservation efforts. This effort could be squandered as the wall splits the vulnerable group, preventing necessary genetic exchange for its continued survival.

Wolf Jokes by MTSOfan via Flickr is licensed under CC BY-NC-SA 2.0

Even low flying animals are threatened by the wall. The Quino checkerspot, a fast-flying butterfly that ranges from the Santa Monica Mountains to Baja California, Mexico, is already facing extinction due to habitat loss from land development. In addition to preventing contact between surviving populations, the US-Mexico border wall will directly harm native vegetation that this butterfly relies on to reproduce. As a consequence, it will be a challenge for Quino checkerspots to recover their population sizes and maintain important genetic variability. 

It does not matter whether endangered species along the US-Mexico border live in water, on land, or in the air – the construction of the wall will destroy or fragment their habitats. A wall reduces overall landscape connectivity, limiting access to food, water, mates, or migration corridors. The examples above represent just a few of the diverse endangered species that will be affected.

If these species are endangered, then why aren’t they federally protected?

Because of the importance of the species and landscapes along the US-Mexico border line, many environmental laws are set in place to protect them, including the Endangered Species Act and the National Environmental Policy Act. However, under the Real ID Act of 2005, the Trump administration can override environmental laws that would slow construction of the US-Mexico border wall. Indeed, the Department of Homeland Security has waived 48 environmental laws set to protect species and habitats along the border line. Though the ecological damage is likely unintended, it is difficult to ignore the complete disregard for these critical ecosystems.

How do we help? 

Different organizations have focused their efforts to defend conservation laws, conserve endangered species, and rebuild habitats. One organization in particular, Defenders of Wildlife, has filed a lawsuit in the hope that the Supreme Court will review the constitutionality of the Real ID Act. Their two-part report also describes how US and Mexican agencies are teaming up for conservation projects in the Lower Rio Grande area, including efforts to document animals and plant native vegetation to restore habitats. The Defenders of Wildlife website lists ways for you to take action and have a voice in helping protect threatened and endangered wildlife.

Most importantly, we must take time to truly understand the consequences of political motives on wildlife. It is also our responsibility to protect critical ecosystems with our daily choices and be thoughtful of our votes come election time. We are not the only ones to call this land our home.

Breaking the two-hour tape: Engineering the fastest marathon run in history


What does it take to reach the peak of athletic performance and break barriers thought to be beyond human capabilities? One of these barriers is the two-hour marathon, a feat which requires running 26.2 miles while maintaining an average pace of 4:34 per mile. At that speed, you could run the 100-yard length of a football field in under 16 seconds! With improvements in training and exercise physiology, the men’s marathon world record has steadily decreased yet still lingers just above two hours. Some scientists believed the two-hour barrier would not be broken while others said it was only a matter of time.
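
For concreteness, here is the quick arithmetic behind those numbers as a few lines of Python (a back-of-the-envelope check of my own, not anything from the event itself):

```python
# Required pace for a sub-two-hour marathon, and what that pace
# means over the 100-yard length of a football field.

MARATHON_MILES = 26.2
YARDS_PER_MILE = 1760

two_hours_s = 2 * 60 * 60
pace_s_per_mile = two_hours_s / MARATHON_MILES      # seconds per mile

minutes, seconds = divmod(pace_s_per_mile, 60)
print(f"Required pace: {int(minutes)}:{seconds:04.1f} per mile")  # ~4:34.8

# Time to cover 100 yards at that pace:
t_100yd = pace_s_per_mile * (100 / YARDS_PER_MILE)
print(f"100 yards in {t_100yd:.1f} s")  # just under 16 seconds
```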

Image Credit: Pedro Perim via Wikimedia Commons. Licensed under CC-BY-SA-4.0

Enter Eliud Kipchoge, the Kenyan long-distance runner who holds the world record for the fastest marathon after finishing the 2018 Berlin Marathon in 2:01:39. Winning 12 of the 13 marathons he entered, Kipchoge is widely regarded as the best marathoner of modern times. Kipchoge has a long history of success, including winning middle-distance championships in the early 2000s and becoming the 2016 Olympic marathon champion. After falling short of a sub-two-hour marathon by only 26 seconds in Nike’s Breaking2 project, Kipchoge would again attempt to break the two-hour barrier at the Ineos 1:59 Challenge. But when every second counts, what would it take to allow the best of the best to approach the limits of human performance?

To improve running economy (how efficiently the body turns energy into running motion), Kipchoge wears a groundbreaking model of running shoes designed for marathoners. These lightweight shoes contain a carbon-fiber plate and a midsole with thick foam which ultimately lessen the energy needed to flex joints in the lower body, reducing overall demand for energy. The foam is also flexible and resilient; after a runner’s foot strikes the ground, the foam is able to push back (similar to a spring) to help propel the runner forward. Combined, these small running economy boosts can help shave seconds off Kipchoge’s pace. 

Image Credit: Vianney de Montgolfier via Behance. Licensed under CC BY-NC-ND-4.0. 

To decrease wind resistance and ensure he keeps pace, Kipchoge is assisted by 42 world-class runners, including Olympic medalists. These pacers run in front of him in a V-formation, rotating in and out throughout the challenge. An electric car helps pacers maintain formation by projecting lasers onto the pavement. Periodically, a person on a bicycle provides Kipchoge with hydration and fuel in the form of a carbohydrate-rich drink mix, replacing water stations usually present in races.

Aside from training, fueling, and economy, there are a range of external factors which influence performance. A low-altitude environment with overcast conditions, minimal winds, and a temperature around 40-50°F set the stage for the best performances. High humidity and temperatures challenge the body’s ability to regulate temperature, raise lactate production, and ultimately decrease running efficiency. To optimize Kipchoge’s performance, event organizers narrow down locations, dates, and times until they find conditions sufficient for his next attempt.

On October 12th, 2019 in Vienna, Austria, temperatures ranged between 43-57°F with minimal rain, moderate humidity, and winds averaging about 5 mph. This location was under three hours from where Kipchoge lived (reducing jet lag) and at a low altitude, facilitating a higher concentration of oxygen in the air. Here, Kipchoge would attempt a sub-two-hour marathon by completing 4.4 laps around a flat, tree-shaded course consisting of two long stretches with small loops at each end. That morning, Kipchoge started off strong, maintaining a consistent speed aided by the pacers smoothly rotating in and out along the course. Reaching the halfway point 10 seconds ahead of pace, Kipchoge and the pacers steadily progressed. With just over half a mile to go, the pacers were waved off and Kipchoge accelerated down the final stretch. Waving to the crowd and pumping his chest, he crossed the finish, completing the 26.2 mile course in a breathtaking 1:59:40.

Image Credit: Michael Gubi via Flickr. Licensed under CC BY-NC-2.0.

While he shattered the two-hour barrier, Kipchoge’s time does not count as a marathon world record as event conditions did not meet official standards. The Ineos 1:59 Challenge was not an open event, Kipchoge was led by rotating pacers and a pace car, and he was handed fluids by cyclists. Yet, he is still recognized as the first human to run the marathon distance in under two hours. So what is next for breakthroughs in the marathon? Thirty years ago, scientists predicted an ideal athlete in perfect conditions could run a marathon in 1:57:58. In a marathon compliant with world record criteria, is this possible? As Kipchoge stated in an interview following his sub-two-hour marathon run, “Personally, I don’t believe in limits.”

Sleeping Beauty Seeds


This year I’ve been reading a lot about seed dormancy and while we’re all hunkered down, sheltered in place during the COVID-19 pandemic, I can’t help but feel there’s an apt comparison to be made. Most plants don’t get a lot of input on where they land as seeds, but they do have a say in when they sprout. Seeds in the soil will wait all winter, or sometimes for multiple years, before deciding that conditions are just right to launch into the world as seedlings. If you’ve been on walks around your neighborhood (a great way to stay sane right now), you’ve probably seen this process in action as different plants rapidly sprout and grow, like the oak pictured below. One of my favorites — bluebonnets — are just starting to bloom as spring comes into full swing.

A sprouting red oak. Credit: Kelly McCrum, used with permission.

Plants “spring up” now because one trigger to begin germination is warming temperatures after a cold period. This is also why, if you’ve ever tried to sprout seeds at home, you’ll see recommendations to put seeds in your refrigerator for a week or so before planting. Spring and summer bring rain, the elimination of frost risk, and plenty of light for plants to produce the energy they need for growth. Seedlings do their best to time emergence during these favorable conditions so that they can grow as much as possible before colder weather returns.

 In addition to environmental factors, there are internal triggers for germination, too. If you’ve ever cut into a tomato or an apple and found the seeds have already started sprouting, you’ve stumbled upon plant vivipary. This phenomenon is caused by fluctuating hormone levels in the seeds – namely, running out of the ‘dormancy hormone’ known as abscisic acid. Using both external and internal cues to break dormancy lessens the chance that a plant will sprout too early, such as during a warm spell in February when there’s still a risk of damaging frost later in the season. 

 Vivipary in a tomato.
Source: Tomato seeds premature sprouting licensed by mykhal under CC BY 2.0

Amazingly, some seeds can wait centuries for the proper signals to break dormancy. If you’re a plant nerd like me, you may remember the Judean date palm, nicknamed “Methuselah,” that scientists sprouted from a 2,000-year-old seed in 2005; he’s still doing well, and scientists are attempting to breed him with modern and other ancient date varieties. Other successful germination attempts include the 1,300-year-old lotus seed from China that scientists sprouted in 1994. Unfortunately, the resulting plant was sacrificed to carbon dating, but the lead scientist on the project, Jane Shen-Miller, has since been caring for other centuries-old lotus plants. Re-growing ancient plants can give us information about the aging process, as well as about the evolution of a plant and its associated diseases. It’s the plant equivalent of Jurassic Park, with no frog splicing required!

Methuselah, the ancient Judean date palm. Credit: Methuselah-Ketura-2018-10 licensed by DASonnenfeld under CC BY-SA 4.0

While seeds are lying dormant, whether it be for centuries or a few months, they are part of the ‘soil seed bank.’ Humans also create seed banks to preserve genetic diversity of crops (which ASO has written about here), and these natural seed banks serve similar purposes. In the case of a natural disaster, having a seed bank means that a given plant species won’t become locally extinct. Less drastically, an individual plant can be assured more of its offspring will survive if some of its seeds wait to germinate in the following growing season. This has been studied in desert annual plants, where the harsh environment almost guarantees that not all seedlings will survive to adulthood. If some seeds have remained dormant in the soil (though what fraction does so varies year to year) then the parent plant still has offspring that might survive in the next year. This phenomenon is especially important in plants that can only make seeds once before they die, though seed dormancy exists in plants of all life histories.
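
To see why hedging pays, here is a deliberately simple model (my own toy numbers, not data from the desert-annual studies): one lineage germinates every seed immediately, the other keeps half of its seeds dormant each year.

```python
import random

def lineage_size(dormant_fraction, years=100, seed=1):
    """Toy bet-hedging model for a desert annual's seed bank.

    Each year is 'good' (germinated seeds yield 2 new seeds each)
    or 'bad' (all germinated seedlings die). Dormant seeds wait in
    the soil, but a small fraction is lost to decay each year.
    """
    rng = random.Random(seed)
    seeds = 100.0
    for _ in range(years):
        germinating = seeds * (1 - dormant_fraction)
        dormant = seeds * dormant_fraction * 0.9   # 10% decay in the soil
        if rng.random() < 0.7:                     # 70% chance of a good year
            seeds = germinating * 2 + dormant
        else:                                      # bad year: seedlings die
            seeds = dormant
    return seeds

# Germinating everything risks total extinction in one bad year;
# holding seeds back keeps the lineage alive through bad years.
print(lineage_size(dormant_fraction=0.0))
print(lineage_size(dormant_fraction=0.5))
```

The exact numbers are invented, but the qualitative result is robust: the all-in strategy goes extinct the first bad year it hits, while the hedged lineage persists.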

Of course, humans can’t bury themselves and wait decades to emerge from a shelter-in-place order. But in the meantime, maybe it’ll help to imagine yourself like a little seed: waiting out a hard winter and prepping for the day that you can stretch back into the sunlight of normalcy.


Forget What You Know About Alzheimer’s


Alzheimer’s disease (AD) is the sixth-leading cause of death among adults in the US. Its progression is devastating: the brain slowly deteriorates, cognitive ability degrades, and bodily functions gradually shut down. Given our aging population and the huge financial burden of care, the National Institutes of Health is expected to contribute almost $3 billion to AD research in the year 2020 alone. Researchers worldwide have been working for decades to find a treatment. Despite their best efforts, treatments have proven mostly ineffective in clinical trials. 

Comparison of the normal brain structure (left) versus brain structure of a person with Alzheimer’s (right). Image courtesy of Garrondo via Wikimedia Commons. Licensed under Public Domain.

Some scientists now argue that the very foundation of AD research may be outdated. There is surprising evidence that AD could be triggered by an infection, rather than some intrinsic property of the brain. If true, that means decades worth of research and development may be aimed at the wrong molecular targets. Understandably, skepticism abounds among the research community. What happens when dissenting new evidence butts up against established medical paradigms?

What We Think We Know

Our current AD paradigm is based on late-1980s observations that advanced AD patients accrue massive amounts of misfolded beta amyloid (Aβ) protein peptides in their brains. Known as plaques, these protein bodies are believed to trigger further neurodegenerative processes. The amyloid hypothesis is supported by genetic evidence, as the most significant genetic risk factor for AD is the APOE4 allele. ApoE is a protein which helps to clear Aβ peptides from the brain, but the ApoE4 variant is impaired in this activity. However, treatments designed to target Aβ peptides have never been successful.

Depiction of Aβ plaques (orange) and tau neurofibrillary tangles (blue) in the brain. Image courtesy of NIH Image Gallery. Licensed under Public Domain.

An alternative theory, the tau hypothesis, focuses on tau protein fragments that form aggregates called neurofibrillary tangles inside of nerve cells. Proponents believe that this is the biochemical cause of AD pathology. Like plaques, tau tangles accumulate in the brains of AD patients. Tau is increasingly becoming a target of new drug development, but again, clinical trials have been mostly negative.

There is little doubt at this point that Aβ and tau play important roles in AD; their near-universal prevalence in AD patients cannot be ignored. But why is it, then, that anti-Aβ and anti-tau drugs have so far proven ineffective?

A New Context for Old Discoveries

New research is providing intriguing possibilities to answer this question. Given current trends in microbiology, perhaps it is not surprising that numerous links have been proposed between AD and the microbiome. Several independent labs now believe they have identified a specific culprit: Porphyromonas gingivalis. P. gingivalis is the main bacterium involved in gum disease. What’s staggering is that this bug is able to invade the brains of ApoE-deficient mice, cause inflammation in the same brain regions that are affected by AD, and even induce production of Aβ plaques in the brains of previously healthy mice.

Recently, researchers found evidence of two P. gingivalis toxins in the hippocampus — the brain’s memory processing center — in 91-96% of the 54 human AD brain samples that they tested. These toxins, known as gingipains, give the bacteria the ability to invade and feed on human tissues. Higher gingipain levels correlated directly with higher levels of tau neurofibrillary tangles in the human samples, a marker of cognitive impairment. In a mouse model, treatment with gingipain inhibitors reduced inflammation in the brain, blocked formation of Aβ plaque components, and even rescued damaged neurons in the hippocampus. Although initial sample sizes are small, there is compelling evidence that infection may play a role in the onset or exacerbation of AD.

Changing the Paradigm?

Proposing that AD is an infectious disease seems counter to everything we’ve ever known about this illness. Assuming for a moment that this hypothesis is true, what does that mean for traditional AD research? Maybe Aβ plaques and tau bodies are not the direct causes of AD, but rather the symptoms of infection. Maybe this is why targeting these proteins has proven so ineffective. Maybe this is why there have been no significant breakthroughs in AD treatment in 40 years.

Despite the saying, science is not a perfect science. We are always bound by the limits of the information available to us at the time. What these novel studies demonstrate is that we don’t have all the information yet regarding AD. We should be willing to entertain radical new ideas that are supported by evidence, rather than hold tight to established yet fruitless paradigms. Now is a time when we can choose to be open to new ideas, or we can continue to delay life-saving advances while again confirming what doesn’t work.

Loved ones raise funds and awareness for AD research. Image courtesy of Susumu Komatsu Photography, Licensed under CC BY 2.0.

About the Author

Jennifer Kurasz

Jennifer Kurasz is a graduate student in the Department of Microbiology at UGA, where she studies the regulation of RNA repair mechanisms in Salmonella. When not in the lab, she prefers to be mediocre at many hobbies rather than settle on one. She greatly enjoys her women’s weightlifting group, cooking, painting, meditation, craft beer, and any activity that gets her outdoors. She can be contacted at jennifer.kurasz25@uga.edu. More from Jennifer Kurasz.

Fanning the flames


In recent years, it feels like we have watched parts of the world be swallowed whole by fire, painting a very apocalyptic picture of the future. Nearly 40,000 square miles of Australia burned in bushfires last year. California’s Camp Fire displaced about 50,000 residents, and Indonesia saw over 2 million acres of land consumed by flames, including precious orangutan habitats. The scale and frequency of this destruction feel unprecedented, but what’s causing these fires? And why now?

The 2018 “Camp Fire” in California was the deadliest and most destructive fire in CA history. Credit: NASA via Wikimedia Commons licensed under Public Domain

In the Devonian period, around 400 million years ago, the spread of trees produced a rich supply of oxygen in the atmosphere; with that came the natural process of forest fires ignited by lightning. The fiery landscape continued for millions of years, whilst organisms evolved alongside it – the results of that coevolution are easily seen today. For example, the Jack pine, a pine tree native to the northern US and Canada, evolved serotinous pine cones, which only open to spread seed in intense heat. The longleaf pine, the Jack pine’s southern relative, developed different growth stages that revolve around fire, with its seedlings being essentially fire-proof. Smaller understory plants evolved to store more of their carbon-rich biomass underground, effectively ‘hiding’ more of themselves until the fire passed. Plants weren’t the only ones to innovate – animals, too, have evolved to thrive in a fiery ecosystem. Take the gopher tortoise, for example, which is considered an ‘ecosystem engineer’. The tortoise’s burrows become the perfect underground bunker for it, and hundreds of other species, to wait out a fire.

Gopher tortoise burrows can shield hundreds of species of animals during fires. Illustration by Emma Roulette. Used with permission.

Historically, fire has been a valuable tool for humans and their ancestors, who have used it since as early as one million years ago in Africa. With their arrival in North America around 14,000 years ago, early humans learned to use controlled fires – what we now call prescribed burns – to their benefit. The purpose of these burns was multifaceted. They provided mineral ash and carbonized leaf litter, which settled and fertilized the soil, creating a rich substrate for agriculture. The fires also facilitated hunting: the tender new growth of shrubs and grasses attracted animals, which hunters could more quietly stalk in the newly cleared forest.

Grass growing back a few days after a prescribed burn. Photo by author.

These adaptations by plants, animals, and people played a role in the establishment of fire-dependent ecosystems. In North America alone, these range from savannah plains and swamps to conifer forests. However, with the arrival of colonialism on the continent in the 15th century, forest fires were deemed destructive and wasteful. Ironically, the land the colonists saw as pristine and untouched by man was the result of millennia of fire practices by natives. The combination of fire suppression and excessive logging by the colonizers left ecosystems massively disturbed. Flammable forest debris build-up, what foresters call “fuel,” was left unchecked, leading to devastating wildfires. These wildfires, unlike prescribed fires, are uncontrollable, extremely hot, and damaging to plants and animals. As early as 1910, a series of wildfires known as the Big Blowup swept through three states and killed 85 Americans. These are the same fires we see today, torching millions of acres on the news.

Native Americans shaped the landscape for thousands of years before the arrival of Europeans. Painting by Frederic Remington National Gallery of Art, Washington D.C. licensed under Public Domain.

Now that we better understand forest and fire ecology, forest managers can facilitate natural processes by incorporating prescribed burns into management practices. But prescribed burning is not implemented in every fire-dependent ecosystem in the US: it is heavily used in the southeast, whereas in the west it remains far less incorporated into management practices.

Graphic showing acres burned by wildfires and prescribed burns in the US. It’s no coincidence that areas that undergo prescribed burns suffer less damage from wildfires. Image credit: https://www.climatecentral.org/

Of course, prescribed fires near residential areas or highways can present urgent safety concerns, especially for those with respiratory illnesses. Non-fire solutions to prevent the accumulation of fuel have been explored, such as allowing goats to intermittently graze potentially flammable grasses. The Forest Service also puts an emphasis on outreach and education, providing people with the tools and knowledge to prevent wildfire on both public and private lands.

Prescribed fires can be initiated using many tools, including a drip torch, shown above. Video by author.

There is no singular answer as to why wildfires like the ones we see in Australia and California are so destructive – but it can be boiled down to fire suppression and the dark cloud looming over everyone’s environmentally conscious head, climate change. There is no doubt that the frequency and severity of wildfires will increase as extreme droughts and higher temperatures are expected in the wake of climate change. Mitigation of climate change is imperative to reducing these wildfires, and so is education. The US Forest Service is taking measures to educate the public on the benefits of prescribed fires and how we can prevent wildfires, since more than 80% of wildfires are caused by people.


Our dependence on forests is deeper than one can imagine – in unexpected ways too – and preserving these ecosystems is integral to saving ourselves, the land’s history, and the millions of beings who were here before us. Find out more on how to prevent wildfires and how to curb climate change.




Double Merle Dogs


Dog coats come in a seemingly endless variety of patterns, lengths, textures and colors, determined by their genetic makeup. Just 8-14 different genes are responsible for most of these differences in coat color and pigmentation. Dogs inherit two alleles, or variations, of each of these genes, one from the father and one from the mother. Alleles can be dominant, where the effect appears with just one copy, or recessive, where the effect only shows when two copies are present. The resulting combinations of inherited alleles influence certain aspects of coat color. One of these genes, the merle gene, impacts coat color by producing distinguishing markings in numerous breeds.

The merle gene exists as two alleles: the dominant allele Merle (M), and the recessive Non-merle (m). If a dog inherits the dominant M allele from at least one parent, it will have merle characteristics. As a result of the M allele, random sections of the dog’s coat will be diluted or mottled. Merle dilutes the dark pigments, and can result in partial or completely blue eyes as well as lightened colors on the nose and paw pads. Typical merle dogs have one dominant merle allele and one recessive non-merle allele (Mm). If two of these merle dogs are bred, there is a ¼ chance that their offspring will inherit two copies of the Merle allele (MM). These dogs are called double merles. 
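The one-in-four figure is just a Punnett square, which can be checked with a few lines of Python (a toy genotype calculator written for this post, not a real genetics tool):

```python
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Offspring genotype probabilities for a single gene.

    Each parent passes one of its two alleles with equal probability;
    'Mm' and 'mM' are counted as the same genotype.
    """
    counts = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

# Two typical merle (Mm) parents:
print(cross("Mm", "Mm"))  # {'MM': 0.25, 'Mm': 0.5, 'mm': 0.25}
```

The MM quarter is the double merle outcome. By contrast, `cross("Mm", "mm")` (merle to non-merle) yields no MM offspring at all, which is why it is specifically merle-to-merle pairings that responsible breeders avoid.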

Double Merle Shetland Sheepdogs Kalisi and Adora. Photo by Dawn H. – used with permission

Double merles are mostly white in color; they are also more likely to have hearing and vision problems. The main connection between the merle gene and health problems is rooted in the pigment-producing cells, or melanocytes. Melanocytes in the inner ear help convert vibrations from sound waves into electric impulses sent to the brain to be interpreted as sound. The merle gene causes a reduction of melanocytes. While dogs with a single copy of the merle gene normally have enough of these cells, double merles have very few, to the point where hearing loss can occur. Lack of melanocytes also leads to reduced blood supply and ultimately the death of nerve cells in the ear. Double merles can be deaf or hearing impaired in one or both ears. They are also more likely to have eye and vision defects, though the exact link between the merle gene and vision defects is unclear. The resulting abnormalities, including irregular development of the pupils and iris or reduced eye size, can cause light sensitivity, poor vision, and partial or total blindness.

Curious about the experiences of raising double merle dogs, I reached out to Dawn, who owns two double merle Shetland Sheepdogs, Kalisi and Adora. Kalisi is deaf and vision impaired, and Adora is deaf and blind (8,9). Dawn is an advocate for educating people on double merle and specially abled pets, and shared some of her knowledge and experiences with these unique dogs.

Two large misconceptions are that hearing and vision impaired dogs are not trainable and that they startle and bite easily. However, they are just as intelligent and food motivated as any other dog and can be trained to respond positively to touch. Both Kalisi and Adora are trained using touch commands. Kalisi’s training also involves different hand signals for different activities, including obedience, tricks, and agility. She has two dog trick titles and works as a therapy dog. Dawn says their training is comparable to that of her sheltie Kappi, who has no hearing or vision impairments; the main difference is the way a command is given, be it by voice, hand, or touch.

Kalisi has earned both novice and intermediate trick titles. Photo by Dawn H. – used with permission

Purposeful breeding of two dogs that both carry the Merle allele (often referred to as merle-to-merle breeding) is typically avoided. However, it is not always possible to tell whether a dog is a merle by looks alone, as other factors that determine coat pattern can make the characteristic diluted patches less apparent. Genetic testing should be performed to ensure a dog is not one of these “cryptic” merles. Additionally, more than 15 breeds are known to carry the merle gene, so double merles can still occur in a litter from two merle dogs of different breeds.

Most double merle puppies are a result of poor breeding practices or the accidental breeding of two merle dogs, and are often euthanized shortly after birth or placed in shelters. Those that aren’t killed often face trouble finding homes as there are many misconceptions surrounding both their ability to be trained and regarding health problems they may face.

Fortunately, numerous groups and advocates are working to combat these myths. With the correct knowledge and training, double merles and other dogs with disabilities are capable of living normal lives and make wonderful pets. Dawn and other advocates want everyone to know that double merles are “different, not less” and that their only limitations are those we put on them.

About the Author


Emily is a PhD candidate in the Department of Microbiology studying a regulator of aromatic compound metabolism in the soil bacterium Acinetobacter baylyi. She loves running, college football, and taking her dog everywhere around Athens. You can reach her at emcintyre@uga.edu. More from Emily McIntyre. 

This looks familiar…

Rushed city by Huub Zeeman is licensed under CC-BY-NC-ND 2.0

How many times has this happened to you before? You walk into a room – it could be one you’ve stepped foot in a dozen times that day, or never at all – and hesitate by the doorway. Something about that space nags at the back of your mind. You decide that, somehow, you have lived through this moment before, or you’ve seen this room, exactly as it is arranged now, at this precise point in time. You likely know what the feeling is called, déjà vu, but what is it? And why does it happen?

What is déjà vu?

Déjà vu is a French term that translates as “already seen”. It was coined by philosopher Émile Boirac to describe the brief sensation of having already lived a novel moment. The majority of the population has had at least one episode of déjà vu in their life, with 60% of the population experiencing them regularly. These episodes can persist anywhere from ten to thirty seconds and have no damaging or lasting effects. The sensation of déjà vu comes from the combination of two different cognitive processes: the recognition of a particular event (knowing you’ve been there/seen that before), and the awareness that this recognition is incorrect (knowing you couldn’t possibly have been there/seen that before). More interestingly, though déjà vu is a very common phenomenon, its causes can vary from person to person.

An odd side effect

“Brain Illustrations” by Denise Wawrzyniak is licensed under CC BY-NC 4.0 

Instances of déjà vu are often associated with neurological or psychological conditions. However, regardless of whether the cause of déjà vu is benign or pathological, scientists agree that the areas of the brain involved are all found within the temporal lobe. These regions are in charge of processing sensory stimuli (everything you hear, see, smell, taste, and feel) and converting them into memories in the rhinal cortex. The rhinal cortex acts as a middle man: it helps turn the information received by your senses into memories, which can then be consciously recalled when needed. The more common an event is, the less rhinal processing it requires to be stored and retrieved. Certain conditions can affect this storage/retrieval process, suggesting that déjà vu could result from gaps in memory conversion in the rhinal cortex – like when you save several files on your computer under similar names and have to open a few before finding the right one.

The most common pathological cause of déjà vu is temporal lobe epilepsy (TLE). TLE is a common form of focal epilepsy and can be subdivided by the severity of symptoms, ranging from momentary loss of awareness (simple partial seizure) to strong convulsions. Déjà vu often precedes epileptic episodes in the mildest form of TLE, simple partial seizures, serving as an “aura” felt before the onset of more severe symptoms. This particular type of déjà vu differs from that of healthy individuals because the sense of familiarity is not connected to anything in the environment – patients do not feel as if they’ve lived that moment or been in that place before. This distinction has led scientists to conclude that disease-associated and “normal” forms of déjà vu must have different causes.

A glitch in the matrix?

While there is certainly evidence linking it to disease, déjà vu is most often just a glitch in a completely healthy brain. Incidence is significantly higher in young adults (15-25 years of age) and in those with higher education, higher socioeconomic class, or who travel often. Additionally, déjà vu is also more frequent in subjects who are under significant stress or lacking sleep. This evidence has led scientists to think that déjà vu is a by-product of memory consolidation, the process of transforming short-term memory into long-term memory. Therefore, with increasing stimuli like enjoying movies, documentaries, and books, or a decreased amount of time for processing, the frequency of episodes increases.

Study by David A Ellis is licensed under CC BY 2.0

Probably the most fascinating fact about déjà vu is that the brains of healthy individuals who experience the phenomenon differ from those of people who report no episodes. One particular study measured the volume of brain cells, or grey matter, in different regions of the temporal lobe in subjects with and without déjà vu experiences. People who report déjà vu episodes have a lower total volume of grey matter in memory-specific areas of the temporal lobe compared to those who don’t experience the phenomenon.

The brain is a magnificent machine capable of unimaginable wonders, but that doesn’t mean it’s perfect. In its quest for efficiency, it sometimes takes a few ill-advised shortcuts that can leave you feeling confused. So, next time you walk into a room and feel like you’ve lived that moment before, remember that it’s the harmless side effect of a brain trying to juggle too many things at once. Take a moment to appreciate the complex process that brought on this phenomenon and maybe consider taking more naps.

Saving more than just seeds, in situ


While I’m often left paralyzed by apple choice in Kroger, I know the breadth of options at grocery stores masks a far different reality: we’ve lost roughly 90% of the world’s crop varieties in the past 100 years. This threat to future food security is referred to as genetic erosion and is primarily attributed to the proliferation of modern cultivars, which displace local crop varieties. Conservation methods to maintain crop biodiversity rely either on external seed banks and greenhouses (ex situ) or on continued cultivation on farmland (in situ).

As I’ve previously alluded to, ex situ conservation is imperfect. While there is a bleak romance to seed banks as our planet’s emergency supply closet, this shouldn’t be our only option. One of the most obvious drawbacks is the physical limitation on conserving all plant genetic material. Everything cannot be banked – for instance, Svalbard holds roughly 5,000 of the estimated 390,900 existing plant species in its collections – so which plants are deemed worthy of this protection? And who gets to make those decisions?

Pea Sample- Pisum sativum (Fabaceae), 1880-1960. Image credit: Museum Victoria licensed under CC BY 4.0

Then there’s a messier issue I will distill to: seeds have context. Germplasm is not a standalone technology, but rather is interwoven with its ecological and cultural surroundings. Severing such ties without mindful consideration has consequences. One such example can be found in the 16th-century proliferation of maize in Europe. The crop’s relative affordability led to its quick adoption as a food item among the Italian peasantry. But lost in the grand crossing of the Atlantic was the concept of nixtamalization, the traditional Mesoamerican alkali treatment of corn, which ensures the bioavailability of niacin (the essential vitamin B3). Without nixtamalization, and with corn as the primary food source, chronic niacin deficiencies emerged in a scourge of pellagra in Italy, a disease marked by neatly descending “Ds”: diarrhea, dermatitis, dementia, and death. The same pattern reemerged in the American South in the late 19th century, taking thousands of lives. It’s embarrassing to think that a glint of respect for the cultural knowledge surrounding food preparation could have averted centuries of human suffering.

“Pellagra, an American problem.” Image credit: Medical Heritage Library, Inc. is licensed under CC BY-NC-SA 2.0

What use is a seed if we do not know how to appropriately grow, process, and eat it? Storing seeds in a vault decontextualizes plants, necessitating a complementary mode of conservation that maintains the robust cultural knowledge surrounding crop variety production and consumption. This can be found in in situ conservation, where continued cultivation on farmland ensures maintenance of both germplasm and its kindred socio-ecological system. Critical to this type of conservation is traditional and indigenous knowledge.

The Potato Park in Peru is one example of a successful landscape-scale in situ conservation model in the Andean region, which encompasses two of Vavilov’s centers of origin. The site is classified as an Indigenous Biocultural Heritage Area, and aims to protect the region’s incredible biodiversity and improve indigenous livelihoods through the use of traditional knowledge. Methods of crop cultivation here are emblematic of traditional modes of farming more generally, in that they are incredibly complex and low-input, with a typical farm plot containing between 250 and 300 potato varieties. The success of such farming systems relies heavily on deep agro-ecological knowledge.

Varieties at Potato Park. Image credit: The International Institute for Environment and Development licensed under CC BY-NC 2.0

However, traditional farmers continually face incentives to switch to higher-yielding, profitable commercial cultivars and, more generally, a global economy that devalues traditional modes of existence. This can be seen among the indigenous Arawakan women of Venezuela, who have customarily cultivated over 70 varieties of bitter manioc (cassava). With cultural shifts toward an education system that encourages the abandonment of traditional modes of crop production, there has been a concurrent erosion of traditional cultivation, knowledge, and the associated agrobiodiversity.

Maintenance of genetic diversity is a global public good. Thus, structures should be put in place to support both traditional varieties and their corresponding knowledge. Suggestions range from community-based conservation approaches, with designated funds for compensating communities for income losses, to establishing separate “farmers’ rights” legal systems that explicitly recognize farming communities’ contributions. But instead, we primarily have western Intellectual Property structures that incentivize commoditization and individual ownership. While I am no etymologist, there does seem to be a glaringly obvious “culture” in agriculture that should be paid heed.

The path to extinction is paved by both loss of genetic diversity and loss of knowledge, and so we need ex situ and in situ conservation hand in hand.


It’s worth mentioning the various directionalities in human relations with plant material. While most of us are attuned to the thinking of humans domineering plants to suit our needs, plant genetic material can similarly influence humanity. Landrace varieties that are interwoven with their local ecologies demand that we too pay more attention to our immediate environment in order to successfully harvest them. In a way, fostering this relationship with localized plant material can produce subtle human-environment relational shifts away from domination and towards respect. And because I am writing for a website with the word “science” in the title I will spare you from the philosophical zenith of this train of thought, but will leave some links in case anyone cares to meander in that direction.


About the Author

Tara Conway is an M.S. student in Crop and Soil Sciences, where she is working towards the development of a perennial grain sorghum. She is originally from Chicago, IL. Her work experience spans from capuchin monkeys to soap formulating. You can reach her at tmc66335@uga.edu, where she would like to know which bulldog statue in town is your favorite. Hers is the Georgia Power one due to its peculiar boots. More from Tara Conway.

Featured image credit: “Cobs of Corn” by Sam Fentress licensed under CC BY-SA 2.0.

The undead ghost forests of Georgia


The US Atlantic coast is a dynamic, living landscape. Georgia in particular displays a picturesque mosaic of barrier islands, salt marsh meadows, maritime forests, brackish marsh and river networks snaking up the Coastal Plain. Together, coastal habitats form a dynamic ecosystem capable of protecting the coastline, storing carbon, filtering water and providing coastal regions with valuable fisheries.

Spartina marsh and creek network in the low elevation foreground with maritime forest at a higher elevation in the background. Image Credit: Rebecca Atkins. Used with permission.

The last hundred years, however, have set the stage for unsettling trends in the rate at which coastal areas are changing. As the earth warms and glaciers melt into the ocean, scientists are predicting an increase in sea level of between 3 and 11 feet for the Georgia coast by the end of the century. While this “sea-level rise” may not sound significant, a minimum of 12,500 homes, 350 miles of road and 278 square miles of the Georgia coast will face catastrophic flooding. Similar flooding scenarios are expected to play out along the entirety of the eastern US coast. Notably, it’s not just the rising sea level that’s an issue, but also sinking land. Much of this sinking is the natural result of post-ice age glacial isostatic adjustment, in which land that bulged upward around the ice sheets is now slowly subsiding – one of the main contributors to relative sea-level rise in coastal Georgia.

Marsh edge becoming submerged by the tide. Here you can see a layer of green marsh cordgrass (Spartina)  and the muddy marsh platform held together by an intricate grass root network. Image Credit: Rebecca Atkins. Used with permission. 

The effects of rising sea levels aren’t always as visible as flooding. Increased rates of saltwater intrusion into groundwater and low-lying areas are also a growing problem, which can lead to even faster soil breakdown and further loss of elevation. One major example of this process is being observed in the Florida Everglades. Another phenomenon resulting from this saltwater intrusion is occurring on a large scale in coastal trees. As saltwater pushes inland, salt-intolerant hardwood trees are dying. From the roots up, coastal tree communities are transitioning into “ghost forests.”

Ghost forests do not pop up overnight, but they are becoming increasingly prevalent. Tree death is a gradual process, normally taking years to decades, but the increasing frequency of extreme weather events like storms and drought can accelerate forest loss. Hardwood species such as oaks and tupelo are usually the first to go, followed by more salt-tolerant species like sweet gum, red cedar and loblolly pine. Eventually, entire landscapes will transition from forest to marsh, and perhaps in time to open water. 

A similar phenomenon has been noted on barrier islands, like those spanning the coastline of Georgia and South Carolina. These islands are shaped by the movement of wind, ocean currents and sediment. Typically, sediment gets stripped from the northern end of barrier islands and is then deposited along the southern end, forming a new beach. This process of sand-sharing can give rise to “skeleton” or “boneyard” forests along the eroding beaches. 

As forests succumb to the sea, the skeletons of maritime forests help to stabilize eroded beaches. They can even be beautiful, serving as popular tourist attractions. However, even though skeleton forests may represent a natural part of Georgia’s barrier island life cycle, the increasing rate of land loss due to the combination of rising sea levels, human development and extreme weather is faster than some islands can keep up with. 

A staircase to a beach on Jekyll Island being submerged by a high tide and shoreline armoring (here a sea wall) installed to minimize beach erosion. Image Credit: Rebecca Atkins. Used with permission.

Ghost forests can be viewed as a natural response to changing environmental conditions. Emergent marshes are better able to store carbon and keep up with sea level rise than forested areas because of their ability to capture sediment and vertically accrete. However, due to sea level rise, the overall area of marshland is declining faster than new marsh can form. Marsh expansion also depends on the availability of natural land at higher elevations to compensate as lower-elevation land becomes completely submerged. This ability is limited by human activity when coastal communities build homes and install hard structures like sea walls to prevent beach erosion.

Overall, the growing presence of ghost forests from Louisiana to Canada is a worrisome indicator of a rapidly changing coast, and researchers are taking notice. Within the Georgia Coastal Ecosystems Long Term Ecological Research Program (GCE LTER), a project has been initiated to measure the response of trees along the Altamaha River to hurricanes. So far, 45 trees are being repeatedly surveyed as an indicator of forest health as storm events increase and saltwater pushes further up into rivers.

Compared with the more developed coastline along the Northeastern US, Georgia is praised for its roughly 100 miles of pristine coast. Unfortunately, sea level rise is both a global and a local problem that we’ll all have to face, and ghost forests, although captivating, are a haunting reminder of what’s at stake.

One extremely popular wedding destination is Driftwood Beach on Jekyll Island. Image Credit: Rebecca Atkins. Used with permission.
The sanded down surface of a ghost tree. Image Credit: Rebecca Atkins. Used with permission.

Plastic tips: a more sustainable science


Alternatively, this post could have been titled, My Guilty Conscience Series: Plastics

This blog post has been a long time coming – given that I (and many others) have been conditioned to “reduce, reuse, and recycle” since before we could even multiply. Yet, as I continue to diligently sort my empty jars and cans into recycling bins, I come to the lab every day and amass a sizable amount of single-use plastics, and they’re not even recycled.

They just go straight into the garbage.

It doesn’t come as a surprise: we have a global plastic crisis. The increasing plastic pollution has been well-documented by researchers around the world. If our current plastic waste production and management practices persist, we face long-term, detrimental consequences that include endangerment of marine life, economic damage to coastal cities, and increasing microplastics in our diets. Currently, there is a movement to limit or ban single-use plastics for average consumers, largely focusing on everyday plastic bags, utensils, and packaging.

However, it would be reckless to claim that all plastic waste is due to individual consumer behavior. There is a more insidious current of plastic waste coming from a bigger, systemic source: the research and development sector. Academic and industrial research alike bear a responsibility to curb their own plastic usage.

10mL serological pipette tips in a vase. A lovely bouquet. Image Credit: reerdahl via Flickr. Licensed under CC BY-NC-ND 2.0. 

In 2010, approximately 275 million metric tons of plastic waste was generated by 192 countries. Researchers at the University of Exeter estimate that life science research institutions generate 5.5 million metric tons of plastic waste each year, or roughly 2% of global plastic waste production. This contribution is overwhelmingly disproportionate, considering that life science researchers make up just 0.1% of the world population.
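The scale of that disproportion is easy to check with a quick back-of-the-envelope calculation, using only the figures quoted above (variable names are just illustrative):

```python
# How disproportionate is lab plastic waste? Figures from the paragraph above.
global_waste_mt = 275e6   # metric tons of global plastic waste (2010 estimate)
lab_waste_mt = 5.5e6      # metric tons from life-science labs per year
lab_pop_share = 0.001     # life-science researchers as share of world population

waste_share = lab_waste_mt / global_waste_mt
print(f"Labs produce {waste_share:.0%} of plastic waste "
      f"with {lab_pop_share:.1%} of the population: "
      f"{waste_share / lab_pop_share:.0f}x overrepresented")
# Labs produce 2% of plastic waste with 0.1% of the population: 20x overrepresented
```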

The reason for researchers’ large plastic contribution lies in the fact that plastics are well-integrated into laboratories. They’re cheap, disposable, and most importantly, sterile. 

Reagents are delivered to our door in plastic bubble wrap and Styrofoam. On our hands are periwinkle-blue, latex-free gloves. Plastic pipette tips and sample tubes are disposed of after a single use, unless you want to introduce cross-contamination into your samples.

Curious about my own contribution, I collected all the single-use plastics I used in a day and estimated the amount of plastic waste I would generate in a year. Between maintaining my fly stocks, cell culture, and miscellaneous experiments, I had accumulated 254 g of plastic by the end of the day. That comes to approximately 66 kg – roughly the mass of a small woman – of plastic in a single year.
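For the curious, the extrapolation can be sketched in a couple of lines – assuming roughly 260 lab days per year (weekdays only), which is consistent with the ~66 kg figure:

```python
# Back-of-the-envelope extrapolation of one researcher's annual plastic waste.
# Assumes ~260 lab days/year (5 days a week), an illustrative assumption.
DAILY_PLASTIC_G = 254      # measured single-day total, in grams
LAB_DAYS_PER_YEAR = 260    # weekdays only

annual_kg = DAILY_PLASTIC_G * LAB_DAYS_PER_YEAR / 1000
print(f"~{annual_kg:.0f} kg of plastic per year")  # ~66 kg
```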

It quickly adds up. But how do we limit our plastic consumption when our research depends on it? 

67 g of my non-hazardous plastic waste. Not pictured: the remaining 187 g of biohazardous plastic waste, which had already been safely disposed of. Image Credit: Kathy Bui. Used with permission.

Some eco-conscious scientists are attempting to change their daily lab practices without compromising results, and they are calling for more awareness of science’s sustainability issue. There are open resources and hashtags (e.g. #labwasteday, #labconscious, #sustainablescience) dedicated to sharing sustainable practices and inspiring other scientists to follow suit in this movement. Currently, some general tips are to use glass containers as an alternative, wash and reuse single-use containers (whenever contamination is less of an issue), and support suppliers that sell sustainable products.

In addition, some universities are taking matters into their own hands. The University of Leeds has launched an ambitious initiative, pledging to give up single-use plastics entirely by 2023. This includes not only plastics in office spaces and cafeterias, but also in laboratories. Currently, the university is working with suppliers to limit plastic packaging and products, as well as developing alternatives to plastic equipment. Similarly, University College London, the UK’s largest university, plans to cut out single-use plastics and increase support for sustainability research by 2024.

Throughout the past few decades, there has been a major rally to control individual consumer plastic waste. However, there have not been any comparable regulations on the research sector. While there is some recent progress on making scientific research more sustainable, there is still a need for systematic intervention and regulation for an entire sector’s worth of plastic waste. Some steps towards large-scale change are to (1) contact your university’s sustainability program about a bigger initiative towards more eco-friendly practices and recycling programs in research, or (2) express interest in more sustainable lab products to your supplier on social media. In the meantime, we can each be more conscious of our actions to reduce our environmental footprint – whether it’s recycling cans at home or using just one less pipette tip at the bench.

Kathy Bui is a Ph.D. student in the Department of Cell Biology at the University of Georgia. She is currently working on CRISPR-gene editing in Drosophila melanogaster and developing split fluorescent protein technology. She uses sturdy glass tupperware for lunch and her Google Pixel 3 to take high-quality pictures.

The Treasure in Your Trashcan


Many of us can recall a time when someone we knew (or even we ourselves) threw a banana peel out a car window. They’re biodegradable, so what’s the harm? I’ll never forget the time my mom did not dispose of a peel the proper way… My family and I were driving through Yellowstone National Park, and we had each eaten one of these tasty fruits. One by one, my mom threw the peels out the car window and onto the dirt path, not batting an eye. Unfortunately, a park ranger was following us, and after turning on his lights and pulling us over, we quickly learned that it was not the appropriate time or place to freely whisk away our peels. Although many of us probably aren’t tossing our leftover produce in the middle of national parks, there is still a lot we don’t consider when we carelessly chuck our organic waste.

Environment is Key

Depending on where you throw that banana peel, it can actually take up to two years to fully decompose. Rather than letting it slowly disintegrate in the wild, composting is the better option. Composting in itself will speed up the degradation of that banana peel, and cutting it into smaller pieces will make the process even faster. Having a specific place in your yard or a bin on your porch isn’t enough for a well-functioning compost, though. You’ll need all the right conditions: a pH of 6.5–8.0, 40–60% moisture, and a temperature between 80º and 150º F, with higher temperatures preferred since they destroy pathogens. Add in some earthworms if you need further assistance breaking down your scraps.
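As a rough illustration, those target conditions can be checked programmatically – a minimal sketch where the function name and thresholds simply restate the ranges quoted above, not any standard:

```python
# A minimal sketch of checking compost readings against the ranges above
# (pH 6.5-8.0, 40-60% moisture, 80-150 F). Illustrative only.
def compost_conditions_ok(ph, moisture_pct, temp_f):
    """Return True if all three readings fall in the recommended windows."""
    return (6.5 <= ph <= 8.0
            and 40 <= moisture_pct <= 60
            and 80 <= temp_f <= 150)

print(compost_conditions_ok(ph=7.2, moisture_pct=50, temp_f=130))  # True
print(compost_conditions_ok(ph=5.8, moisture_pct=50, temp_f=130))  # False: too acidic
```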

The Science of Composting

So what exactly is going on in that backyard compost box?  Composting is the process by which solid organic waste is turned into an environmentally useful material.  But it doesn’t just happen as soon as an orange peel hits the ground. The key is having helpful microbes such as bacteria, actinomycetes, and fungi, which assist in converting organic waste into simpler substances, namely carbon, nitrogen, phosphorus, and potassium.  There are two types of degradation: aerobic, which requires oxygen, and anaerobic, which does not.  Aerobic degradation occurs much more frequently. The newly converted material can be used to boost the soil fertility of a garden or as a renewable energy source.  So what happens when you don’t properly compost food waste? Annually, every American throws out roughly 1,200 lbs of organic waste that could have been composted.  Sadly, when that leftover produce ends up in a landfill, biodegradation doesn’t typically happen.  Due to landfills’ dry and oxygen-poor conditions, organic matter will most likely “mummify” rather than decompose.


One Athens resident in particular saw a need for increased compost efforts when she decided to create her own composting business in 2012.  Kristen Baskin started LetUsCompost, a company that provided roadside compost pickup and compost-enhanced soil delivery service, in addition to compostable plates, cups, and silverware.  Over the seven years that they operated, they paved the way for Athens compost culture, getting several local businesses to hop on board. Hendershots, Collective Harvest, and The Hub Bicycles all worked with LetUsCompost to properly dispose of their food waste.  Although the company recently announced it is ending operations, Kristen and her crew have made a great impact on Athens compost culture that can still be seen today.

Kristen Baskin of LetUsCompost. Join Kristen and me to learn more about the science behind composting and how you can help turn your trash into treasure at this week’s Science Cafe! -Little Kings Shuffle Club, Thursday January 23rd at 7pm. (Photo used with permission)

About the Author

Hallie Wright studies host plant resistance and fungal avirulence of finger millet blast in Katrien Devos’s lab.  She’s passionate about enhancing agricultural literacy and helps middle schoolers conduct agricultural science experiments.  You can find her at local punk shows or eating jalapeño pineapple pizza at Fully Loaded.

The False Promise of Animal Testing: Safety and Efficacy


One fact that was drilled into my head while studying biomedical science was how few experimental drugs ever make it past clinical trials. A failure rate of 90% is reported. This struck me as odd, but I chalked it up as an example of how difficult drug development is and didn’t ask why. That changed when I decided to use mice as part of my thesis project. I was initially reluctant, but my graduate advisor convinced me it would be the best way to test my hypothesis. As my experiments progressed, though, I started to wonder whether the mice on my lab bench could really predict how a human would respond to the same treatment. This led to discoveries that would completely change my outlook on preclinical drug testing.

Laboratory rats in typical research housing. Image Credit: Understanding Animal Research via Flickr. Licensed under CC BY 2.0

Ultimately, the reason so many drugs fail clinical trials comes down to two pillars of biomedical science: safety and efficacy. If a drug has dangerous side effects or if it doesn’t provoke a therapeutic response in enough people, it’s thrown out. As part of the preclinical regulatory process, the Food and Drug Administration (FDA) mandates that any investigational drug compound must be extensively tested in at least a few different species before approving it for clinical trials. To better understand why that is, it’s useful to examine the medical tragedy happening as the legislation was passed.

In the late 1950s and early 1960s, the world was reeling from the discovery that a new sleeping pill, thalidomide, caused severe birth defects when taken by pregnant women. Developed and marketed in 1957 by the German company Chemie Grünenthal, thalidomide is estimated to have caused deformities in over 15,000 children worldwide. The US, however, was mostly spared thanks to the FDA’s refusal to approve the drug. Politicians such as Senator Estes Kefauver (D-Tennessee) criticized Grünenthal while praising the FDA for recognizing the potential danger. Surely, the whole tragedy could have been prevented if the company had simply tested its drugs on pregnant animals! In 1962, the Kefauver Harris Amendment to the 1938 Federal Food, Drug, and Cosmetic Act was passed, mandating that new drugs be proven safe and effective before being administered to humans. Extensive animal testing was enshrined as the gold standard for ensuring this.

Here’s the thing: no one knows if Grünenthal actually tested thalidomide on pregnant animals. All of their records were destroyed. What we do know is that teratogenicity (embryonic toxicity) testing was routine by the 1950s. It’s unlikely that a well-established pharmaceutical company would just not perform those tests, but let’s assume they didn’t. Would more animal testing have prevented the disaster? To answer that, consider Karnofsky’s law:

Any drug administered at the proper dosage, and at the proper stage of development to embryos of the proper species…will be effective in causing disturbances in embryonic development.

“Thalidomide babies” would often be born with underdeveloped limbs that resembled flippers. Image Credit: wild.sproket via Flickr. Licensed under CC BY-NC-ND 2.0.

Extensive animal testing has proven this to be true. By 2004, 1,500 drugs had been shown to produce birth defects in at least one animal species, while only 40 were known human embryonic toxins. Mice and most other rodents do not exhibit classic thalidomide toxicity, even at doses of 4000 mg/kg. (In humans, thalidomide typically produces birth defects at 0.5 mg/kg!) Only monkeys consistently experienced birth defects when given thalidomide, and even then only at 10 times the usual human dose. Unfortunately, this turned out to be the exception rather than the rule: for other known human embryotoxins, toxicity in non-human primates is a poor predictor of human toxicity. So, given all of these divergent results obtained with modern techniques, does it seem likely that scientists in the 1950s could have made sense of more animal data?
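To put those doses in perspective, a one-line calculation with the figures above shows just how far apart the species are:

```python
# Species sensitivity gap for thalidomide, using the doses cited above.
human_teratogenic_dose = 0.5   # mg/kg, typical human birth-defect dose
rodent_tested_dose = 4000.0    # mg/kg, no classic toxicity observed in mice

ratio = rodent_tested_dose / human_teratogenic_dose
print(f"Mice tolerated {ratio:.0f}x the human teratogenic dose")  # 8000x
```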

Animals are not little humans. Biological systems are so complex that even if two species share almost all of the same genes, differences in how those genes are regulated and how they interact can lead to totally different outcomes. Animal models routinely fail to predict safety and efficacy in humans, despite those being the very measures they are supposed to assess. Imagine how many potentially life-saving drugs have been discarded based on poor results in animals! It’s clear to me that the FDA needs to revisit Kefauver Harris, but what can be done in the absence of reliable alternatives to animal testing?

Stay tuned for part 2 of this series, where I will go over current efforts to phase out animal testing in preclinical drug research.


About the Author

Israel Tordoya is an MS student in the Department of Pharmaceutical and Biomedical Sciences, studying the relationship between obesity and breast cancer. One day, he hopes to be an advocate for marginalized people (and animals!) in medicine while developing generic drugs. In his free time, he likes to run, listen to audiobooks, and make bad music. Find him on Twitter: @TordoyaIsrael or email him: it37190@uga.edu

The roots of your tea


While coffee has seemingly had a cultural renaissance, with independent coffee roasters popping up all over the country and even the naivest 7-year-old able to spout the difference between arabica and robusta, a far older drink remains in relative obscurity in the continental United States. The drink I’m referring to is tea – the national drink of Britain, and the world’s second most popular drink.

Tea Styles

If you ask any random person on the street about tea, they’ll easily be able to name a few of the classic varieties found at the supermarket: green tea, Earl Grey, oolong, pu’erh if they really know their stuff. But if you then prod their knowledge a little more, asking simple questions such as “what makes a green tea a green tea?” or “how is an oolong different from a white tea?”, they’ll most likely look at you askance and mumble something incoherent. Or they might make the all too common mistake of assuming that such a variable drink must come from different plants. In fact, they’d be wrong. All tea comes from a single plant, Camellia sinensis. But then you’re still left with the same question: what makes different styles of tea unique, if not the plant itself? The answer lies in the processing.

Tea leaves. Photo by Arfan A licensed under Unsplash.

The single process that determines the style of tea is oxidation. From a chemical standpoint, oxidation is the loss of electrons, often to oxygen. In layman’s terms, it’s the change of one chemical into another with the help of oxygen and an enzyme (in this case polyphenol oxidase). It’s this very process that causes bananas and apples to brown. However, it’s essential to realize that oxidation isn’t always a bad thing. Sometimes, when chemicals in our food change, they change for the better, unlocking different compounds and creating more unique flavors and aromas.

So, to create these novel flavors, tea growers expose tea leaves to different sets of processes that either increase or decrease the amount of oxidation. In practice, this means taking large batches of tea leaves and either rolling or cutting them into smaller pieces. This mechanical action breaks up the cell walls of the leaves, spilling their cellular contents and triggering enzymatic oxidation. Oxidation then converts the polyphenols found in the leaf tissue into flavonoids and terpenoids (the molecules that give tea its taste), while at the same time browning the leaves. This alteration of the native chemicals in the leaf gives each style of tea a unique flavor combination.

Tea leaves oxidizing after rolling. Image Credit: 蔡 嘉宇 licensed under Unsplash.

After a tea has reached its desired level of oxidation, the leaves are heated and dried to denature the enzymes in the leaves and halt any further oxidation. This is an essential step, and it requires the utmost precision. For example, if you were crafting an oolong and let it oxidize a little too much, you’d actually have created a black tea. Of the six styles of tea ranked from least to most oxidized (white, yellow, green, oolong, black, post-fermented), the least oxidized teas have the least caffeine, while the most oxidized have the most. The lack of processing of less oxidized teas generally gives them a “leafier” or “fresher” taste than more oxidized teas. However, this is only a general rule, as oxidation levels can vary quite a bit even within the same style of tea. For instance, two producers might oxidize their oolongs differently, one to 40% oxidation and another to 80%. Both are still technically oolongs, but they will be quite different to drink.

There are other factors that can have lesser effects on a tea’s style, from where and how the tea was grown, to the age of the leaf when picked, to whether the plant was exposed to pests. And yet the major actor remains the same process that browns our bananas. The simple act of damaging leaves and letting them sit and change has created a class of beverage that’s incredibly large and diverse. So, next time you’re sitting down with a cup of tea, why not take a minute to savor the taste of oxidation.

About the Author

Pablo Mendieta is a graduate student pursuing a PhD in bioinformatics and genomics at the University of Georgia. His specific interests lie at the intersection of agriculture and genetic technologies. Originally from Boulder, Colorado, he enjoys the outdoors, science fiction, programming, and hip hop. You can email him at john.mendieta@uga.edu or connect with him on Twitter. More from Pablo Mendieta.


Deadly realism, science communication, and dropping out of high school: an interview with Dr. Diana Six


Dr. Diana Six is a professor of Forest Entomology and Pathology at the University of Montana. Dr. Six’s work focuses on bark beetles, their symbiotic fungi, climate change, and what these factors mean for forest health. Her work has received media attention: she has presented at TEDx and been featured by National Geographic, all while pursuing a master’s degree in journalism. Here, Dr. Six and I discuss her path from high school dropout to professor, how to feel hopeful as a scientist in the face of climate change, and the importance of science communication.

Dr. Diana Six. Used with permission.

You took a different route to get to academia, can you talk a little bit about that experience? 

I think I can describe myself as the accidental tourist in a way. I’m a first generation college student. My mother got through high school and my dad dropped out in the 5th grade – he was virtually illiterate. It was a disruptive and abusive home. I dropped out of high school and spent years drifting and doing drugs. And then, I can’t even tell you what happened, but something changed and I decided I couldn’t go on that way. 

I went to night school to get my high school diploma. Two teachers there took an interest in me and talked me into enrolling at a community college. I didn’t really want to go – I was doing it more to make them happy. I enrolled in library science because I couldn’t figure out what to do, but I liked books. I took a biology course my first semester and immediately switched majors to microbiology, earning an associate’s degree. I went on and got a bachelor’s in agriculture because I liked bugs – that led me to a master’s in medical and veterinary entomology.

At that point I wasn’t sure what to do; I had worked equally on insects and fungi and loved both. In fact, as a kid I had both an insect and a fungus collection. I was offered the chance to do a PhD on bark beetles and fungi, and went ‘Oh god, this is perfect!’

In the end, do you think your non-traditional route helped you become a better scientist?

Maybe. I have a deadly realism when I look at the world – I have a very critical eye. I think in ways it has helped, but in other ways my background really held me back. I had to really fight to get out of being insanely shy with no confidence and I still suffer badly from imposter syndrome – I don’t think that ever goes away. So it was a struggle, but I think it’s made me a better scientist. It took me a lot longer to get here, but I got here. 

[The non-traditional route] helps me advise students. In Montana at least, we get a lot of first generation students and it helps me talk to them. It also helps me talk to students that have been growing up in an abusive home and have had a rough start. They can kind of see if I made it, they can do it too. So, it’s helped me be a good mentor as well.

Mountain pine beetle damage in the Rocky Mountain National Park. Photo by Bchernicoff licensed under CC BY-SA 3.0.

You have a very impressive record of science communication. I was wondering how you got into science communication and how you balance it with your research? 

I’ve always been interested in improving science communication. At first it was to other scientists – there were papers that I knew had cool stories, but those stories were certainly hidden well. I wanted my papers to narrate what was actually happening. Then, bark beetles began to be a big thing and became quite political, so I was getting interviewed a lot. It was a pretty unsatisfactory interaction for me and the reporters – I didn’t know how to talk to them. Consequently, I wasn’t communicating well and wasn’t very happy with what they reported, although a lot of it, I soon realized, was my fault. So I started talking to journalists, looking more at how they operate so that I could interview better and start preparing in a different way.

Then I did something really crazy. Six years ago, I enrolled in a master’s program in journalism, and I’m finally getting there. I’ve got one more class and am putting my thesis together! I realized that in order to be really good at [science communication], I had to do more than take a two-hour workshop. Now I’ve had to go out and do the reporting, actually work as an editor at a magazine, and do all sorts of things. I feel like I can write better, I can interview people better, and I can make better products.

But now at least when people interview me, I know how to tell them a good story that journalists can report accurately. They don’t have to piece together jumbled stuff that I give them – you can lead them into the bigger story. 

As a journalist, do you have any tips on how to efficiently communicate your research for graduate students?

 We always tell linear stories as scientists: this is a question, this is how we’re looking at it, this is what we found, this is what it means. That’s not how you talk to a journalist; they just want to hear the end. And they need it in fairly jargon-free, short, clear, concise sentences. We have a tendency to talk in very long sentences and go on and on and on to explain one little thing, which is what I’m doing right now.

It’s hard to gather your thoughts if you go into this cold. If you can get an idea, briefly, of what they’re going to ask you about, it’s good if you have time to sit down ahead and put together soundbites. These are little short sentences that make clear what’s happening and it’s always good to have some metaphor in there. So if you want to make a point, use some kind of cool visual term that will make it kind of interesting.

There’s a book, Escape from the Ivory Tower by Nancy Baron, which is a way for scientists to learn to communicate with the media – it’s an awesome book. There’s one page in particular in there called ‘The Message Box.’ If you do nothing more than use that page, you’ll become a better communicator for science. In one of my journalism courses, they had us use this, and now I never go into an interview without filling out a message box ahead of time. In fact, I have a whole stack that I pull back out, depending on what I’m doing. I would recommend that any grad student or academic who wants to do interviews start with that message box approach. It’s really powerful.

 Where do you see the field of science communication going?

All people going into science should learn good science communication skills. I don’t think everybody has to get a degree in journalism, but developing that as a skill is crucial.

One of the reasons that science has lost credibility is that people don’t understand it – they don’t hear about the science that’s being done. If scientists were communicating what they’re finding and its value more often, then appreciation for science would be stronger and people would see the value of it in their lives. Even for people who are communicating about climate change – if you don’t understand your audience, the communication isn’t going to happen. So, learning how to be a good communicator of science is crucial and I think all graduate students should be doing some aspect of it. It should just become a natural part of training.

Life of Pine featuring Diana Six from CJ O’Flair on Vimeo.

Your twitter bio says, “Climate change is real – just ask the bark beetles and pretty much all of nature.” Do you have a go-to spiel about climate change for those who deny it or aren’t so familiar with it?

I do it by examples that affect their lives. People don’t really become concerned unless it’s affecting something that’s very real and important to them. I think about what community they live in and what kinds of things that they do that are important to them. Then I’ll point out things that have likely changed in their lifetime that they can see. They often start to understand; then you can build on why those things are changing and how that could influence them. I can’t tell someone, ‘You need to worry about this because of polar bears.’ That makes them go ‘Oh I like polar bears and that’s a bummer,’ but it’s not going to affect them in their heart. So for me, when I give talks or meet with people, I try to bring it to the effects that are in their lives. 

Do you have any advice to give to new scientists who feel unhopeful about the future in the face of climate change?

This is the toughest time to be an ecologist. Ecologists study interactions between species but we’re seeing these interactions changing. They’re either being torn apart or enhanced. It’s not only just sort of depressing to see ecosystems you study begin to change and fall apart, and extinctions increasing, but this makes it increasingly difficult to study just basic questions. 

My advice is, if you’re an ecologist and that’s what you want to do, there’s probably no more of an important time for you to be one. The information that you can gain right now has such added value and importance and I think you can make a major difference like never before.

To hear more from Dr. Diana Six, you can follow her on Twitter or visit her lab website. To learn more about the mountain pine beetle outbreak and her work on it, you can check out her interview here.

From Touring Musician to International Mycologist


Dr. M. Cathie Aime is a Professor of Botany and Plant Pathology and Director of the Arthur Fungarium and Kriebel Herbaria at Purdue University. Her lab specializes in the biology of rust fungi as well as the biodiversity of tropical fungi, which has given her research an international focus. Interestingly enough, Dr. Aime didn’t follow the traditional path to academia by any means.

Photo by Cathie Aime, used with permission.

Can you describe your undergraduate experience?

 I dropped out of undergraduate school in my third year to take care of my grandmother in New Orleans. I was a musician, played in a bunch of bands — even toured. I worked in bookstores and as a waitress to support my music habit for about 10 years. When I turned 30, I told myself, “You’re not going to make it as a musician. You should do something with your life.” So I decided to finish my undergrad degree and eventually got my PhD in mycology.

How did you become interested in mycology?

My last course in undergrad at Virginia Tech was in mycology. I didn’t know anything going in —  other than that mushrooms sounded cool. Sure enough, it’s what changed my life. I had a really good professor, so I became fascinated with fungi and how much there was unknown about them. He (Orson Miller) convinced me to go to grad school, and I ended up staying at Virginia Tech to work with him.

Photo by Cathie Aime, used with permission.

What inspired you to stay in academia?

Really, everything from that point is about Orson (Miller); he was a fantastic teacher. Without his encouragement, his explanation of academia, and grad school and research, I probably wouldn’t have considered research as a career. I knew people did it; I just didn’t know how people did it. When I started in the lab, I knew that’s what I wanted—to do research and be in academia, just be in that environment. 

How did you get involved with field operations for your biodiversity work?

When I dropped out, there was no such thing as molecular biology. In those ten years, the entire field of biology had changed. At Virginia Tech we didn’t have the facilities to do molecular biology, but at Duke (about 3 1/2 hours away) there was a mycologist doing molecular mycology. I would drive down every weekend and holiday to work in his lab. While I was there, I met a grad student who had previously worked in Guyana as a botanist. We decided to do a one-year study of the fungi in Guyana. It had nothing to do with my research; it was just something fun to do as a side project. Of course, this side project has now been going on for 20 years.

What was the hardest part about doing these field experiments/trips?

All of the permissions and permits from the local governments. Wherever you are doing the work, getting the permits is always time-consuming, and what you need is different for every country. Even in Guyana, where we have been for 20 years, the rules change every year. It’s expensive to get research permits, and you also need separate permits to export whatever you are taking out of the country.

Places like Guyana have no overland routes to where we go, so we have to take little charter planes that can land at abandoned mining camps. The planes carry 600 pounds, so we have to figure out what we can take and how many planes we need. The weather is always bad. Sometimes you’re sitting on the airstrip on the other side for days, waiting for the weather to clear so a plane can come back and get you.

Usually, buying all the gear and rations goes well. But if you forget something, like your salt or your toothpaste, you are out of it for two months. Building the camps isn’t so difficult because we keep going back to the same place. One year, we had a lightweight aluminum canoe to get us around, but there was a flood and the canoe washed away. There we were, stuck in the middle of nowhere, with no boat, no way to get back to where the plane was supposed to pick us up, and no way to signal anybody. Eventually, we got back down to the airstrip after building a dugout.

Photo from the personal collection of Cathie Aime, used with permission

So how do you pick your fieldwork locations: a lack of studies, or something interesting going on there?

It’s a little more haphazard. For instance, in Vanuatu, in the middle of the South Pacific, no one has studied the microfungal flora. I know there’s a lot of endemism in those islands, but surveying there requires immense resources and infrastructure. I got lucky when I was offered the opportunity to hitch a ride with a research group led by the New York Botanical Garden. Some of the other locations, like Queensland, were very targeted: there was a specific fungus in the rainforest that my postdoc and I needed in order to resolve a tangled problem. After a few years of trying to get samples or work around it, we just said, “Let’s go and get it ourselves!” In Cameroon, we got funding to set up a long-term study to match the one in Guyana; that forest and region were chosen very deliberately to test specific biogeographical hypotheses. So, overall, a mixture of different reasons.

Photo by Cathie Aime, used with permission.

With all the international work that you do, how do you maintain an international student presence in your lab?

I don’t know if it was so deliberate at first. When I go to different countries, I often get to work with local students who are interested in mycology but don’t have access to rigorous mycological training, especially in the developing world. If a student is passionate and shows promise, then I’m going to do everything I can to get them into my program. A lot of my students are that way. The way I see it, your productive time as an academic is limited. I started in my 30s, so I have 20 to 30 good years in academia. What do I really want to do with that? I want to train students who are passionate about mycology and the environment, who will go on to train the next generation around the world.

Rethinking Anorexia: Making the Biopsychosocial Connection


With only 50% of patients recovering fully in the long term, anorexia is the deadliest psychiatric disorder. Typically associated with poor body image and unhealthy eating habits, anorexia has captivated and bewildered laymen and scientists alike. Not every person suffering from anorexia is underweight, and there is still a general misunderstanding of what is really going on in the mind and body. It is a myth that anorexia is a purely psychological phenomenon in which one’s desire to be skinny simply goes too far; the reality of the disease is much more complex. With side effects ranging from osteoporosis and anemia to heart failure and nerve damage, the consequences are far more severe than just being “too skinny.”

Credit: Thigala shri via Flickr. Licensed under Creative Commons Public domain.

Anorexia typically “begins” with an environmental trigger, such as stress or a simple desire to eat healthier, that prompts overexercise or restricted food intake. But not everyone who loses weight develops anorexia. Rather, some people have genetic predispositions (mental, physical, or metabolic) that drive habits reinforcing anorexic behaviors. The result is the typical anorexic pattern: an increased drive for thinness, increased body dissatisfaction, and ongoing food restriction, which perpetuates a cycle of behavior and reward built on restricting food and losing weight.

This caloric restriction drives drastic changes in the gut microbiome, the collection of microorganisms in the GI tract that influences our metabolism and mood. While factors such as genetics, age, and sleep play a role in the diversity of these microorganisms, fiber intake is a huge player in the gut’s microbial composition and function.

Escherichia coli grown in culture and adhered to a cover slip. E. coli is an example of a bacterium whose abundance is negatively correlated with BMI in patients with anorexia. Credit: Rocky Mountain Laboratories, NIAID, NIH via Wikipedia. Licensed under CC Public Domain Mark 1.0.

What are the characteristics of the gut environment of anorexics? Chronic caloric restriction, food group imbalance, micronutrient deficiencies, and high fiber intake, to name a few. The result is dysbiosis, or microbial imbalance, which affects not only the host’s metabolism but also behavior and the immune system. What was once a balanced and diverse environment becomes competitive, selecting for microbiota that can subsist on scarce energy and nutrients.

Usually, our gut and brain communicate via the “gut-brain axis” to let us know whether we are satiated or still hungry. Gut microbes can release chemicals, such as short-chain fatty acids (SCFAs) and hormones, that act on the appetite and metabolism control centers of the brain. When we drastically reduce our food intake, our bodies get confused and this normal communication between brain and gut is impaired. A well-maintained intersection becomes a traffic jam, and unfortunately, that traffic jam seems to have lifelong effects for anorexics.

Our current “treatments” focus on refeeding and addressing psychosocial needs, but less so on fixing the microbial dysbiosis.

A graphic illustrating the relationship between the gut and brain in patients with anorexia. Used with permission from Ashleigh Gehman.

Knowing what we know now about the intricate connection between the gut and brain and its genetic underpinnings, it is important to translate this research into new therapies, such as prescribing pre- and probiotics and using fecal matter transplants. Studies targeting the microbiota have shown promising results for treating many mood disorders, primarily by regulating serotonin levels, which actively influence appetite and behavior. Some bacterial strains have also been shown to alleviate symptoms of anxiety and stress, two hallmark symptoms of anorexia. These novel approaches could theoretically improve weight gain, decrease stress on the gut, and even reduce psychological symptoms!

While anorexia treatment has long focused on alleviating psychiatric symptoms, it is perhaps time to turn our attention to the relationship between the patient’s gut and brain. The development of novel treatments that target the gut microbiome is essential if we are to properly tackle this disorder. All of these gut-brain factors, combined with genetic effects on mental health and metabolism, could be key to improving the long-term outcomes of one of the most chronic psychiatric disorders of adolescence.

If you or someone you love is currently struggling with an eating disorder, contact the NEDA helpline (1-800-931-2237) for support, resources, and/or treatment options. You can also click to chat here.


About the author


Maria Flowers is an undergrad studying Biochemistry and Molecular Biology/Spanish at UGA. Her dream is to continue demystifying the sciences through art and writing. In her free time, she loves to dance, read the New Yorker, write poetry, and listen to her favorite podcast “On Being” while doing yoga. She also loves frisson-ing to any and all types of music. If you wanna chat about science or her wide array of special interests, you can email her at mwf38801@uga.edu.


A not-so-familiar face: How a transmissible cancer could be the end for an Australian mammal


Cancer, a complex disease caused by an accumulation of mutations in our DNA, affects millions of individuals each year. It poses a very serious threat, but is it contagious?

The short answer is “no,” at least for us humans. In humans, the only way to truly transfer cancer from one person to the next is by means of an organ transplant. No cases of cancer itself being contagious have been reported, but certain viruses, such as the familiar human papillomavirus (HPV), have been linked to cancer in humans. This kind of transmissibility is quite unusual for cancer, but some examples do exist in organisms other than humans.

Tasmanian devil (Sarcophilus harrisii). Image credit: Vassil via WikiCommons. Licensed under CC0 1.0.

Tasmanian devils, the largest carnivorous marsupials on Earth, can pass on cancer much like we can pass on a cold. These devils, roughly the size of a small dog, suffer from a facial cancer known as Devil Facial-Tumor Disease (DFTD). The disease produces large tumors on the face and neck, causing death by asphyxiation or starvation an average of six months after onset. The cancer goes undetected by the devil’s immune system, allowing tumors to grow until it is too late. But how does it spread from one devil to another? Devils interact closely with one another: during feeding or mating, they wound each other through biting, spreading cancerous cells from one devil to the next. This transmission of DFTD is so efficient that, using genetic techniques on tumors sampled from a variety of affected devils, the origin of the disease has been traced back to a distinct region of Tasmania. Tasmania is an island state of Australia located off its southeastern coast and the last remaining native range of the Tasmanian devil.

Modified map of Australia. Image credit: Mark Ryan via WikiCommons. Licensed under the GNU Free Documentation License.

Initial reports of devils with Devil Facial-Tumor Disease cropped up in the mid-1990s. Within just 10 years, the infection rate among the surviving population was estimated at about 70%, and up to 60% of the devil population is estimated to have been lost to DFTD since 1996. Unfortunately, since the discovery of the first strain of Devil Facial-Tumor Disease (DFTD1), a second strain (DFTD2) arose sometime between 2007 and 2010, further devastating the population.
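The extinction projection can be loosely sanity-checked with a back-of-envelope exponential-decline model. This sketch is my own arithmetic under strong assumptions (a constant decline rate and a roughly 20-year observation window, neither of which comes from the cited researchers), not an analysis from the article:

```python
# Back-of-envelope sketch (my own arithmetic, not from the cited reports):
# if ~60% of the devil population was lost between 1996 and ~2016,
# what constant annual decline rate does that imply, and how much of
# the population would remain 35 years later if nothing changed?
import math

years_observed = 20          # assumed window, ~1996 to ~2016
fraction_remaining = 0.40    # article: up to 60% of the population lost

# Solve N(t) = N0 * exp(-r * t) for the implied annual decline rate r
r = -math.log(fraction_remaining) / years_observed

projection_years = 35
remaining_after_projection = fraction_remaining * math.exp(-r * projection_years)

print(f"implied annual decline rate: {r:.1%}")              # roughly 4.6%
print(f"fraction left after 35 more years: {remaining_after_projection:.1%}")  # roughly 8%
```

Under these crude assumptions, only a few percent of the original population would remain on that timescale, which is broadly consistent with the near-extinction projection.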

This Tasmanian DFTD epidemic has brought the endangered species near the point of extinction, projected to occur in as little as 35 years if no action is taken. Luckily, conservation efforts are underway to relocate uninfected devils to zoos around the world, as well as to nearby Maria Island, off the eastern coast of Tasmania. Researchers at the University of Tasmania are spearheading a large collaborative effort that is showing promising results in testing recently developed vaccines against DFTD. The same researchers have also noted that the devils themselves are evolving to combat the disease, showing genetic mutations that improve resistance and tolerance to DFTD. Hopefully these human-facilitated efforts, in combination with the naturally occurring mutations, can lead to a successful and robust recovery of the Tasmanian devil population.

Ultimately, these devils provide an interesting and unusual case of transmissible cancer that could be used to further cancer research. Similar diseases have been documented in both domestic dogs and Syrian hamsters that seem to show related mechanisms of cancer establishment and metastasis (spread of the disease to new locations in the organism). These animals have the potential to contribute valuable information on basic tumor biology, tumor evolution, and common tumor transmission mechanisms to human studies.


Ben Luttinen is a Ph.D. student in the Department of Genetics studying the development of beneficial viruses in parasitoid wasps. In his spare time he enjoys watching movies, playing golf, and the occasional drink. You can reach Ben at benjamin.luttinen@uga.edu.

The Wonders of Human Milk!


It’s a girl (or boy)! Your bundle of joy is finally here. Stepping into parenthood, life is magical. But it is not all sunshine and roses either, with the constant cleaning, frequent feedings, and sleepless nights. On top of it all, the baby falling sick is your worst fear. No wonder you find yourself paranoid, sterilizing everything all the time. Despite your sterilizing habit, millions of bacteria are making their way in through your baby’s mouth. Did you think breast milk was sterile? No! It is teeming with bacteria, which invade and colonize your baby’s gastrointestinal tract. These bacteria, along with viruses and fungi, constitute the gut microbiome. This diverse microbial population, crucial for our well-being, enhances metabolism, synthesizes vitamins, and fights infections.

A newborn is prone to sickness, frequently encountering novel pathogens. However, human milk has naturally evolved to provide a first line of defense: it transfers antibodies against pathogens the mother encountered during pregnancy to the child. These antibodies confer protection against respiratory and gastrointestinal infections and fight inflammatory diseases like asthma, atopy, diabetes, obesity, and inflammatory bowel disease, all while providing nutrition to the baby.

Mother and child. Image credit: Satya Tiwari via Pixabay. Licensed under Pixabay License.

One of the building blocks of milk is a set of special sugars called human milk oligosaccharides (HMOs). HMOs cannot be digested by infants; they reach the intestine and colon intact, where the gut microbiome uses them as an energy source. HMOs have been shown to improve gut health by feeding these beneficial bacteria. Cow’s milk is similar to human milk except that it contains significantly fewer HMOs, and this lack of HMOs can cause gastrointestinal problems and a compromised immune system if cow’s milk is used as a substitute for human milk. Geographic location, environment, and the mother’s genetics all have a significant effect on the types of HMOs found in human milk, and these acclimatized HMOs help fight pathogens in the baby’s local environment. Breastfeeding benefits the mother as well: it burns extra calories, helping shed pregnancy weight; it promotes bonding with your baby through the release of the hormone oxytocin; and it lowers the risk of breast and ovarian cancer, as well as osteoporosis.

Human milk is the perfect food for your baby, with balanced sugars, fat, vitamins, and proteins. No wonder the World Health Organization (WHO) recommends exclusive breastfeeding for the first 6 months of infancy. If breastfeeding is not possible, infant formulas with the same HMOs and nutrient composition are available. It really is wonderful how nature has evolved human milk to not only boost the gut health and immunity of the child but also promote the good health of the mother.

To learn more about the wonders of human milk and gut health, be sure to attend the upcoming Athens Science Café on November 21, 2019. Dr. David Mills, Professor in the Department of Food Science and Technology at the University of California, Davis, will be sharing his perspective on the topic.



About the Author

Ankita Roy is a Ph.D. student in the Department of Plant Biology at the University of Georgia working with bean roots. She plays mommy to two kittens and can whip up a curry to fire your taste buds in no time. True to her cooking skills, she enjoys trying out new cuisines to satisfy her passion for everything flavorful. She is an executive member of the Indian Student Association. You can reach her at ankita.roy@uga.edu. More from Ankita Roy

The science behind high insulin prices

Among many great things about life in Canada, I can walk into a pharmacy and purchase my insulin... at 1/10 the cost in the US.

You probably know or love someone who suffers from diabetes mellitus. In fact, recent CDC reports estimate that nearly 10% of Americans have diabetes, and as many as a third of Americans are pre-diabetic and undiagnosed. So there is a reason the cost of healthcare—and in particular, insulin, the lifesaving drug used to treat diabetes—has been a popular topic in the news recently. Annual insulin costs have been skyrocketing, creating dangerous conditions for diabetics. In March 2017, Shane Patrick Boyle drew national attention when he died from diabetic ketoacidosis after his GoFundMe campaign fell $50 short of covering his $750 monthly supply of insulin. As recently as July 2019, a Minnesota man died a similar death. Even the presidential race has highlighted the subject: recently, presidential candidate Bernie Sanders bussed a dozen Americans to Canada to purchase insulin at one-tenth of the price. However, many people don’t know what insulin actually is or why it is so difficult to produce competitively.

“Insulin” by Open Grid Scheduler / Grid Engine is licensed under CC0 1.0


What IS insulin, anyway, and what does it do?
The energy we need to survive comes from breaking food down into glucose, which is absorbed into the bloodstream. However, the amount of glucose in our blood can hurt us if it gets too high, a condition called hyperglycemia. Luckily for us, insulin is a peptide (protein) hormone that promotes the uptake of glucose into our cells, removing it from the blood. This allows us to store it as energy, or “brings our sugar down.”

In patients with diabetes, however, the body either cannot produce enough insulin or cannot respond to it properly, so blood sugar remains too high. Without insulin, your body cannot store or use glucose as fuel, effectively making you starve. This can lead to ketogenesis, an emergency form of energy production in which your body breaks down fat into ketone bodies. These ketones are acidic, and an acute build-up of ketone bodies in the bloodstream can lead to ketoacidosis—the condition that killed Shane Patrick Boyle. In the long term, hyperglycemia may lead to microscopic vascular damage and eventually organ failure.
So why is insulin so expensive?

Insulin is unique. Normally, the most expensive drugs are the ones that (1) can be sold only to a few people, so few people share the cost, or (2) are new, so no one has had a chance to make a competitor yet. However, diabetes is the 7th leading cause of death in the United States, and the scientist who discovered insulin sold the patent to the University of Toronto for $1 in 1923 because, in his words, “insulin belongs to the world, not to me.”

So why the rising price?

It is a perfect storm of business and biological complexity. First, the three largest manufacturers of insulin—Eli Lilly, Novo Nordisk, and Sanofi—represented 96% of the total insulin market as of 2018, and once you corner the market, you can set the price. For instance, insulin prices rose three-fold during the decade in which Alex Azar, the current Secretary of Health and Human Services, was a senior executive at Eli Lilly, including a stint as president of the company. This pattern is exacerbated by the fact that drug prices in the US are negotiated by a convoluted web of private payers.
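For scale, a three-fold rise over ten years corresponds to a steady compound increase of roughly 11.6% per year. A quick sketch of that arithmetic (mine, not the article's):

```python
# A three-fold price rise over 10 years implies a steady compound
# annual increase of 3 ** (1 / 10) - 1 (solving 3 = (1 + x) ** 10 for x).
annual_increase = 3 ** (1 / 10) - 1
print(f"{annual_increase:.1%}")  # → 11.6%
```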

Second, insulin is not a small-molecule drug but a large, complex biological molecule. Therefore, a safe, identical copy (or “biosimilar”) cannot easily be made, making it difficult for competitors to enter this established market.

Lastly, the price has been kept high by a process known as “evergreening” of patents. Normally, a drug patent lasts only 20 years. However, companies can essentially reset the clock on their patents as long as they change their product slightly. As new insulin products enter the market, older (and potentially cheaper) versions are discontinued, so a low-cost generic never arises. For instance, even though Banting sold his patent for a dollar, that patent covered insulin extracted from mammals, whereas insulin today is produced as biosynthetic analogs.

Moving forward

There may be some good news on the horizon for those of us whose lives depend on insulin. In July, Azar announced that the Trump administration plans to allow Americans to legally import prescription drugs from Canada in an effort to reduce prices. There are also biohackers working to develop open-source insulin manufacturing protocols to combat the effects of evergreened patents. However, these efforts do not address the systemic problems that allow insulin prices to soar in the first place.

A study by Imperial College London found that a more reasonable price for an insulin analog would be somewhere between $78 and $130 per person per year if more competition could simply enter the market. Going forward, it is important for us, as consumers and voters, to keep up with news on the cost of insulin and whom it affects.


Mike Choromanski is the former President of UGA’s Cellular Biology Graduate Student Association and a Ph.D. student studying Neuroscience and Cellular Biology. He attended Armstrong State University, where he obtained a B.S. in Cell Biology with minors in Neuroscience and Philosophy while serving as an editor for his college newspaper, The Inkwell. Before teaching at UGA, he organized STEM treks and taught environmental science at Philmont Scout Ranch. In his spare time, he loves to hike, cook, and play video games, and he competes on UGA’s fencing team.

Saving the world’s seeds, ex situ


The imposing structure of the Svalbard seed bank is familiar to many. This “doomsday” vault (ahem, already breached by climate change) is humanity’s last resort for preserving the seeds of our crops and plants. But how did this bastion of biodiversity arise?

Svalbard Global Seed Vault. Image credit: Dag Endresen via Wikimedia Commons. Licensed under CC BY 3.0.

Nikolai Vavilov, a 20th-century Russian agronomist and geneticist, established the first modern seed bank in Leningrad in 1921. He is best known for establishing the centers of origin of the world’s cultivated plants: the geographical locations where the world’s major crops were domesticated, which critically contain the wild relatives and greatest genetic diversity of these plants. So as Vavilov traveled the globe (primarily by mule), he collected seeds to bring back to his native Russia, aiming to create a genetic repository to combat global hunger. He collected more seeds than any other person in history, eventually amassing some 250,000 accessions in Leningrad. In a gutting twist, he became a martyr for plant genetics, dying of starvation in jail after being imprisoned for espousing Mendelian genetics while anti-Mendelian ideas were favored in Stalin’s Russia.

Central Archive of the Federal Security Service of the Russian Federation (Moscow) via Wikimedia Commons. Licensed under article 1259 of Book IV of the Civil Code of the Russian Federation No. 230-FZ of December 18, 2006.

After Vavilov’s imprisonment, a dedicated staff of scientists maintained the seed bank at the Institute of Plant Industry. During the Siege of Leningrad in 1941, they barricaded themselves inside the seed vault to protect this valuable biodiversity from both the German army and starving Soviet citizens. A dozen scientists starved to death guarding seeds that could have sustained them. Ensuring the food security of future generations was deemed more important than their own lives.

Fast-forward to the 21st century, and Vavilov’s original seed bank is now accompanied by upwards of 1,400 other seed banks around the globe. But the path to seed bank proliferation is a complicated one.

With improved understanding of plant genetics in the 20th century came high-yielding, stable crop cultivars, particularly hybrid varieties. These modern varieties are highly uniform and, in the case of hybrid seed, genetically homogeneous, unlike the historically predominant landrace varieties. Landraces, by contrast, are traditional varieties that adapted to their local environments through domestication and are much more genetically variable; this is what Vavilov was amassing in his collection. The modern varieties were so high-yielding, however, that they spread throughout the globe as a means of combating hunger during the Green Revolution.

Scientists quickly began to note that the new cultivars were displacing local varieties of crops, causing variety extinction and contributing to genetic erosion, the loss of genetic diversity. Genetic diversity is key to food security: landraces and wild relatives of cultivated crops often possess crucial traits such as disease or pest resistance, which can be used to improve the germplasm of current cultivars. So while plant breeders were capitalizing on the genetic diversity of landraces to develop high-yielding, stable cultivars, the proliferation of those cultivars was wiping out the very diversity they relied on. The imperative of seed banks soon became the maintenance of crop biodiversity, rather than the straightforward catalog of diversity that Vavilov had conceived. Cue the Svalbard Global Seed Vault.

Vavilov’s centers of origin. Image credit: Daphne Mesereum via Wikimedia Commons. Licensed under CC BY 3.0.

Further complicating things is the question of who owns this banked biodiversity. For most of history, the world operated under a “common heritage” assumption: genetic resources were a public good that could be used freely. Over the past few decades, however, the idea of ownership of genetic material shifted with the granting of patents for stable cultivars and genetically engineered material. By the late 20th century, then, we had a free flow of plant genetic resources (PGR) to breeders, but a flow of patented, costly seed coming out of breeders. Much of the Earth’s plant biodiversity exists in the developing world (remember Vavilov’s centers of origin?), while much of modern plant breeding infrastructure exists in the developed world. To many in the developing world, this reeked of imperialism and was hotly protested. After years of pressing the case, some national sovereignty over PGR was granted with the 2001 International Treaty on PGR for Food and Agriculture. This established a benefit-sharing dimension to PGR, in which countries receive a portion of the profits from anything derived from their PGR.

The implications of this privatization of biodiversity are nuanced. Often, we are still left with costly seed that may be beyond the reach of low-income farmers. In certain economically precarious contexts, this cost has been associated with the troubling phenomenon of farmer suicides. This protectionist institution may not be the same one that Vavilov and his associates died for.


About the Author

Tara Conway is an M.S. student in Crop and Soil Sciences, where she is working toward the development of a perennial grain sorghum. She is originally from Chicago, IL. Her work experience spans from capuchin monkeys to soap formulating. You can reach her at tmc66335@uga.edu; she would like to know which bulldog statue in town is your favorite. Hers is the Georgia Power one, due to its peculiar boots. More from Tara Conway.

Scouting for the Next Top Model (Organism)


Here’s a valid question: if it’s a human condition or disease we’re interested in, why do we study flies, plants, or bacteria? It’s a question researchers often have to answer, whether for grant funding or to their in-laws over Thanksgiving dinner. Certainly, no one wants to hear about (or vote for) tax dollars aimlessly squandered on projects that have “nothing to do with the public good.” When the importance of basic science research goes unappreciated, the misperception that such research is “trivial” is reflected in budget cuts at major research institutions. So, why is it important to study non-human organisms?

What it takes to be a top model

Traditionally, model organisms are non-human species that are widely studied to better understand biological phenomena, including the mechanisms of human diseases. Model organisms should be easy to maintain, cost-effective, readily available, and short-lived. This broad definition allows many species to be good candidates for study. Yet only some stand out as top model organisms (E. coli, yeast, fruit flies, frogs, zebrafish, mice, worms, corn, and Arabidopsis, to name a few).

The industry standard of a top model

The rise of the model organism came out of a 20th-century shift from descriptive biology to the study of underlying mechanisms. Researchers wanted simple organisms that could be readily studied and could help answer big questions. If a model was too big or complex, studies might take too long or fail to fully answer the main question. So there was a selective bias for small, simple models whose genomes could be easily manipulated.

Historically, corn and bacteria elucidated much of the central dogma (DNA → RNA → protein), and flies, worms, and mice revealed critical developmental processes. Because experiments with these model organisms were comparatively faster and cheaper than those in primates, it was no surprise that research in these systems came to dominate their respective fields. The sheer number of discoveries generated by major model organisms called for the creation of large-scale databases and the advent of strain collections. Robust methodologies and genetic tools also accrued as more researchers adopted these organisms within their fields.

The traditional definition of a model organism is no longer sufficient to set apart today’s limited roster of “top models.” The definition has shifted since the 20th century, adding one more criterion: an organism with accumulated, well-practiced resources and methodologies.

From left to right: Fruit fly, E. coli bacteria, roundworm, mouse, corn, zebrafish, Arabidopsis rockcress. Some of the longest-standing models in the industry. Credit: multiple sources (modified) via Flickr. Licensed under: CC BY-NC-SA 2.0, CC BY 2.0, CC BY-SA 2.0, or CC BY-NC 2.0.

Representation, representation, representation

Although there have been great strides in the field of genetics, we are still limited by the few model organisms we study. A great range of biological phenomena goes unexplained simply because the current top model organisms lack an analogous gene or function. Thus, there is a need to study more nontraditional model organisms to fill these gaps in knowledge.

Understandably, the task of establishing a new model system is daunting, given that newcomers must compete with well-established systems that have long histories and a wealth of resources. Even though developing a model organism requires a lot of time and money, it is still more time- and cost-effective than studying the same genes in non-human primates or humans. Fortunately, recent genetic tools, like genomic sequencing and CRISPR gene editing, make it feasible for individual labs to study the genome of a candidate model organism.

The push for more diverse models isn’t just coming from scientists. In 2018, the US National Science Foundation awarded $10 million to projects that specifically develop nontraditional model organisms. So far, there have been some promising results. In March 2019, a research group at the University of Georgia successfully used CRISPR to create the first genetically modified reptile (Anolis sagrei). Another research group at Columbia University successfully injected CRISPR components into the embryos of the Hawaiian bobtail squid (Euprymna scolopes) and the dwarf cuttlefish (Sepia bandensis), two species that uniquely reflect their neurobiological activity through the camouflage of their skin.

Brown anole (left) and genetically-modified albino anole (right). An upcoming reptilian model. Credit: Ashley Rasys.

Akin to the rise of diverse models in the fashion industry, the scientific community is making strides towards more diverse model organisms. However, these are only preliminary results from ongoing casting calls, and the search for new model organisms is still underway. Only time will tell which fresh-faced species waiting behind the curtain will transform into stellar model organisms, ready to strut the runway.



Kathy Bui is a Ph.D. student in the Department of Cell Biology at the University of Georgia. She is currently working on CRISPR-gene editing in Drosophila melanogaster and developing split fluorescent protein technology. When she is not studying or working in the lab, she is watching America’s Next Top Model or pro-wrestling; both bring her equal amounts of joy.

Plant Cells, an Unculturable Mystery


The simplest unit in biology is the cell. This central tenet has held true since Robert Hooke coined the term ‘cell’ in 1665. Cells have enabled multicellular organisms to conquer every part of the planet by allowing cell line specialization and the formation of more complex organisms. Instead of being acclimated to a single environment, an organism with diverse cell types can cope with more complex ones.

Because cells are so central to biology, biologists have spent massive amounts of time and resources making them easier to study – for some very good reasons. Cell lines – homogeneous populations of identical cells – offer massive advantages, giving researchers the tools to ask very specific questions about how cells or organs function. For instance, let’s say you’re a researcher at a pharmaceutical company working on a new drug to cure liver disease (yay you!). Before you take that drug to mouse trials, and way, way before you take it to human trials, you want to test whether it even has the desired effect on something that resembles a liver – say, a dish of cultured liver cells. Makes sense, right?

You may be left wondering, “how does one propagate cells?” Honestly, the answer is simpler than you might think. In most organisms – plants excepted (you’ll find out why in a minute) – cell culture generally works like this: take a sample of your tissue of interest, put it in a petri dish with a nutrient-rich broth that enables growth, and boom! You’re off to the races. Those liver cells you just cultured will stay just that – liver cells! The technique is older than most people think: the first cell culture was performed in 1907 by Ross Harrison, who was working on frog nerve fibers.

Enhanced image of Human HeLa cells in culture. Each blue dot is the nucleus of an individual cell. Don’t they all look happy? Credit: Panorama of HeLa cells by National Institutes of Health (NIH) via Flickr. Licensed under: CC BY-NC 2.0.

It’s worth noting that cell lines today come in thousands of varieties, with various species, cell types, and disease states available. There are also a variety of companies that will actually generate a cell line you’re interested in if it doesn’t exist (consider heart cells for your next valentines day gift). 

While everything I’ve laid out so far sounds great, there is one system that doesn’t have the advantage of cell lines, and that gap has greatly hindered one realm of science: plant biology. This isn’t due to lack of trying; rather, plants have a few odd cellular characteristics that make cell culture nigh on impossible.

Plant tissues are complicated. They are made up of a myriad of cell types that all seem to operate independently of one another; in leaf tissue alone there are around 10 different types of cells. This poses a serious issue. Take a tissue sample from a mammalian liver and you get liver cells. Take a tissue sample from a leaf and try to culture it, and you don’t get anything resembling leaf cells. You might think there’s a simple solution – “why not just isolate one or two of the cells from the leaf and propagate them?” – but that’s where another unique feature of plant biology comes into play.

An example of a plant leaf and all the various cell types it encompasses. Each color here represents a different cell type. For example, the red cells here are xylem and phloem cells which transport water and sugar throughout the plant. Click the link below to get a full list and description of the cell types labeled here (there’s a lot). Credit: “Herbaceous Dicot Stem: Dermal Tissues in” by bccoer via Flickr. Licensed under: Public Domain

The identity of a plant cell is uniquely linked to its cell wall. In plants, the rigid, cellulose-rich cell wall is the fundamental feature that differentiates their cells from animal cells. It gives plant cells their box-like shape and makes them all but immovable within plant tissues (fun fact: your cells move more than you think). What researchers have discovered is that plant cells are so intimately connected to their cell walls that the minute you remove a cell from its wall (as you would if you tore apart leaf tissue in the example above), you fundamentally change the identity of the cell. The equivalent would be if you were removed from your apartment and totally changed as a person.

This intimate link between plant cell identity and the cell wall makes culturing plant cells nigh on impossible. While this remains an issue in the field of plant biology, the phenomenon has an odd but advantageous flip side. But, more on that next time.


John Pablo Mendieta is a graduate student pursuing a PhD in bioinformatics and genomics at the University of Georgia. His interests lie at the intersection of agriculture and genetic technologies. Originally from Boulder, Colorado, he enjoys the outdoors, science fiction, programming, and hip hop. You can email him at john.mendieta@uga.com or follow him on Twitter @Pabster212.

Malaria: From Miasma to Elimination


Life on Earth is full of dynamic and complex interactions between organisms. Some of these interactions are mutualistic, where all parties benefit from the relationship. Others are commensalistic, where one organism benefits and the other isn’t really affected. Then there are the parasites, organisms that live and prey on others causing them harm. 

Parasites are everywhere, and they come in all different shapes and sizes. There are single-celled organisms, creepy-crawly worms, and nasty bugs too. Lice infest our hair, tapeworms infest our intestines, and, on occasion, brain-eating amoebae eat our brains. But of all the parasites that affect humans, the most feared and most deadly are single-celled microorganisms of the genus Plasmodium, which cause malaria.

The Plasmodium cells that cause malaria are transmitted by female mosquitoes of the genus Anopheles. These mosquitoes pick up the parasite by biting infected humans, resulting in a continuous cycle of transmission. In 2016, malaria was estimated to have infected 216 million people, killing around half a million. Understanding how Plasmodium causes disease in humans is critical for developing effective treatments for those infected.

Colorized electron micrograph showing malaria parasite (right, blue) attaching to a human red blood cell. The inset shows a detail of the attachment point at higher magnification. Image Credit: National Institute of Allergy and Infectious Diseases, National Institutes of Health via Flickr. Licensed Under: Public Domain.

Plasmodium feeds on red blood cells. Upon first infection, the parasite travels to the liver, where it begins to establish an infection. Then, Plasmodium cells infiltrate red blood cells, where they reproduce asexually until reaching a critical mass, causing the red blood cells to burst. Red blood cells are largely invisible to the immune system, so the parasite essentially hides inside them; infected cells also adhere to blood vessel walls, helping them avoid destruction in the spleen.

There are a number of different treatments for malaria, the oldest being quinine, an extract from the cinchona tree native to South America. You might be familiar with quinine as the compound responsible for the bitter taste of tonic water. In fact, the antimalarial properties of quinine were directly responsible for the creation of the Gin & Tonic. During the British occupation of India in the 1850s, British soldiers were given several daily rations of tonic water to prevent malaria infections. They often mixed the tonic water with gin, as tonic water isn’t too pleasant to drink on its own, and thus one of the most famous mixed drinks in the world was born out of the need to combat malaria. There are, of course, newer and better antimalarial drugs, but quinine is still used as a secondary treatment for malaria.

Much time has passed since the British occupation of India, and malaria is still one of the biggest parasitic threats to human populations. Come to the Athens Science Cafe on September 26th at Little King’s to hear more about Plasmodium, and how we can move towards a world without malaria, from David Peterson, faculty in UGA’s Center for Tropical and Emerging Global Diseases!


About the Author

Max Barnhart is a graduate student studying plant biology and genomics at the University of Georgia. Growing up in Buffalo, NY he is a diehard fan of the Bills and Sabres and is also an avid practitioner of martial arts, holding a 2nd degree black belt in Taekwondo. He can be contacted at maxbarnhart@uga.edu or @MaxBarnhart1749.

Preventing the Next Epidemic: Scientists Take a Closer Look at Rift Valley Fever


In 2015, Zika virus resulted in a global public health emergency. The epidemic caused severe brain defects in thousands of Brazilian newborns after the virus was transmitted to pregnant mothers via infected mosquitoes. The rapid emergence of the disease caught everyone by surprise, and with little understanding of the virus’s pathogenesis, scientists were unprepared to prevent and treat disease in affected infants.

Like Zika, infection with Rift Valley fever (RVF) virus can go unnoticed during pregnancy and cause catastrophic, often lethal, damage to the fetus. RVF was first reported in livestock by veterinary officers in Kenya’s Rift Valley in the early 1910s. The disease is most commonly observed in cattle, buffalo, sheep, goats, and camels, but the virus can also infect and cause illness in humans. Outbreaks of RVF can have major societal impacts, including significant economic losses and trade reductions. In an effort to prevent history from repeating itself, scientists are now working to develop effective RVF vaccines.

Different types of veterinary vaccines are available to prevent RVF; however, they all have drawbacks. Killed vaccines are impractical for routine field vaccination because they require multiple injections. Live vaccines require only a single injection, but because the virus is still live, they are known to cause birth defects and abortions in sheep and provide only a low level of protection in cattle. A weakened version of the virus has been developed into the live-attenuated Clone 13 vaccine, recently licensed in South Africa, with more than 19 million doses already used in the field. Clone 13 performed well in controlled animal trials; however, a major hurdle for vaccine efficacy comes down to the cold chain. A recent study demonstrated that the Clone 13 virus is stable for more than 12 months when stored at 4℃ but unstable at temperatures above 22℃. This storage issue is not unique to RVF vaccines and remains an ongoing battle when vaccinating in hot climates served by poorly developed transport networks.

Image by Kenya Red Cross via Twitter

The World Health Organization (WHO) considers RVF a potential public health emergency and calls for accelerated research and development due to the lack of approved treatments for animals or humans. Although this mosquito-borne, zoonotic disease has been reported only in Africa and the Middle East, the mosquito that transmits the virus also ranges from Europe to the Americas. From 2000 to 2018, 4,830 cases of severe RVF in humans were reported to the WHO, including 967 related deaths. Epidemiological data on RVF in human pregnancy are severely lacking, but among herds of livestock, RVF outbreaks lead to widespread miscarriage and stillbirth affecting more than 90% of pregnant animals.

Following an RVF outbreak, response teams have been working to help manage the disease. So far, 85 cases, including three patients who were readmitted, have been reported in the affected county, with six having died since the beginning of the outbreak. Image by MSF East Africa via Twitter

In a recent study, researchers from the University of Pittsburgh Center for Vaccine Research discovered how the virus targets the placenta. They showed that in pregnant rats with no signs of clinical disease, RVF virus is vertically transmitted from mother to fetus through the placenta, resulting in a high rate of stillbirths. These findings provide important information for the development of a human vaccine.

Image by Medical Xpress via Twitter

The group also exposed human tissue samples obtained from pregnant women in their second trimester to RVF virus, and then monitored viral levels every 12 hours. They found high virus levels in the placenta, including in a layer of cells called the syncytiotrophoblast. This makes up the outer layer of cells that actively invades the uterine wall and establishes an interface between maternal blood and embryonic fluid, allowing exchange of material between the mother and the embryo. A growing body of evidence suggests that the unique structure of the syncytiotrophoblast facilitates the placenta’s protective function.

But here is the real kicker: the syncytiotrophoblast is typically resistant to infection by diverse pathogens, including Zika virus, raising a major red flag that RVF virus may be an even more frightening threat. Essentially, RVF virus takes the expressway into the placenta, as opposed to the winding back roads of its Zika virus counterpart.

While having these research models in place is an important step toward combating RVF, the path to a safe and efficacious vaccine for humans is still under construction. Ultimately, preventing an RVF epidemic will require a One Health approach, assessing the interactions between the environment, animal health, and human health to inform risk mitigation and prevention measures.

Featured image: Image by Wellcome Trust via Twitter

andersonLydia Anderson is a Dual DVM-Ph.D. graduate student at the University of Georgia and currently serves as an Associate Editor for Athens Science Observer. Since completing her Ph.D. in Infectious Diseases, she has been working on her DVM at the College of Veterinary Medicine with an emphasis in public health and translational medicine. She plans to use her training to help address the questions and challenges facing One Health due to emerging and zoonotic infectious diseases. When she is not busy learning how to save all things furry and playing with test tubes, Lydia can be found either freestyle cooking for her friends and family or binge watching Netflix with her rescue pup, Luna. More from Lydia Anderson.

Lost in Translation


The year is 2019; the place, your local grocery store.  You, the unwary consumer, wander the aisles on your weekly shopping excursion.  Reaching for the milk, you hesitate; “non-GMO” is emblazoned across one milk carton.  Meanwhile another label holds no such distinction. It does not assure you, the consumer, that its contents are free of “harmful” GMOs. You are struck with indecision. What to do?

Well, what if I told you all dairy milk is non-GMO, and there are currently no genetically modified dairy cattle in use by the dairy industry? What that non-GMO milk label actually means is that the cows that produced the milk were fed a diet supplemented with only non-GMO grain. However, a literature review of numerous analyses of animal by-products has shown that DNA fragments from genetically modified feed have never been detected in eggs, milk, or meat from animals that consumed those GMO feeds.

“#6” by James Loesch. Licensed under CC BY 2.0

With the continually rising popularity of organic and clean living, a plethora of packaging publicizing products as a panacea to a puzzled populace has become a persistent problem – whew!  What I mean to say is that the hype surrounding healthy living has given rise to companies utilizing buzzwords such as ‘non-GMO’, ‘organic’, ‘vegan’ and, my personal favorite, ‘superfood’ to sell products to consumers.  This sentence makes a lot more sense than the one before it, right? Buzzwords used to market products to consumers can sometimes have a bit of the same effect; it sounds fancy and smart, but what is it actually telling you?  Here, we will explore the dichotomy between the marketing and actual meaning behind some common buzzwords.

“* Vigilant Eats : Superfood//” by Eric Kass. Licensed under CC BY-NC-ND 4.0

There is no federal oversight or regulation of the term ‘superfood’. This means that the Food and Drug Administration (FDA) does not manage how companies use the term in marketing their products. Superfoods are generally assumed to possess high levels of vitamins, minerals, or antioxidants, or to in some way benefit human health. However, many products are labeled according to the shifting tides of the latest health crazes, often without any scientific basis. Without a regulated standard for when a product may be labeled a superfood, consumers have no guarantee beyond the manufacturer’s claim that said product has any elevated health benefits.

One of the first known uses of the term superfood was in the US after World War I.  The United Fruit Company utilized the term as a marketing strategy to promote the sale of bananas, one of their major imports.  By running an enthusiastic marketing campaign centered around the espoused virtues of bananas, including informational pamphlets on the health benefits of bananas, the United Fruit Company seeded a major health craze in the early 20th century.

Much as with ‘superfood’, the United States does not employ a precise definition of what constitutes a Genetically Modified Organism (GMO). Rather, the FDA and the Environmental Protection Agency (EPA) oversee whether a product should be labeled GMO or non-GMO. With no consensus as to what constitutes a GMO, however, the definition and subsequent regulation are murky at best. Further complicating the matter, according to US regulations, all organic products must be non-GMO, but not all non-GMO products are organic. In addition, since the US definition of ‘organic’ is ‘process-based’, “the presence of detectable GMO residues alone does not necessarily constitute a violation of the [organic] regulation”.

To better understand how complex these definitions can be, let us revisit our friend the banana. Organic bananas are available in most grocery stores alongside bananas not certified as organic (conventional). However, most bananas (organic and conventional alike) currently under mass production are essentially clones. Bananas today are extraordinarily different from their wild progenitors, which were smaller, starchier, and full of large, inedible brown seeds. Through selective breeding, a banana with much sweeter flesh and small, infertile seeds was developed: the Cavendish banana. Clones, in this case small rhizomes produced naturally by a mature plant, are one of the only ways to obtain new individuals in the face of infertile seeds. Clones do not fall under the definition of a GMO, so bananas grown without the assistance of certain herbicides or pesticides can still be labeled organic.

“bananas” by liz west. Licensed under CC BY 2.0

The next time you head to your local grocery store, consider this: many terms used to espouse alleged superior health benefits or increased safety are subject to unclear and subjective definitions.  Just because a product is labeled as a “superfood,” doesn’t mean it has superpowers. Stay informed, eat healthy, and happy shopping.

About the Author

Megan Buland is a graduate student in the Warnell School of Forestry & Natural Resources at UGA, where she studies forest health and microbial community ecology. When not visiting field sites or working under the flow hood, Megan is passionate about environmental communication and education, and exploring in nature. She enjoys rock climbing and hiking and loves her dog, Madra. You can reach Megan at megan.buland@uga.edu. More from Megan Buland.

Men control the reproductive rights of plants too


When confronted with the imprecise notion of “sustainability” in agriculture, most people’s thoughts drift to ideas of ecologically-mindful land management practices. I’ll dub these concepts “the classics”: rotate your crops, use less fertilizer and pesticides, always employ cover cropping. While these ideas are not wrong, they are incomplete in that they tend to omit some of the larger social contexts of sustainability, and agriculture is a realm in which the natural and social sciences are inextricably linked. Agricultural systems are thus subject to the social structures and power dynamics of innumerable human societies, and unsurprisingly, gender comes into play. One particularly insidious way in which women the world over are marginalized is at agriculture’s foundation: plant breeding and crop development. Is an agricultural system sustainable if there are inequities in who dictates which crops are developed?

Workers at a flower farm. Image credit: World Bank Photo Collection via Flickr. Licensed under CC BY-NC-ND 2.0.

At its crux, plant breeding strives to improve the genetic makeup of a plant for human consumption through the selection of a trait of interest. It is well-documented that there are differential crop trait preferences among men and women in the developing world. These differences arise when gender dynamics result in men and women interacting with the food system in functionally different ways; one classic example is when women are responsible for food preparation, while men are responsible for selling crops at market. Ergo, women tend to care about a wider “basket” of traits, with a greater focus on post-harvest traits pertaining to food processing, food use, nutrition, and familial food security. Conversely, men tend to have a narrower focus, caring more about crop traits pertaining directly to yield, crop productivity, and market orientation.

This is typified in one participatory plant breeding program for white pea bean in Ethiopia. While both genders were found to be concerned with traits pertaining to yield and drought tolerance, only women cared about bean cooking time and suitability for culinary purposes. Women were also more likely to prioritize an early harvest, a trait pertinent to familial food security, as this is the first crop to become available after the seasonal drought. Having different preferences is not in itself a problem, but issues arise when gender power dynamics influence who gets to exert their preferences.

Women often work in the fields. Image credit: The New York Public Library Digital Collections via Schomburg Center for Research in Black Culture. Licensed under: Public Domain.

While women produce more than half of the world’s food, they’re frequently excluded from formal plant-breeding networks, agricultural organizations that have regional decision-making power, seed markets, and agricultural extension services. This all contributes to a general under-representation of female-preferred crop varieties in the developing world. While women are frequently able to act upon their trait preferences in spaces deemed “feminine”, such as the home garden or subsistence plot, their preferences are often omitted from the larger, more productive plots of land used for cash crop production. In an increasingly globalized and urban food economy, the prominence of industrial, cash crops on our plates is ever-growing, and implicit in that is the deterioration of female-preferred varieties.

In one example from rural Mali, men supplanted women and their traditional leaf and vegetable crops from stream gardens in order to plant non-traditional crops for market. One male farmer explained that “men in the community became more aware of the potential value of the low-lying stream areas and eventually displaced women in the cultivation of these areas.” He said that they began to clear the areas and then proceeded to fence and claim them as their holdings. After all, he said, “There was money to be made!” Along with this shift in garden ownership came a reduction in the nutritional value of the community’s meals. It is particularly alarming that the gender socialized to care about familial nutrition and food preparation is the one often excluded from crop variety development, as it is widely accepted that women are critical to global food security.

Farmer with a buffalo near Yangshou. Image credit: Andy Siitonen via Flickr. Licensed under CC BY 2.0.

The solution to this issue is simple in principle: consciously include women in plant breeding so that both genders’ preferences are represented. Breeding programs that do so have led to crop varieties that are more widely accepted and quickly adopted, greatly improving the efficiency of breeding efforts and ultimately increasing food security. In reality, this involves eliminating the global gender gap, a significant undertaking that organizations such as CGIAR’s Gender & Breeding Initiative are actively attempting to tackle. A classic sustainable agriculture recommendation is to plant a diversity of crops to increase the resilience of your farm. An ideological complement to that is a push for a diversity of voices in the selection of those crops, to ensure the resilience of our global food system.

About the Author

Tara Conway is an M.S. student in Crop and Soil Sciences, where she is working towards the development of a perennial grain sorghum. She is originally from Chicago, IL. Her work experience spans from capuchin monkeys to soap formulating. You can reach her at tmc66335@uga.edu, where she would like to know which bulldog statue in town is your favorite. Hers is the Georgia Power one due to its peculiar boots.

The Secret World of Plant Chemistry: Plant Communication


Part II of the series exploring plant chemistry through different lenses.

Plants are the perfect embodiment of natural selection – they can’t just get up and move, so whatever adversity they face, they generally have to stick it out. This leaves the strongest individuals to survive while the weaker ones perish, a situation that warrants some extreme (and creative!) adaptations. For example, Venus flytraps evolved their famously carnivorous lifestyle because their ancestors were bound to nutrient-deficient soil and eventually formed a mouthlike structure to catch their nutrients. The cactus’s cylindrical shape was molded by harsh desert conditions – it exposes the least amount of surface area to the sun, reducing the heat and water stress the plant experiences. But there’s an invisible adaptation that plants have developed over their evolutionary journey: communication. Perhaps not communication in the way we’re familiar with, but plants have an intricate system for relaying critical messages – and those messages are right under our noses.

Illustration by Vincent Warger. Used with permission.

Inaudible alarm systems

Think of the distinct smell of freshly cut grass. That smell comes from tiny molecules called volatile organic compounds (VOCs), which are released into the air when leaf tissue breaks. These VOCs act as signals that travel to neighboring plants, relaying a range of messages. As a chemical ecologist once explained to me, “Freshly cut grass is the smell of plant screams.” And these screams aren’t into the void – they actually elicit responses.

The “screams” that neighboring plants “hear” act as a chemical alarm. Some plants emit signals that warn their neighbors of an impending attack, allowing the receiving plant to ramp up its defense mechanisms for a better chance at survival. Plants can even call on insects to do the fighting for them. In an example of well-tuned coevolution, some plants can recognize the saliva of their insect attacker; that recognition produces a specific VOC response, which summons the attacker’s predator. This interaction is commonly seen with parasitic wasps and caterpillars – the caterpillar’s chewing triggers a VOC from its leafy lunch, attracting the deadly wasps and turning the caterpillar into a meal itself.

On a less morbid note

Apart from warning signals and calls for help, VOCs are responsible for the delightful smell of flowers. Of course flowers didn’t evolve just to please our olfactory senses (or did they?), but a flower’s scent is an amalgamation of VOCs that act as a chemical billboard for pollinators. Pollinators can discern complex mixes of VOCs from specific plants and track them down over long distances. This is especially useful for plants that rely on a specific pollinator to reproduce. For example, a species of Magnolia tree has been found to release a very specific compound that only seems to attract the beetle that pollinates it. Since these chemical signals are often specific to a given pollinator species, it could explain why plants pollinated by bees and butterflies smell different to us compared to plants pollinated by bats and moths.

Southern magnolias release chemical signals to attract a specific pollinator beetle. Image credit: Rob Bertholf via Flickr. Licensed under: CC BY 2.0.

The complex world of plant chemical ecology is just beginning to be unraveled, as scientists look not only at how plants communicate with each other but also at how we can use this evolutionary adaptation to our advantage. These VOCs are so effective that their use in agricultural settings is starting to be explored – possibly leading to a more sustainable way to protect crops from natural enemies. So remember: when you smell freshly cut grass or the sweet wisteria that is just starting to bloom, you’re smelling the finely tuned product of evolution and getting a quick whiff of the secret world of plant chemistry!



Big Science, Small Satellites


Is it a star? A moon? A comet, even? No, it’s a satellite! NASA broadly defines a satellite as a moon, planet, or machine that orbits a planet or star. “Natural” satellites include the Earth, which revolves around the Sun, and the Moon, which revolves around the Earth. On the other hand, there are almost 5,000 “man-made” satellites currently in Earth’s orbit. These satellites are mainly used for communication, navigation, and observation, supporting weather prediction, GPS, rescue operations, and phone calls, and they even provide a home in space in the form of the International Space Station. Although we typically imagine satellites as enormous structures built by highly experienced engineers and scientists, there are also smaller satellites in space that have been launched by everyday citizens and curious students.

PhoneSat in space Image Credit: NASA Ames Research Center via wikimedia commons. Licensed under: Creative Commons CC0 License.

The CubeSat was developed in 2000 by professors Bob Twiggs (Stanford University) and Jordi Puig-Suari (California Polytechnic State University), who wanted to make space research and satellite development more accessible to students. They adapted the model of successfully launched picosatellites (weighing about 1 kg, or ~2.2 lb) to develop a standardized 10 cm (~3.94 in) cube, known as 1U, that weighs up to 1.33 kg (~3 lb).
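To get a feel for how small that standard is, here is a quick arithmetic sketch of the 1U envelope described above (a 10 cm cube with a mass limit of 1.33 kg). The constants come from the figures in the text; the density calculation is just an illustration.

```python
# Back-of-the-envelope numbers for the 1U CubeSat standard described above.
SIDE_CM = 10.0       # edge length of a 1U cube, in centimeters
MAX_MASS_KG = 1.33   # mass limit for a 1U CubeSat, in kilograms

volume_cm3 = SIDE_CM ** 3                        # 10 x 10 x 10 = 1000 cm^3, i.e. one liter
max_density = MAX_MASS_KG / (volume_cm3 / 1000)  # kg per liter of enclosed volume

print(f"1U volume: {volume_cm3:.0f} cm^3 (one liter)")
print(f"maximum average density: {max_density:.2f} kg/L")
```

In other words, a fully loaded 1U CubeSat is a one-liter box only slightly denser, on average, than a liter of water.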

A typical CubeSat is powered by solar panels that surround a frame protecting the main processing units and payload (as shown in Figure 2). The payload is the variable component of a CubeSat, differing based on the satellite’s main purpose, whether that is tracking temperatures, measuring radiation levels, or imaging the Earth’s oceans. Although initially met with criticism from the space community, CubeSats proved their power and potential with the first launch in 2003. One of those satellites, QuakeSat, was used to study earthquake activity; it stayed in orbit for 1.5 years and collected signature data on eight earthquakes around the world.

ArduSat (Arduino based CubeSat) Structure Image Credit: Peter Platzer via wikimedia commons. Licensed under: Creative Commons BY-SA 3.0.

Since their inception, CubeSats have gained increasing global popularity. In fact, the National Science Foundation’s Division of Atmospheric and Geospace Sciences set up a program in 2008 that financially supported CubeSat research. That program, along with NASA’s CubeSat Launch Initiative, motivated the rise of CubeSat development both within and outside of academia, thanks to the ease and affordability of building these devices. There is even a smartphone-based CubeSat known as the PhoneSat, a NASA-funded project that aims to build nanosatellites from readily available components. As of January 2019, about 1,030 CubeSats have been launched into space, with numbers increasing each year. In the next two years, we will even see the launch of the University of Georgia’s (UGA) very own CubeSats.

Founded in 2016 by three students with the goal of educating students and providing them with resources on the design and engineering of satellites, the Small Satellite Research Laboratory (SSRL) at UGA develops CubeSats. The lab has two ongoing projects, funded by NASA and the Air Force respectively, to build CubeSats that act as an ocean color sensor and that image and motion-detect coastal regions. The satellites are set to launch in 2019 and 2020. If you are interested in learning more about CubeSats or the SSRL, please attend the Science Café on April 23rd at Little Kings, where there will be speakers from UGA’s SSRL.


About the author:

chaitanya Chaitanya Tondepu is a Ph.D. Candidate in the Integrated Life Sciences program at the University of Georgia. Other than science, her favorite pastimes are dancing, hanging out with friends and family, exploring, crafting, and eating delicious food. You can email her at chaitanya.tondepu@uga.edu. More from Chaitanya Tondepu

Science Warning! Annihilation


Science Warning! is a series about the science behind some of our favorite sci-fi stories. Today we take a look at Annihilation, starring Natalie Portman.

As a biologist, I find watching Annihilation a thrilling experience. The movie so expertly blends science-fiction and horror into a narrative where the rules of life are twisted to create a world that feels truly unique. Natalie Portman stars as Lena, a biologist with a rough military past out to avenge her husband by leading a group of ultra badass women scientists on a suicide mission into the Shimmer, an alien veil emanating from a lighthouse that changes the DNA of whatever steps inside. Annihilation is about our biology, at least vaguely, and although the scientific aspects of this movie are a bit of a stretch, some of the concepts discussed are great stepping stones from which we can learn about some real biology.

St. Marks Lighthouse in Florida, the inspiration for Annihilation. Image Credit: Reweaver33 via Wikimedia Commons Licensed under: CC BY-SA 4.0.

The Biological Species Concept

Early in our journey through the Shimmer, Lena and her team are attacked by a vicious alligator-like creature with teeth like a shark’s. One member of the team hypothesizes that the creature may be some sort of crossbreed. Lena quickly shuts down this argument by claiming, “No, different species can’t crossbreed.”

This isn’t entirely accurate. In fact, different species crossbreed all the time, and some pretty amazing hybrids are relatively common in agriculture and in the wild. Ligers are crosses between male lions and female tigers and are, surprisingly, the largest felines in the world! Mules are hybrids of a male donkey and a female horse, and they make great work animals because they are stronger than a horse of comparable size while having the donkey’s tame disposition. Different plant species readily hybridize, too! Sweet corn, tangelos, pluots, and plumcots are just a few of the hybrid foods we can find at the grocery store. The world of plants is so full of hybrid species that it would be impossible to list them all. Heck, even ancient humans and Neanderthals hybridized and produced viable offspring, and the evidence is present in our DNA!

A liger held in captivity at Novosibirsk Zoo. Image Credit: Restle via Wikimedia Commons Licensed under: Public Domain.

But what defines a species? This is actually a really controversial question in biology. There are many competing definitions of what makes a species, but the predominant method of defining a species comes from the biological species concept. The biological species concept defines a species solely as a population of interbreeding individuals that are reproductively isolated from other groups of organisms, meaning that there is some barrier that prevents breeding between different populations. Under this species definition, organisms that look almost nothing alike but readily interbreed with each other are considered to be the same species.

Hold up, though: we were just discussing tons of different species that can crossbreed. Are they not really species, then? Under the biological species concept, no. However, not all species have been classified according to the biological species concept. In my opinion, Charles Darwin had the best take when he said, “I look at the term species as one arbitrarily given for the sake of convenience to a set of individuals closely resembling each other.”

Essentially, a species is whatever somebody decides a species is, and because taxonomists had been classifying species for hundreds of years before the biological species concept was developed, we now have tons of named species that would not qualify under it. Imagine: if a shark and an alligator really could interbreed to create the monster in Annihilation, would you consider sharks and alligators to be the same species? This is a wildly unrealistic example, but it does capture some of the debate surrounding the biological species concept.

Now the next time you watch Annihilation with your friends, you can pause the movie, correct Lena, and annoy everybody else with educated rambling about the biological species concept and interbreeding. Just make sure to suspend your disbelief for the rest of the movie. Discussing the science behind science fiction is fun, but just because a movie might not be spot on scientifically, that doesn’t mean it should ruin our enjoyment of the film. So until next time, happy viewing!

Vinyl Pressing: A Lost (and Found) Art


From providing a soundtrack for our road trips to filling an awkwardly silent elevator ride, music finds its way into every niche of our lives. It is a luxury that many of us not only enjoy but hold a deep emotional connection to. Today, a wide selection of media for listening to our favorite songs is available – our phones, the radio, a cassette tape, a CD – but those media were built on the foundation of the record player.

A Brief History

The record player, originally called the phonograph, was the first device that could record audio and play it back. It was invented by none other than Thomas Edison, famous for the light bulb among other inventions. Edison’s prototype was born in 1877 out of tinfoil, a cylinder, two needles (one for recording and one for playback), and a hand crank. To test his newly minted contraption, he recited “Mary had a little lamb” into the mouthpiece. When the cylinder was rotated back, his voice came out just as it went in. You can actually hear the recording here. The phonograph garnered the attention of the world as the first contraption to record and play back sound – a convenience we often take for granted. From that foundation, Edison’s prototype evolved into the single-needle modern turntable we use today.

How it works

Though Edison was deemed a wizard after his invention, the mechanics of a record player are surprisingly straightforward. When Edison spoke into the mouthpiece to recite that nursery rhyme, the recording needle took those vibrations and physically etched them into the tinfoil wrapped around the cylinder. To play it back, the second needle followed the etched grooves, relaying the vibrations to a diaphragm, which amplified the sound through a flaring horn. The modern record player follows the same concept: the needle traces the tiny, unique grooves pressed into the vinyl record and relays those vibrations to a coil, where they are converted into an electrical signal that we can listen to through speakers.

Thomas Alva Edison with his 1877 invention, the phonograph. Image credit: ciriana_85, via Flickr. Licensed under: CC BY-NC-SA 2.0.

Closer to home

The way records are produced today is not far from Edison’s original design, either. In brief, recorded music is translated from a digital signal into vibrations, and a needle etches those vibrations into a lacquer disc as it spins. An impression is then made from the lacquer, which is used to mass-produce copies in vinyl with a hydraulic press.

This process is still ubiquitous today, even more so with the vinyl resurgence in the past decade – it’s also more local than you think. Right here in Athens, Kindercore presses vinyl for artists. Starting as a record label in the 1990s, Kindercore sought to embrace and expand the local music scene. Signing notable artists such as of Montreal and Dressy Bessy, the Kindercore record label became known around the world. Today, they have narrowed their focus to providing high quality vinyl pressings. To learn more about Kindercore and the science (and art!) of how we can enjoy music in its most grounded form, check out their Science Café Event on March 28th at Little Kings Shuffle Club.


The Sugar Code: Representing Glycans


Lucky Charms. Photo by Sarah Mahala Photography (CC BY 2.0)

Hearts, stars, horseshoes, clovers and blue moons, pots of golden rainbows and me red balloons! If you’ve ever eaten Lucky Charms cereal, you probably know this jingle and the tiny shapes of marshmallows it references. Interestingly enough, glycobiologists, or biologists who study the sugars that make up those tasty mallows, have their own Lucky Charm code for the carbohydrates they study.

Carbohydrates are diverse and come in many different forms – each with a unique chemical makeup and properties. The sugar code, with its current twelve shapes and nine colors, evolved as a way for glycobiologists to represent the complex chemical structure of sugar chains in presentations and figures. But to someone new to the field, this crazy collection of colored shapes may seem strange and unfamiliar. I remember being thrown for a loop by the colorful models in glycobiology papers when I started out as a lab technician at the Complex Carbohydrate Research Center. Once I realized there was meaning and intention behind the selection of those colors and shapes, the symbols seemed logical. So for this blog, my goal is to decipher the sugar code for you.


Each symbol in the sugar code represents a different monosaccharide, or single sugar unit. The color of the symbol represents the sugar’s basic structure: the same color is used for different monosaccharides that share the same stereochemistry, or spatial arrangement of atoms. For example, every yellow symbol has an arrangement of atoms like galactose. Here is a list of some colors and the basic sugar stereochemistry they are associated with:

Green – Mannose stereochemistry

Blue – Glucose stereochemistry

Yellow – Galactose stereochemistry

Red- Fucose stereochemistry

So if you see a blue square you know that sugar has an arrangement of atoms similar to glucose because of its color, but what about its shape?


Every symbol in the sugar code also has an associated shape, which tells you something about the composition of the sugar’s functional groups, or collections of atoms attached to the sugar’s carbon skeleton. The most basic shape is a circle, which represents a hexose sugar. The other shapes indicate some kind of modification to this basic hexose structure. For example, a square indicates an N-acetyl group is attached to one of the carbons. Here is a list of some shapes in the sugar code and the functional groups they are associated with:

Circle – Hexose sugar

Square – Hexosamine with N-acetyl group

Diamond – Hexuronate with an Acidic group

Triangle – Deoxyhexose sugar

There are various exceptions to these general rules for color and shapes, but for the most part knowing these standards will help you apply some meaning to the symbols drawn in glycobiology figures.

Putting It All Together

Both the shape and color of a unit in the sugar code impart meaning about the chemical composition of that sugar. With this new lens, let’s take a look at a common sugar we come across in daily life: lactose.

Sitting in your fridge right now is likely some milk. This milk, if it comes from cows, contains lactose. Lactose is a disaccharide of two sugar units, a galactose and a glucose. So how would we draw this sugar using the sugar code?

Galactose and glucose are both hexoses, so their shape will be a circle. Galactose will be yellow and glucose will be blue. A glycobiologist would represent this disaccharide as a yellow circle linked to a blue circle. Ta-da! You’re basically a glycobiologist in training, and you didn’t even know it.
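The color and shape rules above are really just two lookup tables, one for stereochemistry and one for functional groups. Here is a minimal sketch of that idea in Python; the dictionaries cover only the colors and shapes listed in this post, and the function name `describe_symbol` is my own invention, not part of any official glycan software.

```python
# A toy lookup for the sugar code described above: color encodes
# stereochemistry, shape encodes the functional-group modification.
# This covers only the handful of colors/shapes listed in the post.

COLOR_MEANINGS = {
    "green": "mannose stereochemistry",
    "blue": "glucose stereochemistry",
    "yellow": "galactose stereochemistry",
    "red": "fucose stereochemistry",
}

SHAPE_MEANINGS = {
    "circle": "hexose",
    "square": "hexosamine (N-acetyl group)",
    "diamond": "hexuronate (acidic group)",
    "triangle": "deoxyhexose",
}

def describe_symbol(color: str, shape: str) -> str:
    """Combine the shape (functional group) and color (stereochemistry)."""
    return f"{SHAPE_MEANINGS[shape]} with {COLOR_MEANINGS[color]}"

# Lactose: a yellow circle (galactose) linked to a blue circle (glucose).
lactose = [("yellow", "circle"), ("blue", "circle")]
for color, shape in lactose:
    print(describe_symbol(color, shape))
```

Reading lactose out of the code this way gives “hexose with galactose stereochemistry” linked to “hexose with glucose stereochemistry,” which is exactly what the yellow-circle-plus-blue-circle drawing means.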

Screen Shot 2019-03-12 at 4.45.44 PM
A comparison of the complex chemical structure (top) and the symbolic sugar code (bottom) of lactose. Image created by the author and colored according to the Symbol Nomenclature for Glycans.

Why Have a Code?

The sugar code enables glycobiologists to communicate more effectively and efficiently with one another and with the public. Part of being a good scientist, or in this case a good glycobiologist, is being an effective communicator of our research, and the symbols in the sugar code allow us to do just that. However, it’s important for us glycobiologists to remember that not everyone we talk to about our science is fluent in the sugar code.

Hopefully, you’re feeling less confused about the sugar code after reading this blog. Just think of the sugar symbols as emojis for glycobiologists! Similar to how it took my grandma some time to jump on the emoji bandwagon, it may take some time to use the sugar code effectively. But like everything, it just takes practice! Now if only Apple would include sugar code emojis in their next software release… a glycobiologist can dream, can’t she?!

About the Author:

Stephanie HalmoStephanie M. Halmo is a former middle school science teacher turned graduate student, actively pursuing her Ph.D. in biochemistry from the University of Georgia. Stephanie currently serves as an Assistant Editor for Athens Science Observer. In her spare time she likes to dance, volunteer at local schools and tie-dye anything she can get her hands on. You can connect with Stephanie on Twitter and Instagram @shalmo or by email: shalmo27@uga. More from Stephanie M. Halmo.


The Cold Truth About Cryopreservation


Large “Cryostats” filled with liquid nitrogen and cold hopefuls. Image credit: Hawaiian Sea via flickr. Licensed under CC BY-NC-ND 2.0

Recently, I was in the lab doing some routine work with cells. To start growing my own stock of cells, I took a small vial out of a tank of liquid nitrogen, where it had been stored at around -150°C (-238°F). Then I quickly thawed it to body temperature (37°C, or 98.6°F) and transferred it to a new dish, where the cells began to grow. At some point during this process, I realized I had no idea why this actually works. Is that scene in Return of the Jedi, where Han Solo gets thawed and is (mostly) fine, realistic? If I hop into a vat of liquid nitrogen, will you be able to pull me out in a hundred years? Armed with years of scientific training, I set off to find answers through careful research (i.e., Googling stuff I don’t understand).

Han Solo frozen in carbonite. Image credit: FJ Fonseca via flickr. Licensed under Creative Commons License (CC BY-NC-ND 2.0)

A Brief History of Cryopreservation

The storage of biological material at ultra-cold temperatures, known as cryopreservation, is very real and routine in research. Scientists have been able to revive cryopreserved cells since the late 1940s, when the Parkes group first revived rooster sperm that had been frozen at -80°C. The technique has been crucial for maintaining important research cell lines, such as the famed HeLa cells and many engineered cell types. Essentially, many of the chemical reactions that age and degrade cells can be slowed to a near halt at low enough temperatures, “freezing” the cells in time. The key to being able to revive the frozen cells lies in the addition of cryoprotectants.

Cryoprotectants are usually small molecules like glycerol or dimethyl sulfoxide (DMSO) that are able to diffuse into the cell and prevent the formation of ice crystals, which can destroy cells as they freeze. The expansion of ice rips apart the cells while also increasing salt concentrations to dangerous levels in the surrounding liquid.

Now, the big question is: can we freeze and revive an entire person? Believe it or not, research efforts for this are already underway. The cryopreservation of entire humans is called cryonics, and the first human to be cryonically frozen was James Bedford in 1967, whose body remains frozen to this day. James Bedford was a psychologist that suffered from an advanced form of kidney cancer. He opted to have his body cryopreserved upon his death in the hope that one day the technology would exist to revive him. Since then, many more people have paid large sums of money to be cryonically frozen upon their death by private companies, including baseball great Ted Williams. Many of these people suffered from advanced cancer or other incurable diseases and looked to cryopreservation as a final resort. The idea is that if they can stay frozen long enough, technologies will emerge that will allow them to be successfully treated when they are revived in the distant future.

Criticism of Cryonics: Cold Corpses Under Fire

To say that this approach is controversial would be greatly underselling it. One particularly scorching opinion from a neuroscientist states that “those who profit from this hope [of being revived] deserve our anger and contempt.” Since there is no evidence that any brain activity is preserved after cryonic freezing, many believe that companies selling the idea are simply preying on and profiting from the desperation of people trying to avoid death. One company, Alcor, is funded using its clients’ life insurance, which deprives families of much-needed funds after the death of a relative. Funerals are not cheap! Regardless of whether the technology to revive cryonically frozen humans will ever exist, it’s likely that most attempts at cryonics were botched: many of the frozen bodies could be irreparably damaged by ice crystal formation or by long intervals between death and freezing.

Liquid nitrogen tank. Image credit: Howard Stanbury via flickr. Licensed under Creative Commons License (CC BY-NC-SA 2.0)

Will cryopreservation of live humans ever be possible? It depends on who you ask, but a few say the chances are “low, but not impossible.” There have been cases of relatively simple organisms being revived after long-term freezing, but nothing as complex as a mammal. In my opinion, upcoming generations will have enough problems without having to worry about keeping Great-Great-Grandpa on ice. You’re better off spending the time, money, and resources helping someone who’s still warm-blooded.

About the Author

Trevor3 Trevor Adams is a Ph.D. Student in the Integrated Life Sciences program at the University of Georgia. He is interested in how the molecular bits of life shape our world. His hobbies include hiking, reading, and hanging out with his cat Bustelo. More from Trevor Adams.


Cystic Fibrosis and Your Genes

Image credit: Caroline Davis2010 via Flickr.

Disease alters lives in permanent and often heartbreaking ways. Most people have a story about how they have been affected by disease, either firsthand, through a family member, or looking from the outside in on another person’s life. In a world where tragedy is at the forefront of our personal lives via news stories, gofundme pages, and the like, it is almost impossible not to be touched by disease. In the harsh reality where many diseases end in an individual’s death, why does disease itself not die off too?

While there are various causes of disease, one important contributing factor may be your genes. A gene is made up of DNA and is a basic unit of heredity, transferred from parent to offspring to determine some characteristic of the offspring. Genetic diseases can therefore be passed down from parent to child, because each parent gives a copy of his or her genes to the child. If the copies from both parents are identical for a given gene, the child is considered homozygous for that gene; if the two copies are different, the child is considered heterozygous. This concept is better known as genetic inheritance.

Screen Shot 2019-02-04 at 10.17.58 AM
Image Credit: Science in the Classroom via Twitter

Cystic fibrosis (CF) is an ideal model for studying genetic inheritance because it is associated with a single gene, making it a relatively straightforward example. CF damages the lungs and digestive system via mucus secretions that obstruct these organ systems, leading to inflammation and tissue damage throughout the body. The mucus that physically damages the airway also predisposes patients to secondary bacterial infections, which can result in respiratory failure.

So what do our genes have to do with developing CF?

CF is considered a recessive genetic disease, meaning a person must receive one mutated copy of the disease-associated gene from each parent, for two copies total, in order to develop the disease. Such a person is homozygous for the mutation. An individual who receives only one mutated copy is heterozygous. Heterozygous individuals are also called carriers, because they carry one copy of the bad gene even though they show no symptoms.
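The inheritance pattern just described is easy to work out with a Punnett square. Here is a small sketch in Python that enumerates the four equally likely allele combinations for two carrier parents ("Aa", where "a" stands for the mutated copy); it is a toy illustration of recessive inheritance, not a clinical risk calculator.

```python
# Punnett-square arithmetic for a recessive disease like CF.
# "A" = working copy of the gene, "a" = mutated copy.
from itertools import product

def offspring_genotypes(parent1: str, parent2: str) -> list[str]:
    """Enumerate the four equally likely allele pairings for a child."""
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

# Two carrier (heterozygous) parents:
genotypes = offspring_genotypes("Aa", "Aa")
affected = genotypes.count("aa") / len(genotypes)  # homozygous recessive
carriers = genotypes.count("Aa") / len(genotypes)  # heterozygous

print(genotypes)  # ['AA', 'Aa', 'Aa', 'aa']
print(f"affected: {affected:.0%}, carriers: {carriers:.0%}")
```

For two carriers, each child has a 25% chance of being affected (aa), a 50% chance of being an asymptomatic carrier (Aa), and a 25% chance of inheriting two working copies (AA).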

There are an estimated 20,000 genes in the human genome, and when just one of those genes has a mistake in it, or a mutation, CF can result. The various mutations that cause CF alter the gene’s DNA sequence in ways that change the gene’s function. And when one gene’s function changes, every cellular process that depends on that gene can be altered.


DNA Model. Image credit: Caroline Davis2010 via Flickr.

These changes in the DNA sequence can be either inherited or acquired. Whether a mutation persists in the population is then determined by the mechanisms of evolution. Natural selection is a process of evolution in which individuals better adapted to their environment have higher reproductive success. Because they possess advantageous traits, these individuals have a higher rate of survival, and their offspring inherit that advantage, survive at similarly high rates, and pass the traits on in turn. CF produces traits that are not advantageous for survival, so natural selection acts against it.

So if CF is selected against, why does it persist in the population? Because carriers of a single mutated copy never develop CF, they experience no negative effects from carrying the bad gene, and so they help this genetic disorder “survive.” They can pass their single copy of the bad gene on to any children they have. This pattern continues silently until two carriers have a child who inherits a mutated copy from each of them.

While carriers are asymptomatic, there are still ways to determine your carrier status. Genetic testing is an accessible screening process that sequences the specific genes in question in a patient’s DNA to check for a mutant copy, and these tests are sometimes covered by insurance. Genetic testing is a powerful preventative tool that allows individuals to make informed decisions when planning for their family’s future. A person with a family history of CF should strongly consider a genetic screening test for CF when planning to have children.

About the Author:


Guest writer Callan Russell is a third-year student at the University of Georgia pursuing her Bachelor’s degree in genetics and a minor in music. Callan studies the molecular basis for epigenetic inheritance within the Schmitz Laboratory at UGA, but in her spare time likes to play trombone, volunteer with Extra Special People, serve at Athens Church, and play in the Redcoat Band at UGA football games. She plans on attending graduate school to study genetic counseling upon completion of her Bachelor’s degree. You can email her at callan.russell@uga.edu.


Building Strength from the “Floor” Up


Better posture. Better sex. Better poop?

If these happen to be part of your New Year’s resolutions (and if they aren’t, they should be), did you realize that working on your pelvic floor can help improve all three? If your answer is no, or if you’re wondering “what the heck is my pelvic floor?”, then keep reading! My good friend Dr. Nidhi Patel, PT, DPT, is an Athens native and UGA alumna. She now works for the University of Georgia Health Center and is very passionate about pelvic floor physical therapy. She has talked my ear off about the importance of maintaining a strong pelvic floor, so I’ve asked her to share some wisdom on the topic.

A healthy foundation starts with a strong floor

Hammock. Photo credit: Michelle Dookwah.

So what’s the importance of maintaining a strong pelvic floor? The pelvic floor is the layer of muscles that span the bottom of the pelvis, supporting the pelvic organs – in women, those would be the bladder, bowel, and uterus, and in men, just the bladder and the bowel. “Think of your pelvic floor like a hammock,” says Dr. Patel, “where one end is connected to the pubic bone and the other at your tailbone – a taut hammock equals nice tight muscles, a weak hammock means loose muscles.” In addition to literally holding up your pelvic organs, these muscles are required for other functions as well. Dr. Patel explains, “Just remember the 3 S’s of pelvic floor function – Support, Sexual function, and Sphincteric control.” If I had a dollar for every time Nidhi has talked to me about pelvic floor health and constipation, I’d be a millionaire with the most regular bowel movements! From its name alone, it may sound like strengthening your pelvic floor will only affect things in the pelvic region, but that’s far from the truth. Everything in the body is interconnected somehow, and the same goes for the pelvic floor.

The pelvic floor – way more than just Kegels

Most people have heard of pelvic floor muscles in relation to Kegel exercises. Kegels are often touted as easy exercises to tighten the pelvic floor muscles in women, and, in turn, provide better control of the vaginal wall muscles. They’re so discreet and simple that we’re often told to just do them anytime and anywhere – in the car at a red light or while doing the dishes. I could even be doing them right now, as I write this very sentence! But there’s so much more to strengthening the pelvic floor than just doing Kegels – and in some cases, Kegels may do more harm than good. That’s why it’s important to see a professional who can help provide you with information on what exercises best suit the needs of your pelvic floor.

Deep central stability system diagram. Photo Credit: Ann Wendal Used with Permission.

The pelvic floor is one of the four “deep core” muscles. These include the diaphragm (under your lungs), the pelvic floor (in the pelvis), the transverse abdominis (wrapping around your abdomen), and the multifidi (deep back muscles along your spine). They all work together to give you what’s called optimal core stabilization. Correct alignment of your core (think “ribs over pelvis”) is an important aspect of proper posture.

The diaphragm and pelvic floor should work in sync. “You can picture it like an umbrella, with things working optimally in all directions,” says Dr. Patel. “Inhaling expands the diaphragm, like opening the umbrella, which then pushes down on the pelvic floor.” When these muscles can no longer work in conjunction, such as after surgery, postpartum, or after a trauma like sexual abuse, they may need to relearn how to communicate and work in sync with each other.

Core Activation GIF. Photo Credit: Jenny Burrell. Used with permission.

Muscles not communicating? Maybe it’s time to talk to your PT

Signs of pelvic floor dysfunction include leaking pee when you laugh or cough (common among postpartum women), lower back pain, urinary urgency or frequency, incomplete bladder voiding, pain with sex, constipation, or pain in your tailbone when you sit! These symptoms occur in both women AND men, but they aren’t something you simply have to live with.

Do these symptoms sound familiar to you? If so, it’s time to visit a pelvic floor specialist. Pelvic floor physical therapy can teach your muscles to talk to each other again and help you regain proper function of the pelvic floor.

Want to learn more about your floor and core? Come check out this week’s Athens Science Café with Dr. Nidhi Patel and Dr. Teresa Morneault PT, DPT, WCS from the University of Georgia Health Center at Little Kings Shuffle Club this Thursday, January 24th at 7pm.


About the Author

Michelle Dookwah currently serves as an Assistant Editor for Athens Science Observer and is a graduate student at the University of Georgia Complex Carbohydrate Research Center, where she studies rare neurological disorders using patient stem cells. She’s pretty passionate about science and science communication. However, she also enjoys numerous activities in her free time, including reading, listening to podcasts and audiobooks, hiking, baking, and obsessing over her labradoodle named Goose! More from Michelle Dookwah.

A CURE for the Growing Demand of STEM Undergraduate Research Opportunities

ACC student Jacob Savell, right, from the GEOL 1445 – Introduction to Oceanography class works inside a lab in the Petroleum and Geosystems Engineering Department at UT Austin on Tuesday, June 27, 2017. The students are part of the Summer Undergraduate Research Experience Course (SUREC).

Many scientists agree that their love for scientific research began with their undergraduate research experiences. To fulfill the need for 1 million more STEM majors by 2020, university STEM programs are faced with the task of providing the multitude of students entering their programs with unique undergraduate research experiences. The demand for these transformative research experiences keeps growing, but how can we increase the supply?

What is the CURE? Ramping Up from 1:1 to 1:Many

CUREs, or course-based undergraduate research experiences, directly address the limited supply of research experiences available to STEM undergraduate students by simultaneously increasing the number of students involved in research while reducing the burden on faculty to mentor students one-on-one.

Summer Undergraduate Research Experience Course (SUREC). Image credit: Austin Community College via Flickr. Licensed under: CC BY 2.0.

So what makes a CURE different from the typical apprenticeship model, where an expert researcher mentors a novice one-on-one? The goals of a CURE are similar to those of the apprenticeship model. Specifically, they provide students the opportunity to:

  • contribute to original research relevant to the broader public
  • formulate hypotheses
  • investigate research questions with unknown results (no cookbook labs here!), and
  • communicate the results of this iterative process of research to the broader public through #scicomm

However, CUREs are distinct from the apprenticeship model in that students work together collaboratively and iteratively alongside a faculty mentor during a designated class time. This means the CURE is experienced by multiple students, even an entire course full of students, at the same time. With this structure, one instructor can mentor many students at once, and the time invested by students is in class rather than outside of class. Additionally, by being on a university’s course list, CUREs are offered to a broader range of students rather than to just those who self-select to enter the apprenticeship model.

What makes CUREs effective?

CUREs have been successfully implemented in a variety of contexts, and their effectiveness has been demonstrated repeatedly. CUREs increase graduation rates and the completion of STEM degrees. There is also evidence that CUREs result in strong motivational and learning gains for students who experience them.

So what aspects of these experiences make them so meaningful and effective? The answer still isn’t entirely clear, but researchers have some hypotheses about why CUREs are so impactful. First, because CUREs occur during dedicated class time, they may reduce the stress of balancing research with a full course load; moving research experiences into required coursework may also lower some barriers to entry, making research more inclusive. Second, CUREs can give students more opportunity to develop a sense of ownership over their projects, a possible contributor to persistence in STEM. Third, since CUREs can be offered as introductory-level courses – unlike research internships, which often occur later in an undergraduate’s career – they may influence students’ career paths earlier on. While the outcomes of certain CUREs are well-studied, more research is needed to tease apart the specific aspects of these experiences that make them so impactful.

Want more?

Has this cure for the growing demand of STEM undergraduate research opportunities piqued your interest? If so, be sure to attend the next Athens Science Café at Little Kings Shuffle Club on Thursday, December 13th at 7pm. UGA’s Dr. Erin Dolan will be there to discuss this novel approach to providing mentorship and research experiences for all undergraduate students.


About the author:

Stephanie M. Halmo is a former middle school science teacher turned graduate student, actively pursuing her Ph.D. in biochemistry from the University of Georgia. Stephanie currently serves as an Assistant Editor for Athens Science Observer. In her spare time she likes to dance, volunteer at local schools and tie-dye anything she can get her hands on. She is currently ASO’s News Editor. You can connect with Stephanie on Twitter and Instagram @shalmo or by email: shalmo27@uga. More from Stephanie M. Halmo.

Frosty the Microbe


‘Tis the season for stories of wintery magic. From Elsa and Frozone to their mythical grandfather, Jack Frost, there’s no cooler gift than the power to let it snow at will, or freeze a pond skate-worthy with a single touch.

Little do we realize that these chilly abilities aren’t limited to the realm of holiday lore. If a microbiologist were writing the legends, they’d call Jack Frost by his scientific name: Pseudomonas syringae. Known for generations as the artist who sprinkles leaves with glitter on crisp winter mornings and blankets the landscape with snow, they’d add that he also happens to be about two and a half microns tall, and a well-studied plant pathogen.