
Wall of Destruction: The impact of the US-Mexico Border wall on wildlife


Growing up in Arizona, we were told that people could go to jail for damaging a Saguaro cactus. Saguaros are a protected symbol of the Southwest. Yet in 2019, videos shot by Kevin Dahl, the Arizona senior program manager for the National Parks Conservation Association, captured bulldozers uprooting Saguaro cacti and other desert shrubs at the United States-Mexico border in preparation for building a wall.

Man-made barriers have long impacted their surrounding environments. Most large-scale barriers are erected for national security reasons, with little regard for local wildlife. Currently, a 30-foot-tall steel wall is under construction to act as an impermeable barrier along the nearly 2,000-mile border between the United States and Mexico, and it is destroying everything in its path.

Border Wall I by Russ McSpadden via Flickr is licensed under CC BY-NC 2.0.

How will the US-Mexico border wall impact wildlife?

The plans for the US-Mexico border wall span desert, woodland, grassland, and wetland ecosystems that are rich with biological diversity. One study shows that the wall will transect the habitats of at least 1,506 native terrestrial and freshwater animal and plant species. According to the Center for Biological Diversity, this number includes 93 imperiled species (i.e., endangered, threatened, or under review for protection). 

One of these 93 species is the Río Yaqui fish, which relies on rare desert springs and streams for its habitat. These water reserves are already susceptible to persistent drought and increasing temperatures; moreover, a borderlands campaigner for the Center for Biological Diversity states, “there’s good reason to believe that the Yaqui fish’s only US habitat is drying up as a result of tens or hundreds of thousands of gallons of groundwater being pumped to build the border wall.” These freshwater fish species face possible extinction as their habitats are sucked dry.

Additionally, all but five of the 93 species have populations on both sides of the US-Mexico border, meaning a wall will split these endangered populations into even smaller units. One of the most endangered mammals in North America, the Mexican gray wolf, was actually starting to recover its numbers after decades of binational conservation efforts. These efforts could be squandered as the wall splits the vulnerable group, preventing the genetic exchange necessary for its continued survival.

Wolf Jokes by MTSOfan via Flickr is licensed under CC BY-NC-SA 2.0

Even low-flying animals are threatened by the wall. The Quino checkerspot, a fast-flying butterfly that ranges from the Santa Monica Mountains to Baja California, Mexico, is already facing extinction due to habitat loss from land development. In addition to preventing contact between surviving populations, the US-Mexico border wall will directly harm the native vegetation this butterfly relies on to reproduce. As a consequence, it will be a challenge for Quino checkerspots to recover their population sizes and maintain important genetic variability.

It does not matter whether endangered species along the US-Mexico border live in water, on land, or can even fly – the construction of the wall will destroy or fragment their habitats. A wall reduces overall landscape connectivity, limiting access to food, water, mates, or migration corridors. The examples above represent just a few of the diverse endangered species that will be affected.

If these species are endangered, then why aren’t they federally protected?

Because of the importance of the species and landscapes along the US-Mexico border, many environmental laws have been put in place to protect them, including the Endangered Species Act and the National Environmental Policy Act. However, under the Real ID Act of 2005, the Trump administration can waive environmental laws that would slow construction of the US-Mexico border wall. Indeed, the Department of Homeland Security has waived 48 environmental laws meant to protect species and habitats along the border. Though the ecological damage may be unintended, it is difficult to ignore the complete disregard for these critical ecosystems.

How do we help? 

Different organizations have focused their efforts on defending conservation laws, conserving endangered species, and rebuilding habitats. One organization in particular, Defenders of Wildlife, has filed a lawsuit in hopes that the Supreme Court will review the constitutionality of the Real ID Act. Their two-part report also describes how US and Mexican agencies are teaming up for conservation projects in the Lower Rio Grande area, including efforts to document animals and plant native vegetation to restore habitats. The Defenders of Wildlife website lists ways for you to take action and have a voice in helping protect threatened and endangered wildlife.

Most importantly, we must take time to truly understand the consequences of political motives on wildlife. It is also our responsibility to protect critical ecosystems with our daily choices and be thoughtful of our votes come election time. We are not the only ones to call this land our home.

Breaking the two-hour tape: Engineering the fastest marathon run in history


What does it take to reach the peak of athletic performance and break barriers thought to be beyond human capabilities? One of these barriers is the two-hour marathon, a feat that requires running 26.2 miles while maintaining an average pace of 4:34 per mile. At that speed, you could run the 100-yard length of a football field in under 16 seconds! With improvements in training and exercise physiology, the men’s marathon world record has steadily decreased, yet it still lingers just above two hours. Some scientists believed the two-hour barrier would never be broken, while others said it was only a matter of time.
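
For the skeptical reader, the pace arithmetic is easy to verify. Here is a minimal Python sketch of the calculation, using only the figures quoted above:

```python
# Quick check of the pacing math (figures from the paragraph above).
MARATHON_MILES = 26.2
TWO_HOURS_S = 2 * 60 * 60                    # 7,200 seconds

pace = TWO_HOURS_S / MARATHON_MILES          # ~274.8 seconds per mile
minutes, seconds = divmod(pace, 60)
print(f"required pace: {int(minutes)}:{seconds:04.1f} per mile")   # 4:34.8

# A football field is 100 yards; a mile is 1,760 yards.
print(f"100 yards covered in {pace * 100 / 1760:.1f} s")           # ~15.6 s
```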

Image Credit: Pedro Perim via Wikimedia Commons. Licensed under CC-BY-SA-4.0

Enter Eliud Kipchoge, the Kenyan long-distance runner who holds the world record for the fastest marathon after finishing the 2018 Berlin Marathon in 2:01:39. Having won 12 of the 13 marathons he has entered, Kipchoge is widely regarded as the best marathoner of modern times. He has a long history of success, including winning middle-distance championships in the early 2000s and becoming the 2016 Olympic marathon champion. After falling short of a sub-two-hour marathon by only 26 seconds in Nike’s Breaking2 project, Kipchoge would again attempt to break the two-hour barrier at the Ineos 1:59 Challenge. But when every second counts, what would it take to allow the best of the best to approach the limits of human performance?

To improve running economy (how efficiently the body turns energy into running motion), Kipchoge wears a groundbreaking model of running shoes designed for marathoners. These lightweight shoes contain a carbon-fiber plate and a thick foam midsole, which together lessen the energy needed to flex joints in the lower body, reducing the runner’s overall energy demand. The foam is also flexible and resilient; after a runner’s foot strikes the ground, the foam pushes back (similar to a spring) to help propel the runner forward. Combined, these small running-economy boosts can help shave seconds off Kipchoge’s pace.

Image Credit: Vianney de Montgolfier via Behance. Licensed under CC BY-NC-ND-4.0. 

To decrease wind resistance and ensure he keeps pace, Kipchoge is assisted by 42 world-class runners, including Olympic medalists. These pacers run in front of him in a V-formation, rotating in and out throughout the challenge. An electric car helps pacers maintain formation by projecting lasers onto the pavement. Periodically, a person on a bicycle provides Kipchoge with hydration and fuel in the form of a carbohydrate-rich drink mix, replacing water stations usually present in races.

Aside from training, fueling, and economy, a range of external factors influence performance. A low-altitude environment with overcast conditions, minimal wind, and a temperature around 40-50°F sets the stage for the best performances. High humidity and temperatures challenge the body’s ability to regulate temperature, raise lactate production, and ultimately decrease running efficiency. To optimize Kipchoge’s performance, event organizers narrow down locations, dates, and times until they find conditions sufficient for his next attempt.

On October 12th, 2019 in Vienna, Austria, temperatures ranged between 43-57°F with minimal rain, moderate humidity, and winds averaging about 5 mph. The location was less than three hours from Kipchoge’s home (reducing jet lag) and at a low altitude, which means a higher concentration of oxygen in the air. Here, Kipchoge would attempt a sub-two-hour marathon by completing 4.4 laps around a flat, tree-shaded course consisting of two long stretches with small loops at each end. That morning, Kipchoge started off strong, maintaining a consistent speed aided by the pacers smoothly rotating in and out along the course. Reaching the halfway point 10 seconds ahead of pace, Kipchoge and the pacers steadily progressed. With just over half a mile to go, the pacers were waved off and Kipchoge accelerated down the final stretch. Waving to the crowd and pumping his chest, he crossed the finish, completing the 26.2-mile course in a breathtaking 1:59:40.

Image Credit:  Michael Gubi via Flickr. Licensed under CC BY-NC-2.0.

While he shattered the two-hour barrier, Kipchoge’s time does not count as a marathon world record because event conditions did not meet official standards: the Ineos 1:59 Challenge was not an open event, Kipchoge was led by rotating pacers and a pace car, and he was handed fluids by cyclists. Yet he is still recognized as the first human to run the marathon distance in under two hours. So what is next for breakthroughs in the marathon? Thirty years ago, scientists predicted an ideal athlete in perfect conditions could run a marathon in 1:57:58. Is this possible in a marathon compliant with world-record criteria? As Kipchoge stated in an interview following his sub-two-hour run, “Personally, I don’t believe in limits.”

About the Author

Emily is a PhD candidate in the Department of Microbiology studying a regulator of aromatic compound metabolism in the soil bacterium Acinetobacter baylyi. She loves running, college football, and taking her dog everywhere around Athens. You can reach her at emcintyre@uga.edu. More from Emily McIntyre. 

Sleeping Beauty Seeds


This year I’ve been reading a lot about seed dormancy, and while we’re all hunkered down, sheltering in place during the COVID-19 pandemic, I can’t help but feel there’s an apt comparison to be made. Most plants don’t get a lot of input on where they land as seeds, but they do have a say in when they sprout. Seeds in the soil will wait all winter, or sometimes for multiple years, before deciding that conditions are just right to launch into the world as seedlings. If you’ve been on walks around your neighborhood (a great way to stay sane right now), you’ve probably seen this process in action as different plants rapidly sprout and grow, like the oak pictured below. One of my favorites — bluebonnets — is just starting to bloom as spring comes into full swing.

A sprouting red oak. Credit: Kelly McCrum, used with permission.

Plants “spring up” now because one trigger to begin germination is warming temperatures after a cold period. This is also why, if you’ve ever tried to sprout seeds at home, you’ll see recommendations to put seeds in your refrigerator for a week or so before planting. Spring and summer bring rain, the elimination of frost risk, and plenty of light for plants to produce the energy they need for growth. Seedlings do their best to time emergence during these favorable conditions so that they can grow as much as possible before harsher, colder weather returns.

 In addition to environmental factors, there are internal triggers for germination, too. If you’ve ever cut into a tomato or an apple and found the seeds have already started sprouting, you’ve stumbled upon plant vivipary. This phenomenon is caused by fluctuating hormone levels in the seeds – namely, running out of the ‘dormancy hormone’ known as abscisic acid. Using both external and internal cues to break dormancy lessens the chance that a plant will sprout too early, such as during a warm spell in February when there’s still a risk of damaging frost later in the season. 

 Vivipary in a tomato.
Source: Tomato seeds premature sprouting licensed by mykhal under CC BY 2.0

Amazingly, some seeds can wait centuries for the proper signals to break dormancy. If you’re a plant nerd like me, you may remember the Judean date palm, nicknamed “Methuselah,” that scientists sprouted from a 2,000-year-old seed in 2005; he’s still doing well, and scientists are attempting to breed him with modern and other ancient date varieties. Other successful germination attempts include the 1,300-year-old lotus seed from China that scientists sprouted in 1994. Unfortunately, the resulting plant was sacrificed to carbon dating, but the lead scientist on the project, Jane Shen-Miller, has since been caring for other centuries-old lotus plants. Re-growing ancient plants can give us information about the aging process, as well as about the evolution of a plant and its associated diseases. It’s the plant equivalent of Jurassic Park, with no frog splicing required!

Methuselah, the ancient Judean date palm. Credit: Methuselah-Ketura-2018-10 licensed by DASonnenfeld under CC BY-SA 4.0

While seeds are lying dormant, whether it be for centuries or a few months, they are part of the ‘soil seed bank.’ Humans also create seed banks to preserve genetic diversity of crops (which ASO has written about here), and these natural seed banks serve similar purposes. In the case of a natural disaster, having a seed bank means that a given plant species won’t become locally extinct. Less drastically, an individual plant can be assured more of its offspring will survive if some of its seeds wait to germinate in the following growing season. This has been studied in desert annual plants, where the harsh environment almost guarantees that not all seedlings will survive to adulthood. If some seeds have remained dormant in the soil (though what fraction does so varies year to year) then the parent plant still has offspring that might survive in the next year. This phenomenon is especially important in plants that can only make seeds once before they die, though seed dormancy exists in plants of all life histories.
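
To see why this bet-hedging pays off, consider a toy simulation. This sketch is my own illustration, not anything from the studies above, and its numbers (a 30% chance of a catastrophic year, two seeds per surviving plant) are arbitrary assumptions:

```python
import random

# Toy model of seed bet-hedging: a desert annual whose seedlings all die
# in "bad" years. Germinating only part of the seed bank each year keeps
# some offspring safely dormant in the soil.
random.seed(42)

def lineage_survives(germination_fraction, years=20, p_bad_year=0.3, seeds=100):
    bank = seeds
    for _ in range(years):
        sprouted = int(bank * germination_fraction)
        bank -= sprouted
        if random.random() < p_bad_year:
            sprouted = 0            # drought wipes out every seedling
        bank += sprouted * 2        # each survivor returns two seeds
        if bank == 0:
            return False            # the lineage is extinct
    return True

for frac in (1.0, 0.5):
    wins = sum(lineage_survives(frac) for _ in range(1000))
    print(f"germinate {frac:.0%} each year: persisted in {wins / 10:.1f}% of runs")
```

In this toy model, germinating everything at once almost always ends in extinction after the first bad year, while holding half the seeds back lets the lineage ride out even long runs of bad luck.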

Of course, humans can’t bury themselves and wait decades to emerge from a shelter-in-place order. But in the meantime, maybe it’ll help to imagine yourself like a little seed: waiting out a hard winter and prepping for the day that you can stretch back into the sunlight of normalcy.

 

About the Author


Forget What You Know About Alzheimer’s


Alzheimer’s disease (AD) is the sixth-leading cause of death among adults in the US. Its progression is devastating: the brain slowly deteriorates, cognitive ability degrades, and bodily functions gradually shut down. Given our aging population and the huge financial burden of care, the National Institutes of Health is expected to contribute almost $3 billion to AD research in the year 2020 alone. Researchers worldwide have been working for decades to find a treatment. Despite their best efforts, treatments have proven mostly ineffective in clinical trials. 

Comparison of the normal brain structure (left) versus brain structure of a person with Alzheimer’s (right). Image courtesy of Garrondo via Wikimedia Commons. Licensed under Public Domain.

Some scientists now argue that the very foundation of AD research may be outdated. There is surprising evidence that AD could be triggered by an infection, rather than some intrinsic property of the brain. If true, that means decades worth of research and development may be aimed at the wrong molecular targets. Understandably, skepticism abounds among the research community. What happens when dissenting new evidence butts up against established medical paradigms?

What We Think We Know

Our current AD paradigm is based on late-1980s observations that patients with advanced AD accrue massive amounts of misfolded beta amyloid (Aβ) protein peptides in their brains. Known as plaques, these protein bodies are believed to trigger further neurodegenerative processes. The amyloid hypothesis is supported by genetic evidence, as the most significant marker for predicting future AD is the APOE4 gene variant. ApoE is a protein that helps clear Aβ peptides from the brain, but the APOE4 variant is impaired in this activity. However, treatments designed to target Aβ peptides have never been successful.

Depiction of Aβ plaques (orange) and tau neurofibrillary tangles (blue) in the brain. Image courtesy of NIH Image Gallery. Licensed under Public Domain.

An alternative theory, the tau hypothesis, focuses on tau protein fragments that form aggregates called neurofibrillary tangles inside of nerve cells. Proponents believe that this is the biochemical cause of AD pathology. Like plaques, tau tangles accumulate in the brains of AD patients. Tau is increasingly becoming a target of new drug development, but again, clinical trials have been mostly negative.

There is little doubt at this point that Aβ and tau play important roles in AD; their near-universal prevalence in AD patients cannot be ignored. But why is it, then, that anti-Aβ and anti-tau drugs have so far proven ineffective?

A New Context for Old Discoveries

New research is providing intriguing possibilities to answer this question. Given current trends in microbiology, perhaps it is not surprising that numerous links have been proposed between AD and the microbiome. Several independent labs now believe they have identified a specific culprit: Porphyromonas gingivalis. P. gingivalis is the main bacterium involved in gum disease. What’s staggering is that this bug is able to invade the brains of ApoE-deficient mice, cause inflammation in the same brain regions that are affected by AD, and even induce production of Aβ plaques in the brains of previously healthy mice.

Recently, researchers found evidence of two P. gingivalis toxins in the hippocampus — the brain’s memory processing center — in 91-96% of the 54 human AD brain samples they tested. These toxins, known as gingipains, give the bacteria the ability to invade and feed on human tissues. Higher gingipain levels correlated directly with higher levels of tau neurofibrillary tangles in the human samples, a marker of cognitive impairment. In a mouse model, treatment with gingipain inhibitors reduced inflammation in the brain, blocked formation of Aβ plaque components, and even rescued damaged neurons in the hippocampus. Although initial sample sizes are small, there is compelling evidence that infection may play a role in the onset or exacerbation of AD.

Changing the Paradigm?

Proposing that AD is an infectious disease seems counter to everything we’ve ever known about this illness. Assuming for a moment that this hypothesis is true, what does that mean for traditional AD research? Maybe Aβ plaques and tau bodies are not the direct causes of AD, but rather the symptoms of infection. Maybe this is why targeting these proteins has proven so ineffective. Maybe this is why there have been no significant breakthroughs in AD treatment in 40 years.

Despite the saying, science is not a perfect science. We are always bound by the limits of the information available to us at the time. What these novel studies demonstrate is that we don’t have all the information yet regarding AD. We should be willing to entertain radical new ideas that are supported by evidence, rather than hold tight to established yet fruitless paradigms. Now is a time when we can choose to be open to new ideas, or we can continue to delay life-saving advances while again confirming what doesn’t work.

Loved ones raise funds and awareness for AD research. Image courtesy of Susumu Komatsu Photography. Licensed under CC BY 2.0.

About the Author


Jennifer Kurasz is a graduate student in the Department of Microbiology at UGA, where she studies the regulation of RNA repair mechanisms in Salmonella. When not in the lab, she prefers to be mediocre at many hobbies rather than settle on one. She greatly enjoys her women’s weightlifting group, cooking, painting, meditation, craft beer, and any activity that gets her outdoors. She can be contacted at jennifer.kurasz25@uga.edu. More from Jennifer Kurasz.

Fanning the flames


In recent years, it feels like we have watched parts of the world be swallowed whole by fire, painting an apocalyptic picture of the future. Nearly 40,000 square miles in Australia were burned by bushfires last year. California’s Camp Fire displaced about 50,000 residents, and Indonesia saw over 2 million acres of land consumed by flames, including precious orangutan habitat. The scale and frequency of this destruction feel unprecedented, but what’s causing these fires? And why now?

The 2018 “Camp Fire” in California was the deadliest and most destructive fire in CA history. Credit: NASA via Wikimedia Commons licensed under Public Domain

In the Devonian period, around 400 million years ago, the rise of trees produced an oxygen-rich atmosphere; with that came the natural process of forest fires instigated by lightning. The fiery landscape continued for millions of years while organisms evolved alongside it – the results of that coevolution are easily seen today. For example, the Jack pine, a pine tree native to the northern US and Canada, evolved serotinous pine cones, which only open to spread seed in intense heat. The longleaf pine, the Jack pine’s southern relative, developed growth stages that revolve around fire, with its seedlings being essentially fire-proof. Smaller understory plants evolved to store more of their carbon-rich biomass underground, effectively ‘hiding’ more of themselves until the fire passed. Plants weren’t the only ones to innovate – animals, too, have evolved to thrive in a fiery ecosystem. Take the gopher tortoise, for example, which is considered an ‘ecosystem engineer’. The tortoise’s burrows become the perfect underground bunker for it, and hundreds of other species, to wait out a fire.

Gopher tortoise burrows can shield hundreds of species of animals during fires. Illustration by Emma Roulette. Used with permission.

Historically, fire has been a valuable tool for humans and their ancestors, whose use of it dates back as early as one million years ago in Africa. With their arrival in North America around 14,000 years ago, early humans learned to use controlled fires – what we now call prescribed burns – to their benefit. The purpose of these burns was multifaceted. They provided mineral ash and carbonized leaf litter, which settled and fertilized the soil, creating a rich substrate for agriculture. The fires also facilitated hunting: the tender new growth of shrubs and grasses attracted animals, which hunters could more quietly stalk in the newly cleared forest.

Grass growing back a few days after a prescribed burn. Photo by author.

These adaptations by plants, animals, and people played a role in the establishment of fire-dependent ecosystems. In North America alone, these range from savannah plains and swamps to conifer forests. However, with the arrival of colonialism on the continent in the 15th century, forest fires were deemed destructive and wasteful. Ironically, the land the colonists saw as pristine and untouched by man was the result of millennia of fire practices by natives. The combination of fire suppression and excessive logging by the colonizers left ecosystems massively disturbed. Flammable forest debris build-up, what foresters call “fuel,” was left unchecked, leading to devastating wildfires. These wildfires, unlike prescribed fires, are uncontrollable, extremely hot, and damaging to plants and animals. As early as 1910, a series of wildfires known as the Big Blowup swept through three states and killed 85 Americans. These are the same fires we see today, torching millions of acres on the news.

Native Americans shaped the landscape for thousands of years before the arrival of Europeans. The Grass Fire by Frederic Remington, National Gallery of Art, Washington, D.C. Licensed under Public Domain.

Now that we better understand forest and fire ecology, forest managers can facilitate natural processes by incorporating prescribed burns into management practices. But prescribed burning is not implemented in every fire-dependent ecosystem in the US: prescribed burns are heavily used in the Southeast, whereas in the West they are far less incorporated into management practices.

Graphic showing acres burned by wildfires and prescribed burns in the US. It’s no coincidence that areas that undergo prescribed burns are less scathed by wildfires.  Image credit: https://www.climatecentral.org/

Of course, prescribed fires near residential areas or highways can present urgent safety concerns, especially for those with respiratory illnesses. Non-fire solutions to prevent the accumulation of fuel have been explored, such as allowing goats to intermittently graze potentially flammable grasses. The Forest Service also puts an emphasis on outreach and education, providing people with the tools and knowledge to prevent wildfire on both public and private lands.

Prescribed fires can be initiated using many tools, including a drip torch, shown above. Video by author.

There is no singular answer as to why wildfires like the ones we see in Australia and California are so destructive – but it can be boiled down to fire suppression and the dark cloud looming over everyone’s environmentally conscious head: climate change. There is no doubt that the frequency and severity of wildfires will increase as extreme droughts and higher temperatures are expected in the wake of climate change. Mitigating climate change is imperative to reducing these wildfires, and so is education. The US Forest Service is taking measures to educate the public on the benefits of prescribed fires and how we can prevent wildfires, since more than 80% of wildfires are caused by people.

 

Our dependence on forests is greater than one can imagine – in unexpected ways, too – and preserving these ecosystems is integral to saving ourselves, the land’s history, and the millions of beings who were here before us. Find out more on how to prevent wildfires and how to curb climate change.

 

About the Author

Simone Lim-Hing is a Ph.D student in the Department of Plant Biology at the University of Georgia studying the host response of loblolly pine against pathogenic fungi. Her main interests are chemical ecology, ecophysiology, and evolution. Outside of the lab and the greenhouse, Simone enjoys going to local shows around Athens, cooking, and reading at home with her cat, Jennie. You can reach Simone at simone.zlim@uga.edu or connect with her on Twitter. More from Simone Lim-Hing.

Double Merle Dogs


Dog coats come in a seemingly endless variety of patterns, lengths, textures and colors, determined by their genetic makeup. Just 8-14 different genes are responsible for most of these differences in coat color and pigmentation. Dogs inherit two alleles, or variations, of each of these genes: one from the father and one from the mother. Alleles can be dominant, where one copy is enough for the effect to appear, or recessive, where the effect appears only if two copies are present. The resulting combinations of inherited alleles influence aspects of coat color. One of these genes, the merle gene, impacts coat color by producing distinguishing markings in numerous breeds.

The merle gene exists as two alleles: the dominant Merle allele (M) and the recessive non-merle allele (m). If a dog inherits the dominant M allele from at least one parent, it will have merle characteristics: random sections of the dog’s coat will be diluted or mottled. Merle dilutes the dark pigments and can result in partially or completely blue eyes as well as lightened colors on the nose and paw pads. Typical merle dogs have one dominant Merle allele and one recessive non-merle allele (Mm). If two of these merle dogs are bred, there is a ¼ chance that their offspring will inherit two copies of the Merle allele (MM). These dogs are called double merles.
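
For readers who like to see that ¼ worked out, the Mm × Mm cross can be enumerated in a few lines of Python (a sketch of the Punnett square described above):

```python
from itertools import product

# Enumerate the Mm x Mm cross: each parent passes on one allele,
# giving four equally likely combinations.
offspring = ["".join(sorted(pair)) for pair in product("Mm", "Mm")]
print(offspring)                                   # ['MM', 'Mm', 'Mm', 'mm']
print("P(double merle) =", offspring.count("MM") / len(offspring))  # 0.25
```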

Double Merle Shetland Sheepdogs Kalisi and Adora. Photo by Dawn H. – used with permission

Double merles are mostly white in color, and they are more likely to have hearing and vision problems. The main connection between the merle gene and health problems is rooted in the pigment-producing cells, or melanocytes. Melanocytes in the inner ear help convert vibrations from sound waves into electric impulses sent to the brain to be interpreted as sound. The merle gene causes a reduction of melanocytes. While dogs with a single copy of the merle gene normally have enough of these cells, double merles have very few, to the point where hearing loss can occur. The lack of melanocytes also leads to reduced blood supply and ultimately the death of nerve cells in the ear. Double merles can be deaf or hearing impaired in one or both ears. They are also more likely to have eye and vision defects, though the exact link between the merle gene and vision defects is unclear. The resulting abnormalities, including irregular development of the pupils and iris or reduced eye size, can cause light sensitivity, poor vision, and partial or total blindness.

Curious about the experiences of raising double merle dogs, I reached out to Dawn, who owns two double merle Shetland Sheepdogs, Kalisi and Adora. Kalisi is deaf and vision impaired, and Adora is deaf and blind (8,9). Dawn is an advocate for educating people on double merle and specially abled pets, and shared some of her knowledge and experiences with these unique dogs.

Two large misconceptions are that hearing- and vision-impaired dogs are not trainable and that they startle and bite easily. In reality, they are just as intelligent and food-motivated as any other dog and can be trained to respond positively to touch. Both Kalisi and Adora are trained using touch commands. Kalisi’s training also involves different hand signals for different activities, including obedience, tricks, and agility. She has two dog trick titles and works as a therapy dog. Dawn says their training is comparable to that of her sheltie Kappi, who is neither hearing nor vision impaired. The main difference is in the way a command is given, be it by voice, hand, or touch.

Kalisi has earned both novice and intermediate trick titles. Photo by Dawn H. – used with permission

Purposeful breeding of two dogs that both carry the Merle allele (often referred to as merle-to-merle breeding) is typically avoided. However, it is not always possible to tell whether a dog is a merle by looks alone, as other factors that determine coat pattern can make the characteristic diluted patches less apparent. Genetic testing should be performed to ensure a dog is not one of these “cryptic” merles. Additionally, there are over 15 breeds known to carry the merle gene, so double merles can still occur in a litter from two merle dogs of different breeds.

Most double merle puppies are the result of poor breeding practices or the accidental breeding of two merle dogs, and are often euthanized shortly after birth or placed in shelters. Those that aren’t killed often have trouble finding homes because of the many misconceptions surrounding both their trainability and the health problems they may face.

Fortunately, numerous groups and advocates are working to combat these myths. With the correct knowledge and training, double merles and other dogs with disabilities are capable of living normal lives and make wonderful pets. Dawn and other advocates want everyone to know that double merles are “different, not less” and that their only limitations are those we put on them.

About the Author


Emily is a PhD candidate in the Department of Microbiology studying a regulator of aromatic compound metabolism in the soil bacterium Acinetobacter baylyi. She loves running, college football, and taking her dog everywhere around Athens. You can reach her at emcintyre@uga.edu. More from Emily McIntyre. 

This looks familiar…

Rushed city by Huub Zeeman is licensed under CC-BY-NC-ND 2.0

How many times has this happened to you? You walk into a room – it could be one you’ve set foot in a dozen times that day, or never at all – and hesitate by the doorway. Something about that space nags at the back of your mind. You decide that, somehow, you have lived through this moment before, or that you’ve seen this room, exactly as it is arranged now, at this precise point in time. You likely know what the feeling is called, déjà vu, but what is it? And why does it happen?

What is déjà vu?

Déjà vu is a French term that translates as “already seen”. It was coined by philosopher Émile Boirac to describe the brief sensation of having already lived a novel moment. The majority of the population has had at least one episode of déjà vu, and 60% of people experience them regularly. These episodes can last anywhere from ten to thirty seconds and have no damaging or lasting effects. The sensation of déjà vu comes from the combination of two different cognitive processes: the recognition of a particular event (knowing you’ve been there/seen that before), and the awareness that this recognition is incorrect (knowing you couldn’t possibly have been there/seen that before). More interestingly, though déjà vu is a very common phenomenon, its causes can vary depending on the individual.

An odd side effect

“Brain Illustrations” by Denise Wawrzyniak is licensed under CC BY-NC 4.0 

Instances of déjà vu are often associated with neurological or psychological conditions. However, regardless of whether the cause of déjà vu is benign or pathological, scientists agree that the areas of the brain involved are all found within the temporal lobe. These regions are in charge of processing sensory stimuli (everything you hear, see, smell, taste, and feel) and converting them into memories in the rhinal cortex. The rhinal cortex acts as a middleman – it helps take the information received by your senses and turn it into memories, which can then be consciously recalled when needed. The more common an event is, the less rhinal processing it requires to be stored and retrieved. Certain conditions can affect this storage/retrieval process, suggesting that déjà vu could result from gaps in memory conversion in the rhinal cortex – like when you save several files on your computer under similar names and have to open a few before finding the right one.

The most common pathological cause of déjà vu is temporal lobe epilepsy (TLE). TLE is a common form of focal epilepsy and can be subdivided by the severity of symptoms, ranging from momentary loss of awareness (simple partial seizures) to strong convulsions. Déjà vu often precedes epileptic episodes in the mildest form of TLE, simple partial seizures, serving as an “aura” felt before the onset of more severe symptoms. This type of déjà vu differs from that of healthy individuals because the sense of familiarity is not connected to anything in the environment – patients do not feel as if they’ve lived that moment or been in that place before. This distinction has led scientists to conclude that disease-associated and “normal” forms of déjà vu must have different causes.

A glitch in the matrix?

While there is certainly evidence linking it to disease, déjà vu is most often just a glitch in a completely healthy brain. Incidence is significantly higher in young adults (15-25 years of age) and in those with higher education, higher socioeconomic class, or who travel often. Additionally, déjà vu increases in people who are under significant stress or lacking sleep. This evidence has led scientists to think that déjà vu is a by-product of memory consolidation, the process of transforming short-term memory into long-term memory. Therefore, with increasing stimuli – from movies, documentaries, and books – or less time for processing, the frequency of episodes increases.

Study by David A Ellis is licensed under CC BY 2.0

Probably the most fascinating fact about déjà vu is that the brains of healthy individuals who experience the phenomenon differ from the brains of those who report no episodes. One study measured the volume of brain cells, or grey matter, in different regions of the temporal lobe in subjects with and without déjà vu experiences. People who report déjà vu episodes have a lower total volume of grey matter in memory-specific areas of the temporal lobe than those who don’t experience the phenomenon.

The brain is a magnificent machine capable of unimaginable wonders, but that doesn’t mean it’s perfect. In its quest for efficiency, it sometimes takes a few ill-advised shortcuts that can leave you feeling confused. So, next time you walk into a room and feel like you’ve lived that moment before, remember that it’s the harmless side effect of a brain trying to juggle too many things at once. Take a moment to appreciate the complex process that brought on this phenomenon and maybe consider taking more naps.

Saving more than just seeds, in situ


While I’m often left paralyzed by apple choice in Kroger, I know the breadth of options at grocery stores masks a far different reality: we’ve lost roughly 90% of the world’s crop varieties in the past 100 years. This threat to future food security is referred to as genetic erosion and is primarily attributed to the proliferation of modern cultivars, which displace local crop varieties. Conservation methods to maintain crop biodiversity rely either on external seed banks and greenhouses (ex situ) or on continued cultivation on farmland (in situ).

As I’ve previously alluded to, ex situ conservation is imperfect. While there is a bleak romance to seed banks as our planet’s emergency supply closet, they shouldn’t be our only option. One of the most obvious drawbacks is the physical limitation on conserving all plant genetic material. Everything cannot be banked – for instance, Svalbard holds roughly 5,000 of the estimated 390,900 existing plant species in its collections – so which plants are deemed worthy of this protection? And who gets to make those decisions?

Pea Sample- Pisum sativum (Fabaceae), 1880-1960. Image credit: Museum Victoria licensed under CC BY 4.0

Then there’s a messier issue I will distill to this: seeds have context. Germplasm is not a standalone technology, but rather is interwoven with its ecological and cultural surroundings. Severing such ties without mindful consideration has consequences. One example can be found in the 16th-century proliferation of maize in Europe. The crop’s relative affordability led to its quick adoption as a food item among the Italian peasantry. But lost in the grand crossing of the Atlantic was the concept of nixtamalization, the traditional Mesoamerican alkali treatment of corn, which ensures the bioavailability of niacin (the essential vitamin B3). Without nixtamalization, and with corn as the primary food source, chronic niacin deficiency emerged in Italy as a scourge of pellagra, a disease marked by neatly descending “Ds”: diarrhea, dermatitis, dementia, and death. The same pattern reemerged in the American South in the late 19th century, taking thousands of lives. It’s embarrassing to think that a glint of respect for the cultural knowledge surrounding food preparation could have averted centuries of human suffering.

“Pellagra, an American problem.” Image credit: Medical Heritage Library, Inc. is licensed under CC BY-NC-SA 2.0

What use is a seed if we do not know how to appropriately grow, process, and eat it? Storing seeds in a vault decontextualizes plants, necessitating a complementary mode of conservation that maintains the robust cultural knowledge surrounding crop variety production and consumption. This can be found in in situ conservation, where continued cultivation on farmland ensures maintenance of both germplasm and its kindred socio-ecological system. Critical to this type of conservation is traditional and indigenous knowledge.

The Potato Park in Peru is one example of a successful landscape-scale in situ conservation model in the Andean region, which encompasses two of Vavilov’s centers of origin. The site is classified as an Indigenous Biocultural Heritage Area and aims to protect the region’s incredible biodiversity and improve indigenous livelihoods through the use of traditional knowledge. Methods of crop cultivation here are emblematic of traditional modes of farming more generally, in that they are incredibly complex and low-input, with a typical farm plot containing between 250 and 300 potato varieties. The success of such farming systems relies heavily on deep agro-ecological knowledge.

Varieties at Potato Park. Image credit: The International Institute for Environment and Development licensed under CC BY-NC 2.0

However, traditional farmers continually face incentives to switch to higher-yielding, profitable commercial cultivars and, more generally, a global economy that devalues traditional modes of existence. This has played out among the indigenous Arawakan women of Venezuela, who have customarily cultivated over 70 varieties of bitter manioc (cassava). With cultural shifts toward an education system that encourages the abandonment of traditional modes of crop production, there has been a concurrent erosion of traditional cultivation, knowledge, and the associated agrobiodiversity.

Maintenance of genetic diversity is a global public service. Thus, structures should be put in place to support both traditional varieties and their corresponding knowledge. Suggestions range from community-based conservation approaches, with designated funds for compensating communities for income losses, to establishing separate “farmers’ rights” legal systems that explicitly recognize farming communities’ contributions. Instead, we primarily have western intellectual property structures that incentivize commoditization and individual ownership. While I am no etymologist, there does seem to be a glaringly obvious “culture” in agriculture that should be paid heed.

The path to extinction is paved by both loss of genetic diversity and loss of knowledge, and so we need ex situ and in situ conservation hand in hand.

__

It’s worth mentioning the various directionalities in human relations with plant material. While most of us are attuned to thinking of humans dominating plants to suit our needs, plant genetic material can similarly influence humanity. Landrace varieties that are interwoven with their local ecologies demand that we too pay more attention to our immediate environment in order to successfully harvest them. In a way, fostering this relationship with localized plant material can produce subtle human-environment relational shifts away from domination and towards respect. And because I am writing for a website with the word “science” in the title, I will spare you from the philosophical zenith of this train of thought, but will leave some links in case anyone cares to meander in that direction.

 

About the Author

Tara Conway is an M.S. student in Crop and Soil Sciences, where she is working towards the development of a perennial grain sorghum. She is originally from Chicago, IL. Her work experience spans from capuchin monkeys to soap formulating. You can reach her at tmc66335@uga.edu, where she would like to know which bulldog statue in town is your favorite. Hers is the Georgia Power one due to its peculiar boots. More from Tara Conway.

Featured image credit: “Cobs of Corn” by Sam Fentress licensed under CC BY-SA 2.0.

The undead ghost forests of Georgia


The US Atlantic coast is a dynamic, living landscape. Georgia in particular displays a picturesque mosaic of barrier islands, salt marsh meadows, maritime forests, brackish marsh and river networks snaking up the Coastal Plain. Together, coastal habitats form a dynamic ecosystem capable of protecting the coastline, storing carbon, filtering water and providing coastal regions with valuable fisheries.

Spartina marsh and creek network in the low elevation foreground with maritime forest at a higher elevation in the background. Image Credit: Rebecca Atkins. Used with permission.

The last hundred years, however, have set the stage for unsettling trends in the rate at which coastal areas are changing. As the earth warms and glaciers melt into the ocean, scientists are predicting an increase in sea level of between 3 and 11 feet for the Georgia coast by the end of the century. If that does not sound significant, consider that a minimum of 12,500 homes, 350 miles of road, and 278 square miles of the Georgia coast will face catastrophic flooding. Similar flooding scenarios are expected to play out along the entire eastern US coast. Notably, it’s not just rising sea level that’s an issue, but also sinking land. Much of this sinking is the natural result of post-ice age glacial rebound, one of the main contributors to sea level rise in coastal Georgia.

Marsh edge becoming submerged by the tide. Here you can see a layer of green marsh cordgrass (Spartina)  and the muddy marsh platform held together by an intricate grass root network. Image Credit: Rebecca Atkins. Used with permission. 

The effects of rising sea levels aren’t always as visible as flooding. Increased saltwater intrusion into groundwater and low-lying areas is also a growing problem, one that can lead to even faster soil breakdown and further loss of elevation. One major example of this process is being observed in the Florida Everglades. Another phenomenon resulting from saltwater intrusion is occurring on a large scale in coastal trees: as saltwater pushes inland, salt-intolerant hardwood trees are dying. From the roots up, coastal tree communities are transitioning into “ghost forests.”

Ghost forests do not pop up overnight, but they are becoming increasingly prevalent. Tree death is a gradual process, normally taking years to decades, but the increasing frequency of extreme weather events like storms and drought can accelerate forest loss. Hardwood species such as oaks and tupelo are usually the first to go, followed by more salt-tolerant species like sweet gum, red cedar and loblolly pine. Eventually, entire landscapes will transition from forest to marsh, and perhaps in time to open water. 

A similar phenomenon has been noted on barrier islands, like those spanning the coastline of Georgia and South Carolina. These islands are shaped by the movement of wind, ocean currents and sediment. Typically, sediment gets stripped from the northern end of barrier islands and is then deposited along the southern end, forming a new beach. This process of sand-sharing can give rise to “skeleton” or “boneyard” forests along the eroding beaches. 

As forests succumb to the sea, the skeletons of maritime forests help to stabilize eroded beaches. They can even be beautiful, serving as popular tourist attractions. However, even though skeleton forests may represent a natural part of Georgia’s barrier island life cycle, land is now being lost to the combination of rising sea levels, human development and extreme weather faster than some islands can keep up with.

A staircase to a beach on Jekyll Island being submerged by a high tide and shoreline armoring (here a sea wall) installed to minimize beach erosion. Image Credit: Rebecca Atkins. Used with permission.

Ghost forests can be viewed as a natural response to changing environmental conditions. Emergent marshes are better able to store carbon and keep up with sea level rise than forested areas because of their ability to capture sediment and vertically accrete. However, due to sea level rise, the overall area of marshland is declining faster than new marsh can accrue. Marsh expansion also depends on the availability of natural land at higher elevations to compensate as lower-elevation land becomes completely submerged. This ability is limited by human activity when coastal communities build homes and install hard structures like sea walls to prevent beach erosion.

Overall, the growing presence of ghost forests from Louisiana to Canada is a worrisome indicator of a rapidly changing coast, and researchers are taking notice. Within the Georgia Coastal Ecosystems Long Term Ecological Research Program (GCE LTER), a project has been initiated to measure the response of trees along the Altamaha river to hurricanes. So far 45 trees are being repeatedly surveyed as an indicator of forest health as storm events increase and salt water pushes further up into rivers. 

Compared with the more developed coastline along the Northeastern US, Georgia is praised for its roughly 100 miles of pristine coast. Unfortunately, sea level rise is both a global and a local problem that we’ll all have to face, and ghost forests, although captivating, are a haunting reminder of what’s at stake.

One extremely popular wedding destination is Driftwood Beach on Jekyll Island. Image Credit: Rebecca Atkins. Used with permission.
The sanded down surface of a ghost tree. Image Credit: Rebecca Atkins. Used with permission.

 About the Author

Rebecca Atkins is a Ph.D. student in the Odum School of Ecology. She is passionate about coastal ecology and is currently studying the effects of temperature on snail populations across US Atlantic salt marshes. In her spare time, she pursues art, weight lifting and drinking copious cups of local coffee. You can email her at Atkinsr@uga.edu or follow her @RL_Atkins.

Plastic tips: a more sustainable science


Alternatively, this post could have been titled, My Guilty Conscience Series: Plastics

This blog post has been a long time coming, given that I (and many others) have been conditioned to “reduce, reuse, and recycle” since before we could even multiply. Yet, even as I diligently sort my empty jars and cans into recycling bins, I come to the lab every day and amass a sizable amount of single-use plastics – and they’re not even recycled.

They just go straight into the garbage.

It doesn’t come as a surprise: we have a global plastic crisis. Increasing plastic pollution has been well documented by researchers around the world. If our current plastic waste production and management persist, we face long-term, detrimental consequences including endangerment of marine life, economic damage to coastal cities, and increasing microplastics in our diets. Currently, there is a movement to limit or ban single-use plastics for average consumers, largely focusing on everyday plastic bags, utensils, and packaging.

However, it would be reckless to claim that all plastic waste is due to individual consumer behavior. There is a more insidious current of plastic waste coming from a bigger, systemic entity: the research and development sector. Without exception, academic and industrial research bears a responsibility to curb its own plastic usage.

10mL serological pipette tips in a vase. A lovely bouquet. Image Credit: reerdahl via Flickr. Licensed under CC BY-NC-ND 2.0. 

In 2010, approximately 275 million metric tons of plastic waste were generated across 192 countries. Researchers at the University of Exeter estimate that life science research institutions generate 5.5 million metric tons of plastic waste each year, or roughly 2% of global plastic waste production. This contribution is overwhelmingly disproportionate, considering that life science researchers make up just 0.1% of the world population.

The reason for researchers’ large plastic contribution lies in the fact that plastics are well-integrated into laboratories. They’re cheap, disposable, and most importantly, sterile. 

Reagents are delivered to our door in plastic bubble wrap and Styrofoam. On our hands are periwinkle blue, latex-free gloves. Plastic pipette tips and sample tubes are disposed after a single-use, unless you want to introduce cross-contamination to your samples.

Curious about my own contribution, I collected all the single-use plastics I used in a day and estimated the amount of plastic waste I would generate in a year. Between maintaining my fly stocks, cell culture, and miscellaneous experiments, I had accumulated 254 g of plastic by the end of the day. That totals approximately 66 kg – roughly the mass of a small woman – of plastic in a single year.
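
Here is that back-of-the-envelope math as a short Python sketch. The roughly 260 lab days per year is my own assumption, chosen to reproduce the 66 kg figure; the other numbers are quoted above:

```python
# Back-of-the-envelope check of the plastic waste figures.
daily_plastic_g = 254               # measured over one lab day (above)
lab_days_per_year = 260             # assumption: ~5 lab days per week
yearly_kg = daily_plastic_g * lab_days_per_year / 1000
print(f"~{yearly_kg:.0f} kg of single-use plastic per year")             # ~66 kg

# Life science labs' share of global plastic waste (2010 figures above):
labs_t, global_t = 5.5e6, 275e6     # metric tons
print(f"labs generate ~{labs_t / global_t:.0%} of global plastic waste") # 2%
```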

It quickly adds up. But how do we limit our plastic consumption when our research depends on it? 

67 g of my non-hazardous plastic waste. Not pictured: the rest of the 187 g of biohazardous plastic waste – which had already been safely disposed of. Image Credit: Kathy Bui. Used with permission. 

Some eco-conscious scientists are attempting to change their daily lab practices without compromising results, and they are calling for more awareness of science’s sustainability issue. There are open resources and hashtags (e.g. #labwasteday, #labconscious, #sustainablescience) dedicated to sharing sustainable practices and inspiring other scientists to follow suit in this movement. Currently, some general tips are to use glass containers as an alternative, wash and reuse single-use containers (whenever contamination is less of an issue), and support suppliers that sell sustainable products.

In addition, some universities are taking matters into their own hands. The University of Leeds launched an ambitious initiative, pledging to give up single-use plastics entirely by 2023. This includes not only plastics in office spaces and cafeterias, but in laboratories too. Currently, the university is working with suppliers to limit the amount of plastic packaging and products, as well as developing alternatives to plastic equipment. Similarly, University College London, the UK’s largest university, plans to cut out single-use plastics and increase support for sustainability research by 2024.

Throughout the past few decades, there has been a major rally to control individual consumer plastic waste, but there have not been any comparable regulations on the research sector. While there is some recent progress on making scientific research more sustainable, there is still a need for systematic intervention and regulation of an entire sector’s worth of plastic waste. Some steps towards large-scale change are to (1) contact your university’s sustainability program about a bigger initiative towards more eco-friendly practices and recycling programs in research, or (2) express interest in more sustainable lab products with your supplier on social media. In the meantime, we can be more conscious of our own actions to reduce our environmental footprint – whether that’s recycling cans at home or using just one less pipette tip at the bench.

Kathy Bui is a Ph.D. student in the Department of Cell Biology at the University of Georgia. She is currently working on CRISPR-gene editing in Drosophila melanogaster and developing split fluorescent protein technology. She uses sturdy glass tupperware for lunch and her Google Pixel 3 to take high-quality pictures.

The Treasure in Your Trashcan


Many of us can recall a time when someone we knew (or even we ourselves) threw a banana peel out a car window. They’re biodegradable, so what’s the harm? I’ll never forget the time my mom did not dispose of her peels properly… My family and I were driving through Yellowstone National Park, and we had each eaten one of these tasty fruits. One by one, my mom threw the peels out the car window and onto the dirt path, not even batting an eye. Unfortunately, a park ranger was following us, and after turning on his lights and pulling us over, we quickly learned that it was not the appropriate time or place to freely whisk away our peels. Although many of us probably aren’t tossing our leftover produce in the middle of national parks, there is still a lot we don’t consider when we carelessly chuck our organic waste.

Environment is Key

Depending on where you throw that banana peel, it can take up to two years to fully decompose. Rather than letting it slowly disintegrate in the wild, composting is the better option. Composting will speed up the degradation of that banana peel, and cutting it into smaller pieces will make the process even faster. Having a designated spot in your yard or a bin on your porch isn't enough for a well-working compost, though. You'll need all the right conditions: a pH of 6.5-8.0, 40-60% moisture, and a temperature between 80º and 150º F, with higher temperatures preferred since they destroy pathogens. Add in some earthworms if you need further help breaking your scraps down into smaller pieces.
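
For the quantitatively inclined, those conditions boil down to a quick check. This is a minimal sketch using the ranges quoted above, not a substitute for an actual thermometer and pH strip:

```python
# A minimal sketch of the compost "sweet spot" described above.
# The ranges come from this post; the function itself is illustrative.

def compost_conditions_ok(ph: float, moisture_pct: float, temp_f: float) -> bool:
    """Return True if all three readings fall in the recommended ranges."""
    return (
        6.5 <= ph <= 8.0              # slightly acidic to slightly basic
        and 40 <= moisture_pct <= 60  # percent moisture
        and 80 <= temp_f <= 150       # Fahrenheit; hotter kills pathogens
    )

print(compost_conditions_ok(ph=7.2, moisture_pct=50, temp_f=135))  # True
print(compost_conditions_ok(ph=5.8, moisture_pct=70, temp_f=75))   # False
```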

The Science of Composting

So what exactly is going on in that backyard compost box? Composting is the process by which solid organic waste is turned into an environmentally useful material. But it doesn't just happen as soon as an orange peel hits the ground. The key is having helpful microbes such as bacteria, actinomycetes, and fungi, which convert organic waste into simpler substances rich in carbon, nitrogen, phosphorus, and potassium. There are two types of degradation: aerobic, which requires oxygen, and anaerobic, which does not. Aerobic degradation occurs much more frequently. The newly converted material can be used to boost the soil fertility of a garden or as a renewable energy source. So what happens when you don't properly compost food waste? The average American throws out roughly 1,200 lbs of compostable organic waste every year. Sadly, when that leftover produce ends up in a landfill, biodegradation typically doesn't happen. Due to the dry, oxygen-poor conditions of landfills, organic matter will most likely "mummify" rather than decompose.

LetUsCompost

One Athens resident in particular saw a need for increased composting efforts and decided to create her own composting business in 2012. Kristen Baskin started LetUsCompost, a company that provided curbside compost pickup and compost-enhanced soil delivery, in addition to compostable plates, cups, and silverware. Over the seven years they operated, they paved the way for Athens compost culture, getting several local businesses on board: Hendershots, Collective Harvest, and The Hub Bicycles all worked with LetUsCompost to properly dispose of their food waste. Although the company recently announced it is ending operations, Kristen and her crew have made a lasting impact on Athens compost culture that can still be seen today.

Kristen Baskin of LetUsCompost. Join Kristen and me to learn more about the science behind composting and how you can help turn your trash into treasure at this week's Science Cafe! Little Kings Shuffle Club, Thursday, January 23rd at 7pm. (Photo used with permission)

About the Author

Hallie Wright studies host plant resistance and fungal avirulence of finger millet blast in Katrien Devos’s lab.  She’s passionate about enhancing agricultural literacy and helps middle schoolers conduct agricultural science experiments.  You can find her at local punk shows or eating jalapeño pineapple pizza at Fully Loaded.

The False Promise of Animal Testing: Safety and Efficacy


One fact that was drilled into my head while studying biomedical science was how few experimental drugs ever make it past clinical trials – a failure rate of roughly 90% is reported. This struck me as odd, but I chalked it up as an example of how difficult drug development is and didn't ask why. That changed when I decided to use mice as part of my thesis project. I was initially reluctant, but my graduate advisor convinced me it would be the best way to test my hypothesis. As my experiments progressed, though, I started to wonder whether the mice on my lab bench could really predict how a human would respond to the same treatment. This led to discoveries that would completely change my outlook on preclinical drug testing.

Laboratory rats in typical research housing. Image Credit: Understanding Animal Research via Flickr. Licensed under CC BY 2.0

Ultimately, the reason so many drugs fail clinical trials comes down to two pillars of biomedical science: safety and efficacy. If a drug has dangerous side effects, or if it doesn't provoke a therapeutic response in enough people, it's thrown out. As part of the preclinical regulatory process, the Food and Drug Administration (FDA) mandates that any investigational drug compound be extensively tested in at least a few different species before it is approved for clinical trials. To understand why, it's useful to examine the medical tragedy unfolding as the legislation was passed.

In the late 1950s and early 1960s, the world was reeling from the discovery that a new sleeping pill, thalidomide, caused severe birth defects when taken by pregnant women. Developed and marketed in 1957 by the German company Chemie Grünenthal, the drug is estimated to have caused deformities in over 15,000 children worldwide. The US, however, was mostly spared thanks to the FDA's refusal to approve the drug. Politicians such as Senator Estes Kefauver (D-Tennessee) criticized Grünenthal while praising the FDA for recognizing the potential danger. Surely, the whole tragedy could have been prevented if the company had simply tested its drug on pregnant animals! In 1962, the Kefauver Harris Amendment to the 1938 Federal Food, Drug, and Cosmetic Act was passed, mandating that new drugs be proven safe and effective before being administered to humans. Extensive animal testing was enshrined as the gold standard for ensuring this.

Here's the thing: no one knows whether Grünenthal actually tested thalidomide on pregnant animals; all of their records were destroyed. What we do know is that teratogenicity (embryonic toxicity) testing was routine by the 1950s. It's unlikely that a well-established pharmaceutical company would simply skip those tests, but let's assume it did. Would more animal testing have prevented the disaster? To answer that, consider Karnofsky's law:

Any drug administered at the proper dosage, and at the proper stage of development to embryos of the proper species…will be effective in causing disturbances in embryonic development.

“Thalidomide babies” would often be born with underdeveloped limbs that resembled flippers. Image Credit: wild.sproket via Flickr. Licensed under CC BY-NC-ND 2.0.

Extensive animal testing has proven this to be true. By 2004, some 1,500 drugs had been shown to produce birth defects in at least one animal species, while only 40 were known human embryonic toxins. Mice and most other rodents do not exhibit classic thalidomide toxicity, even at doses of 4,000 mg/kg. (In humans, thalidomide typically produces birth defects at 0.5 mg/kg – an 8,000-fold difference!) Only monkeys consistently experienced birth defects when given thalidomide, and then only at 10 times the usual human dose. Unfortunately, this turned out to be the exception rather than the rule: for other known human embryotoxins, toxicity in non-human primates is a poor predictor of human toxicity. So, if modern techniques produce such divergent results, does it seem likely that 1950s scientists could have made sense of more animal data?

Animals are not little humans. Biological systems are so complex that even if two species share almost all of the same genes, differences in how those genes are regulated and how they interact can lead to totally different outcomes. Animal models routinely fail to predict safety and efficacy in humans, despite that being the very measure they are supposed to assess. Imagine how many potentially life-saving drugs have been discarded based on poor results in animals! It's clear to me that the FDA needs to revisit Kefauver Harris, but what can be done in the absence of reliable alternatives to animal testing?

Stay tuned for part 2 of this series, where I will go over current efforts to phase out animal testing in preclinical drug research.

 

About the Author

Israel Tordoya is an MS student in the Department of Pharmaceutical and Biomedical Sciences, studying the relationship between obesity and breast cancer. One day, he hopes to be an advocate for marginalized people (and animals!) in medicine while developing generic drugs. In his free time, he likes to run, listen to audiobooks, and make bad music. Find him on Twitter: @TordoyaIsrael or email him at it37190@uga.edu.

The roots of your tea


While coffee has enjoyed a cultural renaissance, with independent roasters popping up all over the country and even the most precocious seven-year-old able to spout the difference between arabica and robusta, a far older drink remains in obscurity in the continental United States. The drink I'm referring to is tea: the national drink of Britain and the world's second most popular beverage.

Tea Styles

Ask any random person on the street about tea and they'll easily name a few of the classic varieties found at the supermarket: green tea, Earl Grey, oolong, pu-erh if they really know their stuff. But prod their knowledge a little more, with simple questions like "What makes a green tea a green tea?" or "How is an oolong different from a white tea?", and they'll most likely look at you askance and mumble something incoherent. Or they might make the all too common mistake of assuming that such a variable drink must come from different plants. In fact, they'd be wrong: all tea comes from a single plant, Camellia sinensis. But then you're still left with the same question – what makes different styles of tea unique, if not the plant itself? The answer lies in the processing.

Tea leaves. Photo by Arfan A licensed under Unsplash.

The single process that determines the style of a tea is oxidation. From a chemical standpoint, oxidation is the loss of electrons, mainly to oxygen. In layman's terms, it's the conversion of one chemical into another with the help of an enzyme (in this case polyphenol oxidase) and oxygen. It's this very process that causes bananas and apples to brown. However, oxidation isn't always a bad thing. Sometimes, when the chemicals in our food change, they change for the better, unlocking different compounds and creating more unique flavors and aromas.

So, to create these novel flavors, tea growers expose tea leaves to processes that either increase or decrease the amount of oxidation. In practice, this means taking large batches of tea leaves and either rolling or cutting them into smaller pieces. This mechanical action breaks up the cell walls, spilling the cellular contents throughout the leaves and triggering enzymatic oxidation. Oxidation then converts the polyphenols in the leaf tissue into flavonoids and terpenoids (the molecules that give tea its taste), while at the same time browning the leaves. This alteration of the leaf's native chemicals gives each style of tea a unique flavor combination.

Tea leaves oxidizing after rolling. Image Credit: 蔡 嘉宇 licensed under Unsplash.

After a tea has reached its desired level of oxidation, the leaves are heated and dried to denature the enzymes in the leaves and halt any further oxidation. This is an essential step, and it requires the utmost precision: if you were crafting an oolong and let it oxidize a little too much, you've actually created a black tea. Of the six styles of tea ranked from least to most oxidized (white, yellow, green, oolong, black, post-fermented), the least oxidized teas have the least caffeine, while the most oxidized have the most. The lighter processing of less oxidized teas generally gives them a "leafier" or "fresher" taste than more oxidized teas. Note, though, that this is only a general rule, as oxidation can vary considerably even within the same style of tea. For instance, two producers might oxidize their oolongs differently, one to 40% oxidation and another to 80%. Both are still technically oolongs, but they will be quite different to drink.
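
For the programmatically minded, that ranking can be captured in a few lines. This toy sketch encodes only the ordering described above; numeric cutoffs are deliberately left out, since (as the post notes) oxidation ranges vary even within a single style:

```python
# The six styles, ordered least to most oxidized, as ranked in this post.

TEA_STYLES = ["white", "yellow", "green", "oolong", "black", "post-fermented"]

def less_oxidized(style_a: str, style_b: str) -> bool:
    """True if style_a is ranked less oxidized than style_b."""
    return TEA_STYLES.index(style_a) < TEA_STYLES.index(style_b)

print(less_oxidized("green", "black"))   # True
print(less_oxidized("oolong", "white"))  # False
```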

Other factors can have lesser effects on a tea's style: where and how the tea was grown, the age of the leaf when picked, or whether the plant was exposed to pests. And yet the major actor remains the same process that browns our bananas. The simple act of damaging leaves and letting them sit and change has created a class of beverage that is incredibly large and diverse. So, next time you sit down with a cup of tea, why not take a minute to savor the taste of oxidation.

About the Author

Pablo Mendieta is a graduate student pursuing a PhD in bioinformatics and genomics at the University of Georgia. His interests lie at the intersection of agriculture and genetic technologies. Originally from Boulder, Colorado, he enjoys the outdoors, science fiction, programming, and hip hop. You can email him at john.mendieta@uga.edu or connect with him on Twitter. More from Pablo Mendieta.

 

Deadly realism, science communication, and dropping out of high school: an interview with Dr. Diana Six


Dr. Diana Six is a professor of Forest Entomology and Pathology at the University of Montana. Her work focuses on bark beetles, their symbiotic fungi, climate change, and what these factors mean for forest health. Her research has attracted media attention – she has presented at TEDx and been featured by National Geographic – all while she pursues a master's degree in journalism. Here, Dr. Six and I discuss her path from high school dropout to professor, how to stay hopeful as a scientist in the face of climate change, and the importance of science communication.

Dr. Diana Six. Used with permission.

You took a different route to get to academia – can you talk a little bit about that experience?

I think I can describe myself as the accidental tourist in a way. I’m a first generation college student. My mother got through high school and my dad dropped out in the 5th grade – he was virtually illiterate. It was a disruptive and abusive home. I dropped out of high school and spent years drifting and doing drugs. And then, I can’t even tell you what happened, but something changed and I decided I couldn’t go on that way. 

I went to night school to get my high school diploma. Two teachers there took an interest in me and talked me into enrolling at a community college. I didn't really want to go – I was doing it more to make them happy. I enrolled in library science because I couldn't figure out what to do, but I liked books. I took a biology course my first semester, switched majors immediately to microbiology, and received an associate's degree. I went on to get a bachelor's in agriculture because I liked bugs – that led me to a master's in medical and veterinary entomology.

At that point I wasn’t sure what to do; I had worked equally on insects and fungi and loved both. In fact, as a kid I had an insect and a fungus collection. I got offered to work on a PhD with bark beetles and fungi, and went ‘Oh god, this is perfect!’  

In the end, do you think your non-traditional route helped you become a better scientist?

Maybe. I have a deadly realism when I look at the world – I have a very critical eye. I think in ways it has helped, but in other ways my background really held me back. I had to really fight to get out of being insanely shy with no confidence and I still suffer badly from imposter syndrome – I don’t think that ever goes away. So it was a struggle, but I think it’s made me a better scientist. It took me a lot longer to get here, but I got here. 

[The non-traditional route] helps me advise students. In Montana at least, we get a lot of first generation students and it helps me talk to them. It also helps me talk to students that have been growing up in an abusive home and have had a rough start. They can kind of see if I made it, they can do it too. So, it’s helped me be a good mentor as well.

Mountain pine beetle damage in the Rocky Mountain National Park. Photo by Bchernicoff licensed under CC BY-SA 3.0.

You have a very impressive record of science communication. I was wondering how you got into science communication and how you balance it with your research? 

I’ve always been interested in improving science communication. At first it was to other scientists – there were papers that I knew had cool stories, but those stories were certainly hidden well. I wanted my papers to narrate what was actually happening. Then, bark beetles began to be a big thing and became quite political, so I was getting interviewed a lot. It was a pretty unsatisfactory interaction for me and the reporters – I didn’t know how to talk to them. Consequently, I wasn’t communicating well and wasn’t very happy with what they reported, although a lot of it, I soon realized, was my fault. So I started talking to journalists, looking more at how they operate so that I could interview better and start preparing in a different way.

Then I did something really crazy. Six years ago, I enrolled in a master's program in journalism, and I'm finally getting there – I've got one more class and am putting my thesis together! I realized that in order to be really good at [science communication], I had to do more than take a two-hour workshop. I've had to go out and do the reporting, actually work as an editor at a magazine, and do all sorts of things. I feel like I can write better, I can interview people better, and I can make better products.

But now at least when people interview me, I know how to tell them a good story that journalists can report accurately. They don’t have to piece together jumbled stuff that I give them – you can lead them into the bigger story. 

As a journalist, do you have any tips for graduate students on how to communicate their research efficiently?

 We always tell linear stories as scientists: this is a question, this is how we’re looking at it, this is what we found, this is what it means. That’s not how you talk to a journalist; they just want to hear the end. And they need it in fairly jargon-free, short, clear, concise sentences. We have a tendency to talk in very long sentences and go on and on and on to explain one little thing, which is what I’m doing right now.

It's hard to gather your thoughts if you go into this cold. If you can get a brief idea of what they're going to ask you about, it's good to sit down ahead of time and put together soundbites – little short sentences that make clear what's happening. It's always good to have some metaphor in there, too: if you want to make a point, use some kind of cool visual term that will make it interesting.

There's a book, Escape from the Ivory Tower by Nancy Baron, that teaches scientists to communicate with the media – it's an awesome book. There's one page in particular called 'The Message Box.' If you do nothing more than use that page, you'll become a better communicator of science. In one of my journalism courses, they had us use it, and now I never go into an interview without filling out a message box ahead of time. In fact, I have a whole stack that I pull back out, depending on what I'm doing. I'd recommend that any grad student or academic who wants to do interviews start with that message box approach. It's really powerful.

 Where do you see the field of science communication going?

All people going into science should learn good science communication skills. I don’t think everybody has to get a degree in journalism, but developing that as a skill is crucial.

One of the reasons that science has lost credibility is that people don't understand it – they don't hear about the science that's being done. If scientists communicated their findings and the value of that work more often, appreciation for science would be stronger and people would see its value in their lives. Even for people who are communicating about climate change – if you don't understand your audience, the communication isn't going to happen. So, learning how to be a good communicator of science is crucial, and I think all graduate students should be doing some aspect of it. It should just become a natural part of training.

Life of Pine featuring Diana Six from CJ O’Flair on Vimeo.

Your Twitter bio says, "Climate change is real – just ask the bark beetles and pretty much all of nature." Do you have a go-to spiel about climate change for those who deny it or aren't so familiar with it?

I do it by examples that affect their lives. People don’t really become concerned unless it’s affecting something that’s very real and important to them. I think about what community they live in and what kinds of things that they do that are important to them. Then I’ll point out things that have likely changed in their lifetime that they can see. They often start to understand; then you can build on why those things are changing and how that could influence them. I can’t tell someone, ‘You need to worry about this because of polar bears.’ That makes them go ‘Oh I like polar bears and that’s a bummer,’ but it’s not going to affect them in their heart. So for me, when I give talks or meet with people, I try to bring it to the effects that are in their lives. 

Do you have any advice to give to new scientists who feel unhopeful about the future in the face of climate change?

This is the toughest time to be an ecologist. Ecologists study interactions between species, but we're seeing those interactions change – they're either being torn apart or enhanced. It's not only depressing to watch the ecosystems you study begin to change and fall apart, and extinctions increase; it also makes it increasingly difficult to study even basic questions.

My advice is, if you’re an ecologist and that’s what you want to do, there’s probably no more of an important time for you to be one. The information that you can gain right now has such added value and importance and I think you can make a major difference like never before.

To hear more from Dr. Diana Six, you can follow her on Twitter or visit her lab website. To learn more about the mountain pine beetle outbreak and her work on it, you can check out her interview here.

About the Author

Simone Lim-Hing is a Ph.D. student in the Department of Plant Biology at the University of Georgia studying the host response of loblolly pine against pathogenic fungi. Her main interests are chemical ecology, ecophysiology, and evolution. Outside of the lab and the greenhouse, Simone enjoys going to local shows around Athens, cooking, and reading at home with her cat, Jennie. You can reach Simone at simone.zlim@uga.edu or connect with her on Twitter. More from Simone Lim-Hing.

 

From Touring Musician to International Mycologist


Dr. M. Cathie Aime is a Professor of Botany and Plant Pathology and Director of the Arthur Fungarium and Kriebel Herbaria at Purdue University. Her lab specializes in the biology of rust fungi and the biodiversity of tropical fungi, giving her research an international focus. Interestingly enough, Dr. Aime didn't follow the traditional path to academia by any means.

Photo by Cathie Aime, used with permission.

Can you describe your undergraduate experience?

 I dropped out of undergraduate school in my third year to take care of my grandmother in New Orleans. I was a musician, played in a bunch of bands — even toured. I worked in bookstores and as a waitress to support my music habit for about 10 years. When I turned 30, I told myself, “You’re not going to make it as a musician. You should do something with your life.” So I decided to finish my undergrad degree and eventually got my PhD in mycology.

How did you become interested in mycology?

My last course in undergrad at Virginia Tech was in mycology. I didn’t know anything going in —  other than that mushrooms sounded cool. Sure enough, it’s what changed my life. I had a really good professor, so I became fascinated with fungi and how much there was unknown about them. He (Orson Miller) convinced me to go to grad school, and I ended up staying at Virginia Tech to work with him.

Photo by Cathie Aime, used with permission.

What inspired you to stay in academia?

Really, everything from that point is about Orson (Miller); he was a fantastic teacher. Without his encouragement, his explanation of academia, and grad school and research, I probably wouldn’t have considered research as a career. I knew people did it; I just didn’t know how people did it. When I started in the lab, I knew that’s what I wanted—to do research and be in academia, just be in that environment. 

How did you get involved with field operations for your biodiversity work?

 When I dropped out, there was no such thing as molecular biology. In those ten years, the entire field of biology had changed. At Virginia Tech we didn’t have the facilities to do molecular biology, but at Duke (about 3 1/2 hours away) there was a mycologist doing molecular mycology. I would drive down there every weekend and holiday and work in his lab. When I was there, I met a grad student who had previously worked in Guyana as a botanist. We decided to do a one year study in Guyana to look at the fungi there. It had nothing to do with my research; it was just something fun to do as a side project. Of course this side project has been going on for 20 years now. 

What was the hardest part about doing these field experiments/trips?

 All of the permissions and permits from the local governments. Wherever you are doing the work, getting the permits is always time consuming and what you need is different for every country. Even in Guyana, where we have been for 20 years, the rules change every year. It’s expensive to get the permits for doing research, and additionally you need to get separate permits to export whatever you are taking out of the country.

 Places like Guyana have no overland routes to where we go, so we have to take little charter planes that can land at abandoned mining camps. The planes take 600 pounds, so we have to  figure out what we can take and how many planes can take us. The weather is always bad. Sometimes you’re sitting on the airstrip on the other side for days, waiting for the weather to clear up so a plane can come back and get you. 

Usually, buying all the gear and rations goes well. But if you forget something, like your salt or your toothpaste, you are out for 2 months. Building the camps themselves isn't so difficult because we keep going back to the same place. One year, we had a lightweight aluminum canoe to get us around, but there was a flood and the canoe washed away. There we were, stuck in the middle of nowhere, with no boat, no way to get back to where the plane was supposed to pick us up, and no way to signal anybody. Eventually, we got back down to the airstrip after building a dugout.

Photo from the personal collection of Cathie Aime, used with permission

So how do you pick your fieldwork locations – a lack of prior studies, or something interesting going on there?

It's a little more haphazard. For instance, in Vanuatu, in the middle of the South Pacific, no one has studied the microfungal flora. I know there's a lot of endemism in those islands, but surveying there requires immense resources and infrastructure. I got lucky when I was offered the opportunity to hitch a ride with a research group led by the New York Botanical Garden. Some of the other locations, like Queensland, were very targeted: there was a specific fungus in the rainforest that my postdoc and I needed in order to resolve a tangled problem. After a few years of trying to get samples or work around it, we just said, "Let's go get it ourselves!" In Cameroon, we got funding to set up a long-term study to match the one in Guyana – a very deliberately chosen forest and region to test specific biogeographical hypotheses. So, overall, a mixture of different reasons.

Photo by Cathie Aime, used with permission.

With all the international work that you do, how do you maintain an international student presence in your lab?

 I don’t know if it was so deliberate at first. When I go to different countries, I often get to work with local students that are interested in mycology but don’t have access to rigorous mycological training, especially in the developing world. If a student is passionate and shows promise, then I’m going to do everything I can to get them into my program. A lot of my students are that way. The way I see it, your productive time as an academic is limited. I started in my 30’s, so I have 20 to 30 good years in academia—what do I really want to do with that? I want to train students that are passionate about mycology and the environment, who will go on to train the next generation around the world.

 

About the Author

Inam Jameel is a PhD student in the Department of Genetics at UGA. He is interested in how natural populations adapt to rapidly changing environments. When not in the greenhouse or in the lab, Inam likes to run, attend concerts, watch the Washington Capitals, and do poorly at trivia. He can be reached at inam@uga.edu or @evo_inam. More from Inam Jameel.

Rethinking Anorexia: Making the Biopsychosocial Connection


With only 50% of patients recovering fully in the long term, anorexia is the deadliest psychiatric disorder. Typically associated with poor body image and unhealthy eating habits, anorexia has captivated and bewildered the minds of laymen and scientists alike. Not every person suffering from anorexia is underweight, and there is still a general misunderstanding of what is really going on in the mind and body. It is a myth that anorexia is a purely psychological phenomenon in which one's desire to be skinny goes too far; the reality of the disease is much more complex. With side effects ranging from osteoporosis and anemia to heart failure and nerve damage, the consequences are far more severe than just being "too skinny."

Credit: Thigala shri via Flickr. Licensed under Creative Commons Public domain.

Anorexia typically "begins" with an environmental trigger, such as stress or a simple desire to eat healthier, that prompts overexercising or restricting food intake. But not everyone who loses weight develops anorexia. Rather, some people may have genetic predispositions (mental, physical, or metabolic) that drive habits reinforcing anorexic behaviors. The result is the typical pattern of an increased drive for thinness, increased body dissatisfaction, and ongoing food restriction, perpetuating a cycle of behavior and reward: restricting food and losing weight.

This caloric restriction drives drastic changes in the gut microbiome, the collection of microorganisms in the GI tract that influences our metabolism and mood. While factors such as genetics, age, and sleep play a role in microbial diversity, fiber intake is a huge player in the gut's microbial composition and function.

Escherichia coli grown in culture and adhered to a cover slip. E. coli is an example of a bacterium whose abundance is negatively correlated with BMI in patients with anorexia. Credit: Rocky Mountain Laboratories, NIAID, NIH via Wikipedia. Licensed under CC Public Domain Mark 1.0.

What characterizes the gut environment of anorexics? Chronic caloric restriction, food group imbalance, micronutrient deficiencies, and high fiber intake, to name a few. The result is dysbiosis, or microbial imbalance, which affects not only the host's metabolism but also behavior and the immune system. What was once a balanced and diverse environment becomes competitive, selecting for microbiota that can survive on little energy and few nutrients.

Usually, our gut and brain communicate via the "gut-brain axis" to let us know whether we are satiated or still hungry. Gut microbes release chemicals, such as short-chain fatty acids (SCFAs) and hormones, that affect the appetite and metabolism control centers of the brain. When we drastically reduce our food intake, our bodies get confused and this normal communication between brain and gut is impaired. A well-maintained intersection becomes a traffic jam, and unfortunately, that traffic jam seems to have lifelong effects for anorexics.

Our current “treatments”  focus on refeeding and addressing psychosocial needs, but less so on fixing the microbial dysbiosis.

A graphic illustrating the relationship between the gut and brain in patients with anorexia. Used with permission from Ashleigh Gehman.

Knowing what we now know about the intricate connection between the gut and brain and its genetic underpinnings, it is important to translate this research into new therapies, such as prescribing pre- and probiotics and using fecal-matter transplants. Studies have shown promising results from targeting the microbiota to treat mood disorders, primarily by regulating serotonin levels, which actively influence appetite and behavior. Some bacterial strains have also been shown to alleviate anxiety and stress, two hallmark symptoms of anorexia. These novel approaches could theoretically improve weight gain, decrease stress on the gut, and even reduce psychological symptoms!

While anorexia treatment has long focused on alleviating psychiatric symptoms, it is perhaps time to turn our attention to the relationship between the patient's gut and brain. The development of novel treatments that target the gut microbiome is essential if we are to properly tackle this disorder. All of these gut-brain factors, combined with genetic effects on mental health and metabolism, may prove important for improving the long-term outcomes of one of the most chronic psychiatric disorders of adolescence.

If you or someone you love is currently struggling with an eating disorder, contact the NEDA helpline (1-800-931-2237) for support, resources, and/or treatment options.

 

About the Author


Maria Flowers is an undergrad studying Biochemistry and Molecular Biology/Spanish at UGA. Her dream is to continue demystifying the sciences through art and writing. In her free time, she loves to dance, read the New Yorker, write poetry, and listen to her favorite podcast "On Being" while doing yoga. She also loves frisson-ing to any and all types of music. If you wanna chat about science or her wide array of special interests, you can email her at mwf38801@uga.edu.

 

A not so familiar face: How a transferrable cancer could be the end for an Australian mammal


Cancer, a complex disease caused by an accumulation of mutations in our DNA, affects millions of individuals each year. It poses a very serious threat, but is it contagious?

The short answer is "no", at least for us humans. The only way to truly transfer cancer from one person to the next is through an organ transplant. No cases of cancer itself being contagious have been reported in humans, though certain viruses, such as the familiar human papillomavirus (HPV), have been linked to causing cancer. Directly transmissible cancer is rare, but some examples do exist in organisms outside the scope of humans.

Tasmanian devil (Sarcophilus harrisii). Image credit: Vassil via WikiCommons. Licensed under CC0 1.0.

Tasmanian devils, the largest carnivorous marsupials on Earth, can pass on cancer the way we pass on a cold. These devils, roughly the size of a small dog, suffer from a facial cancer known as Devil Facial-Tumor Disease (DFTD). The disease produces large tumors on the face and neck, causing asphyxiation or starvation an average of six months after onset. The cancer goes undetected by the devil's immune system, allowing tumors to grow until it is too late. But how does it spread from one devil to another? Devils are gregarious creatures, and during activities such as feeding or mating they wound each other through biting, ultimately spreading cancerous cells from one devil to another. This transmission of DFTD is so efficient that the origin of the disease has been traced back to a distinct region of Tasmania using genetic analyses of tumors sampled from affected devils. Tasmania, an island state of Australia off its southeastern coast, is the last remaining native range of the Tasmanian devil.

Modified map of Australia. Image credit: Mark Ryan via WikiCommons. Licensed under the GNU Free Documentation License.

Initial reports of devils with Devil Facial-Tumor Disease cropped up around the mid-1990s. Within just 10 years, the infection rate was estimated at about 70% of the population, and up to 60% of the devil population is thought to have been lost to DFTD since 1996. Unfortunately, since the discovery of the first form of the disease (DFTD1), a second form (DFTD2) arose sometime between 2007 and 2010, further decimating the devil population.

This Tasmanian DFTD epidemic has brought the endangered species near the point of extinction, projected to occur in as little as 35 years if no action is taken. Luckily, conservation efforts are underway to relocate uninfected devils to zoos around the world, as well as to nearby Maria Island, off the eastern coast of Tasmania. Researchers at the University of Tasmania are spearheading a large collaborative effort that is showing promising results in testing recently developed vaccines against DFTD. The same researchers have also noted that the devils themselves are evolving to combat the disease, showing genetic mutations that improve resistance and tolerance to DFTD. Hopefully these human-facilitated efforts, in combination with the naturally occurring mutations, can lead to a successful and robust recovery of the Tasmanian devil population.

Ultimately, these devils provide an interesting and unusual case of transmissible cancer that could be used to further cancer research. Similar diseases have been documented in both domestic dogs and Syrian hamsters, with seemingly related mechanisms of cancer establishment and metastasis (spread of the disease to new locations in the organism). These animals have the potential to provide valuable insights into basic tumor biology, tumor evolution, and tumor transmission mechanisms for human studies.

 

Ben Luttinen is a Ph.D. student in the Department of Genetics studying the development of beneficial viruses in parasitoid wasps. In his spare time he enjoys watching movies, playing golf, and the occasional drink. You can reach Ben at benjamin.luttinen@uga.edu.

The Wonders of Human Milk!


It's a girl (or boy)! Your bundle of joy is finally here. Stepping into parenthood, life is magical. But it is not all sunshine and roses either, with the constant cleaning, frequent feedings, and sleepless nights. Your worst fear is the baby falling sick on top of it all. No wonder you find yourself paranoid, sterilizing everything all the time. Despite your sterilizing habit, millions of bacteria are making their way in through your baby's mouth. Did you think breast milk was sterile? No! It is teeming with bacteria, which invade and colonize your baby's gastrointestinal tract. These bacteria, along with viruses and fungi, constitute the gut microbiome. This diverse microbial population, crucial for our well-being, enhances metabolism, synthesizes vitamins, and fights infections.

A newborn is prone to sickness, frequently encountering novel pathogens. However, human milk has evolved to provide a first line of defense: it transfers antibodies, raised by the mother against pathogens she encountered during pregnancy, to the child. These antibodies confer protection against respiratory and gastrointestinal infections and help fight inflammatory diseases like asthma, atopy, diabetes, obesity, and inflammatory bowel disease, all while the milk provides nutrition to the baby.

Mothers Child Image credit: Satya Tiwari via Pixabay. Licensed under Pixabay License.

Compositionally, one of the building blocks of milk is a family of special sugars called human milk oligosaccharides (HMOs). HMOs cannot be digested by infants; they reach the intestine and colon intact, where the gut microbiome uses them as an energy source. HMOs have been shown to improve gut health by feeding these beneficial bacteria. Cow's milk is similar to human milk except that it contains significantly fewer HMOs, and this lack of HMOs can cause gastrointestinal problems and a compromised immune system if cow's milk is used as a substitute. Geographic location, environment, and the mother's genetics all have significant effects on the types of HMOs found in human milk, and these acclimatized HMOs help fight pathogens in the baby's local environment. Breastfeeding benefits the mother as well: it burns extra calories, helping shed pregnancy weight; it promotes bonding with the baby through the release of the hormone oxytocin; and it lowers the risk of breast and ovarian cancer, as well as osteoporosis.

Human milk is the perfect food for your baby, with balanced sugars, fat, vitamins, and proteins. No wonder the World Health Organization (WHO) recommends exclusive breastfeeding for the first 6 months of infancy. If breastfeeding is not possible, infant formulas with similar HMO and nutrient compositions are available. It really is wonderful how nature has evolved human milk to not only boost the gut health and immunity of the child but also promote the good health of the mother.

To learn more about the wonders of human milk and gut health, be sure to attend the upcoming Athens Science Café on November 21, 2019. Dr. David Mills, Professor in the Department of Food Science and Technology at the University of California, Davis, will be sharing his perspective during this wonderful discussion.


 

About the Author

Ankita Roy is a Ph.D. student in the Department of Plant Biology at the University of Georgia working with bean roots. She plays mommy to two kittens and can whip up a curry to fire your taste buds in no time. True to her cooking skills, she enjoys trying out new cuisines to satisfy her passion for everything flavorful. She is an executive member of the Indian Student Association. You can reach her at ankita.roy@uga.edu. More from Ankita Roy.

The science behind high insulin prices

Among the many great things about life in Canada: I can walk into a pharmacy and purchase my insulin... at one-tenth of its cost in the US.

You probably know or love someone who suffers from diabetes mellitus. Recent CDC reports estimate that nearly 10% of Americans have diabetes, and as many as a third of Americans are pre-diabetic and undiagnosed. So there is a reason the cost of healthcare – and in particular of insulin, the lifesaving drug used to treat diabetes – has been a popular topic in the news. Annual insulin costs have been skyrocketing, creating dangerous conditions for diabetics. In March 2017, the death of Shane Patrick Boyle raised a lot of eyebrows: he died from diabetic ketoacidosis after his GoFundMe campaign fell $50 short of its goal for his $750 monthly supply of insulin. As recently as July 2019, a Minnesota man died a similar death. Even the presidential race has highlighted the subject: presidential candidate Bernie Sanders recently bused a dozen Americans to Canada to purchase insulin at one-tenth of the US price. Yet many people don't know what insulin actually is or why it is so difficult to produce competitively.

"Insulin" by Open Grid Scheduler / Grid Engine is licensed under CC0 1.0

 

What IS insulin, anyway, and what does it do?
The energy we need to survive comes from breaking food down into glucose, which is absorbed into the bloodstream. However, too much glucose in the blood can hurt us, a condition called hyperglycemia. Luckily for us, insulin is a peptide (protein) hormone that promotes the uptake and storage of glucose in our cells, removing it from the blood. This lets us bank glucose as energy – it "brings our sugar down."

In patients with diabetes, however, the disease causes blood sugar to remain too high. Without insulin, the body cannot store or use glucose as fuel, effectively making you starve. This can lead to ketogenesis, an emergency mode of energy production in which the body breaks down fat into ketone bodies. These ketones are acidic, and an acute build-up of them in the bloodstream can lead to ketoacidosis – the condition that killed Shane Patrick Boyle. In the long term, hyperglycemia can cause microscopic vascular damage and, eventually, organ failure.

So why is insulin so expensive?

Insulin is unique. Normally, the most expensive drugs are ones that (1) can only be sold to a few people, so few people share the cost, or (2) are new, and no one has had a chance to make a competitor yet. Neither applies here: diabetes is the 7th leading cause of death in the United States, and the scientist who discovered insulin sold the patent to the University of Toronto for $1 in 1923 because, in his words, "insulin belongs to the world, not to me."

So why the rising price?

It is a perfect storm of business and biological complexity. First, the three largest manufacturers of insulin – Eli Lilly, Novo Nordisk, and Sanofi – represented 96% of the total insulin market as of 2018, and once you corner a market, you can set the price. For instance, insulin prices rose three-fold during the decade in which Alex Azar, the current Secretary of Health and Human Services, was a senior executive at Eli Lilly, including his time as president of the company. This pattern is exacerbated by the fact that drug prices in the US are negotiated by a convoluted web of private payers.

Second, insulin is not a small-molecule drug but a large, complex biological molecule. A safe, identical copy (or biosimilar) cannot be easily made, making it difficult for competitors to enter this established market.

Lastly, the price has been kept high by a process known as "evergreening" of patents. Normally, a drug patent lasts only 20 years, but companies can essentially reset the clock as long as they change their product slightly. As new insulin products enter the market, older (and potentially cheaper) versions are discontinued, so a low-cost generic never arises. Even though Banting sold his patent for a dollar, that patent covered insulin extracted from mammals, whereas insulin today is made as biosynthetic analogs.

Moving forward

However, there may be some good news on the horizon for those of us whose lives depend on insulin. In July, Azar announced that the Trump administration plans to allow Americans to legally import prescription drugs from Canada in an effort to reduce prices. There are also biohackers working to develop open-source insulin manufacturing protocols to combat the effects of evergreened patents. Still, these efforts do not address the systemic problems that allow insulin prices to soar in the first place.

A study by Imperial College London found that a more reasonable price for an insulin analog would be somewhere between $78 and $130 per person per year, if more competition could simply enter the market. Going forward, it is important to keep up with news on the cost of insulin and whom it affects, as consumers and as voters.
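
To put that gap in perspective, here is the arithmetic on the figures quoted in this post, as a rough illustration rather than a formal cost analysis:

```python
# Side-by-side comparison of the quoted numbers (illustrative only;
# Boyle's $750/month was one patient's cost, not a universal US price).

boyle_annual = 750 * 12                      # $9,000 per year at $750/month
competitive_low, competitive_high = 78, 130  # Imperial College estimate, $/person/year

print(boyle_annual / competitive_high)  # ~69x the high end of the estimate
print(boyle_annual / competitive_low)   # ~115x the low end
```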

 

Mike Choromanski is the former President of UGA's Cellular Biology Graduate Student Association and a Ph.D. student studying Neuroscience and Cellular Biology. He attended Armstrong State University, where he obtained a B.S. in Cell Biology with minors in Neuroscience and Philosophy while serving as an editor of his college newspaper, The Inkwell. Before teaching at UGA, he organized STEM treks and taught environmental science at Philmont Scout Ranch. In his spare time, he loves to hike, cook, and play video games, and he competes on UGA's fencing team.

Saving the world’s seeds, ex situ


The imposing structure of the Svalbard seed bank is familiar to many. This “doomsday” vault (ahem, already breached by climate change) is humanity’s last resort for preserving the seeds of our crops and plants. But how did this bastion of biodiversity arise?

Svalbard Global Seed Vault. Image credit: Dag Endresen via Wikimedia Commons. Licensed under CC BY 3.0.

Nikolai Vavilov, a 20th-century Russian agronomist and geneticist, established the first modern seed bank in Leningrad in 1921. He is best known for delineating the centers of origin of the world's cultivated plants: the geographic locations where the major crops were domesticated, which critically contain those plants' wild relatives and greatest genetic diversity. As Vavilov traveled the globe (primarily by mule), he collected seeds to bring back to his native Russia, aiming to create a genetic repository to combat global hunger. He ended up collecting more seeds than any other person in history, eventually amassing some 250,000 entries in Leningrad. In a gutting twist, he became a martyr for plant genetics, dying of starvation in jail after being imprisoned for espousing Mendelian ideals while anti-Mendelian concepts were favored in Stalin's Russia.

Central Archive of the Federal Security Service of the Russian Federation (Moscow) via Wikimedia Commons. Licensed under article 1259 of Book IV of the Civil Code of the Russian Federation No. 230-FZ of December 18, 2006.

After Vavilov's imprisonment, a dedicated staff of scientists maintained the seed bank at the Institute of Plant Industry. During the Siege of Leningrad in 1941, they barricaded themselves inside the seed vault to protect this valuable biodiversity from both the German army and starving Soviet citizens. A dozen scientists starved to death guarding seeds that could have sustained them; ensuring the food security of future generations was deemed more important than their own lives.
Fast-forward to the 21st century, and Vavilov's original seed bank is now accompanied by upwards of 1,400 other seed banks around the globe. But the path to seed bank proliferation has been a complicated one. Improved understanding of plant genetics in the 20th century brought high-yielding, stable crop cultivars, particularly hybrid varieties. These modern varieties are highly uniform and, in the case of hybrid seed, genetically homogeneous, unlike the historically predominant landraces. Landraces are traditional varieties that have adapted to their local environment through domestication and are much more genetically variable – this is what Vavilov was amassing in his collection.

The modern varieties were so high-yielding that they were spread throughout the globe to combat hunger during the Green Revolution. Scientists quickly noted that the new cultivars were effectively displacing local varieties of crops, causing variety extinction and contributing to genetic erosion, the loss of genetic diversity. Genetic diversity is key to food security; landraces and wild relatives of cultivated crops often possess crucial traits such as disease or pest resistance, which can be used to improve the germplasm of current cultivars. So while plant breeders were capitalizing on the genetic diversity of landraces to develop high-yielding, stable cultivars, the proliferation of those cultivars was wiping out the very diversity they relied on. The imperative of seed banks soon became the maintenance of crop biodiversity, rather than the straightforward catalog of diversity that Vavilov had conceived. Cue the Svalbard Global Seed Vault.

Vavilov’s centers of origin. Image credit: Daphne Mesereum via Wikimedia Commons. Licensed under CC BY 3.0.

Further complicating things is the question of who owns this banked biodiversity. For most of history, the world functioned under a "common heritage" assumption: genetic resources were a public good that could be availed of freely. Over the past few decades, however, the idea of ownership of genetic material shifted with the granting of patents for stable cultivars and genetically engineered material. By the late 20th century, we had a free flow of plant genetic resources (PGR) into breeding programs, but a flow of patented, costly seed coming out of them. Much of the Earth's plant biodiversity exists in the developing world (remember Vavilov's centers of origin?), while much of modern plant breeding infrastructure exists in the developed world. To many in the developing world, this reeked of imperialism and was hotly protested. After years of pressing the case, some national sovereignty over PGR was granted with the 2001 International Treaty on PGR for Food and Agriculture, which established a shared-benefits dimension: countries receive a portion of the profits from anything derived from their PGR.

The implications of this biodiversity privatization are nuanced. Often, we are still left with costly seed that may be beyond the reach of low-income farmers. In certain economically precarious contexts, this cost has been associated with the troubling phenomenon of farmer suicides. This protectionist institution may not be the same one that Vavilov and his associates died for.

 

About the Author

Tara Conway is an M.S. student in Crop and Soil Sciences, where she is working towards the development of a perennial grain sorghum. She is originally from Chicago, IL. Her work experience spans from capuchin monkeys to soap formulating. You can reach her at tmc66335@uga.edu, where she would like to know which bulldog statue in town is your favorite. Hers is the Georgia Power one, due to its peculiar boots. More from Tara Conway.

Scouting for the Next Top Model (Organism)


Here's a valid question: if it's a human condition or disease we're interested in, why do we study flies, plants, or bacteria? It's a question researchers often have to answer, whether for grant funding or to their in-laws over Thanksgiving dinner. Certainly, no one wants to see (or vote for) tax dollars aimlessly squandered on projects that have "nothing to do with the public good." And when the importance of basic science research goes unappreciated, the misperception that such research is "trivial" shows up in budget cuts at major research institutions. So, why is it important to study non-human organisms?

What it takes to be a top model

Traditionally, model organisms are a group of non-human species that are widely studied to better understand biological phenomena, including mechanisms of human disease. A model organism should be easy to maintain, cost-effective, readily available, and short-lived. This broad definition allows many species to be good candidates, yet only some have become notable top models (E. coli, yeast, fruit flies, frogs, zebrafish, mice, worms, corn, and Arabidopsis, to name a few).

The industry standard of a top model

The rise of the model organism came out of a 20th-century shift from descriptive biology to the study of underlying mechanisms. Researchers wanted a simple organism that could be readily studied and help answer big questions. If their model was too big or complex, their studies might take too long or never fully answer the main question. So there was a selective bias for small, simple models whose genomes could be easily manipulated.

Historically, corn and bacteria elucidated a large part of the central dogma (DNA>RNA>protein), and flies, worms, and mice revealed critical developmental processes. Because these experiments with model organisms were comparatively faster and cheaper than those in primates, it was no surprise that the research generated in these systems dominated their respective fields. The sheer number of discoveries generated from major model organisms called for the creation of large-scale databases and the advent of strain collections. Robust methodologies and genetic tools also accrued as more researchers used these major model organisms within their fields.

The traditional definition of a model organism is no longer sufficient to delimit today's limited set of "top model" organisms. The definition has shifted since the 20th century, adding one more criterion: an organism with accumulated, well-practiced resources and methodologies.

From left to right: Fruit fly, E. coli bacteria, roundworm, mouse, corn, zebrafish, Arabidopsis rockcress. Some of the longest-standing models in the industry. Credit: multiple sources (modified) via Flickr. Licensed under: CC BY-NC-SA 2.0, CC BY 2.0, CC BY-SA 2.0, or CC BY-NC 2.0.

Representation, representation, representation

Although there have been great strides in the field of genetics, we are still limited by the few model organisms that we study. A great range of biological phenomena goes unexplained simply because the current top model organisms do not have an analogous function or gene. Thus, there is a need to study more nontraditional model organisms to fill in these gaps in knowledge.

Understandably, the task of establishing a new model system is daunting, given that newcomers go up against well-established systems with long histories and a wealth of resources. Even though the development of a model organism requires a lot of time and money, it is still more time- and cost-effective than studying the same genes in non-human primates or humans. Fortunately, recent genetic tools, like genomic sequencing and CRISPR gene editing, allow individual labs to feasibly study the genome of a model organism candidate.

The push for more diverse models isn't just coming from scientists. In 2018, the US National Science Foundation awarded $10 million to projects that specifically develop nontraditional model organisms. So far, there have been some promising results. In March 2019, a research group at the University of Georgia successfully used CRISPR to create the first genetically modified reptile (Anolis sagrei). Another research group at Columbia University successfully injected CRISPR components into the embryos of the Hawaiian bobtail squid (Euprymna scolopes) and the dwarf cuttlefish (Sepia bandensis), two species that uniquely reflect their neurobiological activity through the camouflage of their skin.

Brown anole (left) and genetically-modified albino anole (right). An upcoming reptilian model. Credit: Ashley Rasys.

Akin to the rise of diverse models in the fashion industry, the scientific community is making strides towards more diverse model organisms. However, these are only preliminary results from ongoing casting calls, and the search for new model organisms is still underway. Only time will tell which fresh-faced species waiting behind the curtain will transform into stellar model organisms, ready to strut the runway.

 

 

Kathy Bui is a Ph.D. student in the Department of Cell Biology at the University of Georgia. She is currently working on CRISPR gene editing in Drosophila melanogaster and developing split fluorescent protein technology. When she is not studying or working in the lab, she is watching America's Next Top Model or pro-wrestling; both bring her equal amounts of joy.

Plant Cells, an Unculturable Mystery


The simplest unit in biology is the cell. This central tenet has remained true since the coining of the term 'cell' in 1665 by Robert Hooke. Cells have enabled multicellular organisms to conquer every part of the planet by allowing cell lines to specialize and more complex body plans to form. Instead of being acclimated to a single environment, an organism with diverse cell types can cope with more complex environments.

Because cells are so central to biology, biologists have spent massive amounts of time and resources making them easier to study. Cell lines – homogeneous populations of identical cells – offer massive advantages, as they give researchers the tools to ask very specific questions about how organs or cells function. For instance, let's say you're a researcher for a pharmaceutical company working on a new drug to cure liver disease (yay you!). Before you take that drug to mouse trials, and way, way before you take it to human trials, you want to test whether it even has the desired effect on a biological entity that resembles a liver. Makes sense, right?

You may be left wondering, "How does one propagate cells?" Honestly, the answer is simpler than you might think. In most organisms, plants excepted (you'll find out why in a minute), cell culture generally works like this: you take a sample of your tissue of interest, put that tissue in a petri dish with a nutrient-rich broth that enables growth, and boom! You're off to the races. Those liver cells you just cultured will stay just that – liver cells! The technique is older than most people think: the first cell culture was done by Ross Harrison in 1907, who was working on frog nerve fibers.

Enhanced image of Human HeLa cells in culture. Each blue dot is the nucleus of an individual cell. Don’t they all look happy? Credit: Panorama of HeLa cells by National Institutes of Health (NIH) via Flickr. Licensed under: CC BY-NC 2.0.

It's worth noting that cell lines today come in thousands of varieties, with various species, cell types, and disease states available. There are also a variety of companies that will generate a cell line you're interested in if it doesn't exist (consider heart cells for your next Valentine's Day gift).

While everything I've laid out so far sounds great, there is one system that doesn't have the advantage of cell lines, which has greatly hindered science in a particular realm: plant biology. This isn't due to lack of trying; rather, plants have a few odd cellular characteristics that make cell culture nigh on impossible.

Plant tissues are complicated. They are made up of a myriad of cell types that all seem to operate independently of one another; in leaf tissue alone there are around 10 different types of cells. This poses a serious issue. With mammalian cells, we can take a tissue sample from the liver and get liver cells. But in plants? Take a tissue sample from a leaf and try to culture it, and you don't get anything resembling leaf cells. You may think there is a simple solution – something like "why not just isolate one or two of the cells from the leaf and propagate them?" Well, that's where another unique feature of plant biology comes into play.

An example of a plant leaf and all the various cell types it encompasses. Each color here represents a different cell type. For example, the red cells here are xylem and phloem cells which transport water and sugar throughout the plant. Click the link below to get a full list and description of the cell types labeled here (there’s a lot). Credit: “Herbaceous Dicot Stem: Dermal Tissues in” by bccoer via Flickr. Licensed under: Public Domain

The identity of a plant cell is uniquely linked to its cell wall. In plants, cellulose-rich cell walls are the fundamental feature that differentiates their cells from animal cells. The wall gives plant cells their rigid, box-like shape and makes them all but immovable within plant tissues (fun fact: your cells move more than you think). What researchers have discovered is that plant cells are so intimately connected to their cell walls that the minute you remove a cell from its wall (as you would if you tore apart leaf tissue in the example above), you fundamentally change the identity of the cell. The equivalent would be if you were removed from your apartment and totally changed as a person.

This intimate link between plant cell identity and the cell wall makes culturing plant cells impossible. While this remains an issue in the field of plant biology, the phenomenon has an odd but advantageous flip side for plant biology. But more on that next time.

 

John Pablo Mendieta is a graduate student pursuing a Ph.D. in bioinformatics and genomics at the University of Georgia. His specific interests lie at the intersection of agriculture and genetic technologies. From Boulder, Colorado, he enjoys the outdoors, science fiction, programming, and hip hop. You can email him at john.mendieta@uga.com or follow him on Twitter @Pabster212.

Malaria: From Miasma to Elimination


Life on Earth is full of dynamic and complex interactions between organisms. Some of these interactions are mutualistic, where all parties benefit from the relationship. Others are commensalistic, where one organism benefits and the other isn’t really affected. Then there are the parasites, organisms that live and prey on others causing them harm. 

Parasites are everywhere, and they come in all different shapes and sizes. There are single-celled organisms, creepy crawly worms, and nasty bugs too. Lice infest our hair, tapeworms infest our intestines, and on occasion, brain-eating amoebae eat our brains. But of all the parasites that affect humans, the most feared and most deadly are single-celled microorganisms from the genus Plasmodium that cause malaria.

The Plasmodium cells that cause malaria are transmitted by female mosquitoes of the genus Anopheles. These mosquitoes pick up the parasite by biting infected humans, resulting in a continuous cycle of transmission. In 2016, malaria was estimated to have infected 216 million people, killing around half a million. Understanding how Plasmodium works to cause disease in humans is critical for developing effective treatments for those infected.

Colorized electron micrograph showing malaria parasite (right, blue) attaching to a human red blood cell. The inset shows a detail of the attachment point at higher magnification. Image Credit: National Institute of Allergy and Infectious Diseases, National Institutes of Health via Flickr. Licensed Under: Public Domain.

Plasmodium feeds on red blood cells. Upon first infection, the parasite travels to the liver, where it begins to establish an infection. Then, Plasmodium cells infiltrate red blood cells, where they reproduce asexually until reaching a critical mass that causes the red blood cells to burst. Red blood cells are generally invisible to the immune system, meaning that the malaria parasite can essentially hide from the immune system by residing inside them; infected red blood cells also become sticky and adhere to blood vessel walls, helping the parasite avoid destruction in the spleen.

There are a number of different treatments for malaria, the oldest being quinine, an extract from the cinchona tree native to South America. You might be familiar with quinine as the compound responsible for the bitter taste of tonic water. In fact, the antimalarial properties of quinine were directly responsible for the creation of the Gin & Tonic. During the British occupation of India in the 1850s, British soldiers were given several daily rations of tonic water to prevent malaria infections. They often mixed the tonic water with gin, as tonic water isn't too pleasant to drink on its own, and thus one of the most famous mixed drinks in the world was born out of the need to combat malaria. Newer and better antimalarial drugs exist, of course, but quinine is still used as a secondary treatment for malaria.

Much time has passed since the British occupation of India, and malaria is still one of the biggest parasitic threats to human populations. Come to the Athens Science Cafe on September 26th at Little King's to hear more about Plasmodium and how we can move towards a world without malaria from David Peterson, faculty in UGA's Center for Tropical and Emerging Global Diseases!


About the Author

Max Barnhart is a graduate student studying plant biology and genomics at the University of Georgia. Growing up in Buffalo, NY, he is a diehard fan of the Bills and Sabres and is also an avid practitioner of martial arts, holding a 2nd degree black belt in Taekwondo. He can be contacted at maxbarnhart@uga.edu or @MaxBarnhart1749.

Preventing the Next Epidemic: Scientists Take a Closer Look at Rift Valley Fever


In 2015, Zika virus resulted in a global public health emergency. The epidemic caused severe brain defects in thousands of Brazilian newborns after the virus was transmitted to pregnant mothers via infected mosquitoes. The rapid emergence of the disease caught everyone by surprise, and with little understanding of the virus's pathogenesis, scientists were left unprepared to prevent and treat disease in affected infants.

Like Zika, infection with Rift Valley fever (RVF) virus can go unnoticed during pregnancy and cause catastrophic, often lethal, damage to the fetus. RVF was first reported in livestock by veterinary officers in Kenya's Rift Valley in the early 1910s. RVF viral disease is most commonly observed in cattle, buffalo, sheep, goats, and camels, but the virus can also infect and cause illness in humans. Outbreaks of RVF can have major societal impacts, including significant economic losses and trade reductions. In an effort to prevent history from repeating itself, scientists are now working to develop effective RVF vaccines.

Different types of vaccines for veterinary use are available to prevent RVF; however, they all have drawbacks. Killed vaccines are not practical for routine animal field vaccination because they require multiple injections. Live vaccines require only a single injection, but because the virus is still live, they are known to cause birth defects and abortions in sheep and provide only a low level of protection in cattle. A weakened version of the virus has been developed to create a live-attenuated Clone 13 viral vaccine, which was recently licensed in South Africa, with more than 19 million doses already used in the field. The Clone 13 vaccine performed well in controlled animal trials; however, a major hurdle for vaccine efficacy comes down to the cold chain. A recent study demonstrated that the Clone 13 virus is stable for more than 12 months when stored at 4℃, but unstable at temperatures above 22℃. This temperature storage issue is not unique to RVF vaccines and remains an ongoing battle when vaccinating in hot climates served by poorly developed transport networks.

Image by Kenya Red Cross via Twitter

The World Health Organization (WHO) considers RVF a potential public health emergency and calls for accelerated research and development due to the lack of approved treatments for animals or humans. Although this mosquito-borne, zoonotic disease has been reported only in Africa and the Middle East, the mosquitoes that transmit the virus also range from Europe to the Americas. From 2000 to 2018, 4,830 cases of severe RVF in humans were reported to the WHO, including 967 related deaths. Epidemiological data on RVF in human pregnancy are severely lacking, but among herds of livestock, RVF outbreaks lead to widespread miscarriage and stillbirth affecting more than 90% of pregnant animals.

Following a Rift Valley fever outbreak, a Médecins Sans Frontières (MSF) team has been working to help manage the disease. So far, 85 cases, including three patients who were readmitted, have been reported in the affected county, with six deaths since the beginning of the outbreak. Image by MSF East Africa via Twitter

In a recent study, researchers from the University of Pittsburgh Center for Vaccine Research discovered how the virus targets the placenta, providing important information for the development of a human vaccine. The researchers showed that in pregnant rats with no signs of clinical disease, RVF virus is vertically transmitted from mother to fetus through the placenta, resulting in a high rate of stillbirths.

Image by Medical Xpress via Twitter

The group also exposed human tissue samples obtained from pregnant women in their second trimester to RVF virus, and then monitored viral levels every 12 hours. They found high virus levels in the placenta, including in a layer of cells called the syncytiotrophoblast. This makes up the outer layer of cells that actively invades the uterine wall and establishes an interface between maternal blood and embryonic fluid, allowing exchange of material between the mother and the embryo. A growing body of evidence suggests that the unique structure of the syncytiotrophoblast facilitates the placenta’s protective function.

But here is the real kicker. The syncytiotrophoblast is typically resistant to infection by diverse pathogens, including Zika virus, raising a major red flag that RVF virus may be an even more frightening threat. Essentially, RVF virus takes the expressway into the placenta, as opposed to the winding back roads of its Zika virus counterpart.

While having these research models in place is an important step for combating RVF, the path towards a safe and efficacious vaccine for humans is still under construction. Ultimately, the prevention of a RVF epidemic will require a One Health approach assessing the interaction between the environment, animal health, and human health to inform risk mitigation and prevention measures.

Featured image: Image by Wellcome Trust via Twitter

Lydia Anderson is a Dual DVM-Ph.D. graduate student at the University of Georgia and currently serves as an Associate Editor for Athens Science Observer. Since completing her Ph.D. in Infectious Diseases, she has been working on her DVM at the College of Veterinary Medicine with an emphasis in public health and translational medicine. She plans to use her training to help address the questions and challenges facing One Health due to emerging and zoonotic infectious diseases. When she is not busy learning how to save all things furry and playing with test tubes, Lydia can be found either freestyle cooking for her friends and family or binge watching Netflix with her rescue pup, Luna. More from Lydia Anderson.

Lost in Translation


The year is 2019; the place, your local grocery store. You, the unwary consumer, wander the aisles on your weekly shopping excursion. Reaching for the milk, you hesitate: "non-GMO" is emblazoned across one carton, while another label holds no such distinction. It does not assure you, the consumer, that its contents are free of "harmful" GMOs. You are struck with indecision. What to do?

Well, what if I told you all dairy milk is non-GMO, and there are currently no genetically modified dairy cattle in use by the dairy industry? What that non-GMO milk label actually means is that the cows that produced the milk were fed a diet supplemented with only non-GMO grain. Moreover, a literature review of numerous analyses of animal by-products found that DNA fragments from genetically modified feed have never been detected in the eggs, milk, or meat of animals that consumed those GMO feeds.

“#6” by James Loesch. Licensed under CC BY 2.0

With the continually rising popularity of organic and clean living, a plethora of packaging publicizing products as a panacea to a puzzled populace has become a persistent problem – whew! What I mean to say is that the hype surrounding healthy living has led companies to use buzzwords such as 'non-GMO', 'organic', 'vegan' and, my personal favorite, 'superfood' to sell products to consumers. This sentence makes a lot more sense than the one before it, right? Buzzwords used to market products can have a bit of the same effect as that first sentence: they sound fancy and smart, but what are they actually telling you? Here, we will explore the dichotomy between the marketing and the actual meaning behind some common buzzwords.

“* Vigilant Eats : Superfood//” by Eric Kass. Licensed under CC BY-NC-ND 4.0

There is no federal oversight or regulation of the term 'superfood'. This means that the Food and Drug Administration (FDA) does not manage how companies use the term in marketing their products. Superfoods are generally assumed to possess high levels of vitamins, minerals, or antioxidants, or to benefit human health in some other way. However, many products are labeled according to the shifting tides of the latest health crazes, often without any scientific basis. Without a regulated standard for when a product may be labeled a superfood, consumers have no guarantee beyond the manufacturer's claim that said product has any elevated health benefits.

One of the first known uses of the term superfood was in the US after World War I. The United Fruit Company utilized the term as a marketing strategy to promote the sale of bananas, one of its major imports. By running an enthusiastic marketing campaign centered on the espoused virtues of bananas, including informational pamphlets on their health benefits, the United Fruit Company seeded a major health craze in the early 20th century.

Much as with 'superfood', the United States does not employ a precise definition of what constitutes a Genetically Modified Organism (GMO). Rather, the FDA and the Environmental Protection Agency (EPA) oversee whether a product should be labeled as GMO or non-GMO. However, with no consensus as to what constitutes a GMO, the definition and subsequent regulation are murky at best. Further complicating the matter, according to US regulations all organic products must be non-GMO; however, not all non-GMO products are organic. In addition, since the definition of 'organic' is 'process-based' in the US, "the presence of detectable GMO residues alone does not necessarily constitute a violation of the [organic] regulation".

To better understand how complex these definitions can be, let us revisit our friend the banana. Organic bananas are available in most grocery stores alongside bananas not certified as organic (conventional). However, most bananas (organic and conventional alike) currently under mass production are essentially clones. Bananas today are extraordinarily different from their wild progenitors, which were smaller, starchier, and full of large, inedible brown seeds. Through selective breeding, a banana with much sweeter flesh and small, infertile seeds was developed: the Cavendish banana. Clones, in this case small rhizomes produced naturally by a mature plant, are one of the only ways to obtain new individuals in the face of infertile seeds. Clones do not fall under the definition of a non-organic product or GMO; therefore, bananas grown without the assistance of certain herbicides or pesticides are labeled as organic.

“bananas” by liz west. Licensed under CC BY 2.0

The next time you head to your local grocery store, consider this: many terms used to espouse alleged superior health benefits or increased safety are subject to unclear and subjective definitions. Just because a product is labeled a "superfood" doesn't mean it has superpowers. Stay informed, eat healthy, and happy shopping.

About the Author

Megan Buland is a graduate student in the Warnell School of Forestry & Natural Resources at UGA, where she studies forest health and microbial community ecology. When not visiting field sites or working under the flow hood, Megan is passionate about environmental communication and education, and exploring in nature. She enjoys rock climbing and hiking and loves her dog, Madra. You can reach Megan at megan.buland@uga.edu. More from Megan Buland.

Men control the reproductive rights of plants too


When confronted with the imprecise notion of "sustainability" in agriculture, most people's thoughts drift to ideas of ecologically mindful land management practices. I'll dub these concepts "the classics": rotate your crops, use less fertilizer and fewer pesticides, always employ cover cropping. While these ideas are not wrong, they are incomplete in that they tend to omit some of the larger social contexts of sustainability, and agriculture is a realm in which the natural and social sciences are inextricably linked. Thus, agricultural systems are subject to the social structures and power dynamics of innumerable human societies, and, unsurprisingly, gender comes into play. One particularly insidious way in which women the world over are marginalized is at agriculture's foundation, within plant breeding and crop development. Is an agricultural system sustainable if there are inequities in who dictates which crops are developed?

Workers at a flower farm. Image credit: World Bank Photo Collection via Flickr. Licensed under CC BY-NC-ND 2.0.

At its crux, plant breeding strives to improve the genetic makeup of a plant for human consumption through the selection of a trait of interest. It is well-documented that there are differential crop trait preferences among men and women in the developing world. These differences arise when gender dynamics result in men and women interacting with the food system in functionally different ways; one classic example is when women are responsible for food preparation, while men are responsible for selling crops at market. Ergo, women tend to care about a wider “basket” of traits, with a greater focus on post-harvest traits pertaining to food processing, food use, nutrition, and familial food security. Conversely, men tend to have a narrower focus, caring more about crop traits pertaining directly to yield, crop productivity, and market orientation.

This is typified in one participatory plant breeding program for white pea bean in Ethiopia. While both genders were found to be concerned with traits pertaining to yield and drought tolerance, only women cared about bean cooking time and suitability for culinary purposes. Women were also more likely to prioritize an early harvest, a trait pertinent to familial food security, as this is the first crop to become available after the seasonal drought. Having different preferences is not in itself a problem, but issues arise when gender power dynamics influence who gets to exert their preferences.

Women often work in the fields. Image credit: The New York Public Library Digital Collections via Schomburg Center for Research in Black Culture. Licensed under: Public Domain.

While women produce more than half of the world’s food, they’re frequently excluded from formal plant-breeding networks, agricultural organizations that have regional decision-making power, seed markets, and agricultural extension services. This all contributes to a general under-representation of female-preferred crop varieties in the developing world. While women are frequently able to act upon their trait preferences in spaces deemed “feminine”, such as the home garden or subsistence plot, their preferences are often omitted from the larger, more productive plots of land used for cash crop production. In an increasingly globalized and urban food economy, the prominence of industrial, cash crops on our plates is ever-growing, and implicit in that is the deterioration of female-preferred varieties.

In one example from rural Mali, men supplanted women and their traditional leaf and vegetable crops from stream gardens in order to plant non-traditional crops for market. One male farmer explained that "men in the community became more aware of the potential value of the low-lying stream areas and eventually displaced women in the cultivation of these areas. He said that they began to clear the areas and then proceeded to fence and claim them as their holdings. After all he said, 'There was money to be made!'" Along with this shift in garden ownership came a reduction in the nutritional value of the community's meals. It is particularly alarming that the gender socialized to care about familial nutrition and food preparation is the one often excluded from crop variety development, as it is widely accepted that women are critical to global food security.

Farmer with a buffalo near Yangshuo. Image credit: Andy Siitonen via Flickr. Licensed under CC BY 2.0.

The solution to this issue is simple in principle: consciously include women in plant breeding so that both genders' preferences are represented. Breeding programs that do so have produced crop varieties that are more widely accepted and more quickly adopted, greatly improving the efficiency of breeding efforts and ultimately increasing food security. In reality, this involves closing the global gender gap, a significant undertaking that organizations such as CGIAR's Gender & Breeding Initiative are actively attempting to tackle. A classic sustainable agriculture recommendation is to plant a diversity of crops to increase the resilience of your farm. An ideological complement to that is a push for a diversity of voices in the selection of those crops, to ensure the resilience of our global food system.

About the Author

Tara Conway is an M.S. student in Crop and Soil Sciences, where she is working towards the development of a perennial grain sorghum. She is originally from Chicago, IL. Her work experience spans from capuchin monkeys to soap formulating. You can reach her at tmc66335@uga.edu, where she would like to know which bulldog statue in town is your favorite. Hers is the Georgia Power one due to its peculiar boots.

The Secret World of Plant Chemistry: Plant Communication


Part II of the series exploring plant chemistry through different lenses.

Plants are the perfect embodiment of natural selection – they can't just get up and move, so whatever adversity they face, they generally have to stick it out. This leaves the strongest individuals to survive while the weaker ones perish, a situation that warrants some extreme (and creative!) adaptations. For example, Venus flytraps evolved their famously carnivorous lifestyle because their ancestors were bound to nutrient-deficient soil and eventually formed a mouthlike structure to catch their nutrients. The cylindrical shape of cacti was molded by harsh desert conditions – it exposes the least surface area to the sun, reducing the heat and water stress the plant experiences. But there's an invisible adaptation that plants have developed over their evolutionary journey: communication. Perhaps not communication in the way that we're familiar with, but plants have an intricate system for relaying critical messages; and those messages are right under our noses.

Illustration by Vincent Warger. Used with permission.

Inaudible alarm systems

Think of the distinct smell of freshly cut grass. That smell comes from tiny molecules called volatile organic compounds (VOCs), which are released into the air when leaf tissue breaks. These VOCs act as signals that can travel to neighboring plants, relaying a range of messages. As a chemical ecologist once explained it to me, "Freshly cut grass is the smell of plant screams." And these screams aren't just shouted into the void – they elicit responses.

These "screams" act as a chemical alarm to plants nearby. Some plants emit signals warning their neighbors about an impending attack, allowing the plants receiving the signal to ramp up their defense mechanisms for a better chance at survival. Plants can even call on insects to do the fighting for them. In an example of well-tuned coevolution, some plants can recognize the saliva of their insect attacker. That recognition triggers a specific VOC response, which calls in the attacker's predator. This interaction is commonly seen with parasitic wasps and caterpillars – a caterpillar's chewing triggers a VOC from its leafy lunch, attracting deadly wasps and turning the caterpillar into a lunch itself.

On a less morbid note

Apart from warning signals and calls for help, VOCs are responsible for the delightful smell of flowers. Of course flowers didn’t evolve just to please our olfactory senses (or did they?), but a flower’s scent is an amalgamation of VOCs that act as a chemical billboard for pollinators. Pollinators can discern complex mixes of VOCs from specific plants and track them down over long distances. This is especially useful for plants that rely on a specific pollinator to reproduce. For example, a species of Magnolia tree has been found to release a very specific compound that only seems to attract the beetle that pollinates it. Since these chemical signals are often specific to a given pollinator species, it could explain why plants pollinated by bees and butterflies smell different to us compared to plants pollinated by bats and moths.

Southern magnolias release chemical signals to attract a specific pollinator beetle. Image credit: Rob Bertholf via Flickr. Licensed under: CC BY 2.0.

The complex world of plant chemical ecology is just starting to be unraveled, as scientists look not only at how plants communicate with each other but also at how we can use this evolutionary adaptation to our advantage. These VOCs are so effective that their uses in agricultural settings are starting to be explored – possibly leading to a more sustainable way to protect crops from natural enemies. So remember: when you smell freshly cut grass or the sweet wisteria that is just starting to bloom, you're smelling the finely tuned product of evolution and catching a quick whiff of the secret world of plant chemistry!

About the Author

Simone Lim-Hing is a Ph.D. student in the Department of Plant Biology at the University of Georgia studying the host response of loblolly pine against pathogenic fungi. Her main interests are chemical ecology, ecophysiology, and evolution. Outside of the lab and the greenhouse, Simone enjoys going to local shows around Athens, cooking, and reading at home with her cat, Jennie.

Big Science, Small Satellites


Is it a star? A moon? A comet, even? No, it's a satellite! NASA broadly defines a satellite as a moon, planet, or machine that orbits a planet or star. "Natural" satellites include the Earth, which revolves around the Sun, and the Moon, which revolves around the Earth. On the other hand, there are almost 5,000 "man-made" satellites currently in Earth's orbit. These satellites mainly facilitate communication, navigation, and observation – weather prediction, GPS, rescue operations, phone calls – and even establish a home in space with the International Space Station. Although we typically imagine satellites as enormous structures made by highly experienced engineers and scientists, there are also smaller satellites in space that have been launched by everyday citizens and curious students.

PhoneSat in space. Image Credit: NASA Ames Research Center via Wikimedia Commons. Licensed under: Creative Commons CC0 License.

The CubeSat was developed by professors Bob Twiggs (Stanford University) and Jordi Puig-Suari (California Polytechnic State University) in 2000, when they wanted to make space research and satellite development more accessible to students. They adapted the model of successfully launched picosatellites (weighing 1 kg, or ~2.2 lb) to develop a standard 10 cm (~3.94 in) cube, the 1U picosatellite, which weighs up to 1.33 kg (~3 lb).

A typical CubeSat is powered by solar panels that surround a frame, which protects the main processing units and payload (as shown in Figure 2). The payload is the variable component of a CubeSat, differing based on the satellite's main purpose – whether that is tracking temperatures, measuring radiation levels, or taking images of the Earth's oceans. Although initially met with criticism from the space community, CubeSats proved their power and potential with the first launch in 2003. Known as QuakeSat, this first CubeSat was used to detect signals associated with earthquake activity. The device stayed in orbit for 1.5 years and collected signature data on eight earthquakes around the world.

ArduSat (Arduino-based CubeSat) structure. Image Credit: Peter Platzer via Wikimedia Commons. Licensed under: Creative Commons BY-SA 3.0.

Since their inception, CubeSats have gained increasing global popularity. In fact, the National Science Foundation's Division of Atmospheric and Geospace Sciences set up a CubeSat-based research program in 2008 that financially supported CubeSat research. The program, along with NASA's CubeSat Launch Initiative, motivated the rise of CubeSat development both within and outside of academia due to the ease and affordability of building these devices. There is even a smartphone-based CubeSat known as the PhoneSat, funded by NASA, that aims to build nanosatellites from readily available components. As of January 2019, about 1,030 CubeSats had been launched into space, with numbers increasing each year. In the next two years, we will even see the launch of the University of Georgia's (UGA) very own CubeSats.

Founded in 2016 by three students with the goal of educating students and providing them resources on the design and engineering of satellites, the Small Satellite Research Laboratory (SSRL) at UGA works on the development of CubeSats. It has two ongoing projects, funded by NASA and the Air Force, to build CubeSats that act as ocean color sensors and that image and detect motion in coastal regions, respectively. The satellites are set to launch in 2019 and 2020. If you are interested in learning more about CubeSats or the SSRL, please attend the Science Café on April 23rd at Little King's, where there will be speakers from UGA's SSRL.


About the Author

Chaitanya Tondepu is a Ph.D. Candidate in the Integrated Life Sciences program at the University of Georgia. Other than science, her favorite pastimes are dancing, hanging out with friends and family, exploring, crafting, and eating delicious food. You can email her at chaitanya.tondepu@uga.edu. More from Chaitanya Tondepu

Science Warning! Annihilation


Science Warning! is a series about the science behind some of our favorite sci-fi stories. Today we take a look at Annihilation, starring Natalie Portman.

As a biologist, I find watching Annihilation a thrilling experience. The movie expertly blends science fiction and horror into a narrative where the rules of life are twisted to create a world that feels truly unique. Natalie Portman stars as Lena, a biologist with a rough military past, out to avenge her husband by leading a group of ultra-badass women scientists on a suicide mission into the Shimmer, an alien veil emanating from a lighthouse that changes the DNA of whatever steps inside. Annihilation is about our biology, at least vaguely, and although the scientific aspects of the movie are a bit of a stretch, some of the concepts discussed are great stepping stones for learning about some real biology.

St. Marks Lighthouse in Florida, the inspiration for Annihilation. Image Credit: Reweaver33 via Wikimedia Commons. Licensed under: CC BY-SA 4.0.

The Biological Species Concept

Early in our journey through the Shimmer, Lena and her team are attacked by a vicious alligator-like creature with teeth like a shark's. One member of the team hypothesizes that maybe the creature is some sort of crossbreed. Lena quickly shuts down this argument by claiming, "No, different species can't crossbreed."

This isn't entirely accurate. Actually, different species crossbreed all the time, and there are some pretty amazing hybrids that are relatively common in agriculture and in the wild. Ligers are crosses between male lions and female tigers and are, surprisingly, the largest felines in the world! Mules are hybrids produced by male donkeys and female horses; they make great work animals because they are stronger than a horse of comparable size while having the tame disposition of a donkey. Different plant species readily hybridize all the time! Sweet corn, tangelos, pluots, and plumcots are just a few of the hybrid foods we can find at the grocery store. The world of plants is crazy, and there are so many hybrids out there that it would be impossible to list them all. Heck, even ancient humans and Neanderthals hybridized and produced viable offspring, and the evidence for this is present in all of our DNA!

A liger held in captivity at Novosibirsk Zoo. Image Credit: Restle via Wikimedia Commons. Licensed under: Public Domain.

But what defines a species? This is actually a really controversial question in biology. There are many competing definitions of what makes a species, but the predominant method of defining a species comes from the biological species concept. The biological species concept defines a species solely as a population of interbreeding individuals that are reproductively isolated from other groups of organisms, meaning that there is some barrier that prevents breeding between different populations. Under this species definition, organisms that look almost nothing alike but readily interbreed with each other are considered to be the same species.

Hold up, though – we were just discussing that there are tons of different species that can crossbreed. Are they not really species, then? Well, under the biological species concept, no. However, not all species have been classified according to the biological species concept. In my opinion, Charles Darwin had the best take when he said, "I look at the term species as one arbitrarily given for the sake of convenience to a set of individuals closely resembling each other."

Essentially, a species is whatever somebody decides a species is, and because taxonomists were classifying species for hundreds of years before the development of the biological species concept, we now have tons of named species that would not be classified as such under it. Imagine: if a shark and an alligator really could interbreed to create the monster in Annihilation, would you really consider sharks and alligators to be the same species? This is a wildly unrealistic example, but it does appropriately capture some of the debate surrounding the biological species concept.

Now the next time you watch Annihilation with your friends, you can pause the movie, correct Lena, and annoy everybody else with educated rambling about the biological species concept and interbreeding. Just make sure to suspend your disbelief for the rest of the movie. Discussing the science behind science fiction is fun, but just because a movie might not be spot on scientifically, that doesn’t mean it should ruin our enjoyment of the film. So until next time, happy viewing!

About the Author

Max Barnhart is a graduate student studying plant biology and genomics at the University of Georgia. Growing up in Buffalo, NY, he is a diehard fan of the Bills and Sabres and is also an avid practitioner of martial arts, holding a 2nd degree black belt in Taekwondo. He can be contacted at maxbarnhart@uga.edu or @MaxBarnhart1749.

Vinyl Pressing: A Lost (and Found) Art


From providing a soundtrack for a road trip to filling an awkwardly silent elevator ride, music finds its way into every niche of our lives. It is a luxury that many of us not only enjoy but hold a deep emotional connection to. Today, a selection of mediums for listening to our favorite songs is widely available – our phones, the radio, a cassette tape, a CD – but those mediums were built on the foundation of the record player.

A Brief History

The record player, originally called the phonograph, was the first device on which audio could be recorded and played back. It was invented by none other than Thomas Edison, the inventor of the light bulb among other things. Edison's prototype was born in 1877 out of tinfoil, a cylinder, two needles (one for recording and one for playback), and a hand crank. To test his newly minted contraption, he recited "Mary had a little lamb" into the mouthpiece. When the cylinder was played back, his voice came out just as it went in. You can actually hear the recording here. The phonograph, the first contraption to record and play back sound – a convenience we often take for granted – garnered the attention of the world. From that foundation, Edison's prototype evolved into the single-needle modern turntable we use today.

How it works

Though Edison was deemed a wizard after his invention, the mechanism behind a record player is surprisingly straightforward. When Edison spoke into the mouthpiece to recite that nursery rhyme, the recording needle took those vibrations and physically etched them into the tinfoil wrapped around the cylinder. To play them back, the second needle followed the etched grooves, relaying the vibrations to a diaphragm, where the sound was then amplified through a flaring horn. The modern record player takes after this same concept – the needle traces the tiny, unique grooves pressed into the vinyl record and relays that motion to a coil, which turns it into an electrical signal we can listen to through speakers.

Thomas Alva Edison with his 1877 invention, the phonograph. Image credit: ciriana_85, via Flickr. Licensed under: CC BY-NC-SA 2.0.
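To make that concrete, here is a toy sketch in Python of the phonograph principle described above – a groove is just a waveform stored as physical displacement, and playback is reading that displacement back out. The function names and the depth scale are invented for illustration, not any real audio library:

import numpy as np

SAMPLE_RATE = 8000     # samples per second, arbitrary for this sketch
MAX_DEPTH_UM = 50.0    # hypothetical maximum cut depth, in micrometers

def record(sound):
    # The recording needle: turn sound amplitude (-1..1) into groove depth.
    return sound * MAX_DEPTH_UM

def play_back(groove):
    # The playback needle: trace the groove and undo the depth scaling.
    return groove / MAX_DEPTH_UM

# A 440 Hz test tone stands in for "Mary had a little lamb".
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
voice = np.sin(2 * np.pi * 440 * t)

groove = record(voice)                # etch the cylinder
recovered = play_back(groove)         # crank it again
assert np.allclose(voice, recovered)  # the voice comes out just as it went in

Real grooves involve complications this sketch ignores – equalization, stereo encoded in the groove walls, the physics of the stylus – but the core idea really is this simple: displacement in, displacement out.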

Closer to home

The way records are produced today is not that far from Edison's original design, either. In brief, recorded music is translated from a digital signal into vibrations, and a needle etches them into a lacquer as it spins. An impression is made from the lacquer, which can then be copied and mass-produced in vinyl with a hydraulic press.

This process is still ubiquitous today, even more so with the vinyl resurgence of the past decade – and it's more local than you think. Right here in Athens, Kindercore presses vinyl for artists. Starting as a record label in the 1990s, Kindercore sought to embrace and expand the local music scene. Signing notable artists such as of Montreal and Dressy Bessy, the Kindercore record label became known around the world. Today, they have narrowed their focus to providing high-quality vinyl pressings. To learn more about Kindercore and the science (and art!) of how we can enjoy music in its most grounded form, check out their Science Café event on March 28th at Little Kings Shuffle Club.


About the Author

Simone Lim-Hing is a Ph.D. student in the Department of Plant Biology at the University of Georgia studying the host response of loblolly pine against pathogenic fungi. Her main interests are chemical ecology, ecophysiology, and evolution. Outside of the lab and the greenhouse, Simone enjoys going to local shows around Athens, cooking, and reading at home with her cat, Jennie. More from Simone Lim-Hing.

The Sugar Code: Representing Glycans


Lucky Charms. Photo by Sarah Mahala Photography (CC BY 2.0)

Hearts, stars, horseshoes, clovers and blue moons, pots of gold and rainbows, and me red balloons! If you've ever eaten Lucky Charms cereal, you probably know this jingle and the tiny marshmallow shapes it references. Interestingly enough, glycobiologists, or biologists who study sugars like the ones that make up those tasty mallows, have their own Lucky Charms code for the carbohydrates they study.

Carbohydrates are diverse and come in many different forms – each with a unique chemical makeup and properties. The sugar code, with its current twelve shapes and nine colors, evolved as a way for glycobiologists to represent the complex chemical structure of sugar chains in presentations and figures. But to someone new to the field, this crazy collection of colored shapes may seem strange and unfamiliar. I remember being thrown for a loop by the colorful models in glycobiology papers when I started out as a lab technician at the Complex Carbohydrate Research Center. Once I realized there was meaning and intention behind the selection of those colors and shapes, the symbols seemed logical. So for this blog, my goal is to decipher the sugar code for you.

Colors

Each symbol in the sugar code represents a different monosaccharide, or single sugar unit. The color of the symbol represents the basic structure of the sugar: the same color is used for different monosaccharides that share the same stereochemistry, or spatial arrangement of atoms. For example, every yellow symbol has an arrangement of atoms like galactose. Here is a list of some colors and the basic sugar stereochemistry they are associated with:

Green – Mannose stereochemistry

Blue – Glucose stereochemistry

Yellow – Galactose stereochemistry

Red – Fucose stereochemistry

So if you see a blue square you know that sugar has an arrangement of atoms similar to glucose because of its color, but what about its shape?

Shapes

Every symbol in the sugar code also has an associated shape, which tells you something about the composition of the sugar’s functional groups, or collections of atoms attached to the sugar’s carbon skeleton. The most basic shape is a circle, which represents a hexose sugar. The other shapes indicate some kind of modification to this basic hexose structure. For example, a square indicates an N-acetyl group is attached to one of the carbons. Here is a list of some shapes in the sugar code and the functional groups they are associated with:

Circle – Hexose sugar

Square – Hexosamine with N-acetyl group

Diamond – Hexuronate with an acidic group

Triangle – Deoxyhexose sugar

There are various exceptions to these general rules for colors and shapes, but for the most part, knowing these standards will help you apply some meaning to the symbols drawn in glycobiology figures.

Putting It All Together

Both the shape and the color of a unit in the sugar code impart meaning about the chemical composition of that sugar. With this new lens, let's take a look at a common sugar we come across in daily life: lactose.

Sitting in your fridge right now is likely some milk. This milk, if it comes from cows, contains lactose. Lactose is a disaccharide of two sugar units, a galactose and a glucose. So how would we draw this sugar using the sugar code?

Galactose and glucose are both hexoses, so their shape will be a circle. Galactose will be yellow and glucose will be blue. A glycobiologist would represent this disaccharide as a yellow circle linked to a blue circle. Ta-da! You’re basically a glycobiologist in training, and you didn’t even know it.

A comparison of the complex chemical structure (top) and the symbolic sugar code (bottom) of lactose. Image created by the author and colored according to the Symbol Nomenclature for Glycans.
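If it helps to think of the sugar code as data, here is a minimal sketch in Python that restates the color and shape rules above as lookup tables and rebuilds the lactose example. The table and function names are my own, not any standard glycobiology package:

COLOR = {            # stereochemistry -> color
    "mannose": "green",
    "glucose": "blue",
    "galactose": "yellow",
    "fucose": "red",
}

SHAPE = {            # functional-group class -> shape
    "hexose": "circle",
    "hexosamine": "square",     # carries an N-acetyl group
    "hexuronate": "diamond",    # carries an acidic group
    "deoxyhexose": "triangle",
}

def symbol(stereochemistry, sugar_class):
    # Compose a sugar-code symbol from its color rule and shape rule.
    return f"{COLOR[stereochemistry]} {SHAPE[sugar_class]}"

# Lactose = galactose linked to glucose, both plain hexoses:
lactose = [symbol("galactose", "hexose"), symbol("glucose", "hexose")]
print(" - ".join(lactose))  # prints: yellow circle - blue circle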

Why Have a Code?

The sugar code enables glycobiologists to communicate more effectively and efficiently with one another and with the public. Part of being a good scientist – or in this case, a good glycobiologist – is being an effective communicator of our research. The symbols in the sugar code allow us to do just that. However, it's important for us glycobiologists to remember that not everyone we talk to about our science knows the meaning encoded in our sugar code.

Hopefully, you’re feeling less confused about the sugar code after reading this blog. Just think of the sugar symbols as emojis for glycobiologists! Similar to how it took my grandma some time to jump on the emoji bandwagon, it may take some time to use the sugar code effectively. But like everything, it just takes practice! Now if only Apple would include sugar code emojis in their next software release… a glycobiologist can dream, can’t she?!

About the Author:

Stephanie M. Halmo is a former middle school science teacher turned graduate student, actively pursuing her Ph.D. in biochemistry from the University of Georgia. Stephanie currently serves as an Assistant Editor for Athens Science Observer. In her spare time she likes to dance, volunteer at local schools and tie-dye anything she can get her hands on. You can connect with Stephanie on Twitter and Instagram @shalmo or by email: shalmo27@uga. More from Stephanie M. Halmo.

 

The Cold Truth About Cryopreservation


Large “Cryostats” filled with liquid nitrogen and cold hopefuls. Image credit: Hawaiian Sea via flickr. Licensed under CC BY-NC-ND 2.0

Recently, I was in the lab doing some routine work with cells. In order to start growing my own stock of cells, I took a small vial out of a tank of liquid nitrogen, where it is stored at around -150°C (-238°F). Then I quickly thawed it to body temperature (37°C, or 98.6°F) and transferred it to a new dish, where it began to grow. At some point during this process, I realized I had no idea why this actually worked. Is that scene in Return of the Jedi, where Han Solo gets thawed and is (mostly) fine, real? If I hop into this vat of liquid nitrogen, will you be able to pull me out in a hundred years? Armed with years of scientific training, I set off to find answers through careful research (i.e., Googling stuff that I don't understand).

Han Solo frozen in carbonite. Image credit: FJ Fonseca via flickr. Licensed under Creative Commons License (CC BY-NC-ND 2.0)

A Brief History of Cryopreservation

The storage of biological material at ultra-cold temperatures, known as cryopreservation, is very real and routine in research. Scientists have been able to revive cryopreserved cells since the late 1940s, when the Parkes group first revived rooster sperm that had been frozen at -80°C. The technique has been crucial for maintaining important research cell lines, such as the famed HeLa cells and many engineered cell types. Essentially, many of the chemical reactions that age and degrade cells can be slowed to a near halt at low enough temperatures, "freezing" the cells in time. The key to being able to revive the frozen cells lies in the addition of cryoprotectants.

Cryoprotectants are usually small molecules like glycerol or dimethyl sulfoxide (DMSO) that are able to diffuse into the cell and prevent the formation of ice crystals, which can destroy cells as they freeze. The expansion of ice rips apart the cells while also increasing salt concentrations to dangerous levels in the surrounding liquid.

Now, the big question: can we freeze and revive an entire person? Believe it or not, efforts are already underway. The cryopreservation of entire humans is called cryonics, and the first human to be cryonically frozen was James Bedford in 1967, whose body remains frozen to this day. Bedford was a psychologist who suffered from an advanced form of kidney cancer. He opted to have his body cryopreserved upon his death in the hope that one day the technology would exist to revive him. Since then, many more people have paid large sums of money to private companies to be cryonically frozen upon their death, including baseball great Ted Williams. Many of these people suffered from advanced cancer or other incurable diseases and looked to cryopreservation as a final resort. The idea is that if they can stay frozen long enough, technologies will emerge that allow them to be successfully treated when they are revived in the distant future.

Criticism of Cryonics: Cold Corpses Under Fire

To say that this approach is controversial would be greatly underselling it. One particularly scorching opinion from a neuroscientist states that "those who profit from this hope [of being revived] deserve our anger and contempt." Since there is no evidence that any brain activity is preserved after cryonic freezing, many believe that companies selling the idea are simply preying on, and profiting from, the desperation of people trying to avoid death. One company, called Alcor, is funded using its clients' life insurance, which deprives families of much-needed funds after the death of a relative. Funerals are not cheap! Regardless of whether the technology to revive cryonically frozen humans will ever exist, it's likely that most attempts at cryonics were botched: many of the frozen bodies could be irreparably damaged due to the formation of ice crystals or long intervals between death and freezing.

Liquid nitrogen tank. Image credit: Howard Stanbury via flickr. Licensed under Creative Commons License (CC BY-NC-SA 2.0)

Will cryopreservation of live humans ever be possible? It depends on who you ask, but there are a few who say the chances are “low, but not impossible”. There have been cases of relatively simple organisms being revived after long-term freezing, but nothing as complex as a mammal. In my opinion, upcoming generations will have enough problems without having to worry about keeping Great-Great-Grandpa on ice. You’re better off spending the time, money and resources helping someone who’s still warm-blooded.

About the Author

Trevor Adams is a Ph.D. Student in the Integrated Life Sciences program at the University of Georgia. He is interested in how the molecular bits of life shape our world. His hobbies include hiking, reading, and hanging out with his cat Bustelo. More from Trevor Adams.

 

Cystic Fibrosis and Your Genes

Image credit: Caroline Davis2010 via Flickr.

Disease alters lives in permanent and often heartbreaking ways. Most people have a story about how they have been affected by disease, either firsthand, through a family member, or looking from the outside in on another person’s life. In a world where tragedy is at the forefront of our personal lives via news stories, gofundme pages, and the like, it is almost impossible not to be touched by disease. In the harsh reality where many diseases end in an individual’s death, why does disease itself not die off too?

While there are various causes that lead to disease, one important contributing factor may be your genes. A gene is made up of DNA and is a basic unit of heredity, transferred from parent to offspring, where it helps determine some characteristic of the offspring. Genetic diseases can therefore be passed down from parent to offspring because each parent gives a copy of his/her genes to his/her child. If the copies from both parents are identical for a given gene, then the child is considered homozygous for that gene; but if the two copies are different, then the child is considered heterozygous. This concept is better known as genetic inheritance.

Image Credit: Science in the Classroom via Twitter

Cystic Fibrosis (CF) is an ideal model for studying genetic inheritance as it is associated with a single gene; therefore, it is a relatively straightforward example. CF causes damage to the lungs and digestive system via mucus secretions that obstruct these organ systems, leading to inflammation, tissue damage, and widespread destruction throughout the body. The mucus that physically damages the airway also predisposes patients to developing secondary bacterial infections, which can result in respiratory failure.

So what do our genes have to do with developing CF?

CF is considered a recessive genetic disease, meaning a person must receive one bad copy of the gene that is associated with the disease from each parent in order to develop the disease. This would be a homozygous individual. If an individual only gets one mutant copy of the gene, then they are heterozygous. Heterozygous individuals can also be called carriers because they carry one copy of the bad gene even though they do not show symptoms.
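To put numbers on that inheritance pattern, here is a minimal sketch (my illustration, not from the article) of the odds for two carrier parents, writing a hypothetical “A” for a working copy of the gene and “a” for a mutant copy:

```python
# A minimal sketch of recessive inheritance for two carrier (Aa) parents.
# "A" = working copy, "a" = mutant copy; each parent passes one at random.
from itertools import product

mom = ["A", "a"]  # carrier mother
dad = ["A", "a"]  # carrier father

# Every equally likely combination of one copy from each parent.
outcomes = list(product(mom, dad))

affected = sum(1 for pair in outcomes if pair == ("a", "a"))
carrier = sum(1 for pair in outcomes if set(pair) == {"A", "a"})

print(f"P(affected) = {affected / len(outcomes):.2f}")  # 0.25
print(f"P(carrier)  = {carrier / len(outcomes):.2f}")   # 0.50
```

Each child of two carriers thus has a 1-in-4 chance of inheriting two mutant copies and developing CF, and a 1-in-2 chance of being a carrier like the parents.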

There are an estimated 20,000 genes in the human genome, and when just one of them – a gene called CFTR – has a mistake in it, or a mutation, CF can result. Many of the different mutations that result in CF alter a significant portion of the DNA sequence, changing the gene’s function. And when one gene’s function changes, every process that depended on that gene can be thrown off course as well.

 

DNA Model. Image credit: Caroline Davis2010 via Flickr.

These changes in the DNA sequence can be either inherited or acquired. Whether or not a mutation persists in the population is then determined by the mechanisms of evolution. Natural selection is a process of evolution in which individuals better adapted to their environment have higher reproductive success: because they possess advantageous traits, they survive at higher rates, and so do the offspring who inherit those traits. CF results in traits that are not advantageous for an individual’s survival, so natural selection acts against CF.

So if CF is selected against, why does it persist in the population? Carriers never develop CF, because their single working copy of the gene is enough. These individuals experience no negative effects from carrying a bad copy of the gene, and they quietly assist in the “survival” of this genetic disorder: each of their children has a 50% chance of inheriting the bad copy. This pattern continues until a homozygous individual is born.
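A toy population-genetics sketch (my illustration, with assumed numbers) shows just how stubborn this hiding strategy is. Even under the harshest possible assumption – that affected individuals leave no offspring at all – the frequency of the bad copy falls very slowly, because selection never touches the copies sheltered in carriers:

```python
# Toy model of selection against a recessive allele: aa individuals
# leave no offspring, while AA and Aa reproduce normally. The standard
# result for the next generation's allele frequency is q' = q(1-q)/(1-q^2).
q = 0.02  # assumed starting frequency of the bad copy

for gen in range(101):
    if gen % 25 == 0:
        carriers = 2 * q * (1 - q)  # Hardy-Weinberg share of carriers (Aa)
        affected = q ** 2           # share born with two bad copies (aa)
        print(f"gen {gen:3d}: allele {q:.4f}, carriers {carriers:.4f}, "
              f"affected {affected:.6f}")
    q = q * (1 - q) / (1 - q ** 2)  # advance one generation
```

After 100 generations – on the order of 2,500 years – a third of the original mutant copies are still circulating, nearly all of them tucked away in healthy carriers.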

While carrier individuals are asymptomatic, there are still ways to determine your carrier status. Genetic testing is an accessible screening process that checks your genes for the presence of a mutant copy; these tests are sometimes covered by insurance and work by sequencing the specific genes in question in the patient’s DNA. Genetic testing is one of the greatest preventative tools for making informed decisions when planning a family’s future, and a person with a family history of CF should strongly consider being screened when planning to have children.

About the Author:


Guest writer Callan Russell is a third-year student at the University of Georgia pursuing her Bachelor’s degree in genetics and a minor in music. Callan studies the molecular basis for epigenetic inheritance within the Schmitz Laboratory at UGA, but in her spare time likes to play trombone, volunteer with Extra Special People, serve at Athens Church, and play in the Redcoat Band at UGA football games. She plans on attending graduate school to study genetic counseling upon completion of her Bachelor’s degree. You can email her at callan.russell@uga.edu.

 

Building Strength from the “Floor” Up


Better posture. Better sex. Better poop?

If these happen to be part of your New Year’s resolutions (and if they aren’t, they should be), did you realize that working on your pelvic floor can help improve all three of these areas? If your answer is no, or if you’re wondering “what the heck is my pelvic floor?”, then keep reading! My good friend Dr. Nidhi Patel, PT, DPT, is an Athens native and UGA alumna. She now works for the University of Georgia’s Health Center and is very passionate about pelvic floor physical therapy. She has talked my ear off about the importance of maintaining a strong pelvic floor, so I’ve asked her to share some wisdom on the topic.

A healthy foundation starts with a strong floor

Hammock. Photo credit: Michelle Dookwah.

So what’s the importance of maintaining a strong pelvic floor? The pelvic floor is the layer of muscles that span the bottom of the pelvis, supporting the pelvic organs – in women, those would be the bladder, bowel, and uterus, and in men, just the bladder and the bowel. “Think of your pelvic floor like a hammock,” says Dr. Patel, “where one end is connected to the pubic bone and the other at your tailbone – a taut hammock equals nice tight muscles, a weak hammock means loose muscles.” In addition to literally holding up your pelvic organs, these muscles are required for other functions as well. Dr. Patel explains, “Just remember the 3 S’s of pelvic floor function – Support, Sexual function, and Sphincteric control.” If I had a dollar for every time Nidhi has talked to me about pelvic floor health and constipation, I’d be a millionaire with the most regular bowel movements! From its name alone, it may sound like strengthening your pelvic floor will only affect things in the pelvic region, but that’s far from the truth. Everything in the body is interconnected somehow, and the same goes for the pelvic floor.

The pelvic floor – way more than just Kegels

Most people have heard of pelvic floor muscles in relation to Kegel exercises. Kegels are often touted as easy exercises to tighten the pelvic floor muscles in women, and, in turn, provide better control of the vaginal wall muscles. They’re so discreet and simple that we’re often told to just do them anytime and anywhere – in the car at a red light or while doing the dishes. I could even be doing them right now, as I write this very sentence! But there’s so much more to strengthening the pelvic floor than just doing Kegels – and in some cases, Kegels may do more harm than good. That’s why it’s important to see a professional who can help provide you with information on what exercises best suit the needs of your pelvic floor.

Deep central stability system diagram. Photo credit: Ann Wendal. Used with permission.

The pelvic floor is one of the “4 deep core muscles”. These include the diaphragm (under your lungs), the pelvic floor (in the pelvis), the transverse abdominis (the deep abdominal muscle that wraps around your midsection), and the multifidi (deep muscles along the spine). These all work together to give you what’s called optimal core stabilization. Correct alignment of your core, think “ribs over pelvis”, is an important aspect of proper posture.

The diaphragm and pelvic floor should work in sync. “You can picture it like an umbrella, with things working optimally in all directions,” says Dr. Patel. “Inhaling expands the diaphragm, like opening the umbrella, which then pushes down on the pelvic floor.” When these muscles are no longer able to work in conjunction (think after a surgery, postpartum, or after a trauma such as sexual abuse), they may need to relearn how to communicate and work in sync with each other.

Core Activation GIF. Photo credit: Jenny Burrell. Used with permission.

Muscles not communicating? Maybe it’s time to talk to your PT

Signs of pelvic floor dysfunction include symptoms like leaking pee when you laugh or cough (common amongst postpartum women), lower back pain, urinary urgency or frequency, incomplete bladder voiding, pain with sex, constipation, or pain in your tailbone when you sit! These symptoms are common to both women AND men, but they are not something you simply have to live with.

Do these symptoms sound familiar to you? If so, it’s time to visit a pelvic floor specialist. Pelvic floor physical therapy can teach your muscles to talk to each other again and help you regain proper function of the pelvic floor.

Want to learn more about your floor and core? Come check out this week’s Athens Science Café with Dr. Nidhi Patel and Dr. Teresa Morneault PT, DPT, WCS from the University of Georgia Health Center at Little Kings Shuffle Club this Thursday, January 24th at 7pm.


About the Author

Michelle Dookwah currently serves as an Assistant Editor for Athens Science Observer and is a graduate student at the University of Georgia Complex Carbohydrate Research Center, where she studies rare neurological disorders using patient stem cells. She’s pretty passionate about science and science communication. However, she also enjoys numerous activities in her free time, including reading, listening to podcasts and audiobooks, hiking, baking, and obsessing over her labradoodle named Goose! More from Michelle Dookwah.

A CURE for the Growing Demand of STEM Undergraduate Research Opportunities

ACC student Jacob Savell, right, from the GEOL 1445 – Introduction to Oceanography class works inside a lab at the Petroleum and Geosystems Engineering Department at UT Austin on Tuesday, June 27, 2017. The students are part of the Summer Undergraduate Research Experience Course (SUREC).

Many scientists agree that their love for scientific research began with their undergraduate research experiences. To fulfill the need for 1 million more STEM majors by 2020, university STEM programs are faced with the task of providing the multitude of students entering their programs with unique undergraduate research experiences. The demand for these transformative research experiences keeps growing, but how can we increase the supply?

What is the CURE? Ramping Up from 1:1 to 1:Many

CUREs, or course-based undergraduate research experiences, directly address the limited supply of research experiences available to STEM undergraduates by increasing the number of students involved in research while reducing the burden on faculty of mentoring each student one-on-one.

Summer Undergraduate Research Experience Course (SUREC). Image credit: Austin Community College via Flickr. Licensed under: CC BY 2.0.

So what makes a CURE different from the typical apprenticeship model, where an expert researcher mentors a novice researcher one-on-one? The goals of a CURE are similar to those of the apprenticeship model. Specifically, they provide students the opportunity to:

  • contribute to original research relevant to the broader public
  • formulate hypotheses
  • investigate research questions with unknown results (no cookbook labs here!), and
  • communicate the results of this iterative process of research to the broader public through #scicomm

However, CUREs are distinct from the apprenticeship model in that students work together collaboratively and iteratively alongside a faculty mentor during a designated class time. This means the CURE is experienced by multiple students, even an entire course full of students, at the same time. With this structure, one instructor can mentor many students at once, and the time invested by students is in class rather than outside of class. Additionally, by being on a university’s course list, CUREs are offered to a broader range of students rather than to just those who self-select to enter the apprenticeship model.

What makes CUREs effective?

CUREs have been successfully implemented in a variety of contexts, and their effectiveness has been demonstrated repeatedly. CUREs increase graduation rates and the completion of STEM degrees. There is also evidence that CUREs result in strong motivational and learning gains for students who experience them.

So what aspects of these experiences make them so meaningful and effective? The answer still isn’t entirely clear, but researchers have some hypotheses about why CUREs are so impactful. First, because CUREs occur during dedicated class time, they may reduce the stress of balancing research and a full course load. Second, moving research experiences into required coursework may remove some barriers to participation, making research more inclusive. Third, CUREs can give students more opportunity to develop a sense of ownership over their projects, which may contribute to persistence in STEM. Lastly, since CUREs can be offered as introductory-level courses, in contrast to research internships that often occur later in an undergraduate’s career, they may influence students’ career paths earlier on. While the outcomes of certain CUREs are well-studied, more research is needed to tease apart the specific aspects of these experiences that make them so impactful.

Want more?

Has this cure for the growing demand of STEM undergraduate research opportunities piqued your interest? If so, be sure to attend the next Athens Science Café at Little Kings Shuffle Club on Thursday, December 13th at 7pm. UGA’s Dr. Erin Dolan will be there to discuss this novel approach to providing mentorship and research experiences for all undergraduate students.


About the author:

Stephanie M. Halmo is a former middle school science teacher turned graduate student, actively pursuing her Ph.D. in biochemistry from the University of Georgia. Stephanie currently serves as an Assistant Editor for Athens Science Observer. In her spare time she likes to dance, volunteer at local schools and tie-dye anything she can get her hands on. She is currently ASO’s News Editor. You can connect with Stephanie on Twitter and Instagram @shalmo or by email: shalmo27@uga. More from Stephanie M. Halmo.

Frosty the Microbe


‘Tis the season for stories of wintery magic. From Elsa and Frozone to their mythical grandfather, Jack Frost, there’s no cooler gift than the power to let it snow at will, or to turn a pond skate-worthy with a single touch.

Little do we realize that these chilly abilities aren’t limited to the realm of holiday lore. If a microbiologist were writing the legends, they’d call Jack Frost by his scientific name: Pseudomonas syringae. They’d describe the artist known for generations for sprinkling leaves with glitter on crisp winter mornings and blanketing the landscape with snow, and they’d add that he also happens to be about two and a half microns tall, and a well-studied plant pathogen.

Image credit: Sara2 via Wikimedia Commons. Public Domain.

If you look closely at P. syringae’s ice powers – or Elsa’s, for that matter – you’ll discover they really aren’t magic at all. They simply involve taking creative advantage of ice nucleation, and the fact that water can be super cool (bear with me).

Your local weatherman will tell you that water freezes into ice at 0° Celsius (or 32° Fahrenheit), at which point its molecules begin to rearrange into an orderly lattice configuration. What he might not mention is that pure water doesn’t just snap into ice as soon as it hits this temperature. Small droplets of pure water can remain liquid all the way down to -48°C, and the water in some plant cells can dip 4 to 12°C below its freezing point before it changes phase from liquid to solid. In the meantime, it exists as a supercooled liquid.

P. syringae has a clever way to get at the juicy insides of those plant cells: if they freeze the water within, the cells expand and burst open. But though the molecules in supercooled liquids are cold enough to form a crystal, they need something to get them started. That something is called a nucleator, and it is usually some small particle, like a dust speck (or snowflake). Scientists aren’t exactly sure how nucleators trigger crystal formation, or why they are even necessary. But the prevailing theory is that a nucleator provides a template for the water molecules to organize themselves around as they begin to form a lattice.

P. syringae has evolved the perfect protein to serve as this template, making it one of the best ice nucleators on the planet. While most living things avoid freezing solid (with some notable exceptions), ice is P. syringae’s secret weapon.

But no icy hero gets famous by making plantcicles. Frosty the Microbe has another trick up its sleeve – one that recent studies suggest allows it (and other ice-nucleating bacteria) to play a significant role in global precipitation patterns.

A bacterial escape pod. Image credit: Alexey Kljatov via Wikimedia Commons. Licensed under CC BY 4.0.

Through a collection method that involved sticking petri dishes out the window of a small plane while flying into a cloud (clearly, microbiology is not for the faint of heart), microbiologists have discovered P. syringae in the sky. The environment hundreds of feet above the ground is a harsh one, with frigid temperatures, lack of nutrients, and little protection from damaging UV radiation. But while it’s easy for a tiny microbe to get swept up by a gust of wind and carried high into the air, it’s much harder to come back down.

In the face of certain death, P. syringae have evolved a solution that would make Elsa herself jealous. Researchers believe their special proteins can nucleate their own personal snowflake from the moisture in the cloud, which carries them gently back to earth. Ice crystals serve as both escape pod and dispersion method – in other words, P. syringae have harnessed snowfall as transportation. Beat that, Ice Queen.

If you’re dreaming of a white Christmas, don’t look for a mythical being or a Disney princess. Ask Santa for a microscope.

About the Author

Rosemary Wills is an undergraduate at UGA majoring in Plant Biology and Science Education. When she’s not writing, coding, or spending time with family, she enjoys growing plants in her windowsill and crocheting science-related things. More from Rosemary Wills.

 

Science Behind a Paywall



Science – Aiming to solve the world’s problems and share its knowledge with you, all for the low price of $39.95, per journal article that is.

Scientific journal articles are essentially newspapers for scientists, updating the community on the latest findings, methods, and events happening all over the world. Yet access to most scientific journals is incredibly limited for the general public, the tech industry, politicians, and essentially ANYONE not in the academic realm. And even if you are in academia, you’re not necessarily swimming in a sea of free knowledge; you’re still limited in what you can access, leaving researchers all over the globe with gaps in content knowledge that restrict the growth of research.

Image Credit: Paywall: The Business of Scholarship, a documentary shedding light on the $25.2 billion a year for-profit academic publishing industry.

For years, scientific organizations have been pointing to these large gaps in access to knowledge within the scientific community. In 2001, the World Health Organization (WHO) showed that 56% of research institutions in low-income countries had no subscriptions to international scientific journals. While some steps have been taken to rectify this situation, most low- and middle-income countries (LMICs) still don’t have access to current content published in scientific journals. How can we expect universities in LMICs or even in the U.S. to subscribe to all journals when “the cost of subscribing to all research journals has risen by 300% above inflation since 1986 while academic library budgets have only risen by 79% total?” asked Noah Berlatsky of The Atlantic.
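The arithmetic behind that quote is stark. Here is a back-of-the-envelope sketch (my own illustration of the quoted figures, not a calculation from the article):

```python
# If journal prices rose 300% above inflation while library budgets rose
# 79%, the share of a 1986-sized subscription list a library can still
# afford is roughly the ratio of the two growth multipliers.
cost_multiplier = 1 + 3.00    # +300% -> 4.00x 1986 prices, inflation-adjusted
budget_multiplier = 1 + 0.79  # +79%  -> 1.79x the 1986 budget

affordable_share = budget_multiplier / cost_multiplier
print(f"Affordable share of the 1986 list: {affordable_share:.0%}")  # ~45%
```

In other words, a library that could once subscribe to everything can now afford less than half of the same list, before a single newly launched journal is even considered.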

“Scholarship must be open in order for scholarship to happen.”  – Brian Nosek, Director, Center for Open Science, Professor at University of Virginia

In addition to rising journal costs, subscriptions to scientific journals operate like a cable subscription service, giving libraries little input into what they’re signing up for. It’s like paying DirecTV for a package filled with mediocre or no-name channels, while access to your favorite channels, like HBO, costs an extra fee.

So each year, when it’s time to renew or pick a subscription package, university libraries end up signing contracts with publishers while having little leverage over which scientific journals come in their “package”. Not to mention, publishers can remove or grant access to any journal at any time without consent. And to top it all off, the prices libraries pay for journal subscriptions are usually hidden by non-disclosure agreements, allowing publishers to name their price to any given institution; even the basic cost for publishers to produce their content is unknown.

Given all this, why do we continue to argue about making science Open Access (OA) to all? Do we not conduct research to help solve humanity’s problems around the world – to cure diseases, to diminish poverty, to fight pollution, to advocate for world health, etc.?

Image Credit: The Right to Research Coalition. Data Source: MIT Libraries

The Price of Prestige

“Academic publishing journals are a $10 billion dollar a year revenue producing industry.” – Heather Joseph, Executive Director of SPARC. Scientific journals have a larger profit margin (30-40%) than some of the biggest tech companies, including Apple, Google, and Amazon. So convincing these publishers to freely and openly share their content is not an easy task.

The thing is, humanity’s issues are not solved in a bubble. Open access will allow scientists across the globe to work on world issues in tandem, regardless of the wealth of their home country. Together, we have a hope of solving these decades-long problems, but only if researchers everywhere can build on current and past knowledge. “To solve these issues you need to make sure everyone has access, not just rich countries, not just Ph.Ds,” says Cable Green, Director of Open Education at Creative Commons. When we don’t share our science, our chances of solving world problems shrink.

Creating a Way to Democratize Information

All of this frustration and lack of access has created a movement across Europe for total open access. This progressive open-access initiative, known as Plan S and backed by the cOAlition S group of research funders, is working to make Open Access a reality by 2020.

The Benefits of Open Access. Image Credit: WhyOpenResearch.com by Danny Kingsley and Sarah Brown

This Open Access initiative has the power to create some amazing benefits in the scientific community:

  • Increases reproducibility – Allowing all researchers equal access to scientific findings lets others conduct the relevant experiments and increase reproducibility in the field. Researchers around the globe must be able to reproduce the same results and draw the same conclusions before premature statements are made about a drug’s efficacy, for example, or ideas are implemented in the public realm.
  • Decreases plagiarism – With open data sets, it’s easier to use another academic’s data and give proper attribution.
  • Detects fraud – With open data sets, researchers can verify the accuracy of the data used in a study and determine when data has been fabricated.
  • Lets low-income countries view work – Allowing all researchers to access data provides more equal opportunity for relevant science to happen in any country.
  • Gives the public access to data – The public can read the content that their tax dollars are paying for and make their own informed decisions about the scientific content being produced.
  • Lets research influence policy – If science is hidden behind paywalls, we can’t expect policymakers to have access to the relevant content they need to make decisions.
  • Raises the quality of work – Allowing scientific peers to access all work increases the value of peer review and holds scientific work to a higher standard.

By 2020 European universities could be operating in an Open Access world. Will the U.S. be next or will we continue to pay for scientific journal access like our cable subscriptions?

“By 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms.” – The main principle of cOAlition S

Making scientific progress and creating a space for innovation requires integrative and collaborative research. When scientists share their findings with other researchers and allow them access to appropriate and relevant data, science progresses. When researchers have limited access to current findings and knowledge, they perform experiments with gaps and missing elements in their scientific background. Knowledge should not be hidden behind paywalls. Until we have widespread Open Access, we will continue to propagate this global problem: scientists working to solve the world’s issues with missing pieces.

Featured Image Credit: Amy (Bio Lab) via Flickr, Creative Commons.

Amanda Shaver is a Ph.D. Candidate in the Dept. of Genetics at the University of Georgia studying metabolomics in C. elegans. She enjoys dancing, photography, and playing with her dog Mr. Peabody. Amanda currently serves as Editor-in-Chief for the Athens Science Observer and is on the Athens Science Café Programming Board. You can email her at Amanda.shaver@uga.edu or follow her on Twitter @AOShaver. More from Amanda Shaver.

 

As American as Pumpkin Pie


Image Credit: Robert Zunikoff on Unsplash. Licensed under CC BY 4.0.

Thanksgiving: an American holiday uniquely focused on food and family. Not grounded in religion, nor patriotic beliefs, it’s distinctly American in the best way. And yet, on this lovely day, we fail to recognize the ancestral fruit of our continent sitting right in the midst of our dinner tables – a fruit which has been unjustly stripped of its cultural legacy by another, more recent addition to the continent. The two fruits I am referring to are apples and pumpkins, known for their central roles in two classic American desserts, pumpkin and apple pie. While apples may seem to be the utmost American fruit – i.e. “American as apple pie” – recent advances in genetic technologies have allowed scientists to reconstruct the history of both crops, raising the question: which is the truly American fruit?

Tracing Back Our Food to its Roots

Recently, scientists from two different labs used DNA to trace back the ancient origins of apples and pumpkins to their ancestral homelands by looking at each species’ overall genetic diversity.

Genetic diversity simply refers to differences in DNA among members of the same species. For instance, while dogs of the same breed have low genetic diversity (similar DNA), dogs from different breeds have higher genetic diversity (different DNA).

Genetic diversity lets us identify where a species originated because the geographic regions with the most overall genetic diversity tend to be where the species initially evolved. When animals or plants travel long distances, they experience a ‘genetic bottleneck’, where only a few organisms from the original group leave and reproduce. This increases the similarity in DNA among all future organisms descended from this migrant population, lowering genetic diversity. So, by using genetic diversity as a proxy for ancestry, scientists were able to trace back the origins of both pumpkins and apples.
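To see how a bottleneck erases diversity, here is a minimal simulation (my sketch, with made-up numbers, not the method of the studies below): a source population carries ten gene variants, and only a handful of founders migrate away:

```python
# Minimal founder-bottleneck simulation: the migrants carry only a
# random slice of the source population's gene variants ("A" to "J").
import random

random.seed(42)
source = [random.choice("ABCDEFGHIJ") for _ in range(10_000)]
founders = random.sample(source, 10)  # a small migrant group

def n_variants(population):
    """Count the distinct gene variants present in a population."""
    return len(set(population))

print("source population:", n_variants(source), "variants")    # almost surely 10
print("migrant founders: ", n_variants(founders), "variants")  # typically 6-7
```

Every generation descended from the founders can only reshuffle the variants the founders brought along, so the migrant population stays less diverse than the homeland – exactly the signal the geneticists used to locate each crop’s origin.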

The Pilgrimage of Apples

Scientists analyzing the genetic sequence of a diverse range of apples pinpointed their origin to the Tian Shan mountains of Kazakhstan. Ancient traders spread apples westward via the Silk Road; the fruit continued making headway into Europe, and finally reached America with the colonization of New England in 1620.

The route apples took before making their way into the Americas. Map Data: Google, 2018.

Even when apples reached the Americas, they were too small, bitter, and sour to be eaten as food. They were solely used to brew alcoholic cider – a common practice to circumvent the lack of clean drinking water. So while apples arrived in the Americas with these immigrants, apple pie was still a distant dream.

Although figures like Johnny Appleseed (yes, he was a real person) and apples in general loom large in American culture, they weren’t an intimate part of early American food tradition. Newly arrived American immigrants found a different, more colorful source of calories and dessert: the pumpkin.

The Herefordshire Pomona. v. 2 (1876-1885). Art by Edith E. Bull. Digitized by Cornell University Library, Mann Library. Image via Biodiversity Heritage Library

Everyone Hail the Pumpkin Savior

Scientists interested in identifying the origins of pumpkins analyzed the genetic diversity of Cucurbita pepo (a group of plants containing pumpkins) and Cucurbita maxima (a closely related group of squashes). They were thus able to pinpoint an origin somewhere in the American southwest. This makes sense considering how frequently pumpkins appear in Native American oral tradition, and their central role in the three sisters cultivation method.

Initially grown for seeds, pumpkin flesh eventually became more edible as humans selected for better-tasting pumpkins. Over time, pumpkins became an intimate part of Native American food – stewed, roasted, and even preserved as a type of jerky for longer storage. It should be noted that these pumpkins were not the classic jack-o’-lantern orange we know today. Rather, they came in a menagerie of greens, oranges and reds, with a profusion of textures and shapes.

Native Americans were not the only pumpkin lovers. Colonists quickly realized their importance as a staple food, storing and eating them year round. Pumpkins became such a massive part of pilgrim cuisine that they even wrote poems to the glory of pumpkins.

Stead of pottage and puddings and custards and pies
Our pumpkins and parsnips are common supplies,
We have pumpkins at morning and pumpkins at noon,
If it were not for pumpkins we should be undoon.

(Source)

So, when you inevitably end up taking a bite of pumpkin pie this Thanksgiving, savor the taste of a traditional food rooted on this continent. Think not of the apple with its compelling journey from Asia, but of the noble gourd. Who knows? Maybe upon cultural reflection the old adage will shift, and we’ll once again embrace the American fruit: “As American as pumpkin pie.”

About the Author

John Pablo Mendieta is a graduate student pursuing a PhD in bioinformatics and genomics at the University of Georgia. His specific interests lie at the intersection of agriculture and genetic technologies. From Boulder, Colorado, he enjoys the outdoors, science fiction, programming, and hip hop. You can email him at john.mendieta@uga.com or follow him on Twitter @Pabster212.

This is What a Scientist Looks Like: Transgender

The corner of 17th and Church Street, NW, will be the site of the first transgender pride flag crosswalk in the United States, the second in North America (the first is in Alberta, Canada)

When you think of a scientist, who do you imagine?

If you’ve been following my #ThisIsWhatAScientistLooksLike series, perhaps you picture a #STEMinist. Right on! But have you considered the multifaceted nature of gender? For this installment, I’d like to introduce you to Lynn Conway and Ben Barres, both world-renowned scientists known for speaking out about transgender issues in science.

Breaking the Gender Dichotomy

The belief that humans fall into two distinct and complementary genders, each with natural roles in life, is known as heteronormativity. This heteronormative view of the world pops up all over the place: at gender reveal parties, in the toy section of Target, and even in science.

Why is this a problem? The gender dichotomy is not an accurate representation of the spectrum of people living, breathing, and doing science on this Earth! When we expand the public perception of who actually does science, our discipline becomes more inclusive and welcoming to diverse groups of people. So let’s meet two individuals who expand our knowledge of who a scientist can be:

Lynn Conway

Lynn Conway is a famous computer scientist known for her pioneering work in computer chip design. She created internet-based platforms for the development and testing of circuit designs, ultimately co-authoring a book on the subject that became a standard resource in chip design courses during the 1980s.

Lynn Conway. Image credit: Charles Rogers via Wikimedia Commons (CC BY-SA 2.5)

After graduating from Columbia with a Master’s in Electrical Engineering in the 1960s, Conway was recruited to work at IBM. At this time Conway was living as a man, was married to a woman, and had a family. It was also during this time that Conway began her transition from male to female. After she revealed her intention to transition, IBM fired her in 1968. Conway took a new name and started a new life.

Once Conway established herself as a woman, she kept her past private and started working as a contract computer programmer. She quickly worked her way up in the field and was recruited by Xerox as a research fellow – a job that ultimately led to her pioneering work in chip design. It was only as others started to take credit for her early work at IBM that she revealed her gender transition to the world by adding a section dedicated to it on her personal website.

Ever since, Lynn has been an activist for transgender people, especially in the technology sector. One of her greatest accomplishments in this role came in 2013, when Conway and one of her colleagues succeeded in getting transgender inclusion added to the Institute of Electrical and Electronics Engineers (IEEE) Code of Ethics.

Ben Barres

If you’re a neurobiologist, the name Ben Barres may fire some synapses. Dr. Barres was the first to grow and culture glial cells – the non-nerve cells in the brain. Along with his students, Dr. Barres went on to show that the overlooked glia dictate the life and death of synapses, or junctions between neurons, in the brain.

Ben Barres. Image credit: Myelin Repair Foundation via Flickr. (CC BY 2.0) 

Aside from his intellectual merit and contributions to neurobiology, you may also know the name Ben Barres from his 2006 opinion piece in Nature. In it, Ben argues against the comments of certain academics who claimed that innate differences in aptitude (rather than discrimination) were the reason fewer women make it to the upper echelons of academia. As the first openly transgender person to be admitted to the National Academy of Sciences, Ben Barres had a unique insight to offer: assigned female at birth, he transitioned to male at the age of 43.

“By far, the main difference that I have noticed is that people who don’t know I am transgendered treat me with much more respect. I can even complete a whole sentence without being interrupted by a man.” – Barres in his 2006 Nature essay

Sadly, in December 2017, Dr. Barres died from pancreatic cancer at the age of 63. A beloved mentor to all his students and a role model for many transgender scientists, his passing was a great loss to the scientific community. His life and work, as a champion for both glial cells and historically marginalized groups in the sciences (women, minorities, and LGBTQ+ people), have left a mark – a mark that says it’s okay to be different, especially in science.

Towards an Inclusive Future

Individuals like Lynn Conway and Ben Barres used their clout in their respective fields to de-stigmatize transgender scientists. Perhaps with their stories and transgender activism brought into the spotlight, science can become a more inclusive and welcoming place to all.

Featured image credit: Ted Eytan via Flickr. (CC BY-SA 2.0)

Stephanie M. Halmo is a former middle school science teacher turned graduate student, actively pursuing her Ph.D. in biochemistry from the University of Georgia. Stephanie currently serves as an Assistant Editor for Athens Science Observer. In her spare time she likes to dance, volunteer at local schools and tie-dye anything she can get her hands on. She is currently ASO’s News Editor. You can connect with Stephanie on Twitter and Instagram @shalmo or by email: shalmo27@uga. More from Stephanie M. Halmo.

The Magic Of Curries: A Spicy Science


Have your lunch breaks gotten boring? How about an enticing curry to spice up your taste buds? Curries like korma, rogan josh, jalfrezi and tikka masala are more than just food – they are an experience. An explosion of sweet, savory, spicy, and sour flavors all at once – each bold in its own right, but in perfect harmony in the quintessential blend of a curry. Colonization of the subcontinent took curry, the heart and soul of Indian cuisine, to Britain, where the dish gained such prominence that chicken tikka masala is today considered Britain’s national dish, ahead of English icons like fish and chips. Such is the love for curries. Moreover, curries are delicious, healthy and fun to cook. Science explains why:

What makes curry so delicious?

Culinary science generally favors dishes with ingredients that have overlapping flavor profiles. A study found that Indian cuisine is an exception to this culinary dogma. In Indian cuisine, the more the flavor profiles of ingredients overlap, the less chance that they will appear together in a recipe.

Chicken Tikka Masala. Photo credit: Chan Walrus via Pexels

The heart of a curry resides in its blend of spices and herbs.  The magical flavors of these spices are known, but are you aware of the plethora of health benefits accompanying these natural ingredients?

Health benefits of spices in curry

Turmeric is an essential ingredient of curry, and is also the major culprit behind a nasty curry stain. Turmeric makes up for this, though: one of its components, curcumin, is heart-healthy, helping normalize the levels of bad fats in our blood. Antioxidants help fight cancer by protecting cells against damage from harmful molecules known as free radicals, and the curcumin in turmeric is a natural antioxidant that impedes cancer progression.

The benefits of garlic can fill up an entire book; so vastly researched are the compounds found in its oils like ajoene, alliin, and allicin. These compounds help fight high blood pressure, or hypertension. The components of garlic’s oil are a rich source of vitamin A for good skin and sharp eyes, vitamin C for better immunity, phosphorus for stronger bones and teeth, potassium for building muscles, and essential amino acids which are the building blocks of proteins in our body.

Cumin’s distinctive earthy, acrid flavor contribution to curry is a result of cumin oils, which actually have antimicrobial properties. Another staple in curries, coriander seeds can help control the sugar levels in diabetic patients. Herbs like curry leaves and basil possess high medicinal values. They can be prescribed for the treatment of conditions like diabetes and obesity.

Spices. Photo credit: Joe mon bkk via Wikimedia commons (CC BY-SA 4.0) 

The cooking chemistry of curry

While we have glimpsed the goodness of the many strong herbs and spices that make up curry, these ingredients are rarely consumed fresh on their own. Instead, we add them to a curry to reap their benefits. Additionally, authentic curry is not simply a blend of all these spices; rather, preparing curry is an orderly, step-by-step process.

The flavor of each of the curry’s spices and herbs comes from organic compounds that are insoluble in water but dissolve readily in oil, and whose flavor and health benefits may be destroyed by excessive heat. Hence, spices are roasted in oil at controlled temperatures during the cooking of curry. Whole spices go first into the hot oil. They will crackle and burst, infusing their aroma all around – a sign that the flavors and benefits have been extracted into the oil. Ground spices are added later in the cooking process, so as not to let them burn due to their high surface area of contact with the oil. The curry base is prepared according to the water content of the ingredients: onion, garlic and ginger go in first, followed by green chili and tomato. Tomato is added last because its cell walls break down upon cooking and the exuded water helps to provide a nice texture. Next, vegetables or meat are added to the base. Serve atop a bowl of rice and you’ve got yourself a delicious curry dish!

Make your own curry at home

Curry can be one of the healthiest recipes you could include in your daily diet when you prepare it at home. Unfortunately, restaurants often serve us the fattened versions with calorie-loaded cream. Now that you have been introduced to curry’s flavors and cooking science, I would suggest ditching those overpriced options at the restaurants. Instead, be your own chef at home by stepping into the magical world of curry. It is definitely worth a try; after all, curries are more than guilty pleasures reserved for your occasional visits to ‘The Taste of India’.

Featured image credit: Pixabay. Licensed under CC0.

Ankita Roy is a Ph.D. Student in the Department of Plant Biology at the University of Georgia working with bean roots. She plays mommy to two kittens and can whip up a curry to fire your taste buds in no time. True to her cooking skills, she enjoys trying out new cuisines to satisfy her passion for everything flavorful. She is an executive member of the Indian Student Association. You can reach her at ankita.roy@uga.edu. More from Ankita Roy

Battle of the sexes… for evidence-based healthcare


Ladies, how many times have you rolled your eyes while your male friend languishes on the couch, suffering from a serious case of the “man flu” while the rest of us manage to take a decongestant and go about our day? Fellas, does it ever seem like every woman you know is constantly complaining of migraines? Well, what if this isn’t all in our heads?

Although we often assume that our opposite-sex counterparts experience health and illness the same way we do, medical science says this is not always the case. However, basic and clinical research has not always been conducted in an all-inclusive manner, leading to widespread biases in our medical knowledge. While we are shifting away from this paradigm, it has no doubt contributed to a pattern of suboptimal care and missed diagnoses for many patients throughout the decades, particularly for women.

Image Credit: Clker-Free-Vector-Images via Pixabay. Licensed under CC0.

What makes us different?

Let’s start with the obvious: females and males have physical differences in terms of sex organs and a myriad of secondary sex characteristics such as facial hair or breasts. In addition, the two sexes differ on a cellular and molecular level in ways that are invisible to the naked eye. In humans, every cell that makes up the female body contains two X chromosomes, while all male cells carry one X and one Y chromosome (the exception in each case being germ cells). These chromosomes carry different sets of genes, so to some extent, males and females are “coded” differently. Females also undergo regular and substantial fluctuations in their hormones throughout their fertility cycle, which males do not experience.

X (red) and Y (green) chromosomes in male (XY) or female (XX) mouse embryonic stem cells. Image credit: Wikimedia Commons. Licensed under CC BY 2.0.

Until recently, it was assumed that these differences were only relevant in the context of reproduction, their scope limited to sex-specific disorders such as endometriosis or prostate cancer. For scientists trying to study heart disease, a female heart functions the same as a male heart, so why would it matter which one they used as a test subject?

Gender bias in research

Indeed, medical research has routinely been conducted using mostly male subjects, whether lab animal or human. Male subjects were considered to be more “controlled”; females were supposedly more confounding as a consequence of their hormones. The underlying assumption was that the data gathered from males would apply to females anyway. As such, females were underrepresented, if not outright excluded, in medical studies and clinical trials.

We now know that this assumption is fundamentally flawed. It turns out that a homogenous group of males does not make a good proxy for the entire population. An often cited example involves Ambien, a prescription sleep aid. For 20 years, it was universally prescribed using the guidelines that were established during trials that skewed heavily male. After two decades of complaints from women about lingering next-day impairment, evidence emerged that Ambien is actually metabolized more slowly in the female body. The recommended dosage has since been cut in half for female patients.

Ambien was the first ever drug with different suggested dosages for males and females. Image Credit: stevepb via Pixabay. Licensed under CC0.

Incidence of disease

It’s not just the way we respond to drugs; our different physiologies also play a role in how we experience illness. For instance, there is evidence that men have weaker immune systems than women, so not only are men more susceptible to infection, they also experience symptoms with greater severity and take longer to recover. However, there is a tradeoff; women are at far greater risk of developing autoimmune disorders, which cause a person’s immune system to become overactive and attack their own body.

As these discrepancies become more widely observed, many scientists are working to develop standards of best practice for including sex as a relevant factor in the study of disease models, epidemiology, and medical interventions. Despite these efforts, as of 2012, it was found that 22% of animal-based research studies published across five major surgical journals did not report sex at all, and of those that did, 80% still used exclusively male subjects.

Bridging the gaps

Fortunately, progress is being made. In 1993, the National Institutes of Health mandated that women must be included in all government funded health research. Between 2010 and 2012, an average of 43% of clinical trial participants for newly approved drugs were female, up from less than 20% in the 90s. For the first time in modern medicine, we are becoming sensitive to the differences in men’s and women’s health. But after over a century of downplaying these inconsistencies, we have a lot of catching up to do. In the meantime, maybe we can have a little more sympathy this man-cold and flu season.

About the Author

Jennifer Kurasz is a graduate student in the Department of Microbiology at UGA, where she studies the regulation of RNA repair mechanisms in Salmonella. When not in the lab, she prefers to be mediocre at many hobbies rather than settle on one. She greatly enjoys her women’s weightlifting group, cooking, painting, meditation, craft beer, and any activity that gets her outdoors. She can be contacted at jennifer.kurasz25@uga.edu. More from Jennifer Kurasz.

 

Mirror, mirror on the wall, who is the fairest (or youngest) of them all?


Have you ever been scared of grey hair, skin wrinkles, baldness, or even worse, dementia? Voltaire once remarked, “What most persons consider as virtue, after the age of 40 is simply a loss of energy”. Nothing is as unnerving as the fact that we all have limited time on this beautiful planet. For centuries, humans have searched for the Philosopher’s Stone in order to achieve eternal life and vitality. Anti-aging products have become some of the most sought-after commodities, not just in the glamour world, but among ordinary people.

Aging is a slow, natural, and irreversible process associated with changes in biological, physiological, psychological and social processes. While greying hair or wrinkles are some of the visible, rather benign consequences of aging, the more serious ones are declines in sensory functions and daily activities, as well as increased susceptibility to disease, frailty, or memory loss. Aging is by far the biggest contributor to death, killing more than two-thirds of the nearly 150,000 people who die every day across the globe.

No I can’t remember! Image credit: Neil Moralee via Flickr. Licensed under CC BY-NC-ND 2.0.

The cause of such a complex phenomenon as aging is not completely understood. Among the plethora of theories proposed to explain it, perhaps the most well-known is immunosenescence, the gradual decline in immunity. The immune system plays a critical role in maintaining our health, as it protects us from infections and disease. As age advances, a person’s capacity to respond to infections and to develop long-term immune memory, especially through vaccination, declines considerably. The thymus, an important organ of the immune system where T cells mature, shrinks with age, and its production of T cells is reduced. The diminished function of mature lymphocytes, such as B cells and T cells, weakens immunity. Macrophages, which ingest foreign cells and destroy bacteria, cancer cells and other antigens, are produced more slowly as age advances; this slowdown may be one reason cancer is more common among older people. Moreover, autoimmune disorders become more common with aging, as the immune system appears to grow less tolerant of the body’s own cells and mistakes normal tissue for foreign tissue.

Dr. Shinya Yamanaka. Image credit: National Institute of Health via Wikimedia commons. Licensed under Public domain.

What’s the secret weapon to fight aging? Epigenetics seems to be the best bet to date. In particular, “DNA methylation”, a mechanism used by cells to control gene expression (i.e. whether, and when, a gene is turned on or off), plays a key role in age-related immunity, as discussed in this paper and elsewhere. Recently, researchers have successfully employed epigenetic reprogramming as a possible approach to prevent and reverse aging. Using middle-aged mice with a genetic mutation responsible for Hutchinson-Gilford progeria syndrome, which causes rapid aging in children, they activated four transcription factors known as “Yamanaka factors”. These factors can convert mature cells back into stem cells and were named after Japanese stem cell scientist Shinya Yamanaka, who won the Nobel Prize in Physiology or Medicine for discovering them. The approach rejuvenated damaged muscles and the pancreas, resulting in a 30% increase in the life-span of the mice. Because the Yamanaka factors reversed changes made to gene regulators, this study adds weight to the scientific argument that aging is largely a process of epigenetic changes – alterations that dial gene activities up or down – including those that influence immunity.

Beyond waiting for the next scientific discovery, there are many proven ways seniors can strengthen their immune systems and extend their life-spans. Timely vaccinations, stress avoidance, meditation, good sleep and hygiene are a few of them. Of course, in the words of director-actor Woody Allen, “You can live to be a hundred if you give up all things that make you want to live to be a hundred,” but with the current advancements in genomics and epigenetics, the day doesn’t seem too far off when one might not need to give up guilty pleasures to live a long and healthy life.

To learn more about the association between aging and the immune system, please be sure to attend the upcoming Athens Science Café on November 8, 2018. Dr. Nancy Manley, distinguished research professor and head of the Department of Genetics at UGA, will be sharing her perspective and expertise on this very interesting topic.

About the author:

Debkanta Chakraborty is a Ph.D. Candidate in the Institute of Bioinformatics at the University of Georgia studying plant genetics and evolution. He has an interesting career arc, having pursued undergraduate studies in Electronics and Communication Engineering and a Masters in Biochemistry before focusing on Genomics and Bioinformatics. Other than the biosciences, he is deeply passionate about Number Theory and Computational Geometry. In his leisure time, a rarity in a graduate student’s life, he loves singing, playing an instrument called the harmonium, participating in HQ trivia and watching movies. He hails from Durgapur, India and has an avid interest in traveling and watching tennis. He calls himself the biggest fan of Roger Federer. You can email him at debkanta@uga.edu. More from Debkanta Chakraborty.

CSI Athens: Crime Scene Science


Every contact leaves a trace.

‘Locard’s Exchange Principle’, the underlying premise of modern forensic science, holds that a perpetrator involuntarily leaves traces behind at a crime scene while taking some sort of trackable evidence away with him. Traces, including blood, saliva, fabric, dirt, prints, and weapons, are meticulously collected by the crime scene investigator (CSI) during an active investigation. The CSI carefully surveys and documents the physical condition of the scene while collecting photographs, sketches, and evidence, then ensures the safe packaging and delivery of the evidence to a laboratory for further analysis by forensic scientists. The results are relayed back to the active case detective (criminal investigator) and/or the police officers involved to continue the investigation.

Forensic Science Investigator. Image Credit: West Midlands Police via Flickr. Licensed under CC BY-SA 2.0.

While the CSI’s duties end at the crime scene, forensic scientists conduct the experiments needed to process the physical evidence. The evidence is usually analyzed by specialty units, such as the latent print, firearms, and chemistry units, using tools and techniques (i.e. spectroscopy, digital forensics, PCR, autopsies) to isolate, identify, and relate evidence back to the crime.

Technological and scientific breakthroughs within forensic science have significantly transformed the outcome of many criminal investigations. One such breakthrough was the Uhlenhuth, or precipitin, test, created by Paul Uhlenhuth in 1901. The test identifies which species a blood sample came from using a biochemical separation technique. Uhlenhuth first used it to convict Ludwig Tessnow, the Mad Carpenter, of the murder and dismemberment of two boys in Germany. After witnesses mentioned seeing Tessnow on the day of the murders with bloodstains on his shirt, Uhlenhuth used his test to confirm that the stains originated from human blood. Tessnow was found guilty and executed for the murders.

Cases similar to Tessnow’s are often left cold and unsolved until advancements in forensic science and technology make it worthwhile to revisit the investigation years, even decades, later. Recently, the Golden State Killer case, open for 42 years, was brought to a close with the use of DNA tracing technology and genealogy databases. The same approach is being used to revisit the Zodiac Killer investigation. In addition to more efficient versions of current technology, developing techniques such as microbiome identification and forensic epigenetics could ideally help resolve historically infamous cold cases like those of Jack the Ripper and the Mad Butcher of Kingsbury Run.

To learn more about the science and people behind criminal investigations, be sure to attend the upcoming Athens Science Café on October 25, 2018! William Edison and Aja Carnell will be sharing their perspectives and expertise about their careers in CSI.

About the author:

Chaitanya Tondepu is a Ph.D. Candidate in the Integrated Life Sciences program at the University of Georgia. Other than science, her favorite pastimes are dancing, hanging out with friends and family, exploring, crafting, and eating delicious food. You can email her at chaitanya.tondepu@uga.edu. More from Chaitanya Tondepu.


Raising the Dead: The Science of Frankenstein


Boris Karloff as Frankenstein’s Monster from the 1935 film Bride of Frankenstein. Image credit: Wikimedia Commons. Licensed under CC BY 4.0.

It’s that time of year again. The weather is starting to get a little cooler, the leaves are changing color, and flannel shirts are now socially acceptable to wear. It is finally Fall, and Halloween is right around the corner.

One of the most iconic stars of Halloween parties and the horror fiction genre is Frankenstein’s Monster. Mary Shelley’s novel Frankenstein; or, The Modern Prometheus, published in 1818, tells the tale of the mad scientist Dr. Victor Frankenstein and his quest to bring a dead body back to life. As a scientist, I have always found that the story of Dr. Frankenstein and his creation resonates with me. It is a chilling commentary on the dangers of scientific experimentation and ethics. After all, raising the dead is not a task one should take lightly. That said, what helped bring Frankenstein’s Monster to life was based on some very real science. So in honor of Halloween and Dr. Frankenstein, let’s explore some of the ways science has tried to bring the dead back to life.

Galvanism, the “Animal Electricity”

When Mary Shelley was writing Frankenstein in the early 19th century, the theory of Galvanism was just beginning to take off. The field was named after Luigi Galvani, who, in 1786, demonstrated that electricity could be used to stimulate muscle movement in frog legs; Galvanism is now considered the precursor to modern-day electrophysiology. Giovanni Aldini, Galvani’s nephew, famously demonstrated the concept to a public audience in 1803 when he used electrical stimulation to “reanimate” the corpse of the recently executed criminal George Forster. The Newgate Calendar was there to report on the demonstration:

“On the first application of the process to the face, the jaws of the deceased criminal began to quiver, and the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process the right hand was raised and clenched, and the legs and thighs were set in motion.”

Plate 4 from Aldini’s Essai théorique et expérimental sur le galvanisme, avec une série d’expériences (1804). Image credit: Giovanni Aldini via Wellcome Collection. Licensed under CC BY 4.0.

A Mentally Stimulating Subject

Aldini tricked and treated people to a display of a dead body being shocked into life, but would this also work on the brain? Well, Aldini experimented a bit here too. He couldn’t shock a dead brain back to life, but he did experiment with transcranial stimulation on the living. The thinking was that electrical stimulation could alter someone’s behavior, or perhaps even cure neurological disorders.

One of Aldini’s most detailed accounts of these experiments comes from his 1801 travels through Europe, where he met Luigi Lanzarini, a 27-year-old farmer who had been admitted to a psychiatric hospital suffering from “melancholy madness” (now known as major depression). Aldini spent weeks with Lanzarini conducting sessions of progressively more intense transcranial stimulation until, miraculously, Lanzarini was shocked back into sanity! Even though the understanding of the human brain was very primitive at the time, Aldini had hit on something solid: transcranial stimulation is used successfully to this day to treat patients with mental disorders.

Plate 5 from Aldini’s Essai théorique et expérimental sur le galvanisme, avec une série d’expériences (1804). Image credit: Giovanni Aldini via Wellcome Collection. Licensed under CC BY 4.0.

Is the science behind reanimation dead in the water?

It seems that Aldini’s mad science experiments may have inspired some modern imitators. Today, the Philadelphia-based company Bioquark is preparing to launch the ReAnima project, an experimental trial in South America that aims to reanimate the brains of 20 deceased individuals. Bioquark claims that through a combination of stem cell therapeutics and electrical stimulation it can generate new functioning neurons in a dead brain. The scientific community has called out Bioquark’s research as totally bogus (because it is), but that being said, it’s pretty fun to read about!

For better or worse, we probably aren’t going to bring the dead back to life like Dr. Frankenstein anytime soon. But when Mary Shelley conceived her horror story back in the early 19th century, Galvani and Aldini were doing foundational work in science and medicine. It has been 200 years since then; who knows what will be possible 200 years from now? By then, it might even be possible to reanimate the dead entirely…

About the Author

Max Barnhart is a graduate student studying plant biology and genomics at the University of Georgia. Growing up in Buffalo, NY he is a diehard fan of the Bills and Sabres and is also an avid practitioner of martial arts, holding a 2nd degree black belt in Taekwondo. He can be contacted at maxbarnhart@uga.edu or @MaxBarnhart1749.


A Warmer Climate Means Stronger Hurricanes


Hurricane Florence as seen by GOES East satellite Wednesday, Sept. 12, 2018. Image Credit: NOAA via NESDIS. Licensed under Creative Commons 2.0.

The start of the 2018 hurricane season and recent presidential controversy have brought hurricanes back into the public eye. The first major hurricane of the 2018 season, Hurricane Florence, made landfall on the morning of Friday, September 14th on the Atlantic coast of the Carolinas. To date, Florence has caused an estimated $38 to $50 billion in damages and has directly resulted in the deaths of 50 people.


Flooding. Image Credit: Texas Military Department via Flickr. Licensed under Creative Commons 2.0.

While recovery efforts unfold across the affected areas, the destruction of coastal communities has begun to feel routine. Last year, Hurricanes Harvey, Irma, Jose, and Maria caused an estimated $200 billion worth of damage in Houston, TX, Puerto Rico, the Florida Keys, and many other communities along the Gulf. Furthermore, nine of the ten most active hurricane seasons on record have occurred within the last 22 years.

This unprecedented level of hurricane activity isn’t the only recent weather trend that has been observed. Seventeen of the eighteen warmest years on record have occurred since 2001, a warming trend that 97% of climate scientists attribute to human activity. These trends raise the question: is there a link between the increase in hurricane activity and climate change?

The birth of a Hurricane

Hurricanes form over warm ocean waters. What starts as a normal thunderstorm over the Atlantic Ocean can grow in intensity as warm water evaporates and rises. After this warm, saturated air reaches a certain altitude, it begins to cool and sink back down toward the surface of the ocean. This cycle consolidates the storm and creates a zone of low pressure at its center, which sets the storm spinning and produces high winds and torrential rain. Once sustained wind speeds reach 74 mph, the storm is strong enough to be classified as a hurricane. (For a good, although slightly outdated, video explanation of the hurricane formation process, click here.)
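
To make the classification thresholds concrete, here is a minimal sketch in Python. The wind-speed cutoffs follow the standard Saffir-Simpson hurricane wind scale; the function itself is just an illustration, not something from this article.

```python
def classify_storm(sustained_wind_mph: float) -> str:
    """Classify a storm by its sustained wind speed (mph),
    following the Saffir-Simpson hurricane wind scale."""
    if sustained_wind_mph < 39:
        return "tropical depression"
    elif sustained_wind_mph < 74:
        return "tropical storm"
    elif sustained_wind_mph <= 95:
        return "Category 1 hurricane"
    elif sustained_wind_mph <= 110:
        return "Category 2 hurricane"
    elif sustained_wind_mph <= 129:
        return "Category 3 hurricane (major)"
    elif sustained_wind_mph <= 156:
        return "Category 4 hurricane (major)"
    else:
        return "Category 5 hurricane (major)"

print(classify_storm(74))   # Category 1 hurricane
print(classify_storm(140))  # Category 4 hurricane (major)
```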

The arguments against a link between hurricane activity and climate change

There are three counter-arguments skeptics are likely to use when arguing against a link between hurricane severity and climate change:

  1. There was a period of relative inactivity between Hurricane Katrina in 2005 and Hurricane Sandy in 2012. Of the hurricane seasons in between, only 2010 saw any significant hurricane activity, and this was well after the climate change debate had begun.
  2. Coastal communities in the U.S. are growing rapidly, so the damage from Hurricanes now is more costly than it was in the past due to an increase in valuable property along the coast.
  3. There has not been an increase in the number of Hurricanes per year since 1950.

Although these counter-arguments are factually correct, they focus on the impact hurricanes have on humans rather than on the science of hurricane climatology as a whole. They are not valid arguments against a link between hurricane activity and climate change because they offer no evidence that hurricane activity has been unaffected by it.

How climate change impacts hurricane activity

Hurricanes are becoming more powerful, and this increase in power has been linked to several aspects of climate change. Average ocean surface and air temperatures have risen by about 1.5℉ over the past century. This seemingly modest rise in temperature has an enormous impact on hurricane formation, because warmer air can hold more moisture. That extra moisture directly adds more “fuel” to a hurricane, which results in increased rainfall and flooding once the storm makes landfall. Furthermore, as the ocean has warmed, sea level has also risen. A higher sea level gives storm surges a physically higher starting point, resulting in deeper flooding that reaches further inland.
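
To put a rough number on “warmer air can hold more moisture”: by the Clausius-Clapeyron relation, the atmosphere’s water-holding capacity grows by roughly 7% per degree Celsius of warming. A back-of-the-envelope sketch in Python, using that standard approximation (the rate is not a figure from this article):

```python
# Approximate growth in the atmosphere's water-holding capacity,
# per the Clausius-Clapeyron relation (~7% per degree Celsius).
RATE_PER_DEG_C = 0.07

def moisture_increase(delta_t_fahrenheit: float) -> float:
    """Fractional increase in saturation water vapor for a
    temperature rise given in degrees Fahrenheit."""
    delta_t_celsius = delta_t_fahrenheit * 5.0 / 9.0
    return (1 + RATE_PER_DEG_C) ** delta_t_celsius - 1

# The ~1.5 F warming over the past century mentioned above
# translates to roughly 6% more moisture available as storm fuel:
print(f"{moisture_increase(1.5):.1%}")  # 5.8%
```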

Moving Forward

While hurricanes may not become more frequent, they are most certainly becoming more powerful as a result of rising ocean and air temperatures. A significant body of evidence shows that climate change has the potential to make hurricanes more dangerous. With better technology, we can hopefully be better prepared for these storms in the future; for now, we must accept that hurricane seasons will only get worse as we move further into the 21st century.


Max Barnhart is a graduate student studying plant biology and genomics at the University of Georgia. Growing up in Buffalo, NY he is a diehard fan of the Bills and Sabres and is also an avid practitioner of martial arts, holding a 2nd degree black belt in Taekwondo. He can be contacted at maxbarnhart@uga.edu or @MaxBarnhart1749.

Porn is Changing Your Brain


Porn is changing your brain. Even with occasional use, porn begins to alter the brain physically and functionally, decreasing its volume and normal activity.

A Startling Trend: Erectile Dysfunction (ED)

In the past decade, there has been a sharp increase in men suffering from erectile dysfunction (ED) during partnered sex. Researchers have also found sharp increases in rates of delayed ejaculation, decreased enjoyment of sexual intimacy, decreased sexual and relationship satisfaction, and decreased desire for sex with partners. Most surprising was the age group in which these sharp increases of nearly 30% were observed: men under 40.

After psychologists controlled for the standard causes of ED, they concluded that porn’s unique properties of self-reinforcement, limitless and unrivaled stimulus, and potential for easy escalation to more extreme material were potent enough to explain the sharp rise of ED seen in the “millennial” generation. Researchers believe that the conscious and subconscious expectations porn introduces are so unrealistic that a disconnect forms between online content and real-life romantic partnerships. This makes real-life sex less arousing, leading to a diminished ability to perform.

Prescription Impossible. Image Credit: See-ming Lee via Flickr. Licensed under Creative Commons 2.0.

Porn is an unrivaled stimulus

The combination of (1) unrivaled novelty and (2) the biological priority the brain gives to sexual stimulation, rooted in our innate need to breed, makes porn a unique activator of the brain’s reward system. The brain encourages itself via a chemical “reward” system controlled by neurotransmitters such as dopamine. Today’s porn viewer can maintain high levels of sexual arousal, and the associated dopamine, for extended periods thanks to the unlimited novel content accessible on the web. With these two characteristics combined, porn is uniquely stimulating, and some studies of impulsivity and addiction have argued that it is an unrivaled stimulus.

Porn usage is self-reinforcing

The brain’s reward system encourages a person to remember and repeat biologically critical behaviors, such as eating, socializing, and sex. In this way, the brain’s reward system reinforces our desire to eat ice cream or go out with friends. Because porn is a considerable activator of the brain’s reward system, it becomes a self-reinforcing activity: individuals effectively reward, and thereby incentivize, their own behavior.

With this subconscious abuse of one’s reward system, repeated use of porn leads to abnormal activation of the brain’s reward system. This has been seen in studies where sexual satisfaction with partners, as measured by affection, physical appearance, sexual curiosity, and sexual performance, negatively correlates with repeated porn viewing. A porn viewer’s tolerance increases fairly quickly, leading the user to need more extreme material to be sufficiently aroused. When viewers have conditioned their sexual arousal to highly stimulating porn, sex with desired real partners may register as “not meeting expectations”, leaving the user desensitized to real sex, touch, and pleasure and thereby unable to sustain an erection. This large body of data shows that sexual arousal can be altered in those who watch porn.

SEX. Image credit: Tom Magliery via Flickr. Licensed under Creative Commons 2.0.

Moving Forward

As a neuroscientist and someone who strongly believes in sexual liberation, I have long fought the notion that porn has negative effects. I have begrudgingly changed my stance and accepted that porn changes your brain and negatively impacts romantic relationships. My bias blinded me to the sound science behind porn’s effects on the brain. The findings from peer-reviewed academic journals are so stark that, after discovering them and checking their research methods and analyses, I cannot deny them. I also realize that I conflated the sexual liberation that comes with being able to talk openly about erotic topics with the erotic stimulus itself. Maybe you can relate to these mindsets.

Unfortunately, being aware of the phenomenon does not eradicate the problem. However, many studies have found that discontinuing porn for just three weeks can reverse impotence and other sexual arousal issues in research patients. It is important for me to state that I am not suggesting porn should be filtered or banned. I think sex education is the answer; we should educate young people that sex can be fun, enjoyable, and safe, and that sex is okay. However, it is vital that we realize the depiction of sex in porn isn’t harmless fun or free of consequence.

Madelaine Wendzik currently serves as an Associate Editor for the News and Policy Team at Athens Science Observer and is a Ph.D. student in the Neuroscience Program at the University of Georgia studying neuroinflammation and immune response in pediatric traumatic brain injury. She enjoys board games, downloading one too many podcasts, and anything to do with white chocolate macadamia nut cookies. You can email her at MWendzik@uga.edu or follow her on Twitter @SciPolicyGirl. More from Madelaine Wendzik.


Swimming the ladder


The annual upstream migration of salmon to their spring spawning habitat is fairly well-known. However, most people may not know that this behavior is common among other fish species as well, including sturgeons, American shad, and American eels. The distance that fish travel during migration varies widely – some fish do not need to migrate very far to reach their spawning habitats, while salmon can travel up to 200 miles to reach theirs. A bit too far for a booty call, if you ask me.

Coho Spawning. Image Credit: Bureau of Land Management Oregon and Washington via Flickr. Licensed under CC BY 2.0.

Unfortunately, barriers in streams and rivers around the world can hinder fish migration and harm fish populations. If a fish’s migratory route is obstructed, it cannot reach its ideal spawning ground and may forgo producing offspring altogether. As a result, fewer new members are recruited into the population, and its abundance declines over time, as the sketch below illustrates.
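
To see why blocked migration translates into long-term decline, here is a minimal, purely illustrative population model in Python. The survival and recruitment numbers are hypothetical, not data from any real fishery:

```python
# Illustrative recruitment-driven population model: each year,
# a fraction of adults survives and new recruits are added.
def project(n0: float, survival: float, recruits: float, years: int) -> float:
    """Project abundance with N[t+1] = survival * N[t] + recruits."""
    n = n0
    for _ in range(years):
        n = survival * n + recruits
    return n

# With free passage, recruitment balances mortality and the population
# holds steady at its equilibrium, recruits / (1 - survival) = 1000:
print(round(project(1000, survival=0.8, recruits=200, years=20)))  # 1000

# A barrier that halves recruitment erodes abundance year after year:
print(round(project(1000, survival=0.8, recruits=100, years=20)))  # ~506
```

Nothing about the model is specific to fish; the point is simply that a sustained cut to recruitment lowers the long-run equilibrium, which is exactly the mechanism migration barriers set in motion.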

Common types of fish migration barriers include culverts, dams, and levees. There are currently more than 4,600 dams in Georgia alone, and the robust redhorse is an example of a fish species that occurs in Georgia.