When someone asks Eva Dyer what she does for a living, she has a short and simple answer: “I try to teach machines how to understand the brain.”
As the principal investigator of the Neural Data Science Lab — or NerDS Lab — at the Georgia Institute of Technology, she leads a diverse team of researchers in developing machine learning approaches to analyze and interpret massive, complex neural datasets. At the same time, they are designing better machines, inspired by the organization and function of biological brains.
In other words, they’re moving knowledge back and forth, from machine learning to neuroscience and from neuroscience to machine learning, and their work as computational neuroscientists is drawing national attention, acclaim, and support.
In the summer of 2020, Dyer was one of three researchers in the U.S. to secure a McKnight Technological Innovations in Neuroscience Award. Then she received the big news later in the year that she’d won a BRAIN Award — Brain Research through Advancing Innovative Neurotechnologies — from the National Institutes of Health (NIH).
The three-year grant is her lab’s first R01 (Research Project Grant) from the NIH and part of her continuing collaboration with the lab of Keith Hengen at Washington University in St. Louis, “where they are collecting these really large-scale neural datasets of free behavior in mice that provide an ideal ground for testing methodologies that we’re developing,” said Dyer, an assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Tech and Emory University.
Since winning the BRAIN Award, she’s been busy with her research, publishing papers, and presenting the work at major conferences, including the virtual International Conference on Machine Learning in July. And this month, Dyer’s lab earned a prestigious spot as an oral presenter at NeurIPS, the conference on Neural Information Processing Systems.
The work they’re presenting — about a new set of tools in self-supervised learning, a method of machine learning that more closely imitates how humans classify objects — is the NerDS Lab’s latest contribution in addressing one of the biggest challenges in neuroscience: finding simplified representations of neural activity that allow for greater insights into the link between the brain and behavior.
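The core idea of self-supervised learning is that the data supplies its own training labels, so no human annotation is required. The sketch below is not the NerDS Lab's method — just a minimal toy illustration of that idea, using a hypothetical pretext task of predicting the next sample of a signal from the preceding window:

```python
import numpy as np

def make_pretext_pairs(signal, window=3):
    """Self-supervision creates labels from the data itself: the
    'label' for each window of a signal is simply the next sample.
    No human annotation is needed."""
    X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
    y = np.array([signal[i + window] for i in range(len(signal) - window)])
    return X, y

# A toy periodic signal; in practice this would be recorded neural activity.
signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
X, y = make_pretext_pairs(signal)

# Fit a linear predictor on the self-generated pairs (least squares).
# The learned weights are a (very crude) representation of the signal's
# structure, obtained without any external labels.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Real self-supervised methods use far richer pretext tasks and deep networks, but the principle is the same: structure already present in the data stands in for hand-made labels.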
As neural datasets continue to grow in size and complexity, Dyer’s work is driving scientific understanding of the brain. It may also help solve a global energy problem.
Sound of Music
Dyer’s deep interest in the brain is a direct consequence of her love of music. Originally from Atlanta, she grew up in Seattle surrounded by the arts, a multi-instrumentalist whose favorites were the oboe, piano, and voice. As an undergrad at the University of Miami, she double majored in audio engineering and physics and worked as an assistant sound designer on a documentary film focused on global access to clean drinking water.
“My involvement in music, particularly jazz, really led me to the kind of work I’m doing now,” Dyer said. “I was inspired by music and how we perceive sound. When I’m listening to a musician play, what’s going on in my brain? How am I processing that? When I have an emotional reaction to music, where is that coming from?”
She became fascinated with signal processing, a subfield of electrical engineering that focuses on analyzing signals, such as sound, images, or scientific measurements — or, say, voltage spikes in neurons. Dyer leveraged that interest at Rice University, where she earned her Ph.D. in electrical and computer engineering and dove deeper into neuroscience.
Then she joined the lab of computational neuroscientist Konrad Kording at Northwestern University as a postdoctoral researcher. There, she developed a cryptography-inspired strategy for neural decoding, research that landed in the journal Nature Biomedical Engineering and made headlines.
But that was only scratching the surface. Dyer has moved on.
“It’s been rewarding to take those initial ideas and build out a number of new methodologies,” said Dyer, who came to Georgia Tech in 2017. “Here, I’m fortunate to be in the midst of a vibrant machine learning and AI community.”
That community is well-represented at NeurIPS (Dec. 6-14), considered the world’s largest gathering of machine learning researchers. More than 20,000 are expected to participate in this year’s virtual event, and Georgia Tech researchers will give more than 40 talks or poster presentations.
The Way Forward
As a computational neuroscientist, Dyer blends multiple disciplines, including electrical engineering, computer science, and physics, using mathematical tools and theories to understand how brains process information. That’s the simple version. Probing a single neuronal circuit means probing a collection of hundreds of thousands of neurons of many different cell types, because brains are heterogeneous.
“An active area of research in my lab is trying to uncover the functional properties of individual cell types, to ultimately build AI systems of the future,” said Dyer, who envisions artificial neural networks that will more closely resemble the workings of biological brains. “We think that heterogeneity will have an important use in AI. But we still haven’t figured out what that is.”
The main goal of her lab’s painstaking study of all that brain data, of comparing all of those numbers, is to figure out how the coordinated activity of large collections of neurons changes in the presence of something like disease or addiction.
“We think these tools we’re developing will give us the ability to suss that out,” Dyer said.
Along the way, she also is considering the large environmental footprint of artificial intelligence, which gets back to that idea of fixing an energy problem: “AI and computation consume a lot of global energy right now; it’s a real problem.”
Unlike biological neurons, the units in conventional artificial neural networks are basically talking all of the time, wasting energy. It’s an area where she sees promise in her work translating the structure and behavior of the brain to artificial neural networks.
“Neurons in the brain only fire when they have something to say — they’re saying nothing, then they spike. It’s sort of a binary event,” Dyer said. “What if we build spiking neural networks, where artificial neurons only fire when they have to? That would be a way forward. We could create a next-generation, energy-efficient computer infrastructure using brain-inspired principles.”
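The event-driven behavior Dyer describes can be captured by a classic leaky integrate-and-fire model — a textbook abstraction, not her lab's implementation. The neuron accumulates input, leaks charge over time, and emits a binary spike only when its membrane potential crosses a threshold; the rest of the time it stays silent (the parameter values below are illustrative):

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential integrates input,
    decays each step, and emits a binary spike only when it crosses the
    threshold (then resets). Silence costs no communication."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # integrate input, with leak
        if v >= threshold:
            spikes.append(1)      # fire: the neuron "has something to say"
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)      # stay silent, spending no energy on output
    return spikes

# Weak drive leaves the neuron silent; a burst of strong input makes it fire.
print(lif_spikes([0.2, 0.2, 0.2, 0.9, 0.9, 0.1]))  # → [0, 0, 0, 1, 0, 0]
```

Because output is all-or-nothing and mostly absent, hardware running such networks can in principle skip computation during silent periods — the source of the energy savings Dyer points to.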