
Johns Hopkins University scientists have harnessed artificial intelligence to visualize and track synaptic changes in live animals, with the aim of improving our understanding of changes in brain connectivity in humans due to learning, aging, injuries and illnesses. Using machine learning, they were able to improve the clarity of the images, allowing them to observe thousands of individual synapses and their changes in response to new stimuli.
Artificial intelligence makes it easier to visualize neural connections in the brains of mice.
Johns Hopkins scientists have harnessed artificial intelligence to create a technique that allows for the visualization and monitoring, in living organisms, of changes in the strength of synapses, the connection points through which nerve cells in the brain communicate. The technique, described in Nature Methods, could, according to the researchers, pave the way for a better understanding of how these connections in the human brain change with learning, age, injury and disease.
"If you want to learn more about how an orchestra plays, you have to observe individual musicians over time, and this new method does that for synapses in the brains of living animals," says Dwight Bergles, Ph.D., Diana Sylvestre and Charles Homcy Professor in the Solomon H. Snyder Department of Neuroscience at the Johns Hopkins University (JHU) School of Medicine.
Bergles co-authored the study with colleagues Adam Charles, Ph.D., M.E., and Jeremias Sulam, Ph.D., both assistant professors in the Department of Biomedical Engineering, and Richard Huganir, Ph.D., Bloomberg Distinguished Professor at JHU and director of the Solomon H. Snyder Department of Neuroscience. All four researchers are members of the Johns Hopkins Kavli Neuroscience Discovery Institute.
Thousands of SEP-GluA2-tagged synapses (green) surrounding a poorly-tagged dendrite (magenta) before and after XTC image resolution enhancement. Scale bar: 5 microns. Credit: Xu, Y.K.T., Graves, A.R., Coste, G.I., et al., Nature Methods
Nerve cells transfer information from one cell to another by exchanging chemical messages at synapses (junctions). In the brain, the authors explain, different life experiences, such as exposure to new environments and learning new skills, are thought to induce changes in the synapses, strengthening or weakening these connections to enable learning and memory. Understanding how these tiny changes occur across the trillions of synapses in our brains is a daunting challenge, but it’s crucial to discovering how the brain works when it’s healthy and how it’s altered by disease.
To determine which synapses change during a particular life event, scientists have long searched for better ways to visualize the shifting chemistry of synaptic messaging. The need arises because synapses are packed so densely in the brain and are so small that they are extremely difficult to visualize even with state-of-the-art microscopes.
"We had to go through challenging, blurry and noisy imaging data to extract the portions of the signal we need to see," says Charles.
To do this, Bergles, Sulam, Charles, Huganir and their colleagues turned to machine learning, a computational framework that allows the flexible development of automatic data-processing tools. Machine learning has been applied successfully across many domains of biomedical imaging, and in this case the scientists leveraged the approach to enhance the quality of images composed of thousands of synapses. Although it can be a powerful tool for automated detection, greatly surpassing human speeds, the system must first be trained, teaching the algorithm what high-quality images of synapses should look like.
In these experiments, the researchers worked with genetically altered mice in which glutamate receptors, the chemical sensors at synapses, glowed green (fluoresced) when exposed to light. Because each receptor emits the same amount of light, the amount of fluorescence generated by a synapse in these mice is an indication of the number of receptors it contains, and therefore of its strength.
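As a rough illustration of that relationship (a hedged sketch; the intensity values, background level, and per-receptor brightness below are invented for illustration, not measurements from the study), the receptor count at a synapse can be approximated by dividing its background-subtracted fluorescence by the brightness contributed by a single tagged receptor:

```python
import numpy as np

def estimate_receptor_count(synapse_intensity, background, per_receptor_brightness):
    """Approximate the number of fluorescent receptors at a synapse.

    Assumes each tagged receptor contributes the same amount of light, so
    background-subtracted fluorescence scales linearly with receptor count.
    """
    net = max(synapse_intensity - background, 0.0)
    return net / per_receptor_brightness

# Hypothetical values, in arbitrary units, purely for illustration.
intensities = np.array([1200.0, 450.0, 3100.0])  # summed fluorescence per synapse
background = 200.0                               # local background signal
per_receptor = 25.0                              # brightness of one tagged receptor

counts = [estimate_receptor_count(i, background, per_receptor) for i in intensities]
print(counts)  # larger counts indicate stronger synapses under this simple model
```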
As expected, imaging in the intact brain produced low-quality pictures in which individual clusters of glutamate receptors at synapses were difficult to see clearly, let alone to detect and track individually over time. To convert these into higher-quality images, the scientists trained a machine learning algorithm on images of brain slices (ex vivo) taken from the same type of genetically altered mice. Because these images weren't from living animals, it was possible to use a different microscopy technique to produce much higher-quality images, as well as low-quality images of the same views, similar to those taken in live animals.
This cross-modality data collection framework enabled the team to develop an enhancement algorithm that can produce higher-resolution images from low-quality ones like those collected from living mice. In this way, data collected from the intact brain can be significantly enhanced, making it possible to detect and track thousands of individual synapses during multiday experiments.
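A minimal sketch of what such supervised restoration training can look like is shown below. It assumes paired low- and high-quality image patches of the same fields of view are already loaded as tensors; the tiny convolutional network, loss and optimizer settings are illustrative stand-ins, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Toy restoration network: maps a noisy, blurry patch to an enhanced patch.
# The actual study used a more sophisticated model; this is only a sketch.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for paired ex vivo data: low-quality patches (resembling in vivo
# imaging) and matching high-quality patches of the same fields of view.
low_quality = torch.rand(32, 1, 64, 64)   # hypothetical training batch
high_quality = torch.rand(32, 1, 64, 64)  # hypothetical matched targets

for epoch in range(100):
    optimizer.zero_grad()
    restored = model(low_quality)
    loss = loss_fn(restored, high_quality)  # learn to reproduce the clean modality
    loss.backward()
    optimizer.step()

# Once trained, the model can be applied to genuinely low-quality in vivo images.
with torch.no_grad():
    enhanced = model(torch.rand(1, 1, 64, 64))  # placeholder in vivo patch
```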
To follow changes in receptors over time, the researchers then used microscopy to take repeated images of the same synapses in living mice over several weeks. After capturing baseline images, the team placed the animals in a chamber with new sights, smells, and tactile stimulation for a single five-minute period. They then imaged the same area of the brain every other day to see if and how the new stimuli had affected the number of glutamate receptors at synapses.
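One simple way to follow the same synapses across imaging sessions (a sketch under the assumption that synapse positions have already been detected in each enhanced image; the coordinates and matching radius are hypothetical) is to pair each baseline detection with its nearest neighbor in a later session:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_synapses(baseline_xy, later_xy, max_dist=1.0):
    """Pair each baseline synapse with its nearest detection in a later session.

    Returns (baseline_index, later_index) pairs within max_dist (e.g. microns).
    """
    tree = cKDTree(later_xy)
    dists, idx = tree.query(baseline_xy, k=1)
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dists, idx)) if d <= max_dist]

# Hypothetical detected centroids (in microns) from two imaging sessions.
baseline = np.array([[10.2, 5.1], [12.8, 7.4], [15.0, 9.9]])
day_two = np.array([[10.3, 5.0], [15.1, 10.1], [20.0, 3.0]])

print(match_synapses(baseline, day_two))  # e.g. [(0, 0), (2, 1)]
```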
Although the focus of the work was on developing a set of methods to analyze synapse-level changes in many different contexts, the researchers found that this simple change in environment caused a spectrum of alterations in fluorescence across synapses in the cerebral cortex, indicating connections whose strength increased and others whose strength decreased, with a bias toward strengthening in animals exposed to the novel environment.
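To make the idea of a spectrum of alterations concrete (a hedged sketch with made-up numbers; the study's actual analysis is more involved), per-synapse change can be summarized as a log ratio of fluorescence after versus before exposure, with the mean indicating any overall bias toward strengthening:

```python
import numpy as np

# Hypothetical matched fluorescence values for the same synapses
# before and after exposure to a novel environment (arbitrary units).
before = np.array([100.0, 250.0, 80.0, 400.0, 150.0])
after = np.array([130.0, 240.0, 95.0, 520.0, 140.0])

# Log2 ratio: positive = strengthened connection, negative = weakened.
change = np.log2(after / before)

print(np.round(change, 2))             # per-synapse spectrum of changes
print("mean change:", change.mean())   # > 0 suggests a net bias toward strengthening
```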
The studies were enabled by close collaboration among scientists with distinct expertise, ranging from molecular biology to artificial intelligence, who don't normally work closely together. But such collaboration is encouraged at the cross-disciplinary Kavli Neuroscience Discovery Institute, Bergles says. The researchers are now using this machine learning approach to study synaptic changes in animal models of Alzheimer's disease, and they believe the method could shed new light on synaptic changes that occur in other disease and injury contexts.
"We are really excited to see how and where the rest of the scientific community will take this," Sulam says.
Reference: "Cross-modality supervised image restoration enables nanoscale tracking of synaptic plasticity in living mice" by Yu Kang T. Xu, Austin R. Graves, Gabrielle I. Coste, Richard L. Huganir, Dwight E. Bergles, Adam S. Charles and Jeremias Sulam, 11 May 2023, Nature Methods.
DOI: 10.1038/s41592-023-01871-6
The study was funded by the National Institutes of Health.
The experiments in this study were conducted by Yu Kang Xu (a Ph.D. student and Kavli Neuroscience Discovery Institute fellow at JHU), Austin Graves, Ph.D. (assistant research professor in biomedical engineering at JHU), and Gabrielle Coste (neuroscience Ph.D. student at JHU).