Tag Archives: neuroscience

We have math neurons and singing neurons?

According to the two items I have here, the answer is: yes, we have neurons that are specific to math and to the sound of singing.

Math neurons

A February 14, 2022 news item on ScienceDaily explains how specific the math neurons are,

The brain has neurons that fire specifically during certain mathematical operations. This is shown by a recent study conducted by the Universities of Tübingen and Bonn [both in Germany]. The findings indicate that some of the neurons detected are active exclusively during additions, while others are active during subtractions. They do not care whether the calculation instruction is written down as a word or a symbol. The results have now been published in the journal Current Biology.

Using ultrafine electrodes implanted in the temporal lobes of epilepsy patients, researchers can visualize the activity of brain regions. © Photo: Christian Burkert/Volkswagen-Stiftung/University of Bonn

A February 14, 2022 University of Bonn press release (also on EurekAlert), which originated the news item, delves further,

Most elementary school children probably already know that three apples plus two apples add up to five apples. However, what happens in the brain during such calculations is still largely unknown. The current study by the Universities of Bonn and Tübingen now sheds light on this issue.

The researchers benefited from a special feature of the Department of Epileptology at the University Hospital Bonn. It specializes in surgical procedures on the brains of people with epilepsy. In some patients, seizures always originate from the same area of the brain. In order to precisely localize this defective area, the doctors implant several electrodes into the patients. The probes can be used to precisely determine the origin of the seizures. In addition, the activity of individual neurons can be measured via the wiring.

Some neurons fire only when summing up

Five women and four men participated in the current study. They had electrodes implanted in the so-called temporal lobe of the brain to record the activity of nerve cells. Meanwhile, the participants had to perform simple arithmetic tasks. “We found that different neurons fired during additions than during subtractions,” explains Prof. Florian Mormann from the Department of Epileptology at the University Hospital Bonn.

It was not the case that some neurons responded only to a “+” sign and others only to a “-” sign: “Even when we replaced the mathematical symbols with words, the effect remained the same,” explains Esther Kutter, who is doing her doctorate in Prof. Mormann’s research group. “For example, when subjects were asked to calculate ‘5 and 3’, their addition neurons sprang back into action; whereas for ‘7 less 4,’ their subtraction neurons did.”

This shows that the cells discovered actually encode a mathematical instruction for action. The brain activity thus showed with great accuracy what kind of tasks the test subjects were currently calculating: The researchers fed the cells’ activity patterns into a self-learning computer program. At the same time, they told the software whether the subjects were currently calculating a sum or a difference. When the algorithm was confronted with new activity data after this training phase, it was able to accurately identify during which computational operation it had been recorded.
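The decoding step described above, training a program on activity patterns labeled by arithmetic operation and then classifying new recordings, can be sketched with purely synthetic data. Everything below (trial counts, neuron counts, firing rates, the nearest-centroid rule) is invented for illustration; the study's actual analysis is more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 40 trials per operation, 20 neurons, synthetic
# firing rates. "Addition" trials drive the first 10 neurons harder;
# "subtraction" trials drive the last 10.
n_trials, n_neurons = 40, 20
add = rng.normal(5, 1, (n_trials, n_neurons)); add[:, :10] += 3
sub = rng.normal(5, 1, (n_trials, n_neurons)); sub[:, 10:] += 3
X = np.vstack([add, sub])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = addition, 1 = subtraction

# Train/test split, then classify held-out trials by nearest class centroid.
idx = rng.permutation(len(y))
train, test = idx[:60], idx[60:]
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = ((X[test][:, None, :] - centroids[None]) ** 2).sum(axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y[test]).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.2f}")
```

With well-separated synthetic patterns like these, even a nearest-centroid rule classifies held-out trials almost perfectly, which mirrors the idea, if not the exact method, of the decoder described in the press release.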

Prof. Andreas Nieder from the University of Tübingen supervised the study together with Prof. Mormann. “We know from experiments with monkeys that neurons specific to certain computational rules also exist in their brains,” he says. “In humans, however, there is hardly any data in this regard.” During their analysis, the two working groups came across an interesting phenomenon: One of the brain regions studied was the so-called parahippocampal cortex. There, too, the researchers found nerve cells that fired specifically during addition or subtraction. However, when summing up, different addition neurons became alternately active during one and the same arithmetic task. Figuratively speaking, it is as if the plus key on the calculator were constantly changing its location. It was the same with subtraction. Researchers also refer to this as “dynamic coding.”

“This study marks an important step towards a better understanding of one of our most important symbolic abilities, namely calculating with numbers,” stresses Mormann. The two teams from Bonn and Tübingen now want to investigate exactly what role the nerve cells found play in this.

Funding:

The study was funded by the German Research Foundation (DFG) and the Volkswagen Foundation.

Here’s a link to and a citation for the paper,

Neuronal codes for arithmetic rule processing in the human brain by Esther F. Kutter, Jan Boström, Christian E. Elger, Andreas Nieder, Florian Mormann. Current Biology, 2022; DOI: 10.1016/j.cub.2022.01.054. Published February 14, 2022.

This paper appears to be open access.

Neurons for the sounds of singing

This work comes from the Massachusetts Institute of Technology (MIT), according to a February 22, 2022 news item on ScienceDaily,

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

Pretty nifty, eh? As is the news release headline with its nod to a classic Hollywood musical and song, from a February 22, 2022 MIT news release (also on EurekAlert),

Singing in the brain

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures are originating before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data that were recorded by each electrode.
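One common way to infer underlying component response profiles from mixed electrode recordings is nonnegative matrix factorization, which models each electrode's responses as a nonnegative blend of a few shared profiles. The sketch below uses synthetic data and plain multiplicative updates; the paper's actual statistical method differs, and all dimensions and values here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 2 underlying neural components, each with a
# nonnegative response profile over 30 sounds; every electrode records
# a nonnegative mixture of the two, plus a little noise.
n_sounds, n_electrodes, k = 30, 12, 2
components = rng.random((k, n_sounds))      # true response profiles
weights = rng.random((n_electrodes, k))     # per-electrode mixing
data = weights @ components + 0.01 * rng.random((n_electrodes, n_sounds))

# Plain multiplicative-update NMF: data ~ W @ H with W, H >= 0.
W = rng.random((n_electrodes, k))
H = rng.random((k, n_sounds))
for _ in range(500):
    H *= (W.T @ data) / (W.T @ W @ H + 1e-9)
    W *= (data @ H.T) / (W @ H @ H.T + 1e-9)

reconstruction_error = np.linalg.norm(data - W @ H) / np.linalg.norm(data)
print(f"relative reconstruction error: {reconstruction_error:.3f}")
```

When the number of components matches the data's true structure, the factorization recovers a small set of profiles that jointly explain every electrode, which is the general shape of the inference the release describes.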

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to either speech or instrumental music, and therefore is distinct from the music- and speech-selective populations identified in their 2015 study.

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

Here’s a link to and a citation for the paper,

A neural population selective for song in human auditory cortex by Sam V. Norman-Haignere, Jenelle Feather, Dana Boebinger, Peter Brunner, Anthony Ritaccio, Josh H. McDermott, Gerwin Schalk, Nancy Kanwisher. Current Biology, 2022; DOI: 10.1016/j.cub.2022.01.069. Published February 22, 2022.

This paper appears to be open access.

I couldn’t resist,

Literature and your brain (the neuroscience of it)

This guy (Angus Fletcher) is a little too much the evangelist for my taste, but the ideas supporting the book he has authored and is promoting in this video are in line with a lot of thinking about vision and memory, both of which can be described as acts of creativity. In this case, Fletcher is applying these ideas to literature, which he describes as an act of co-creation,

A May 3, 2021 news item on phys.org announces the publication of Fletcher’s book,

If you really want to understand literature, don’t start with the words on a page—start with how it affects your brain.

That’s the message from Angus Fletcher, an English professor with degrees in both literature and neuroscience, who outlines in a new book a different way to read and think about stories, from classic literature to pulp fiction to movies and TV shows.

Literature wasn’t invented just as entertainment or a way to deliver messages to readers, said Fletcher, who is a professor at The Ohio State University.

“Stories are actually a form of technology. [emphasis mine] They are tools that were designed by our ancestors to alleviate depression, reduce anxiety, kindle creativity, spark courage and meet a variety of other psychological challenges of being human,” Fletcher said.

“And even though we aren’t taught this in literature classes today, we can still find and use these emotional tools in the stories we read today.”

Here’s more about the book and ideas supporting it in a May 3, 2021 Ohio State University (OSU) news release (also on EurekAlert) by Jeff Grabmeier and Aaron Nestor, which originated the news item,

Fletcher explains these concepts in his book Wonderworks: The 25 Most Powerful Inventions in the History of Literature.

For example, in a chapter about fighting loneliness, he discusses how reading The Godfather by Mario Puzo may help. A chapter on feeding creativity talks about the virtues of Alice in Wonderland and Winnie-the-Pooh. Looking for the best way to make your dreams come true? For that, Fletcher proposes the TV show 30 Rock.

Wonderworks doesn’t ignore the classics: The book discusses how reading Shakespeare can help us heal from grief, Virginia Woolf can assist readers in finding peace of mind, and Homer can support those needing courage.

Fletcher said his neuroscience background very much influences the approach to literature he takes in Wonderworks.

“When you read a favorite poem or story, you may feel joy, you feel a sense of empathy or connection. One of the things I do in the book is provide the scientific validation for the things we’ve long felt when we’ve read favorite books or watched movies or TV shows that we loved,” he said.

“From my neuroscience background and studies that I’ve done, I can see how literature’s inventions plug into different regions of our brain, to make us less lonely or help us build up our courage or do a variety of other things to help us. Every story is different and is, in effect, a different tool.”

Fletcher said to truly understand the power of literature requires a different way of approaching stories from what is offered by most traditional literature courses.

The usual method of teaching literature focuses on the words, asking students to look for themes, to consider what the author intended to say and mean.

But that’s not the focus at Project Narrative, an Ohio State program of which Fletcher is a member.

“At Project Narrative, we reverse the process. Instead of looking at the words first, we look first at what is going on in your mind. How does this story make you feel? We look at how people are responding to the characters, the plot, the world that the author created,” Fletcher said.

After examining how the story makes you feel, the second part of the process is to trace that feeling back to some invention of the story, whether it is the plot, a character, the narrator, or the world of the story.

The themes of the story, or what the author means to say, are less important in this approach to literature.

That means when you are looking for a book to stimulate your courage, you don’t have to look for a book that has “courage” in the title or even as one of its themes according to traditional literature analysis, Fletcher said.

“Courage comes from reading a work of literature that makes us feel like we’re participating in something bigger than ourselves. It doesn’t have to mention courage or have courage be one of its themes,” he said. “That’s not relevant.”

For example, you wouldn’t think of reading The Godfather to ward off loneliness. But Fletcher said it can have this effect, partly through its use of a specific operatic technique. In Wonderworks, Fletcher explains how some operas feature a period of dissonant and turbulent music that is eventually resolved by a sweet harmony.

“The clashing and discordant music is upsetting, but then the sweet relief of harmony comes and releases dopamine in our brain, bonding us to the music,” he said.

“Puzo does the same thing in The Godfather, by creating chaos and tension in a chapter and then just partly resolving it at the end, giving us this partial dopamine rush that bonds us to the characters and to the story and makes us feel like they are friends.”

And even though it may not be good to be friends with gangsters in real life, the dopamine rush that we get from befriending the Corleone family can help ward off loneliness, he said.

If you’re reading stories like The Godfather while isolated during the COVID-19 pandemic, it may even help ease the transition back to normal life when the world opens back up.

Neuroscientists have discovered that a part of the brain, called the dorsal raphe nucleus, helps us make friends, Fletcher said. It contains a cluster of dopamine neurons that are primed for short periods of loneliness and stand ready to encourage us to be sociable when we again meet people.

But if our isolation lasts weeks or months, like during the pandemic, that priming fades and our brain hunkers down in isolation – making it harder to re-connect with people.

“So what The Godfather and other stories can do is wake up the dorsal raphe nucleus and make it easier to rejoin society when the pandemic is over,” he explained.

Fletcher said the use of operatic techniques in The Godfather is just one example of how literature can be a form of technology.

And he hopes more people will want to figure out how these technological tools in literature really work in our brains.

“The idea behind the book is to give you a different way of reading, one that unlocks the extraordinary power of literature to heal your brain, give you more joy, more courage, whatever you need in your life.”

You can order “Wonderworks: The 25 Most Powerful Inventions in the History of Literature” from this Simon & Schuster (publisher) webpage,

A brilliant examination of literary inventions through the ages, from ancient Mesopotamia to Elena Ferrante, that shows how writers have created technical breakthroughs—rivaling any scientific inventions—and engineering enhancements to the human heart and mind.

Literature is a technology like any other. And the writers we revere—from Homer, Shakespeare, Austen, and others—each made a unique technical breakthrough that can be viewed as both a narrative and neuroscientific advancement. Literature’s great invention was to address problems we could not solve: not how to start a fire or build a boat, but how to live and love; how to maintain courage in the face of death; how to account for the fact that we exist at all.

Wonderworks reviews the blueprints for twenty-five of the most powerful developments in the history of literature. These inventions can be scientifically shown to alleviate grief, trauma, loneliness, anxiety, numbness, depression, pessimism, and ennui—all while sparking creativity, courage, love, empathy, hope, joy, and positive change. They can be found all throughout literature—from ancient Chinese lyrics to Shakespeare’s plays, poetry to nursery rhymes and fairy tales, and crime novels to slave narratives.

An easy-to-understand exploration of the new literary field of story science, Wonderworks teaches you everything you wish you learned in your English class. Based on author Angus Fletcher’s own research, it is an eye-opening and thought-provoking work that offers us a new understanding of the power of literature.

Should you be interested in Project Narrative, it can be found here.

Bio and neuro inspiration at Metro Vancouver’s (Canada) 2020 Zero Waste Conference (ZWC)

For anyone not familiar with Metro Vancouver (and before I launch into the 2020 Zero Waste conference [ZWC] news and discuss why this year is particularly interesting [to me, anyway]), here’s a description from the Metro Vancouver About Us webpage,

Metro Vancouver is a federation of 21 municipalities [including Vancouver, Canada], one Electoral Area and one Treaty First Nation that collaboratively plans for and delivers regional-scale services. Its core services are drinking water, wastewater treatment and solid waste management. Metro Vancouver also regulates air quality, plans for urban growth, manages a regional parks system and provides affordable housing. The regional district is governed by a Board of Directors of elected officials from each local authority.

2020 Zero Waste Conference (ZWC) celebrates 10 years?

Apparently, the organizers are planning some limited in-person participation for the 2020 edition of the Zero Waste conference (from the Aug. 7, 2020 ZWC blog posting). Note: Pay special attention to the second sentence in the first paragraph,

For the past 10 years, Metro Vancouver’s annual Zero Waste Conference has been at the forefront of Canada’s journey into the circular economy. This year, we are pleased to keep the engagement going online and with an in-person option for a limited number of participants (more to come).

The 2020 Zero Waste Conference promises the same insightful programming we’ve provided over the past decade, but in a new, virtual format. For the first time, conference participants will be able to hear from and connect with the thought leaders, innovators and change agents working to advance waste prevention and the circular economy in Canada – all from the comfort of their own homes or offices.

The COVID-19 pandemic and ongoing public health response may have resulted in some near-term setbacks for the zero waste movement. However, as we work together to ‘Build Back Better,’ it is essential that we critically examine our society’s relationships with products, packaging and waste, and garner the courage to create systems and build infrastructure that will enable a transition to a circular and zero waste economy, creating solutions that combine economic opportunity with benefits to wider society and the environment.

We are living through an era of unprecedented change and transformation. How do we apply our creativity and knowledge to craft a future for Canada that embraces new materials, new ways of doing business and new policies that not only prevent waste and promote circularity, but that help us move toward a more sustainable, healthy and equitable future?

We look forward to highlighting some of the best ideas from the last 10 years and presenting pioneering solutions that take us to a future most of us have only begun to dare dream is possible.

I imagine the option for in-person participation is contingent on the COVID-19 situation in the province of British Columbia and, specifically, the Metro Vancouver region. At the time of this writing, the number of cases in the province is rising steadily, again.

As for the question mark in the heading for this subsection: it's unusual for an organization not to make a big fuss over its 10th annual [anything], which leads me to wonder why.

Now, onto the item that sparked my interest in the 2020 ZWC.

Suzanne Lee and growing your clothes

Here’s the August 27, 2020 ZWC notice (received via email) announcing a speaker’s proposed new paradigm for fashion,

Growing a New Paradigm:
Biofabrication Pioneer Suzanne Lee at #ZWC20

The textiles & fashion industry is one of the biggest polluters on earth, accounting for a staggering amount of carbon emissions, water consumption and ocean microplastics.

But what if we could produce durable and beautiful clothes with far less pollution and waste, using the processes at the heart of life itself?

We are pleased to welcome Suzanne Lee, material innovator and founder of Biofabricate, as morning keynote for the “Next Generation Materials” session.

“Biofabrication” uses microscopic organisms to reinvent the way we make everything from clothes to couches to buildings, and holds the promise for radically cutting emissions and eliminating waste.

Join us at the 2020 Zero Waste Conference to hear how Suzanne Lee and her colleagues are using fungi, bacteria, yeast and algae to revolutionize the fashion world from the ground up.

As Suzanne Lee says,

“Once you realize that these materials are better for the planet, animals and us, why would we go back to the toxic, polluting materials of the past?”

Join us on Friday, November 13th for the next phase of Canada’s zero waste journey.

Registration is now open for the 2020 Zero Waste Conference

REGISTER NOW

I haven’t stumbled across Lee’s work in the last few years but between 2010 and 2014, I featured her work here three times:

You can find out more about Suzanne Lee and her work here (Note: This website seems to consist of a single page with links to other sites associated with Lee) and you can find out more about Lee’s latest company, Biofabricate here.

ZWC 2020 opening keynote address from a ‘neuro guy’

I’ve not come across Dr. Beau Lotto before but according to an August 18, 2020 posting on the ZWC blog, he’s giving the opening keynote address,

Embracing Uncertainty to Spark Innovation – ZWC20 Keynote Beau Lotto

We find ourselves amid uncertain times, and for those of us passionate about systems change and innovation, these are also times of great opportunity. But how exactly do we meet goals like advancing waste prevention and expanding the circular economy in the face of all this uncertainty?

To help answer that question, we’re pleased to introduce you to this year’s Zero Waste Conference opening keynote: Dr. Beau Lotto.

Frontiers in Science of Uncertainty

#ZWC20 Keynote Beau Lotto is no stranger to uncertainty – in fact, that is his main focus as a neuroscientist and entrepreneur.

Through his presentations (including three TED Talks), masterclasses and a proprietary form of consultancy built on “experiential experiments,” Dr. Lotto teaches organizations and individuals how to apply scientific truths about perception to adapt and thrive in an ever-changing world.

His work probes how the human mind deals with the unknown and reveals fascinating and actionable implications for creativity, courage, emotional well-being and social connections.

Unlocking Our Creativity

How do we use the upheaval represented by COVID-19 as an opportunity to build back a more equitable and sustainable future?

The key, as Dr. Lotto said in a recent podcast interview, is to embrace uncertainty:

“Uncertainty is the only place you can go if you’re ever going to see differently; the only place you can go if you’re going to be creative.”

As a researcher well versed in the circular economy and the challenges associated with global systems change, Beau Lotto brings a deep understanding of the importance of risk-taking and innovation.

We are pleased to welcome Dr. Lotto to #ZWC20 to set the stage and inspire us to embrace uncertainty and to step forward toward the future we want to bring about.  

How we proceed as a region – indeed, as a province, a country and continent – to address issues affecting our economy, environment and social make-up depends on our collective ability to be creative, innovative, and on our willingness to protect and nurture our communities.

We hope you will join us in the next phase of Canada’s zero waste journey.

You can find out more about Dr. Beau Lotto here.

This promotional video is composed largely of clips from various talks. He’s a dynamic speaker rather than a quiet one,

Interesting, eh?

You can find out more about Metro Vancouver’s 2020 Zero Waste Conference here.

Repairing brain circuits using nanotechnology

A July 30, 2019 news item on Nanowerk announces some neuroscience research (they used animal models) that could prove helpful with neurodegenerative diseases,

Working with mouse and human tissue, Johns Hopkins Medicine researchers report new evidence that a protein pumped out of some — but not all — populations of “helper” cells in the brain, called astrocytes, plays a specific role in directing the formation of connections among neurons needed for learning and forming new memories.

Using mice genetically engineered and bred with fewer such connections, the researchers conducted proof-of-concept experiments that show they could deliver corrective proteins via nanoparticles to replace the missing protein needed for “road repairs” on the defective neural highway.

Since such connective networks are lost or damaged by neurodegenerative diseases such as Alzheimer’s or certain types of intellectual disability, such as Norrie disease, the researchers say their findings advance efforts to regrow and repair the networks and potentially restore normal brain function.

A July 30, 2019 Johns Hopkins University School of Medicine news release (also on EurekAlert) provides more detail about the work (Note: A link has been removed),

“We are looking at the fundamental biology of how astrocytes function, but perhaps have discovered a new target for someday intervening in neurodegenerative diseases with novel therapeutics,” says Jeffrey Rothstein, M.D., Ph.D., the John W. Griffin Director of the Brain Science Institute and professor of neurology at the Johns Hopkins University School of Medicine.

“Although astrocytes appear to all look alike in the brain, we had an inkling that they might have specialized roles in the brain due to regional differences in the brain’s function and because of observed changes in certain diseases,” says Rothstein. “The hope is that learning to harness the individual differences in these distinct populations of astrocytes may allow us to direct brain development or even reverse the effects of certain brain conditions, and our current studies have advanced that hope.”

In the brain, astrocytes are the support cells that act as guides to direct new cells, promote chemical signaling, and clean up byproducts of brain cell metabolism.

Rothstein’s team focused on a particular astrocyte protein, glutamate transporter-1, which previous studies suggested was lost from astrocytes in certain parts of brains with neurodegenerative diseases. Like a biological vacuum cleaner, the protein normally sucks up the chemical “messenger” glutamate from the spaces between neurons after a message is sent to another cell, a step required to end the transmission and prevent toxic levels of glutamate from building up.

When these glutamate transporters disappear from certain parts of the brain — such as the motor cortex and spinal cord in people with amyotrophic lateral sclerosis (ALS) — glutamate hangs around much too long, sending messages that overexcite and kill the cells.

To figure out how the brain decides which cells need the glutamate transporters, Rothstein and colleagues focused on the region of DNA in front of the gene that typically controls the on-off switch needed to manufacture the protein. They genetically engineered mice to glow red in every cell where the gene is activated.

Normally, the glutamate transporter is turned on in all astrocytes. But, by using between 1,000- and 7,000-bit segments of DNA code from the on-off switch for glutamate, all the cells in the brain glowed red, including the neurons. It wasn’t until the researchers tried the largest sequence of an 8,300-bit DNA code from this location that the researchers began to see some selection in red cells. These red cells were all astrocytes but only in certain layers of the brain’s cortex in mice.

Because they could identify these “8.3 red astrocytes,” the researchers thought they might have a specific function different than other astrocytes in the brain. To find out more precisely what these 8.3 red astrocytes do in the brain, the researchers used a cell-sorting machine to separate the red astrocytes from the uncolored ones in mouse brain cortical tissue, and then identified which genes were turned on to much higher than usual levels in the red compared to the uncolored cell populations. The researchers found that the 8.3 red astrocytes turn on high levels of a gene that codes for a different protein known as Norrin.

Rothstein’s team took neurons from normal mouse brains, treated them with Norrin, and found that those neurons grew more of the “branches” — or extensions — used to transmit chemical messages among brain cells. Then, Rothstein says, the researchers looked at the brains of mice engineered to lack Norrin, and saw that these neurons had fewer branches than in healthy mice that made Norrin.

In another set of experiments, the research team took the DNA code for Norrin plus the 8,300 “location” DNA and assembled them into deliverable nanoparticles. When they injected the Norrin nanoparticles into the brains of mice engineered without Norrin, the neurons in these mice began to quickly grow many more branches, a process suggesting repair to neural networks. They repeated these experiments with human neurons too.

Rothstein notes that mutations in the Norrin protein that reduce levels of the protein in people cause Norrie disease — a rare, genetic disorder that can lead to blindness in infancy and intellectual disability. Because the researchers were able to grow new branches for communication, they believe it may one day be possible to use Norrin to treat some types of intellectual disabilities such as Norrie disease.

For their next steps, the researchers are investigating if Norrin can repair connections in the brains of animal models with neurodegenerative diseases, and in preparation for potential success, Miller [sic] and Rothstein have submitted a patent for Norrin.

Here’s a link to and a citation for the paper,

Molecularly defined cortical astroglia subpopulation modulates neurons via secretion of Norrin by Sean J. Miller, Thomas Philips, Namho Kim, Raha Dastgheyb, Zhuoxun Chen, Yi-Chun Hsieh, J. Gavin Daigle, Malika Datta, Jeannie Chew, Svetlana Vidensky, Jacqueline T. Pham, Ethan G. Hughes, Michael B. Robinson, Rita Sattler, Raju Tomer, Jung Soo Suk, Dwight E. Bergles, Norman Haughey, Mikhail Pletnikov, Justin Hanes & Jeffrey D. Rothstein. Nature Neuroscience volume 22, pages 741–752 (2019) DOI: https://doi.org/10.1038/s41593-019-0366-7 Published: 01 April 2019 Issue Date: May 2019

This paper is behind a paywall.

September 2019’s science’ish’ events in Toronto and Vancouver (Canada)

There are movies, plays, and a multimedia installation experience, all in Vancouver, as well as the ‘CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/science/techne/philosophy’ event in Toronto. But first, there’s a Vancouver talk about engaging scientists in the upcoming federal election.

Science in the Age of Misinformation (and the upcoming federal election) in Vancouver

Dr. Katie Gibbs, co-founder and executive director of Evidence for Democracy, will be giving a talk today (Sept. 4, 2019) at the University of British Columbia (UBC; Vancouver). From the Eventbrite webpage for Science in the Age of Misinformation,

Science in the Age of Misinformation, with Katie Gibbs, Evidence for Democracy
In the lead up to the federal election, it is more important than ever to understand the role that researchers play in shaping policy. Join us in this special Policy in Practice event with Dr. Katie Gibbs, Executive Director of Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. A Musqueam land acknowledgement, welcome remarks and moderation of this event will be provided by MPPGA students Joshua Tafel, and Chengkun Lv.

Wednesday, September 4, 2019
12:30 pm – 1:50 pm (Doors will open at noon)
Liu Institute for Global Issues – xʷθəθiqətəm (Place of Many Trees), 1st floor
Pizza will be provided starting at noon on a first-come, first-served basis. Please RSVP.

What role do researchers play in a political environment that is increasingly polarized and influenced by misinformation? Dr. Katie Gibbs, Executive Director of Evidence for Democracy, will give an overview of the current state of science integrity and science policy in Canada highlighting progress made over the past four years and what this means in a context of growing anti-expert movements in Canada and around the world. Dr. Gibbs will share concrete ways for researchers to engage heading into a critical federal election [emphasis mine], and how they can have lasting policy impact.

Bio: Katie Gibbs is a scientist, organizer and advocate for science and evidence-based policies. While completing her Ph.D. at the University of Ottawa in Biology, she was one of the lead organizers of the ‘Death of Evidence’—one of the largest science rallies in Canadian history. Katie co-founded Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. Her ongoing success in advocating for the restoration of public science in Canada has made Katie a go-to resource for national and international media outlets including Science, The Guardian and the Globe and Mail.

Katie has also been involved in international efforts to increase evidence-based decision-making and advises science integrity movements in other countries and is a member of the Open Government Partnership Multi-stakeholder Forum.

Disclaimer: Please note that by registering via Eventbrite, your information will be stored on the Eventbrite server, which is located outside Canada. If you do not wish to use this service, please email Joelle.Lee@ubc.ca directly to register. Thank you.

Location
Liu Institute for Global Issues – Place of Many Trees
6476 NW Marine Drive
Vancouver, British Columbia V6T 1Z2

Sadly, I was not able to post the information about Dr. Gibbs’s more informal talk last night (Sept. 3, 2019), which was a special event with Café Scientifique, but I do have a link to a website encouraging anyone who wants to help get science on the 2019 federal election agenda: Vote Science. P.S. I’m sorry I wasn’t able to post this in a more timely fashion.

Transmissions; a multimedia installation in Vancouver, September 6 -28, 2019

Here’s a description for the multimedia installation, Transmissions, in the August 28, 2019 Georgia Straight article by Janet Smith,

Lisa Jackson is a filmmaker, but she’s never allowed that job description to limit what she creates or where and how she screens her works.

The Anishinaabe artist’s breakout piece was last year’s haunting virtual-reality animation Biidaaban: First Light. In its eerie world, one that won a Canadian Screen Award, nature has overtaken a near-empty, future Toronto, with trees growing through cracks in the sidewalks, vines enveloping skyscrapers, and people commuting by canoe.

All that and more has brought her here, to Transmissions, a 6,000-square-foot, immersive film installation that invites visitors to wander through windy coastal forests, by hauntingly empty glass towers, into soundscapes of ancient languages, and more.

Through the labyrinthine multimedia work at SFU [Simon Fraser University] Woodward’s, Jackson asks big questions—about Earth’s future, about humanity’s relationship to it, and about time and Indigeneity.

Simultaneously, she mashes up not just disciplines like film and sculpture, but concepts of science, storytelling, and linguistics [emphasis mine].

“The tag lines I’m working with now are ‘the roots of meaning’ and ‘knitting the world together’,” she explains. “In western society, we tend to hive things off into ‘That’s culture. That’s science.’ But from an Indigenous point of view, it’s all connected.”

Transmissions is split into three parts, with what Jackson describes as a beginning, a middle, and an end. Like Biidaaban, it’s also visually stunning: the artist admits she’s playing with Hollywood spectacle.

Without giving too much away—a big part of the appeal of Jackson’s work is the sense of surprise—Vancouver audiences will first enter a 48-foot-long, six-foot-wide tunnel, surrounded by projections that morph from empty urban streets to a forest and a river. Further engulfing them is a soundscape that features strong winds, while black mirrors along the floor skew perspective and play with what’s above and below ground.

“You feel out of time and space,” says Jackson, who wants to challenge western society’s linear notions of minutes and hours. “I want the audience to have a physical response and an emotional response. To me, that gets closer to the Indigenous understanding. Because the Eurocentric way is more rational, where the intellectual is put ahead of everything else.”

Viewers then enter a room, where the highly collaborative Jackson has worked with artist Alan Storey, who’s helped create Plexiglas towers that look like the ghost high-rises of an abandoned city. (Storey has also designed other components of the installation.) As audience members wander through them on foot, projections make their shadows dance on the structures. Like Biidaaban, the section hints at a postapocalyptic or posthuman world. Jackson operates in an emerging realm of Indigenous futurism.

The words “science, storytelling, and linguistics” were emphasized due to a minor problem I have with terminology. Linguistics is defined as the scientific study of language, combining elements from the natural sciences, social sciences, and the humanities. I wish either Jackson or Smith had discussed the scientific element of Transmissions at more length and perhaps reconnected linguistics to science, along with the physics of time and space, as well as storytelling, film, and sculpture. It would have been helpful since, as I understand it, Transmissions is designed to showcase all of those connections and more in ways that may not be obvious to everyone. On the plus side, perhaps the tour, which is part of this installation experience, includes that information.

I have a bit more detail (including logistics for the tours) from the SFU Events webpage for Transmissions,

Transmissions
September 6 – September 28, 2019

The Roots of Meaning
World Premiere
September 6 – 28, 2019

Fei & Milton Wong Experimental Theatre
SFU Woodward’s, 149 West Hastings
Tuesday to Friday, 1pm to 7pm
Saturday and Sunday, 1pm to 5pm
FREE

In partnership with SFU Woodward’s Cultural Programs and produced by Electric Company Theatre and Violator Films.

TRANSMISSIONS is a three-part, 6000 square foot multimedia installation by award-winning Anishinaabe filmmaker and artist Lisa Jackson. It extends her investigation into the connections between land, language, and people, most recently with her virtual reality work Biidaaban: First Light.

Projections, sculpture, and film combine to create urban and natural landscapes that are eerie and beautiful, familiar and foreign, concrete and magical. Past and future collide in a visceral and thought-provoking journey that questions our current moment and opens up the complexity of thought systems embedded in Indigenous languages. Radically different from European languages, they embody sets of relationships to the land, to each other, and to time itself.

Transmissions invites us to untether from our day-to-day world and imagine a possible future. It provides a platform to activate and cross-pollinate knowledge systems, from science to storytelling, ecology to linguistics, art to commerce. To begin conversations, to listen deeply, to engage varied perspectives and expertise, to knit the world together and find our place within the circle of all our relations.

Produced in association with McMaster University Socrates Project, Moving Images Distribution and Cobalt Connects Creativity.

….

Admission:  Free Public Tours
Tuesday through Sunday
Reservations accepted from 1pm to 3pm.  Reservations are booked in 15 minute increments.  Individuals and groups up to 10 welcome.
Please email: sfuw@sfu.ca for more information or to book groups of 10 or more.

Her Story: Canadian Women Scientists (short film subjects); Sept. 13 – 14, 2019

Curiosity Collider, producer of art/science events in Vancouver, is presenting a film series featuring Canadian women scientists, according to an August 27, 2019 press release (received via email),

“Her Story: Canadian Women Scientists,” a film series dedicated to sharing the stories of Canadian women scientists, will premiere on September 13th and 14th at the Annex theatre. Four pairs of local filmmakers and Canadian women scientists collaborated to create 5-6 minute videos; for each film in the series, a scientist tells her own story, interwoven with the story of an inspiring Canadian woman scientist who came before her in her field of study.

Produced by Vancouver-based non-profit organization Curiosity Collider, this project was developed to address the lack of storytelling videos showcasing remarkable women scientists and their work available via popular online platforms. “Her Story reveals the lives of women working in science,” said Larissa Blokhuis, curator for Her Story. “This project acts as a beacon to girls and women who want to see themselves in the scientific community. The intergenerational nature of the project highlights the fact that women have always worked in and contributed to science.”

This sentiment was reflected by Samantha Baglot as well, a PhD student in neuroscience who collaborated with filmmaker/science cartoonist Armin Mortazavi in Her Story. “It is empowering to share stories of previous Canadian female scientists… it is empowering for myself as a current female scientist to learn about other stories of success, and gain perspective of how these women fought through various hardships and inequality.”

When asked why seeing better representation of women in scientific work is important, artist/filmmaker Michael Markowsky shared his thoughts. “It’s important for women — and their male allies — to question and push back against these perceived social norms, and to occupy space which rightfully belongs to them.” In fact, his wife just gave birth to their first child, a daughter; “It’s personally very important to me that she has strong female role models to look up to.” His film will feature collaborating scientist Jade Shiller, and Kathleen Conlan – who was named one of Canada’s greatest explorers by Canadian Geographic in 2015.

Other participating filmmakers and collaborating scientists include: Leslie Kennah (Filmmaker), Kimberly Girling (scientist, Research and Policy Director at Evidence for Democracy), Lucas Kavanagh and Jesse Lupini (Filmmakers, Avocado Video), and Jessica Pilarczyk (SFU Assistant Professor, Department of Earth Sciences).

This film series is supported by Westcoast Women in Engineering, Science and Technology (WWEST) and Eng.Cite. The venue for the events is provided by Vancouver Civic Theatres.

Event Information

Screening events will be hosted at Annex (823 Seymour St, Vancouver) on September 13th and 14th [2019]. Events will also include a talkback with filmmakers and collab scientists on the 13th, and a panel discussion on representations of women in science and culture on the 14th. Visit http://bit.ly/HerStoryTickets2019 for tickets ($14.99-19.99) and http://bit.ly/HerStoryWomenScientists for project information.

I have a film collage,

Courtesy: Curiosity Collider

It looks like they’re presenting films with a diversity of styles. You can find out more about Curiosity Collider and its various programmes and events here.

Vancouver Fringe Festival September 5 – 16, 2019

I found two plays in this year’s fringe festival programme that feature science in one way or another. Not having seen either play I make no guarantees as to content. First up is,

AI Love You
Exit Productions
London, UK
Playwright: Melanie Anne Ball
exitproductionsltd.com

Adam and April are a regular 20-something couple, very nearly blissfully generic, aside from one important detail: one of the pair is an “artificially intelligent companion.” Their joyful veneer has begun to crack and they need YOU to decide the future of their relationship. Is the freedom of a robot or the will of a human more important?
For AI Love You: 

***** “Magnificent, complex and beautifully addictive.” —Spy in the Stalls 
**** “Emotionally charged, deeply moving piece … I was left with goosebumps.” —West End Wilma 
**** —London City Nights 
Past shows: 
***** “The perfect show.” —Theatre Box

Intellectual / Intimate / Shocking / 14+ / 75 minutes

The first show is on Friday, September 6, 2019 at 5 pm. There are another five showings being presented. You can get tickets and more information here.

The second play is this,

Red Glimmer
Dusty Foot Productions
Vancouver, Canada
Written & Directed by Patricia Trinh

Abstract Sci-Fi dramedy. An interdimensional science experiment! Woman involuntarily takes an all inclusive internal trip after falling into a deep depression. A scientist is hired to navigate her neurological pathways from inside her mind – tackling the fact that humans cannot physically re-experience somatosensory sensation, like pain. What if that were the case for traumatic emotional pain? A creepy little girl is heard running by. What happens next?

Weird / Poetic / Intellectual / LGBTQ+ / Multicultural / 14+ / Sexual Content / 50 minutes

This show is created by an underrepresented Artist.
Written, directed, and produced by local theatre Artist Patricia Trinh, a Queer, Asian-Canadian female.

The first showing is tonight, September 5, 2019 at 8:30 pm. There are another six showings being presented. You can get tickets and more information here.

CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/science/techne/philosophy, 28 September, 2019 in Toronto

An Art/Sci Salon September 2, 2019 announcement (received via email), Note: I have made some formatting changes,

CHAOSMOSIS mAchInes

28 September, 2019 
7pm-11pm.
Helen-Gardiner-Phelan Theatre, 2nd floor
University of Toronto. 79 St. George St.

A playful co-presentation by the Topological Media Lab (Concordia U-Montreal) and The Digital Dramaturgy Labsquared (U of T-Toronto). This event is part of our collaboration with DDLsquared lab, the Topological Lab and the Leonardo LASER network


7pm-9.30pm, Installation-performances, 
9.30pm-11pm, Reception and cash bar, Front and Long Room, Ground floor


Description:
From responsive sculptures to atmosphere-creating machines; from sensorial machines to affective autonomous robots, Chaosmosis mAchInes is an eclectic series of installations and performances reflecting on today’s complex symbiotic relations between humans, machines and the environment.


This will be the first encounter between Montreal-based Topological Media Lab (Concordia University) and the Toronto-based Digital Dramaturgy Labsquared (U of T) to co-present current process-based and experimental works. Both labs have a history of notorious playfulness, conceptual abysmal depth, human-machine interplays, Art&Science speculations (what if?), collaborative messes, and a knack for A/I as in Artistic Intelligence.


Thanks to  Nina Czegledy (Laser series, Leonardo network) for inspiring the event and for initiating the collaboration


Visit our Facebook event page 
Register through Eventbrite


Supported by


Main sponsor: Centre for Drama, Theatre and Performance Studies, U of T
Sponsors: Computational Arts Program (York U.), Cognitive Science Program (U of T), Knowledge Media Design Institute (U of T), Institute for the History and Philosophy of Science and Technology (IHPST), Fonds de Recherche du Québec – Société et culture (FRQSC), The Centre for Comparative Literature (U of T)
A collaboration between
Laser events, Leonardo networks – Science Artist, Nina Czegledy
ArtsSci Salon – Artistic Director, Roberta Buiani
Digital Dramaturgy Labsquared – Creative Research Director, Antje Budde
Topological Media Lab – Artistic-Research Co-directors, Michael Montanaro | Navid Navab


Project presentations will include:
Topological Media Lab
tangibleFlux φ plenumorphic ∴ chaosmosis
SPIEL
On Air
The Sound That Severs Now from Now
Cloud Chamber (2018) | Caustic Scenography, Responsive Cloud Formation
Liquid Light
Robots: Machine Menagerie
Phaze
Phase
Passing Light
Info projects
Digital Dramaturgy Labsquared
Btw Lf & Dth – interFACING disappearance
Info project

This is a very active September.

ETA September 4, 2019 at 1607 hours PDT: That last comment is even truer than I knew when I published earlier. I missed a Vancouver event: Maker Faire Vancouver will be hosted at Science World on Saturday, September 14, 2019. Here’s a little more about it from a Sept. 3, 2019 Science World at Telus Science World blog posting,

Earlier last month [August 2019?], surgeons at St Paul’s Hospital performed an ankle replacement for a Cloverdale resident using a 3D printed bone. The first procedure of its kind in Western Canada, it saved the patient all of his ten toes — something doctors had originally decided to amputate due to the severity of the motorcycle accident.

Maker Faire Vancouver Co-producer, John Biehler, may not be using his 3D printer for medical breakthroughs, but he does see a subtle connection between his home 3D printer and the Health Canada-approved bone.

“I got into 3D printing to make fun stuff and gadgets,” John says of the box-sized machine that started as a hobby and turned into a side business. “But the fact that the very same technology can have life-changing and life-saving applications is amazing.”

When John showed up to Maker Faire Vancouver seven years ago, opportunities to access this hobby were limited. Armed with a 3D printer he had just finished assembling the night before, John was hoping to meet others in the community with similar interests to build, experiment and create. Much like the increase in accessibility to these portable machines has changed over the years—with universities, libraries and makerspaces making them readily available alongside CNC Machines, laser cutters and more — John says the excitement around crafting and tinkering has skyrocketed as well.

“The kind of technology that inspires people to print a bone or spinal insert all starts at ground zero in places like a Maker Faire where people get exposed to STEAM,” John says …

… From 3D printing enthusiasts like John to knitters, metal artists and roboticists, this full one-day event [Maker Faire Vancouver on Saturday, September 14, 2019] will facilitate cross-pollination between hobbyists, small businesses, artists and tinkerers. Described as part science fair, part county fair and part something entirely new, Maker Faire Vancouver hopes to facilitate discovery and what John calls “pure joy moments.”

Hopefully that’s it.

Human Brain Project: update

The European Union’s Human Brain Project was announced in January 2013. It, along with the Graphene Flagship, had won a multi-year competition for the extraordinary sum of one billion euros each, to be paid out over a 10-year period. (My January 28, 2013 posting gives the details available at the time.)

At a little more than half-way through the project period, Ed Yong, in his July 22, 2019 article for The Atlantic, offers an update (of sorts),

Ten years ago, a neuroscientist said that within a decade he could simulate a human brain. Spoiler: It didn’t happen.

On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.” …

It’s been exactly 10 years. He did not succeed.

One could argue that the nature of pioneers is to reach far and talk big, and that it’s churlish to single out any one failed prediction when science is so full of them. (Science writers joke that breakthrough medicines and technologies always seem five to 10 years away, on a rolling window.) But Markram’s claims are worth revisiting for two reasons. First, the stakes were huge: In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time). Second, the HBP’s efforts, and the intense backlash to them, exposed important divides in how neuroscientists think about the brain and how it should be studied.

Markram’s goal wasn’t to create a simplified version of the brain, but a gloriously complex facsimile, down to the constituent neurons, the electrical activity coursing along them, and even the genes turning on and off within them. From the outset, the criticism of this approach was widespread, and to many other neuroscientists, its bottom-up strategy seemed implausible to the point of absurdity. The brain’s intricacies—how neurons connect and cooperate, how memories form, how decisions are made—are more unknown than known, and couldn’t possibly be deciphered in enough detail within a mere decade. It is hard enough to map and model the 302 neurons of the roundworm C. elegans, let alone the 86 billion neurons within our skulls. “People thought it was unrealistic and not even reasonable as a goal,” says the neuroscientist Grace Lindsay, who is writing a book about modeling the brain.
And what was the point? The HBP wasn’t trying to address any particular research question, or test a specific hypothesis about how the brain works. The simulation seemed like an end in itself—an overengineered answer to a nonexistent question, a tool in search of a use. …

Markram seems undeterred. In a recent paper, he and his colleague Xue Fan firmly situated brain simulations within not just neuroscience as a field, but the entire arc of Western philosophy and human civilization. And in an email statement, he told me, “Political resistance (non-scientific) to the project has indeed slowed us down considerably, but it has by no means stopped us nor will it.” He noted the 140 people still working on the Blue Brain Project, a recent set of positive reviews from five external reviewers, and its “exponentially increasing” ability to “build biologically accurate models of larger and larger brain regions.”

No time frame, this time, but there’s no shortage of other people ready to make extravagant claims about the future of neuroscience. In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. …

I’m happy to see the update. As I recall, there was murmuring almost immediately about the Human Brain Project (HBP). I never got details but it seemed that people were quite actively unhappy about the disbursements. Of course, this kind of uproar is not unusual when great sums of money are involved and the Graphene Flagship also had its rocky moments.

As for Yong’s contribution, I’m glad he’s debunking some of the hype and glory associated with the current drive to colonize the human brain and other efforts (e.g. genetics) which they often claim are the ‘future of medicine’.

To be fair, Yong is focused on the brain simulation aspect of the HBP (and Markram’s efforts in the Blue Brain Project), but there are other HBP efforts as well, even if brain simulation seems to be the HBP’s main interest.

After reading the article, I looked up Henry Markram’s Wikipedia entry and found this,

In 2013, the European Union funded the Human Brain Project, led by Markram, to the tune of $1.3 billion. Markram claimed that the project would create a simulation of the entire human brain on a supercomputer within a decade, revolutionising the treatment of Alzheimer’s disease and other brain disorders. Less than two years into it, the project was recognised to be mismanaged and its claims overblown, and Markram was asked to step down.[7][8]

On 8 October 2015, the Blue Brain Project published the first digital reconstruction and simulation of the micro-circuitry of a neonatal rat somatosensory cortex.[9]

I also looked up the Human Brain Project and, speaking of their other efforts, was reminded that they have a neuromorphic computing platform, SpiNNaker (mentioned here in a January 24, 2019 posting; scroll down about 50% of the way). For anyone unfamiliar with the term, neuromorphic computing/engineering is what scientists call the effort to replicate the human brain’s ability to synthesize and process information in computing processors.
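The “spikes” these machines exchange come from simplified neuron models. As a rough illustration (my own sketch, with arbitrary parameter values; real SpiNNaker models are typically written with tools such as PyNN rather than raw Python), here is a minimal leaky integrate-and-fire neuron:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward a resting value, is driven up by input current, and emits
# a "spike" whenever it crosses a threshold, then resets.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Return the time steps at which the neuron spikes.

    input_current: one current value (nA) per time step.
    dt and tau in ms, potentials in mV, resistance in megohms: toy values.
    """
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # Leaky integration: decay toward rest plus input drive.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step)  # spike!
            v = v_reset               # reset after spiking
    return spike_times

print(simulate_lif([2.0] * 100))  # steady input -> regular spiking
print(simulate_lif([0.0] * 100))  # no input -> no spikes: []
```

Updating millions of little equations like this, neuron by neuron and time step by time step, is essentially what the digital machines do, which is why execution speed is such a selling point for the hardware described below.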

In fact, there was some discussion in 2013 that the Human Brain Project and the Graphene Flagship would have some crossover projects, e.g., trying to make computers more closely resemble human brains in terms of energy use and processing power.

The Human Brain Project’s (HBP) Silicon Brains webpage notes this about their neuromorphic computing platform,

Neuromorphic computing implements aspects of biological neural networks as analogue or digital copies on electronic circuits. The goal of this approach is twofold: Offering a tool for neuroscience to understand the dynamic processes of learning and development in the brain and applying brain inspiration to generic cognitive computing. Key advantages of neuromorphic computing compared to traditional approaches are energy efficiency, execution speed, robustness against local failures and the ability to learn.

Neuromorphic Computing in the HBP

In the HBP the neuromorphic computing Subproject carries out two major activities: Constructing two large-scale, unique neuromorphic machines and prototyping the next generation neuromorphic chips.

The large-scale neuromorphic machines are based on two complementary principles. The many-core SpiNNaker machine located in Manchester [emphasis mine] (UK) connects 1 million ARM processors with a packet-based network optimized for the exchange of neural action potentials (spikes). The BrainScaleS physical model machine located in Heidelberg (Germany) implements analogue electronic models of 4 Million neurons and 1 Billion synapses on 20 silicon wafers. Both machines are integrated into the HBP collaboratory and offer full software support for their configuration, operation and data analysis.

The most prominent feature of the neuromorphic machines is their execution speed. The SpiNNaker system runs at real-time; BrainScaleS is implemented as an accelerated system and operates at 10,000 times real-time. Simulations on conventional supercomputers typically run factors of 1000 slower than biology and cannot access the vastly different timescales involved in learning and development, ranging from milliseconds to years.

Recent research in neuroscience and computing has indicated that learning and development are a key aspect for neuroscience and real world applications of cognitive computing. HBP is the only project worldwide addressing this need with dedicated novel hardware architectures.

I’ve highlighted Manchester because that’s a very important city where graphene is concerned. The UK’s National Graphene Institute is housed at the University of Manchester where graphene was first isolated in 2004 by two scientists, Andre Geim and Konstantin (Kostya) Novoselov. (For their effort, they were awarded the Nobel Prize for physics in 2010.)

Getting back to the HBP (and the Graphene Flagship, for that matter), the funding should be drying up sometime around 2023, and I wonder if it will be possible to assess the impact.
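To make the execution-speed comparison in the HBP material above concrete, here's a minimal sketch of my own (not from the HBP), using only the acceleration factors quoted: SpiNNaker at real time, BrainScaleS at 10,000 times real time, and conventional supercomputer simulations about 1,000 times slower than biology.

```python
# How long would one day of biological time take to simulate on each platform?
# Speedup factors are taken from the HBP description quoted above.

BIOLOGICAL_SECONDS = 24 * 60 * 60  # one day of biological time

platforms = {
    "SpiNNaker (real time)": 1.0,
    "BrainScaleS (accelerated)": 10_000.0,
    "Conventional supercomputer": 1.0 / 1_000.0,
}

for name, speedup in platforms.items():
    wall_clock_hours = BIOLOGICAL_SECONDS / speedup / 3600
    print(f"{name}: {wall_clock_hours:,.4f} hours of wall-clock time")
```

The arithmetic makes the press release's point vivid: a day of biological time takes BrainScaleS under nine seconds, while a conventional simulation at a 1,000x slowdown would need roughly a thousand days, which is why developmental timescales of months or years are out of reach for it.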

Controlling neurons with light: no batteries or wires needed

Caption: Wireless and battery-free implant with advanced control over targeted neuron groups. Credit: Philipp Gutruf

This January 2, 2019 news item on ScienceDaily describes the device pictured above and the problem it's designed to solve,

University of Arizona biomedical engineering professor Philipp Gutruf is first author on the paper Fully implantable, optoelectronic systems for battery-free, multimodal operation in neuroscience research, published in Nature Electronics.

Optogenetics is a biological technique that uses light to turn specific neuron groups in the brain on or off. For example, researchers might use optogenetic stimulation to restore movement in case of paralysis or, in the future, to turn off the areas of the brain or spine that cause pain, eliminating the need for — and the increasing dependence on — opioids and other painkillers.

“We’re making these tools to understand how different parts of the brain work,” Gutruf said. “The advantage with optogenetics is that you have cell specificity: You can target specific groups of neurons and investigate their function and relation in the context of the whole brain.”

In optogenetics, researchers load specific neurons with proteins called opsins, which convert light to electrical potentials that make up the function of a neuron. When a researcher shines light on an area of the brain, it activates only the opsin-loaded neurons.

The first iterations of optogenetics involved sending light to the brain through optical fibers, which meant that test subjects were physically tethered to a control station. Researchers went on to develop a battery-free technique using wireless electronics, which meant subjects could move freely.

But these devices still came with their own limitations — they were bulky and often attached visibly outside the skull, they didn’t allow for precise control of the light’s frequency or intensity, and they could only stimulate one area of the brain at a time.

A Dec. 21, 2018 University of Arizona news release (published Jan. 2, 2019 on EurekAlert), which originated the news item, discusses the work in more detail,

“With this research, we went two to three steps further,” Gutruf said. “We were able to implement digital control over intensity and frequency of the light being emitted, and the devices are very miniaturized, so they can be implanted under the scalp. We can also independently stimulate multiple places in the brain of the same subject, which also wasn’t possible before.”

The ability to control the light’s intensity is critical because it allows researchers to control exactly how much of the brain the light is affecting — the brighter the light, the farther it will reach. In addition, controlling the light’s intensity means controlling the heat generated by the light sources, and avoiding the accidental activation of neurons that are activated by heat.

The wireless, battery-free implants are powered by external oscillating magnetic fields, and, despite their advanced capabilities, are not significantly larger or heavier than past versions. In addition, a new antenna design has eliminated a problem faced by past versions of optogenetic devices, in which the strength of the signal being transmitted to the device varied depending on the angle of the subject's head: a subject would turn its head and the signal would weaken.

“This system has two antennas in one enclosure, which we switch the signal back and forth very rapidly so we can power the implant at any orientation,” Gutruf said. “In the future, this technique could provide battery-free implants that provide uninterrupted stimulation without the need to remove or replace the device, resulting in less invasive procedures than current pacemaker or stimulation techniques.”

Devices are implanted with a simple surgical procedure similar to surgeries in which humans are fitted with neurostimulators, or “brain pacemakers.” They cause no adverse effects to subjects, and their functionality doesn’t degrade in the body over time. This could have implications for medical devices like pacemakers, which currently need to be replaced every five to 15 years.

The paper also demonstrated that animals implanted with these devices can be safely imaged with computer tomography, or CT, and magnetic resonance imaging, or MRI, which allow for advanced insights into clinically relevant parameters such as the state of bone and tissue and the placement of the device.
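The release's point that "the brighter the light, the farther it will reach" can be illustrated with a toy model of my own devising: light intensity falls off roughly exponentially with depth in tissue, so the stimulation reach grows with the logarithm of source intensity. The attenuation coefficient and threshold below are placeholder values for illustration, not measured tissue properties or figures from the paper.

```python
import math

# Toy model: intensity decays as I(d) = I0 * exp(-mu * d) with depth d.
# The effective reach is the depth where I(d) drops to the opsin
# activation threshold. Both constants are hypothetical.
ATTENUATION_PER_MM = 2.0  # mu: hypothetical effective attenuation (1/mm)
THRESHOLD = 1.0           # minimum intensity (arbitrary units) to activate opsins

def effective_depth_mm(source_intensity: float) -> float:
    """Depth at which intensity falls to the activation threshold."""
    return math.log(source_intensity / THRESHOLD) / ATTENUATION_PER_MM

# Doubling the source intensity adds a fixed increment of reach,
# so fine intensity control translates directly into spatial control.
for intensity in (10.0, 20.0, 40.0):
    print(f"I0 = {intensity:>5}: reach ~ {effective_depth_mm(intensity):.2f} mm")
```

Under this model, precise digital control of intensity (as the Gutruf device provides) amounts to precise control over how deep into the tissue the stimulation extends, which is the release's point about avoiding both over-reach and heat-driven activation.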

This image of a combined MRI (magnetic resonance image) and CT (computer tomography) scan bookends, more or less, the picture of the device which headed this piece,

Combined image analysis with MRI and CT results superimposed on a 3D rendering of the animal implanted with the programmable bilateral multi µ-ILED device. Courtesy: University of Arizona

Here’s a link to and a citation for the paper,

Fully implantable optoelectronic systems for battery-free, multimodal operation in neuroscience research by Philipp Gutruf, Vaishnavi Krishnamurthi, Abraham Vázquez-Guardado, Zhaoqian Xie, Anthony Banks, Chun-Ju Su, Yeshou Xu, Chad R. Haney, Emily A. Waters, Irawati Kandela, Siddharth R. Krishnan, Tyler Ray, John P. Leshock, Yonggang Huang, Debashis Chanda, & John A. Rogers. Nature Electronics volume 1, pages 652–660 (2018) DOI: https://doi.org/10.1038/s41928-018-0175-0 Published 13 December 2018

This paper is behind a paywall.

Democratizing science … neuroscience that is

What is going on with the neuroscience folks? First it was Montreal Neuro opening up its science, as featured in my January 22, 2016 posting,

The Montreal Neurological Institute (MNI) in Québec, Canada, known informally and widely as Montreal Neuro, has ‘opened’ its science research to the world. David Bruggeman tells the story in a Jan. 21, 2016 posting on his Pasco Phronesis blog (Note: Links have been removed),

The Montreal Neurological Institute (MNI) at McGill University announced that it will be the first academic research institute to become what it calls ‘Open Science.’  As Science is reporting, the MNI will make available all research results and research data at the time of publication.  Additionally it will not seek patents on any of the discoveries made on research at the Institute.

Will this catch on?  I have no idea if this particular combination of open access research data and results with no patents will spread to other university research institutes.  But I do believe that those elements will continue to spread.  More universities and federal agencies are pursuing open access options for research they support.  Elon Musk has opted to not pursue patent litigation for any of Tesla Motors’ patents, and has not pursued patents for SpaceX technology (though it has pursued litigation over patents in rocket technology). …

Whether or not they were inspired by the MNI, the scientists at the University of Washington (UW [state]) have found their own unique way of opening up science. From a March 15, 2018 UW news blog posting (also on EurekAlert) by James Urton, Note: Links have been removed,

Over the past few years, scientists have faced a problem: They often cannot reproduce the results of experiments done by themselves or their peers.

This “replication crisis” plagues fields from medicine to physics, and likely has many causes. But one is undoubtedly the difficulty of sharing the vast amounts of data collected and analyses performed in so-called “big data” studies. The volume and complexity of the information also can make these scientific endeavors unwieldy when it comes time for researchers to share their data and findings with peers and the public.

Researchers at the University of Washington have developed a set of tools to make one critical area of big data research — that of our central nervous system — easier to share. In a paper published online March 5 [2018] in Nature Communications, the UW team describes an open-access browser they developed to display, analyze and share neurological data collected through a type of magnetic resonance imaging study known as diffusion-weighted MRI.

“There has been a lot of talk among researchers about the replication crisis,” said lead author Jason Yeatman. “But we wanted a tool — ready, widely available and easy to use — that would actually help fight the replication crisis.”

Yeatman — who is an assistant professor in the UW Department of Speech & Hearing Sciences and the Institute for Learning & Brain Sciences (I-LABS) — is describing AFQ-Browser. This web browser-based tool, freely available online, is a platform for uploading, visualizing, analyzing and sharing diffusion MRI data in a format that is publicly accessible, improving transparency and data-sharing methods for neurological studies. In addition, since it runs in the web browser, AFQ-Browser is portable — requiring no additional software package or equipment beyond a computer and an internet connection.

“One major barrier to data transparency in neuroscience is that so much data collection, storage and analysis occurs on local computers with special software packages,” said senior author Ariel Rokem, a senior data scientist in the UW eScience Institute. “But using AFQ-Browser, we eliminate those requirements and make uploading, sharing and analyzing diffusion-weighted MRI data a simple, straightforward process.”

Diffusion-weighted MRI measures the movement of fluid in the brain and spinal cord, revealing the structure and function of white-matter tracts. These are the connections of the central nervous system, tissue that is made up primarily of axons that transmit long-range signals between neural circuits. Diffusion MRI research on brain connectivity has fundamentally changed the way neuroscientists understand human brain function: The state, organization and layout of white-matter tracts are at the core of cognitive functions such as memory, learning and other capabilities. Data collected using diffusion-weighted MRI can be used to diagnose complex neurological conditions such as multiple sclerosis (MS) and amyotrophic lateral sclerosis (ALS). Researchers also use diffusion-weighted MRI data to study the neurological underpinnings of conditions such as dyslexia and learning disabilities.

“This is a widely-used technique in neuroscience research, and it is particularly amenable to the benefits that can be gleaned from big data, so it became a logical starting point for developing browser-based, open-access tools for the field,” said Yeatman.

The AFQ-Browser — the AFQ stands for Automated Fiber-tract Quantification — can receive diffusion-weighted MRI data and perform tract analysis for each individual subject. The analyses occur via a remote server, again eliminating technical and financial barriers for researchers. The AFQ-Browser also contains interactive tools to display data for multiple subjects — allowing a researcher to easily visualize how white matter tracts might be similar or different among subjects, identify trends in the data and generate hypotheses for future experiments. Researchers also can insert additional code to analyze the data, as well as save, upload and share data instantly with fellow researchers.

“We wanted this tool to be as generalizable as possible, regardless of research goals,” said Rokem. “In addition, the format is easy for scientists from a variety of backgrounds to use and understand — so that neuroscientists, statisticians and other researchers can collaborate, view data and share methods toward greater reproducibility.”

The idea for the AFQ-Browser came out of a UW course on data visualization, and the researchers worked with several graduate students to develop and perfect the browser. They tested it on existing diffusion-weighted MRI datasets, including research subjects with ALS and MS. In the future, they hope that the AFQ-Browser can be improved to do automated analyses — and possibly even diagnoses — based on diffusion-weighted MRI data.

“AFQ-Browser is really just the start of what could be a number of tools for sharing neuroscience data and experiments,” said Yeatman. “Our goal here is greater reproducibility and transparency, and a more robust scientific process.”

Here are a couple of images the researchers have used to illustrate their work,

AFQ-Browser. Jason Yeatman/Ariel Rokem. Courtesy: University of Washington

Depiction of the left hemisphere of the human brain. Colored regions are selected white matter regions that could be measured using diffusion-weighted MRI: Corticospinal tract (orange), arcuate fasciculus (blue) and cingulum (green). Jason Yeatman/Ariel Rokem

You can find an embedded version of the AFQ-Browser here: http://www.washington.edu/news/2018/03/15/democratizing-science-researchers-make-neuroscience-experiments-easier-to-share-reproduce/ (scroll down about 50 – 55% of the way).
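To give a flavour of the kind of analysis AFQ-Browser supports (comparing a diffusion metric along white-matter tracts across subjects), here is a small self-contained sketch. The table layout, column names, and fractional anisotropy ("fa") values are hypothetical illustration data of my own, not an actual AFQ-Browser export.

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Hypothetical tract-profile table: one fa measurement per subject,
# per tract, per node along the tract. Real datasets have ~100 nodes
# per tract and many subjects.
sample_csv = """subjectID,tractID,nodeID,fa
s1,Corticospinal,0,0.55
s1,Corticospinal,1,0.60
s2,Corticospinal,0,0.50
s2,Corticospinal,1,0.58
s1,Arcuate,0,0.45
s2,Arcuate,0,0.47
"""

# Group fa values by tract and average across subjects and nodes,
# the simplest version of the group comparisons the tool visualizes.
profiles = defaultdict(list)
for row in csv.DictReader(io.StringIO(sample_csv)):
    profiles[row["tractID"]].append(float(row["fa"]))

for tract, values in profiles.items():
    print(f"{tract}: mean FA = {mean(values):.3f}")
```

Because the data reduce to a plain table like this, they can be shared, re-analyzed, and plotted by anyone with a browser, which is exactly the reproducibility argument Yeatman and Rokem make above.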

As for the paper, here’s a link and a citation,

A browser-based tool for visualization and analysis of diffusion MRI data by Jason D. Yeatman, Adam Richie-Halford, Josh K. Smith, Anisha Keshavan, & Ariel Rokem. Nature Communications volume 9, Article number: 940 (2018) doi:10.1038/s41467-018-03297-7 Published online: 05 March 2018

Fittingly, this paper is open access.