Mott memristors (mentioned in my Aug. 24, 2017 posting about neuristors and brainlike computing) get fuller treatment in an Oct. 9, 2017 posting by Samuel K. Moore on the Nanoclast blog (found on the IEEE [Institute of Electrical and Electronics Engineers] website). Note 1: Links have been removed; Note 2: I quite like Moore’s writing style but he’s not for the impatient reader,
When you’re really harried, you probably feel like your head is brimful of chaos. You’re pretty close. Neuroscientists say your brain operates in a regime termed the “edge of chaos,” and it’s actually a good thing. It’s a state that allows for fast, efficient analog computation of the kind that can solve problems that grow vastly more difficult as they become bigger in size.
The trouble is, if you’re trying to replicate that kind of chaotic computation with electronics, you need an element that both acts chaotically—how and when you want it to—and could scale up to form a big system.
“No one had been able to show chaotic dynamics in a single scalable electronic device,” says Suhas Kumar, a researcher at Hewlett Packard Labs, in Palo Alto, Calif. Until now, that is.
He, John Paul Strachan, and R. Stanley Williams recently reported in the journal Nature that a particular configuration of a certain type of memristor contains that seed of controlled chaos. What’s more, when they simulated wiring these up into a type of circuit called a Hopfield neural network, the circuit was capable of solving a ridiculously difficult problem—1,000 instances of the traveling salesman problem—at a rate of 10 trillion operations per second per watt.
(It’s not an apples-to-apples comparison, but the world’s most powerful supercomputer as of June 2017 managed 93,015 trillion floating point operations per second but consumed 15 megawatts doing it. So about 6 billion operations per second per watt.)
The device in question is called a Mott memristor. Memristors generally are devices that hold a memory, in the form of resistance, of the current that has flowed through them. The most familiar type is called resistive RAM (or ReRAM or RRAM, depending on who’s asking). Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance.
The HP Labs team made their memristor from an 8-nanometer-thick layer of niobium dioxide (NbO2) sandwiched between two layers of titanium nitride. The bottom titanium nitride layer was in the form of a 70-nanometer wide pillar. “We showed that this type of memristor can generate chaotic and nonchaotic signals,” says Williams, who invented the memristor based on theory by Leon Chua.
(The traveling salesman problem is one of these. In it, the salesman must find the shortest route that lets him visit all of his customers’ cities, without going through any of them twice. It’s a difficult problem because it becomes exponentially more difficult to solve with each city you add.)
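The two numbers quoted above can be sanity-checked with a few lines of arithmetic (a back-of-envelope sketch; the supercomputer figures are the ones quoted in the article, and the tour count is the standard formula for distinct round trips, which grows factorially, i.e., even faster than a simple exponential):

```python
import math

# Supercomputer efficiency: 93,015 trillion FLOPS at 15 megawatts
flops = 93_015e12
watts = 15e6
print(f"{flops / watts:.1e} ops/s/W")  # ~6.2e9, i.e. about 6 billion

# Number of distinct round-trip tours through n cities: (n - 1)! / 2 for n >= 3
for n in (5, 10, 15):
    print(n, math.factorial(n - 1) // 2)
```

For 15 cities there are already over 43 billion distinct tours, which is why brute force stops being an option so quickly.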
Here’s what the niobium dioxide-based Mott memristor looks like,
Photo: Suhas Kumar/Hewlett Packard Labs. A micrograph shows the construction of a Mott memristor composed of an 8-nanometer-thick layer of niobium dioxide between two layers of titanium nitride.
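For readers curious about the Hopfield network mentioned above: in its classic binary form (the textbook model the HP Labs circuit is named for, not the analog memristor circuit itself), it stores patterns in a weight matrix via a Hebbian rule and recalls them by repeated sign updates. A minimal sketch, with illustrative values of my own choosing:

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix for a list of +/-1 pattern vectors."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, steps=100):
    """Asynchronous sign updates until the state stops changing."""
    state = state.copy()
    for _ in range(steps):
        prev = state.copy()
        for i in range(state.size):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, prev):
            break
    return state

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train([pattern])
noisy = pattern.copy()
noisy[0] *= -1               # corrupt one bit
print(recall(W, noisy))      # settles back to the stored pattern
```

Each update moves the network downhill in an energy function, which is also how the TSP mapping works: the tour constraints and route length are encoded in the weights, and the network settles toward low-energy (short-tour) states.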
This is not the first time I’ve posted about nanotechnology and neuroscience (see this April 2, 2013 piece about the then-new brain science initiative in the US, and Michael Berger’s Nanowerk Spotlight article/review of an earlier paper covering the topic of nanotechnology and neuroscience).
Interestingly, the European Union (EU) had announced its two €1-billion research initiatives, the Human Brain Project and the Graphene Flagship (see my Jan. 28, 2013 posting about them), months prior to the US brain research push. For those unfamiliar with the nanotechnology effort, graphene is a nanomaterial, and there is high interest in its potential use in biomedical technology, thus partially connecting the two EU projects.
In any event, Berger is highlighting a nanotechnology and neuroscience connection again in his Oct. 18, 2017 Nanowerk Spotlight article, an overview of a new paper that updates our understanding of the potential connections between the two fields (Note: A link has been removed),
Over the past several years, advances in nanoscale analysis tools and in the design and synthesis of nanomaterials have generated optical, electrical, and chemical methods that can readily be adapted for use in neuroscience and brain activity mapping.
A review paper in Advanced Functional Materials (“Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping”) summarizes the basic concepts associated with neuroscience and the current journey of nanotechnology towards the study of neuron function by addressing various concerns on the significant role of nanomaterials in neuroscience and by describing the future applications of this emerging technology.
The collaboration between nanotechnology and neuroscience, though still at the early stages, utilizes broad concepts, such as drug delivery, cell protection, cell regeneration and differentiation, imaging and surgery, to give birth to novel clinical methods in neuroscience.
Ultimately, the clinical translation of nanoneuroscience implies that central nervous system (CNS) diseases, including neurodevelopmental, neurodegenerative and psychiatric diseases, have the potential to be cured, while the industrial translation of nanoneuroscience indicates the need for advancement of brain-computer interface technologies.
Future Developing Arenas in Nanoneuroscience
The Brain Activity Map (BAM) Project aims to map the neural activity of every neuron across all neural circuits with the ultimate aim of curing diseases associated with the nervous system. The announcement of this collaborative, public-private research initiative in 2013 by President Obama has driven the surge in developing methods to elucidate neural circuitry. Three current developing arenas in the context of nanoneuroscience applications that will push such initiative forward are 1) optogenetics, 2) molecular/ion sensing and monitoring and 3) piezoelectric effects.
In their review, the authors discuss these aspects in detail.
Neurotoxicity of Nanomaterials
By engineering particles on the scale of molecular-level entities – proteins, lipid bilayers and nucleic acids – we can stereotactically interface with many of the components of cell systems, and at the cutting edge of this technology, we can begin to devise ways in which we can manipulate these components to our own ends. However, interfering with the internal environment of cells, especially neurons, is by no means simple.
“If we are to continue to make great strides in nanoneuroscience, functional investigations of nanomaterials must be complemented with robust toxicology studies,” the authors point out. “A database on the toxicity of materials that fully incorporates these findings for use in future schema must be developed. These databases should include information and data on 1) the chemical nature of the nanomaterials in complex aqueous environments; 2) the biological interactions of nanomaterials with chemical specificity; 3) the effects of various nanomaterial properties on living systems; and 4) a model for the simulation and computation of possible effects of nanomaterials in living systems across varying time and space. If we can establish such methods, it may be possible to design nanopharmaceuticals for improved research as well as quality of life.”
“However, challenges in nanoneuroscience are present in many forms, such as neurotoxicity; the inability to cross the blood-brain barrier [emphasis mine]; the need for greater specificity, bioavailability and short half-lives; and monitoring of disease treatment,” the authors conclude their review. “The nanoneurotoxicity surrounding these nanomaterials is a barrier that must be overcome for the translation of these applications from bench-to-bedside. While the challenges associated with nanoneuroscience seem unending, they represent opportunities for future work.”
I have a March 26, 2015 posting about Canadian researchers breaching the blood-brain barrier and an April 13, 2016 posting about US researchers at Cornell University also breaching the blood-brain barrier. Perhaps the “inability” mentioned in this Spotlight article means that it can’t be done consistently or that it hasn’t been achieved on humans.
What is it with the Canadian neuroscience community? First, there’s The Beautiful Brain, an exhibition of the extraordinary drawings of Santiago Ramón y Cajal (1852–1934) at the Belkin Gallery on the University of British Columbia (UBC) campus in Vancouver, and a series of events marking the exhibition (for more, see my Sept. 11, 2017 posting; scroll down about 30% for information about the drawings and the events still to come).
I guess there must be some money floating around for raising public awareness because now there’s a neuroscience and ‘storytelling’ event (Narrating Neuroscience) in Toronto, Canada. From a Sept. 25, 2017 ArtSci Salon announcement (received via email),
With NARRATING NEUROSCIENCE we plan to initiate a discussion on the role and the use of storytelling and art (both in verbal and visual forms) to communicate abstract and complex concepts in neuroscience to very different audiences, ranging from fellow scientists, clinicians and patients, to social scientists and the general public. We invited four guests to share their research through case studies and experiences stemming directly from their research or from other practices they have adopted and incorporated into their research, where storytelling and the arts have played a crucial role not only in communicating cutting edge research in neuroscience, but also in developing and advancing it.
MATTEO FARINELLA, PhD, Presidential Scholar in Society and Neuroscience – Columbia University
SHELLEY WALL, AOCAD, MSc, PhD – Assistant Professor, Biomedical Communications Graduate Program and Department of Biology, UTM
ALFONSO FASANO, MD, PhD, Associate Professor – University of Toronto Clinician Investigator – Krembil Research Institute Movement Disorders Centre – Toronto Western Hospital
TAHANI BAAKDHAH, MD, MSc, PhD candidate – University of Toronto
DATE: October 20, 2017
TIME: 6:00-8:00 pm
LOCATION: The Fields Institute for Research in Mathematical Sciences
222 College Street, Toronto, ON
Events Facilitators: Roberta Buiani and Stephen Morris (ArtSci Salon) and Nina Czegledy (Leonardo Network)
TAHANI BAAKDHAH is a PhD student at the University of Toronto studying how stem cells build our retina during development, the mechanism by which the light-sensing cells inside the eye enable us to see this beautiful world, and how we can regenerate these cells in case of disease or injury.
MATTEO FARINELLA combines a background in neuroscience with a lifelong passion for drawing, making comics and illustrations about the brain. He is the author of _Neurocomic_ (Nobrow 2013), published with the support of the Wellcome Trust, and _Cervellopoli_ (Editoriale Scienza 2017), and he has collaborated with universities and educational institutions around the world to make science more clear and accessible. In 2016 Matteo joined Columbia University as a Presidential Scholar in Society and Neuroscience, where he investigates the role of visual narratives in science communication. Working with science journalists, educators and cognitive neuroscientists, he aims to understand how these tools may affect the public perception of science and increase scientific literacy (cartoonscience.org).
ALFONSO FASANO graduated from the Catholic University of Rome, Italy, in 2002 and became a neurologist in 2007. After a 2-year fellowship at the University of Kiel, Germany, he completed a PhD in neuroscience at the Catholic University of Rome. In 2013 he joined the Movement Disorder Centre at Toronto Western Hospital, where he is the co-director of the surgical program for movement disorders. He is also an associate professor of medicine in the Division of Neurology at the University of Toronto and a clinician investigator at the Krembil Research Institute. Dr. Fasano’s main areas of interest are the treatment of movement disorders with advanced technology (infusion pumps and neuromodulation) and the pathophysiology and treatment of tremor and gait disorders. He is the author of more than 170 papers and book chapters and principal investigator of several clinical trials.
SHELLEY WALL is an assistant professor in the University of Toronto’s Biomedical Communications graduate program, a certified medical illustrator, and inaugural Illustrator-in-Residence in the Faculty of Medicine, University of Toronto. One of her primary areas of research, teaching, and creation is graphic medicine—the intersection of comics with illness, medicine, and caregiving—and one of her ongoing projects is a series of comics about caregiving and young onset Parkinson’s disease.
You can register for this free Toronto event here.
One brief observation: there aren’t any writers (other than academics) or storytellers included in this ‘storytelling’ event. The ‘storytelling’ being featured is visual. To be blunt, I’m not of the ‘one picture is worth a thousand words’ school of thinking (see my Feb. 22, 2011 posting). Yes, sometimes pictures are all you need, but that tiresome aphorism, which suggests communication can be reduced to one means, really needs to be retired. As for academic writing, it’s not noted for its storytelling qualities or experimentation. Academics are not judged on their writing or storytelling skills, although some are very good.
Getting back to the Toronto event, the organizers seem to have the visual part of their focus (” … discussion on the role and the use of storytelling and art (both in verbal and visual forms) … “) covered. Having recently attended a somewhat similar event in Vancouver, announced in my Sept. 11, 2017 posting, I can say there were some exciting images and ideas presented.
The ArtSci Salon folks also announced this (from the Sept. 25, 2017 ArtSci Salon announcement; received via email),
ATTENTION ARTSCI SALONISTAS AND FANS OF ART AND SCIENCE!!
CALL FOR KNITTING AND CROCHET LOVERS!
In addition to being a PhD student at the University of Toronto, Tahani Baakdhah is a prolific knitter and crocheter and has been the motor behind two successful Knit-a-Neuron Toronto initiatives. We invite all knitters and crocheters among our ArtSci Salonistas to pick a pattern (link below) and knit a neuron (or 2! Or as many as you want!!)
BRING THEM TO OUR OCTOBER 20 ARTSCI SALON!
Come to the ArtSci Salon and knit there!
You can’t come?
Share a picture with @ArtSci_Salon @SciCommTO #KnitANeuronTO  on
Or…Drop us a line at email@example.com !
I think it’s been a few years since my last science knitting post. No, it was Oct. 18, 2016. Moving on, I found more neuron knitting while researching this piece. Here’s the Neural Knitworks group, which is part of Australia’s National Science Week (11-19 August 2018) initiative (from the Neural Knitworks webpage),
Whether you’re a whiz with yarn, or just discovering the joy of craft, now you can crochet wrap, knit or knot—and find out about neuroscience.
During 2014 an enormous number of handmade neurons were donated (1665 in total!) and used to build a giant walk-in brain, as seen here at Hazelhurst Gallery [scroll to end of this post]. Since then Neural Knitworks have been held in dozens of communities across Australia, with installations created in Queensland, the ACT, Singapore, as part of the Cambridge Science Festival in the UK and in Philadelphia, USA.
In 2017, the Neural Knitworks team again invites you to host your own home-grown Neural Knitwork for National Science Week*. Together we’ll create a giant ‘virtual’ neural network by linking your displays visually online.
* If you wish to host a Neural Knitwork event outside of National Science Week or internationally we ask that you contact us to seek permission to use the material, particularly if you intend to create derivative works or would like to exhibit the giant brain. Please outline your plans in an email.
Your creation can be big or small, part of a formal display, or simply consist of neighbourhood neuron ‘yarn-bombings’. Knitworks can be created at home, at work or at school. No knitting experience is required and all ages can participate.
See below for how to register your event and download our scientifically informed patterns.
What is a neuron?
Neurons are electrically excitable cells of the brain, spinal cord and peripheral nerves. The billions of neurons in your body connect to each other in neural networks. They receive signals from every sense, control movement, create memories, and form the neural basis of every thought.
Gather together a group of friends who knit, crochet, design, spin, weave and anyone keen to give it a go. Those who know how to knit can teach others how to do it, and there’s even an easy no knit pattern that you can knot.
Download a neuroscience podcast to listen to, and you’ve got a Neural Knitwork!
I’ve written a couple of times about Greg Gage and his Backyard Brains, first in a March 28, 2012 posting (scroll down about 40% of the way for the mention of the first [?] ‘SpikerBox’) and, most recently, in a June 26, 2013 posting (scroll down about 25% of the way for the mention of a RoboRoach Kickstarter project from Backyard Brains), which also featured the launch of a new educational product and a TED [Technology, Entertainment, Design] talk.
Here’s the latest from an Oct. 10, 2017 news release (received via email),
Backyard Brains Releases Plant SpikerBox, unlocking the Secret Electrical Language used in Plants
The first consumer device to investigate how plants create behaviors through electrophysiology and to enable interspecies plant to plant communication.
ANN ARBOR, MI, OCTOBER 10, 2017–Today Backyard Brains launched the Plant SpikerBox, the first ever science kit designed to reveal the wonderful nature behind plant behavior through electrophysiology experiments done at home or in the classroom. The new SpikerBox launched alongside three new experiments, enabling users to explore Venus Flytrap and Sensitive Mimosa signals and to perform a jaw-dropping Interspecies Plant-Plant-Communicator experiment. The Plant SpikerBox and all three experiments are featured in a live talk from TED2017 given by Backyard Brains CEO and cofounder Dr. Greg Gage which was released today on https://ted.com.
Backyard Brains received viral attention for their previous videos, TED talks, and for their mission to create hands-on neuroscience experiments for everyone. The company (run by professional neuroscientists) produces consumer-friendly versions of expensive graduate lab equipment used at top research universities around the world. The new plant experiments and device facilitate the growing movement of DIY [do it yourself] scientists, made up of passionate amateurs, students, parents, and teachers.
Like previous inventions, the Plant SpikerBox is extremely easy to use, making it accessible for students as young as middle school. The device works by recording the electrical activity responsible for different plant behaviors. For example, the Venus Flytrap uses an electrical signal to determine if prey has landed in its trap; the SpikerBox reveals these invisible messages and allows you to visualize them on your mobile device. For the first time ever, you can peer into the fascinating world of plant signaling and plant behaviors.
The new SpikerBox features an “Interspecies Plant-Plant-Communicator” which demonstrates the ubiquitous nature of electrical signaling seen in humans, insects, and plants. With this device, one can capture the electrical message (called an action potential) from one plant’s behavior, and send it to a different plant to activate another behavior.
Co-founder and CEO Greg Gage explains, “It is surprising to many people that plants use electrical messages similar to those used by the neurons in our brains. I was shocked to hear that. Many neuroscientists are. But if you think about it, it [sic] does make sense. Our nervous system evolved to react quickly. Electricity is fast. The plants we are studying also need to react quickly, so it makes sense they would develop a similar system. To be clear: No, plants don’t have brains, but they do exhibit behaviors and they do use electric messages called ‘Action Potentials’ like we do to send information. The benefit of these plant experiments then is twofold: First, we can simply demonstrate fundamental neuroscience principles, and second, we can spread the wonder of understanding how living creatures work and hopefully encourage others to make a career in life sciences!”
The Plant SpikerBox is a trailblazer, bringing plant electrophysiology to the public for the first time ever. It is designed to work with the Backyard Brains SpikeRecorder software which is available to download for free on their website or in mobile app stores. The three plant experiments are just a few of the dozens of free experiments available on the Backyard Brains website. The Plant SpikerBox is available now for $149.99.
About Backyard Brains
A staggering 1 in 5 people will develop a neurological disorder in their lifetime, making the need for neuroscience studies urgent. Backyard Brains passionately responds with their motto “Neuroscience for Everyone,” providing exposure, education, and experiment kits to students of all ages. Founded in 2010 in Ann Arbor, MI by University of Michigan Neuroscience graduate students Greg Gage and Tim Marzullo, Backyard Brains have been dubbed Champions of Change at an Obama White House ceremony and have won prestigious awards from the National Institutes of Health and the Society for Neuroscience. To learn more, visit BackyardBrains.com
You can find an embedded video of Greg Gage’s TED talk and Plant SpikerBox launch along with links to experiments you could run with it on Backyard Brains’ Plant SpikerBox product page.
Your nervous system allows you to sense and respond quickly to the environment around you. You have a nervous system, animals have nervous systems, but plants do not. But not having a nervous system does not mean you cannot sense and respond to the world. Plants can certainly sense the environment around them and move. You have seen your plants slowly turn their leaves towards sunlight by the window over a week, open their flowers in the day, and close their flowers during the night. Some plants can move in much more dramatic fashion, such as the Venus Flytrap and the Sensitive Mimosa.
The Venus Flytrap comes from the swamps of North Carolina, USA, and lives in very nutrient-poor, water-logged soil. It photosynthesizes like other plants, but it can’t always rely on the sunlight for food. To supplement its food supply it traps and eats insects, extracting from them the nitrogen and phosphorous needed to form plant food (amino acids, nucleic acids, and other molecules).
If you look closely at the Venus Flytrap, you will notice it has very tiny “Trigger Hairs” inside its trap leaves.
If a wayward, unsuspecting insect touches a trigger hair, an Action Potential occurs in the leaves. This is a different Action Potential than what we are used to seeing in neurons, as it’s based on the movement of calcium, potassium, and chloride ions (vs. movement of potassium and sodium as in the Action Potentials of neurons and muscles), and it is muuuuuuuuucccchhhhhh longer than anything we’ve seen before.
If the trigger hair is touched twice within 20 seconds (firing two Action Potentials within 20 seconds), the trap closes. The trap is not closing due to muscular action (plants do not have muscles), but rather due to an osmotic, rapid change in the curvature of the trap leaves. Interestingly, the firing of Action Potentials is not always reliable, depending on the time of year, temperature, health of the plant, and/or other factors. Quite unlike in us humans, Action Potential failure is not devastating to a Venus Flytrap.
We can observe this plant Action Potential using our Plant SpikerBox. Welcome to the Brave New World of Plant Electrophysiology.
Before you begin, make sure you have the Backyard Brains SpikeRecorder. The Backyard Brains SpikeRecorder program allows you to visualize and save data on your computer when doing experiments.
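The two-touches-within-20-seconds rule described above lends itself to a tiny simulation (an illustrative sketch of the rule as stated, not Backyard Brains code; the class name and timing window variable are mine):

```python
from collections import deque

WINDOW_S = 20.0  # the 20-second window described in the experiment text

class FlytrapModel:
    """Toy model: trap closes when two trigger-hair touches fall within 20 s."""

    def __init__(self):
        self.recent = deque()  # timestamps of recent touches
        self.closed = False

    def touch(self, t):
        """Register a touch at time t (seconds); return True if the trap is closed."""
        # discard touches that have fallen outside the 20 s window
        while self.recent and t - self.recent[0] > WINDOW_S:
            self.recent.popleft()
        self.recent.append(t)
        if len(self.recent) >= 2:
            self.closed = True
        return self.closed

trap = FlytrapModel()
print(trap.touch(0.0))    # False: a single touch is not enough
print(trap.touch(25.0))   # False: the first touch has expired
print(trap.touch(30.0))   # True: two touches within 20 seconds
```

The windowing is the whole trick: a lone insect bumping one hair once, or two bumps far apart in time, never trips the trap, which is presumably why the plant evolved the rule in the first place.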
I did feel a bit sorry for the Venus Flytrap in Greg Gage’s TED talk, which was fooled into closing its trap. According to Gage, the Venus Flytrap has a limited number of times it can close its trap and, after the last time, it dies. On the other hand, I eat meat and use leather goods, so there is no pedestal for me to perch on.
For anyone who caught the Brittany Spears reference in the headline in this posting,
From exploring outer space with Brittany Spears to exploring plant communication and neuroscience in your back yard, science can be found in many different places.
The Sept. 19, 2017 Café Scientifique event, “Art in the Details: A look at the role of art in science,” in Vancouver seems to be part of a larger neuroscience and the arts program at the University of British Columbia. First, the details about the Sept. 19, 2017 event from the eventful Vancouver webpage,
Café Scientifique – Art in the Details: A look at the role of art in science
With so much beauty in the natural world, why does the misconception that art and science are vastly different persist? Join us for discussion and dessert as we hear from artists, researchers and academic professionals about the role art has played in scientific research – from the formative work of Santiago Ramón y Cajal to modern imaging, and beyond – and how it might help shape scientific understanding in the future.

September 19th, 2017, 7:00 – 9:00 pm (doors open at 6:45pm)

TELUS World of Science [also known as Science World], 1455 Quebec St., Vancouver, BC V6A 3Z7

Free Admission [emphasis mine]

Experts:

Dr Carol-Ann Courneya, Associate Professor in the Department of Cellular and Physiological Science and Assistant Dean of Student Affairs, Faculty of Medicine, University of British Columbia

Dr Jason Snyder, Assistant Professor, Department of Psychology, University of British Columbia http://snyderlab.com/

Dr Steven Barnes, Instructor and Assistant Head—Undergraduate Affairs, Department of Psychology, University of British Columbia http://stevenjbarnes.com/

Moderated by: Bruce Claggett, Senior Managing Editor, NEWS 1130

This evening event is presented in collaboration with the Djavad Mowafaghian Centre for Brain Health. Please note: this is a private, adult-oriented event and TELUS World of Science will be closed during this discussion.
The Art in the Details event page on the Science World website provides a bit more information about the speakers (mostly in the form of links to their webpages),
Dr Carol-Ann Courneya
Associate Professor in the Department of Cellular and Physiological Science and Assistant Dean of Student Affairs, Faculty of Medicine, University of British Columbia
Should you click through to obtain tickets from either the eventful Vancouver or Science World websites, you’ll find the event is sold out, but perhaps the organizers will keep a waitlist.
Even if you can’t get a ticket, there’s an exhibition of Santiago Ramón y Cajal’s work (from the Djavad Mowafaghian Centre for Brain Health’s Beautiful Brain webpage),
Drawings of Santiago Ramón y Cajal to be shown at UBC
Pictured: Santiago Ramón y Cajal, injured Purkinje neurons, 1914, ink and pencil on paper. Courtesy of Instituto Cajal (CSIC).
The Beautiful Brain is the first North American museum exhibition to present the extraordinary drawings of Santiago Ramón y Cajal (1852–1934), a Spanish pathologist, histologist and neuroscientist renowned for his discovery of neurons and their structure, for which he was awarded the Nobel Prize in Physiology or Medicine in 1906. Known as the father of modern neuroscience, Cajal was also an exceptional artist. He combined scientific and artistic skills to produce arresting drawings with extraordinary scientific and aesthetic qualities.
A century after their completion, Cajal’s drawings are still used in contemporary medical publications to illustrate important neuroscience principles, and continue to fascinate artists and visual art audiences. Eighty of Cajal’s drawings will be accompanied by a selection of contemporary neuroscience visualizations by international scientists. The Morris and Helen Belkin Art Gallery exhibition will also include early 20th century works that imaged consciousness, including drawings from Annie Besant’s Thought Forms (1901) and Charles Leadbeater’s The Chakras (1927), as well as abstract works by Lawren Harris that explored his interest in spirituality and mysticism.
After countless hours at the microscope, Cajal was able to perceive that the brain was made up of individual nerve cells or neurons rather than a tangled single web, which was only decisively proven by electron microscopy in the 1950s and is the basis of neuroscience today. His speculative drawings stemmed from an understanding of aesthetics in their compressed detail and lucid composition, as he laboured to clearly represent matter and processes that could not be seen.
In a special collaboration with the Morris and Helen Belkin Art Gallery and the VGH & UBC Hospital Foundation this project will encourage meaningful dialogue amongst artists, curators, scientists and scholars on concepts of neuroplasticity and perception. Public and Academic programs will address the emerging field of art and neuroscience and engage interdisciplinary research of scholars from the sciences and humanities alike.
“This is an incredible opportunity for the neuroscience and visual arts communities at the University and Vancouver,” says Dr. Brian MacVicar, who has been working diligently with Director Scott Watson at the Morris and Helen Belkin Art Gallery and with his colleagues at the University of Minnesota for the past few years to bring this exhibition to campus. “Without Cajal’s impressive body of work, our understanding of the anatomy of the brain would not be so well-formed; Cajal’s legacy has been of critical importance to neuroscience teaching and research over the past century.”
A book published by Abrams accompanies the exhibition, containing full colour reproductions of all 80 of the exhibition drawings, commentary on each of the works and essays on Cajal’s life and scientific contributions, artistic roots and achievements and contemporary neuroscience imaging techniques.
Join the UBC arts and neuroscience communities for a free symposium and dance performance celebrating The Beautiful Brain at UBC on September 7. [link removed]
The Beautiful Brain: The Drawings of Santiago Ramón y Cajal was developed by the Frederick R. Weisman Art Museum, University of Minnesota with the Instituto Cajal. The exhibition at the Morris and Helen Belkin Art Gallery, University of British Columbia is presented in partnership with the Djavad Mowafaghian Centre for Brain Health with support from the VGH & UBC Hospital Foundation. We gratefully acknowledge the generous support of the Canada Council for the Arts, the British Columbia Arts Council and Belkin Curator’s Forum members.
The Morris and Helen Belkin Art Gallery’s Beautiful Brain webpage has a listing of upcoming events associated with the exhibition as well as instructions on how to get there (if you click on About),
… Cajal was also an exceptional artist and studied as a teenager at the Academy of Arts in Huesca, Spain. He combined scientific and artistic skills to produce arresting drawings with extraordinary scientific and aesthetic qualities. A century after their completion, his drawings are still used in contemporary medical publications to illustrate important neuroscience principles, and continue to fascinate artists and visual art audiences. Eighty of Cajal’s drawings are accompanied by a selection of contemporary neuroscience visualizations by international scientists.
Organizationally, this seems a little higgledy piggledy with the Cafe Scientifique event found on some sites, the Belkin Gallery events found on one site, and no single listing of everything on any one site for the Beautiful Brain. Please let me know if you find something I’ve missed.
I’ve quickly read Michael Edgeworth McIntyre’s paper on multi-level thinking and find it provides fascinating insight and some good writing style (I’ve provided a few excerpts from the paper further down in the posting).
An unusual paper “On multi-level thinking and scientific understanding” appears in the October issue of Advances in Atmospheric Sciences. The author is Professor Michael Edgeworth McIntyre from University of Cambridge, whose work in atmospheric dynamics is well known. He has also had longstanding interests in astrophysics, music, perception psychology, and biological evolution.
The paper touches on a range of deep questions within and outside the atmospheric sciences. They include insights into the nature of science itself, and of scientific understanding — what it means to understand a scientific problem in depth — and into the communication skills necessary to convey that understanding and to mediate collaboration across specialist disciplines.
The paper appears in a Special Issue arising from last year’s Symposium held in Nanjing to commemorate the life of Professor Duzheng YE, who was well known as a national and international scientific leader and for his own wide range of interests, within and outside the atmospheric sciences. The symposium was organized by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, where Prof. YE had worked for nearly 70 years before he passed away. Upon the invitation of Prof. Jiang ZHU, the Director General of IAP and also the Editor-in-Chief of Advances in Atmospheric Sciences (AAS), Prof. McIntyre agreed to contribute a review paper to an AAS special issue commemorating the centenary of Duzheng YE’s birth. Prof. YE was also the founding Editor-in-Chief of this journal.
One of Professor McIntyre’s themes is that we all have unconscious mathematics, including Euclidean geometry and the calculus of variations. This is easy to demonstrate and is key to understanding not only how science works but also, for instance, how music works. Indeed, it reveals some of the deepest connections between music and mathematics, going beyond the usual remarks about number-patterns. All this revolves around the biological significance of what Professor McIntyre calls the “organic-change principle”.
Further themes include the scientific value of looking at a problem from more than one viewpoint, and the need to use more than one level of description. Many scientific and philosophical controversies stem from confusing one level of description with another, for instance applying arguments to one level that belong on another. This confusion can be especially troublesome when it comes to questions about human biology and human nature, and about what Professor YE called multi-level “orderly human activities”.
Related to all these points are the contrasting modes of perception and understanding offered by the brain’s left and right hemispheres. Our knowledge of their functioning has progressed far beyond the narrow clichés of popular culture, thanks to recent work in the neurosciences. The two hemispheres automatically give us different levels of description, and complementary views of a problem. Good science takes advantage of this. When the two hemispheres cooperate, with each playing to its own strengths, our problem-solving is at its most powerful.
The paper ends with three examples of unconscious assumptions that have impeded scientific progress in the past. Two of them are taken from Professor McIntyre’s main areas of research. A third is from biology.
To give you a sense of his writing and imagination, I’ve excerpted a few paragraphs from p. 1153, but first you need to see this .gif (he provides a number of ways to watch the .gif in his text, but I think it’s easier to watch the copy he has on his website),
Now for the excerpt,
Here is an example to show what I mean. It is a classic in experimental psychology, from the work of Professor Gunnar JOHANSSON in the 1970s. …
As soon as the twelve dots start moving, everyone with normal vision sees a person walking. This immediately illustrates several things. First, it illustrates that we all make unconscious assumptions. Here, we unconsciously assume a particular kind of three-dimensional motion. In this case the unconscious assumption is completely involuntary. We cannot help seeing a person walking, despite knowing that it is only twelve moving dots.
The animation also shows that we have unconscious mathematics, Euclidean geometry in this case. In order to generate the percept of a person walking, your brain has to ﬁt a mathematical model to the incoming visual data, in this case a mathematical model based on Euclidean geometry. (And the model-ﬁtting process is an active, and highly complex, predictive process most of which is inaccessible to conscious introspection.)
This brings me to the most central point in our discussion. Science does essentially the same thing. It ﬁts models to data. So science is, in the most fundamental possible sense, an extension of ordinary perception. That is a simple way of saying what was said many decades ago by great thinkers such as Professor Sir Karl POPPER….
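McIntyre’s point that science “fits models to data” can be made concrete with the simplest possible case, an ordinary least-squares line fit. This little sketch is purely illustrative (it’s mine, not from his paper):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = m*x + c, about the simplest
    possible instance of 'fitting a model to data'."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Data generated from y = 2x + 1, so the fit recovers m = 2 and c = 1.
m, c = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
```

The brain’s model-fitting is vastly more sophisticated, of course, but the logical shape is the same: choose the model parameters that best explain the incoming data.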
I love that phrase “unconscious mathematics” for the way it includes even those of us who would never dream of thinking we had any kind of mathematics. I encourage you to read his paper in its entirety; it does include a little technical language in a few spots, but the overall thesis is clear and easily understood.
I have two brain news bits: one about neural networks and quantum entanglement, and another about how the brain operates in more than three dimensions.
Quantum entanglement and neural networks
A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),
Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.
Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.
An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)
Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”
On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.
The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.
What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”
The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.
Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.
These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.
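For readers who want to see the idea in miniature: the visible/hidden construction described in the excerpt is what machine-learning people call a restricted Boltzmann machine. The toy sketch below is my own illustration, not the researchers’ code, and all the parameter values are made up; it shows how the hidden neurons can be “mathematically forgotten” in closed form, leaving a compact amplitude for each visible configuration.

```python
import math
from itertools import product

def rbm_amplitude(visible, a, b, W):
    """Unnormalized amplitude assigned to one configuration of 'visible'
    spins (the real particles). Each connection carries one number (the
    weights in W); the hidden neurons are summed out in closed form."""
    vis = math.exp(sum(ai * si for ai, si in zip(a, visible)))
    hid = 1.0
    for j, bj in enumerate(b):
        theta = bj + sum(W[j][i] * si for i, si in enumerate(visible))
        hid *= 2.0 * math.cosh(theta)  # sum over hidden spin in {-1, +1}
    return vis * hid

# Made-up parameters: 3 visible spins coupled to 1 hidden neuron.
a = [0.1, -0.2, 0.05]   # visible biases
b = [0.3]               # hidden biases
W = [[0.5, -0.5, 0.2]]  # connection weights
amps = {v: rbm_amplitude(v, a, b, W) for v in product((-1, 1), repeat=3)}
```

The compactness argument is visible even here: the state of three spins needs eight amplitudes, but the network stores only seven numbers, and the gap widens rapidly as the number of particles grows.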
Blue Brain is a Swiss government brain research initiative which officially came to life in 2006, although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),
For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.
Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.
The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”
Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”
If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.
“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.
In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.
When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”
The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.
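For the curious, the clique idea can be played with in a few lines of code. This is a toy sketch under simplifying assumptions (undirected connections and brute-force enumeration; the Blue Brain team actually works with directed cliques and full-blown algebraic topology): a group of k all-to-all connected neurons corresponds to a geometric object of dimension k − 1.

```python
from itertools import combinations

def cliques_by_dimension(adj, max_size):
    """Enumerate all-to-all connected groups of neurons (cliques) in an
    undirected connectivity matrix. A clique of k neurons corresponds
    to a geometric object (simplex) of dimension k - 1."""
    n = len(adj)
    by_dim = {}
    for k in range(1, max_size + 1):
        for group in combinations(range(n), k):
            # Keep the group only if every pair inside it is connected.
            if all(adj[i][j] for i, j in combinations(group, 2)):
                by_dim.setdefault(k - 1, []).append(group)
    return by_dim

# Four neurons; every pair is connected except neurons 1 and 3.
adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
]
dims = cliques_by_dimension(adj, 4)
# dims[2] holds the two 2-dimensional objects (triangles): (0,1,2) and (0,2,3)
```

Brute force like this only works for tiny networks; counting cliques in a reconstruction with tens of thousands of neurons is exactly why the team needed specialized topological machinery.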
About Blue Brain
The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch
Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.
This would usually be a simple event announcement but with the advent of a new, related (in my mind if no one else’s) development on Facebook, this has become a roundup of sorts.
Facebotlish (Facebook’s chatbots create their own language)
The language created by Facebook’s chatbots, Facebotlish, was an unintended consequence—that’s right, Facebook’s developers did not design a language for the chatbots or anticipate its independent development, apparently. Adrienne LaFrance’s June 20, 2017 article for theatlantic.com explores the development and the question further,
Something unexpected happened recently at the Facebook Artificial Intelligence Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, started communicating in a non-human language.
In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) …
Here’s what the language looks like (from LaFrance article),
Here’s an example of one of the bot negotiations that Facebook observed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
The exchange is incomprehensible to humans even after the tweaking; even so, some successful negotiations can ensue.
Facebook’s researchers aren’t the only ones to come across the phenomenon (from LaFrance’s article; Note: Links have been removed),
Other AI researchers, too, say they’ve observed machines that can develop their own languages, including languages with a coherent structure, and defined vocabulary and syntax—though not always actually meaningful, by human standards.
In one preprint paper added earlier this year to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language—and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don’t need to have corporeal form to engage in non-verbal communication; they just engage with what’s called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes—in other words, the researchers write, they witnessed the “automatic emergence of grounded language and communication … no human supervision!”
The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another, they may be scratching the surface of how syntax and compositional structure emerged among humans in the first place.
LaFrance’s article is well worth reading in its entirety, especially since the speculation focuses on whether or not the chatbots’ creation is in fact language. There is no mention of consciousness, and perhaps this is just a crazy idea, but is it possible that these chatbots have consciousness? The question is particularly intriguing in light of some of philosopher David Chalmers’ work (see his 2014 TED talk in Vancouver, Canada: https://www.ted.com/talks/david_chalmers_how_do_you_explain_consciousness/transcript?language=en; it runs roughly 18 mins. and a text transcript is also featured). There’s a condensed version of Chalmers’ TED talk offered in a roughly 9-minute NPR (US National Public Radio) interview by Guy Raz. Here are some highlights from the text transcript,
So we’ve been hearing from brain scientists who are asking how a bunch of neurons and synaptic connections in the brain add up to us, to who we are. But it’s consciousness, the subjective experience of the mind, that allows us to ask the question in the first place. And where consciousness comes from – that is an entirely separate question.
DAVID CHALMERS: Well, I like to distinguish between the easy problems of consciousness and the hard problem.
RAZ: This is David Chalmers. He’s a philosopher who coined this term, the hard problem of consciousness.
CHALMERS: Well, the easy problems are ultimately a matter of explaining behavior – things we do. And I think brain science is great at problems like that. It can isolate a neural circuit and show how it enables you to see a red object, to respond and say, that’s red. But the hard problem of consciousness is subjective experience. Why, when all that happens in this circuit, does it feel like something? How does a bunch of – 86 billion neurons interacting inside the brain, coming together – how does that produce the subjective experience of a mind and of the world?
RAZ: Here’s how David Chalmers begins his TED Talk.
(SOUNDBITE OF TED TALK)
CHALMERS: Right now, you have a movie playing inside your head. It has 3-D vision and surround sound for what you’re seeing and hearing right now. Your movie has smell and taste and touch. It has a sense of your body, pain, hunger, orgasms. It has emotions, anger and happiness. It has memories, like scenes from your childhood, playing before you. This movie is your stream of consciousness. If we weren’t conscious, nothing in our lives would have meaning or value. But at the same time, it’s the most mysterious phenomenon in the universe. Why are we conscious?
RAZ: Why is consciousness more than just the sum of the brain’s parts?
CHALMERS: Well, the question is, you know, what is the brain? It’s this giant complex computer, a bunch of interacting parts with great complexity. What does all that explain? That explains objective mechanism. Consciousness is subjective by its nature. It’s a matter of subjective experience. And it seems that we can imagine all of that stuff going on in the brain without consciousness. And the question is, where is the consciousness from there? It’s like, if someone could do that, they’d get a Nobel Prize, you know?
CHALMERS: So here’s the mapping from this circuit to this state of consciousness. But underneath that is always going to be the question, why and how does the brain give you consciousness in the first place?
(SOUNDBITE OF TED TALK)
CHALMERS: Right now, nobody knows the answers to those questions. So we may need one or two ideas that initially seem crazy before we can come to grips with consciousness, scientifically. The first crazy idea is that consciousness is fundamental. Physicists sometimes take some aspects of the universe as fundamental building blocks – space and time and mass – and you build up the world from there. Well, I think that’s the situation we’re in. If you can’t explain consciousness in terms of the existing fundamentals – space, time – the natural thing to do is to postulate consciousness itself as something fundamental – a fundamental building block of nature. The second crazy idea is that consciousness might be universal. This view is sometimes called panpsychism – pan, for all – psych, for mind. Every system is conscious. Not just humans, dogs, mice, flies, but even microbes. Even a photon has some degree of consciousness. The idea is not that photons are intelligent or thinking. You know, it’s not that a photon is wracked with angst because it’s thinking, oh, I’m always buzzing around near the speed of light. I never get to slow down and smell the roses. No, not like that. But the thought is, maybe photons might have some element of raw subjective feeling, some primitive precursor to consciousness.
RAZ: So this is a pretty big idea – right? – like, that not just flies, but microbes or photons all have consciousness. And I mean we, like, as humans, we want to believe that our consciousness is what makes us special, right – like, different from anything else.
CHALMERS: Well, I would say yes and no. I’d say the fact of consciousness does not make us special. But maybe we’ve a special type of consciousness ’cause you know, consciousness is not on and off. It comes in all these rich and amazing varieties. There’s vision. There’s hearing. There’s thinking. There’s emotion and so on. So our consciousness is far richer, I think, than the consciousness, say, of a mouse or a fly. But if you want to look for what makes us distinct, don’t look for just our being conscious, look for the kind of consciousness we have. …
Vancouver premiere of Baba Brinkman’s Rap Guide to Consciousness
Baba Brinkman’s new hip-hop theatre show “Rap Guide to Consciousness” is all about the neuroscience of consciousness. See it in Vancouver at the Rio Theatre before it goes to the Edinburgh Fringe Festival in August.
This event also features a performance of “Off the Top” with Dr. Heather Berlin (cognitive neuroscientist, TV host, and Baba’s wife), which is also going to Edinburgh.
Wednesday, July 5
Doors 6:00 pm | Show 6:30 pm
Advance tickets $12 | $15 at the door
*All ages welcome!
*Sorry, Groupons and passes not accepted for this event.
“Utterly unique… both brilliantly entertaining and hugely informative” ★ ★ ★ ★ ★ – Broadway Baby
“An education, inspiring, and wonderfully entertaining show from beginning to end” ★ ★ ★ ★ ★ – Mumble Comedy
There’s quite the poster for this rap guide performance,
In addition to the Vancouver and Edinburgh performances (the show premiered at the Brighton Fringe Festival in May 2017; see Simon Topping’s very brief review in this May 10, 2017 posting on the reviewshub.com), Brinkman is raising money (the goal is US$12,000; he has raised a little over $3,000 with approximately one month before the deadline) to produce a CD. Here’s more from the Rap Guide to Consciousness campaign page on Indiegogo,
Brinkman has been working with neuroscientists, Dr. Anil Seth (professor and co-director of Sackler Centre for Consciousness Science) and Dr. Heather Berlin (Brinkman’s wife as noted earlier; see her Wikipedia entry or her website).
There’s a bit more information about the rap project and Anil Seth in a May 3, 2017 news item by James Hakner for the University of Sussex,
The research frontiers of consciousness science find an unusual outlet in an exciting new Rap Guide to Consciousness, premiering at this year’s Brighton Fringe Festival.
Professor Anil Seth, Co-Director of the Sackler Centre for Consciousness Science at the University of Sussex, has teamed up with New York-based ‘peer-reviewed rapper’ Baba Brinkman, to explore the latest findings from the neuroscience and cognitive psychology of subjective experience.
What is it like to be a baby? We might have to take LSD to find out. What is it like to be an octopus? Imagine most of your brain was actually built into your fingertips. What is it like to be a rapper kicking some of the world’s most complex lyrics for amused fringe audiences? Surreal.
In this new production, Baba brings his signature mix of rap comedy storytelling to the how and why behind your thoughts and perceptions. Mixing cutting-edge research with lyrical performance and projected visuals, Baba takes you through the twists and turns of the only organ it’s better to donate than receive: the human brain. Discover how the various subsystems of your brain come together to create your own rich experience of the world, including the sights and sounds of a scientifically peer-reviewed rapper dropping knowledge.
The result is a truly mind-blowing multimedia hip-hop theatre performance – the perfect meta-medium through which to communicate the dazzling science of consciousness.
Baba comments: “This topic is endlessly fascinating because it underlies everything we do pretty much all the time, which is probably why it remains one of the toughest ideas to get your head around. The first challenge with this show is just to get people to accept the (scientifically uncontroversial) idea that their brains and minds are actually the same thing viewed from different angles. But that’s just the starting point, after that the details get truly amazing.”
Baba Brinkman is a Canadian rap artist and award-winning playwright, best known for his “Rap Guide” series of plays and albums. Baba has toured the world and enjoyed successful runs at the Edinburgh Fringe Festival and off-Broadway in New York. The Rap Guide to Religion was nominated for a 2015 Drama Desk Award for “Unique Theatrical Experience” and The Rap Guide to Evolution (“Astonishing and brilliant” – NY Times) won a Scotsman Fringe First Award and a Drama Desk Award nomination for “Outstanding Solo Performance”. The Rap Guide to Climate Chaos premiered in Edinburgh in 2015, followed by a six-month off-Broadway run in 2016.
Baba is also a pioneer in the genre of “lit-hop” or literary hip-hop, known for his adaptations of The Canterbury Tales, Beowulf, and Gilgamesh. He is a recent recipient of the National Center for Science Education’s “Friend of Darwin Award” for his efforts to improve the public understanding of evolutionary biology.
Anil Seth is an internationally renowned researcher into the biological basis of consciousness, with more than 100 (peer-reviewed!) academic journal papers on the subject. Alongside science he is equally committed to innovative public communication. A Wellcome Trust Engagement Fellow (from 2016) and the 2017 British Science Association President (Psychology), Professor Seth has co-conceived and consulted on many science-art projects including drama (Donmar Warehouse), dance (Siobhan Davies dance company), and the visual arts (with artist Lindsay Seers). He has also given popular public talks on consciousness at the Royal Institution (Friday Discourse) and at the main TED conference in Vancouver. He is a regular presence in print and on the radio and is the recipient of awards including the BBC Audio Award for Best Single Drama (for ‘The Sky is Wider’) and the Royal Society Young People’s Book Prize (for EyeBenders). This is his first venture into rap.
Professor Seth said: “There is nothing more familiar, and at the same time more mysterious than consciousness, but research is finally starting to shed light on this most central aspect of human existence. Modern neuroscience can be incredibly arcane and complex, posing challenges to us as public communicators.
“It’s been a real pleasure and privilege to work with Baba on this project over the last year. I never thought I’d get involved with a rap artist – but hearing Baba perform his ‘peer reviewed’ breakdowns of other scientific topics I realized here was an opportunity not to be missed.”
Brinkman isn’t the only performance-based artist querying the concept of consciousness. Tom Stoppard has written a play about consciousness titled ‘The Hard Problem’, which debuted at the National Theatre (UK) in January 2015 (see BBC [British Broadcasting Corporation] news online’s Jan. 29, 2015 roundup of reviews). A May 25, 2017 commentary by Andrew Brown for the Guardian offers some insight into the play and the issues (Note: Links have been removed),
There is a lovely exchange in Tom Stoppard’s play about consciousness, The Hard Problem, when an atheist has been sneering at his girlfriend for praying. It is, he says, an utterly meaningless activity. Right, she says, then do one thing for me: pray! I can’t do that, he replies. It would betray all I believe in.
So prayer can have meanings, and enormously important ones, even for people who are certain that it doesn’t have the meaning it is meant to have. In that sense, your really convinced atheist is much more religious than someone who goes along with all the prayers just because that’s what everyone does, without for a moment supposing the action means anything more than asking about the weather.
The Hard Problem of the play’s title is a phrase coined by the Australian philosopher David Chalmers to describe the way in which consciousness arises from a physical world. What makes it hard is that we don’t understand it. What makes it a problem is slightly different. It isn’t the fact of consciousness, but our representations of consciousness, that give rise to most of the difficulties. We don’t know how to fit the first-person perspective into the third-person world that science describes and explores. But this isn’t because they don’t fit: it’s because we don’t understand how they fit. For some people, this becomes a question of consuming interest.
There are also a couple of videos of Tom Stoppard, the playwright, discussing his play with various interested parties. The first is with Nicholas Hytner, the director at the National Theatre who staged the debut run: https://www.youtube.com/watch?v=s7J8rWu6HJg (it runs approximately 40 mins.). Then there’s the chat Stoppard has with the previously mentioned philosopher, David Chalmers: https://www.youtube.com/watch?v=4BPY2c_CiwA (this runs approximately 1 hr. 32 mins.).
I gather ‘consciousness’ is a hot topic these days and, in the vernacular of the 1960s, I guess you could describe all of this as ‘expanding our consciousness’. Have a nice weekend!
I have three news bits about legal issues that are arising as a consequence of emerging technologies.
Deep neural networks, art, and copyright
Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka
Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,
In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”
With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.
Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.
For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.
These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.
DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.
Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.
The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
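The training loop the release describes (feed inputs through layers, compare actual outputs to expected ones, correct the predictive error through repetition and optimization) can be sketched in a few lines of NumPy. This toy two-layer network and the XOR data are purely illustrative choices of mine, not anything from Deltorn’s paper:

```python
import numpy as np

# Toy illustration of the training loop described above: inputs pass
# through layers, actual outputs are compared to expected ones, and the
# predictive error is corrected through repetition and optimization.
rng = np.random.default_rng(0)

# XOR: a pattern one layer alone cannot capture, but a deeper network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # first layer: coarse features
W2 = rng.normal(size=(8, 1))  # second layer: higher abstraction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1)    # layer-by-layer refinement of the input
    out = sigmoid(h @ W2)  # actual output
    err = out - y          # compare with the expected output
    losses.append(float(np.mean(err ** 2)))
    # Propagate the predictive error backwards and correct the weights.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The loss shrinking over repetitions is the “optimizing their learning curve” the release refers to; DeepDream-style image generation runs this machinery in reverse, but the error-correction principle is the same.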
Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.
The originality of DNN creations is a combined product of technological automation on the one hand and human inputs and decisions on the other.
DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to their work – copyright protection.
Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.
Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.
Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.
Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.
The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.
In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.
DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.
The Fifth Annual Conference on Governance of Emerging Technologies:
Law, Policy and Ethics held at the new
Beus Center for Law & Society in Phoenix, AZ
May 17-19, 2017!
Call for Abstracts – Now Closed
The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.
Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law
Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan
Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence
Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)
Innovation – Responsible and/or Permissionless
Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences
Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University
Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University
Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University
Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law
Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence
George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University
Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge
Responsible Development of AI
Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University
John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University
Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics
*Current Student / ASU Law Alumni Registration: $50.00
^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)
There you have it.
Neuro-techno future laws
I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,
New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.
The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.
Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”
Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.
Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”
The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.
International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.
Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”
The last time I featured memristors and a neuronal network it was in an April 22, 2016 posting about Russian research in that field. This latest work comes from the UK’s University of Southampton. From a Sept. 27, 2016 news item on phys.org,
New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be the ‘missing link’ in the development of implants that use electrical signals from the brain to help treat medical conditions.
Monitoring neuronal cell activity is fundamental to neuroscience and the development of neuroprosthetics – biomedically engineered devices that are driven by neural activity. However, a persistent problem is the device being able to process the neural data in real-time, which imposes restrictive requirements on bandwidth, energy and computation capacity.
In a new study, published in Nature Communications, the researchers showed that memristors could provide real-time processing of neuronal signals (spiking events) leading to efficient data compression and the potential to develop more precise and affordable neuroprosthetics and bioelectronic medicines.
Memristors are electrical components that limit or regulate the flow of electrical current in a circuit and can remember the amount of charge that has flowed through them, retaining that data even when the power is turned off.
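That charge-dependent, persistent resistance can be sketched with a much-simplified linear ion-drift model (the classic textbook picture of a memristor); all the parameter values below are hypothetical, chosen only to make the behaviour visible:

```python
# Simplified linear ion-drift memristor model: the resistance depends on
# the total charge that has flowed through the device, and the state
# persists when the current stops (i.e. when the power is turned off).
R_ON, R_OFF = 100.0, 16_000.0  # fully-doped / undoped resistances (ohms, hypothetical)
K = 1e4                        # state-change rate per coulomb (hypothetical)

w = 0.1  # internal state in [0, 1]: fraction of the doped region

def resistance(w):
    # Two regions in series: doped (low resistance) and undoped (high).
    return R_ON * w + R_OFF * (1.0 - w)

def apply_current(w, i_amps, dt, steps):
    """Integrate the state: dw/dt = K * i(t), clipped to [0, 1]."""
    for _ in range(steps):
        w = min(1.0, max(0.0, w + K * i_amps * dt))
    return w

r_before = resistance(w)
w = apply_current(w, i_amps=1e-4, dt=1e-3, steps=500)  # drive current through
r_after = resistance(w)
w = apply_current(w, i_amps=0.0, dt=1e-3, steps=500)   # power off: no current
r_later = resistance(w)

print(f"before: {r_before:.0f} ohms, after: {r_after:.0f} ohms, "
      f"after power-off: {r_later:.0f} ohms")
```

The resistance drops while current flows and then stays put with the power off, which is the “memory” in memristor. A Mott memristor adds a temperature-driven resistance change on top of this charge-driven one.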
Lead author Isha Gupta, Postgraduate Research Student at the University of Southampton, said: “Our work can significantly contribute towards further enhancing the understanding of neuroscience, developing neuroprosthetics and bio-electronic medicines by building tools essential for interpreting the big data in a more effective way.”
The research team developed a nanoscale Memristive Integrating Sensor (MIS) into which they fed a series of voltage-time samples, which replicated neuronal electrical activity.
Acting like synapses in the brain, the metal-oxide MIS was able to encode and compress (up to 200 times) neuronal spiking activity recorded by multi-electrode arrays. Besides addressing the bandwidth constraints, this approach was also very power efficient – the power needed per recording channel was up to 100 times less when compared to current best practice.
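The compression idea (integrate the incoming voltage and emit an event only when a threshold is crossed, rather than streaming every raw sample) can be mimicked in software. This sketch is a loose analogy of my own with made-up signal levels and thresholds, not the device physics reported in Nature Communications:

```python
import numpy as np

# Software analogy of the Memristive Integrating Sensor idea: instead of
# transmitting every voltage-time sample, integrate the input and emit an
# event only when the accumulated signal crosses a threshold. The event
# stream is far smaller than the raw trace.
rng = np.random.default_rng(1)

n = 10_000
trace = rng.normal(0.0, 0.05, n)  # background noise (volts, illustrative)
spike_times = [1000, 4000, 7500]
for t in spike_times:
    trace[t:t + 20] += 1.0        # injected "neuronal spikes"

def integrate_and_fire(trace, threshold=5.0, leak=0.95):
    state, events = 0.0, []
    for t, v in enumerate(trace):
        state = state * leak + max(v, 0.0)  # leaky integration of the input
        if state >= threshold:
            events.append(t)                # emit one event, reset the state
            state = 0.0
    return events

events = integrate_and_fire(trace)
ratio = len(trace) / max(len(events), 1)
print(f"{len(trace)} samples -> {len(events)} events ({ratio:.0f}x compression)")
```

Only the spike bursts push the integrator over threshold, so thousands of raw samples collapse into a handful of events, which is the sense in which the MIS relieves the bandwidth constraint on multi-electrode recordings.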
Co-author Dr Themis Prodromakis, Reader in Nanoelectronics and EPSRC Fellow in Electronics and Computer Science at the University of Southampton said: “We are thrilled that we succeeded in demonstrating that these emerging nanoscale devices, despite being rather simple in architecture, possess ultra-rich dynamics that can be harnessed beyond the obvious memory applications to address the fundamental constraints in bandwidth and power that currently prohibit scaling neural interfaces beyond 1,000 recording channels.”
The Prodromakis Group at the University of Southampton is acknowledged as world-leading in this field, collaborating among others with Leon Chua (a Diamond Jubilee Visiting Academic at the University of Southampton), who theoretically predicted the existence of memristors in 1971.