Tag Archives: University of Pittsburgh

Optical memristors and neuromorphic computing

A June 5, 2023 news item on Nanowerk announced a paper that reviews the state of the art of optical memristors (Note: Links have been removed),

AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system – both hardware and software combined – has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.

Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.

A new review article published in Nature Photonics (“Integrated Optical Memristors”), sheds light on the evolution of this technology—and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices which are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.

A June 2, 2023 University of Pittsburgh news release (also on EurekAlert but published June 5, 2023), which originated the news item, provides more detail,

“Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.” 

The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state-of-the-art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue that future research should address. 

“Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood. 

“One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”

Using Light to Revolutionize Computing

Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing. 

Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.
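The "integrate-and-fire" behaviour mentioned above can be illustrated with a toy model. The sketch below is purely conceptual (the leak factor, weights, and threshold are invented for illustration and are not taken from the paper): a neuron's membrane potential leaks over time, accumulates weighted input spikes through its synapses, and emits a spike of its own when a threshold is crossed.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks,
# accumulates weighted input spikes, and fires on crossing a threshold.
# All parameters are illustrative, not taken from the paper.

def lif_run(spike_trains, weights, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over time; return the output spike times."""
    v = 0.0
    out = []
    for t, inputs in enumerate(spike_trains):
        v = leak * v + sum(w * s for w, s in zip(weights, inputs))
        if v >= threshold:
            out.append(t)   # neuron fires
            v = 0.0         # reset after the spike
    return out

# Three input channels; the weights play the role of (mem)ristive synapses.
spikes = [(1, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 0), (1, 0, 1)]
print(lif_run(spikes, weights=[0.4, 0.3, 0.5]))  # → [1, 2]
```

A nonvolatile memristive synapse would store each weight as a device state that persists without power, which is what makes such hardware attractive for this class of architecture.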

Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence. 

“We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor–something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

“Integrated Optical Memristors” (DOI: 10.1038/s41566-023-01217-w) was published in Nature Photonics and is coauthored by senior author Harish Bhaskaran at the University of Oxford, Wolfram Pernice at Heidelberg University, and Carlos Ríos at the University of Maryland.

Despite including that final paragraph, I’m also providing a link to and a citation for the paper,

Integrated optical memristors by Nathan Youngblood, Carlos A. Ríos Ocampo, Wolfram H. P. Pernice & Harish Bhaskaran. Nature Photonics volume 17, pages 561–572 (2023) DOI: https://doi.org/10.1038/s41566-023-01217-w Published online: 29 May 2023 Issue Date: July 2023

This paper is behind a paywall.

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches, to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego. 

Chip performance

Researchers measured the chip’s energy efficiency by a measure known as energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips. 
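The EDP metric itself is simple arithmetic: energy per operation multiplied by the time per operation. A minimal sketch, with made-up placeholder numbers rather than measurements from the paper:

```python
# Energy-delay product (EDP): energy per operation times time per operation.
# The numbers below are invented placeholders, not figures from the paper.

def edp(energy_joules, delay_seconds):
    return energy_joules * delay_seconds

conventional = edp(2.0e-12, 10e-9)   # hypothetical baseline chip
low_power    = edp(1.0e-12, 10e-9)   # hypothetical chip using half the energy

# Lower EDP is better; halving energy at equal speed halves the EDP.
print(conventional / low_power)  # → 2.0
```

Because EDP multiplies the two costs, a chip can win either by using less energy per operation or by finishing operations faster, which is why it is a common figure of merit for comparing accelerators.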

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. In addition, the chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured are obtained directly on the hardware. In many previous works on compute-in-memory chips, AI benchmark results were often obtained partly by software simulation. 

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor for the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and  an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 
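The computation that the crossbar performs in that single analog cycle is a matrix-vector multiply: every output is a weighted sum of all inputs, with the weights stored as device conductances. The digital sketch below shows only the mathematics, not the analog circuit; the conductance values are arbitrary illustrations.

```python
# Sketch of what an RRAM crossbar computes in one analog step:
# each output column sums all inputs weighted by stored conductances.
# The values here are arbitrary illustrations.

def crossbar_mvm(conductances, inputs):
    """conductances[i][j]: weight from input row i to output column j."""
    n_out = len(conductances[0])
    return [sum(conductances[i][j] * inputs[i] for i in range(len(inputs)))
            for j in range(n_out)]

G = [[0.1, 0.2],
     [0.3, 0.4],
     [0.5, 0.6]]
print(crossbar_mvm(G, [1.0, 2.0, 3.0]))  # ≈ [2.2, 2.8]
```

In the analog array, this whole double loop collapses into one step: Ohm's law does the multiplications and Kirchhoff's current law does the sums, which is the source of the parallelism described above.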

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of the RRAM arrays. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure. 

To make sure that accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines. 

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
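The two mapping strategies can be sketched abstractly. This is an illustrative scheduling toy, not NeuRRAM's actual mapper; the core names, layer names, and batch items are invented for the example.

```python
# Illustrative sketch of the two parallelism strategies described above.
# Core names, layer names, and data items are invented for the example.

def data_parallel(layer, cores, batch):
    """Replicate one layer on several cores; each core gets a slice of the data."""
    k = len(cores)
    return {core: batch[i::k] for i, core in enumerate(cores)}

def model_parallel(layers, cores):
    """Assign different layers to different cores for pipelined inference."""
    return dict(zip(cores, layers))

print(data_parallel("conv1", ["core0", "core1"], ["img0", "img1", "img2", "img3"]))
# → {'core0': ['img0', 'img2'], 'core1': ['img1', 'img3']}
print(model_parallel(["conv1", "conv2", "fc"], ["core0", "core1", "core2"]))
# → {'core0': 'conv1', 'core1': 'conv2', 'core2': 'fc'}
```

Data-parallelism raises throughput on one layer; model-parallelism keeps every core busy on a different pipeline stage, and a real mapper would combine both across the 48 cores.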

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The Team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [US Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation. 

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Resisting silver’s microbial properties?

Yes, it is possible for bacteria to become resistant to silver nanoparticles. However, that yes comes with some qualifications according to a July 13, 2021 news item on ScienceDaily (Note: Links have been removed),

Antimicrobials are used to kill or slow the growth of bacteria, viruses and other microorganisms. They can be in the form of antibiotics, used to treat bodily infections, or as an additive or coating on commercial products used to keep germs at bay. These life-saving tools are essential to preventing and treating infections in humans, animals and plants, but they also pose a global threat to public health when microorganisms develop resistance to them, a concept known as antimicrobial resistance.

One of the main drivers of antimicrobial resistance is the misuse and overuse of antimicrobial agents, which includes silver nanoparticles, [emphases mine] an advanced material with well-documented antimicrobial properties. It is increasingly used in commercial products that boast enhanced germ-killing performance — it has been woven into textiles, coated onto toothbrushes, and even mixed into cosmetics as a preservative.

The Gilbertson Group at the University of Pittsburgh [Pennsylvania, US] Swanson School of Engineering used laboratory strains of E.coli to better understand bacterial resistance to silver nanoparticles and attempt to get ahead of the potential misuse of this material. The team recently published their results in Nature Nanotechnology.

Caption: A depiction of hyper-motile E.coli, a strain of bacteria found to resist silver nanoparticles’ antimicrobial properties after repeated exposure. Credit: Lisa Stabryla/University of Pittsburgh.

A July 13, 2021 University of Pittsburgh news release (also on EurekAlert), which originated the news item, provides more insight into the research,

“Bacterial resistance to silver nanoparticles is understudied, so our group looked at the mechanisms behind this event,” said Lisa Stabryla, lead author on the paper and a recent civil and environmental PhD graduate at Pitt. “This is a promising innovation to add to our arsenal of antimicrobials, but we need to consciously study it and perhaps regulate its use to avoid decreased efficacy like we’ve seen with some common antibiotics.”

Stabryla exposed E.coli to 20 consecutive days of silver nanoparticles and monitored bacterial growth over time. Nanoparticles are roughly 50 times smaller than a bacterium.

“In the beginning, bacteria could only survive at low concentrations of silver nanoparticles, but as the experiment continued, we found that they could survive at higher doses,” Stabryla noted. “Interestingly, we found that bacteria developed resistance to the silver nanoparticles but not their released silver ions alone.”

The group sequenced the genome of the E.coli that had been exposed to silver nanoparticles and found a mutation in a gene that corresponds to an efflux pump that pushes heavy metal ions out of the cell.

“It is possible that some form of silver is getting into the cell, and when it arrives, the cell mutates to quickly pump it out,” she added. “More work is needed to determine if researchers can perhaps overcome this mechanism of resistance through particle design.”

The group then studied two different types of E.coli: a hyper-motile strain that swims through its environment more quickly than normally motile bacteria and a non-motile strain that does not have physical means for moving around. They found that only the hyper-motile strain developed resistance.

“This finding could suggest that silver nanoparticles may be a good option to target certain types of bacteria, particularly non-motile strains,” Stabryla said.

In the end, bacteria will still find a way to evolve and evade antimicrobials. The hope is that an understanding of the mechanisms that lead to this evolution and a mindful use of new antimicrobials will lessen the impact of antimicrobial resistance.

“We are the first to look at bacterial motility effects on the ability to develop resistance to silver nanoparticles,” said Leanne Gilbertson, assistant professor of civil and environmental engineering at Pitt. “The observed difference is really interesting and merits further investigation to understand it and how to link the genetic response – the efflux pump regulation – to the bacteria’s ability to move in the system.

“The results are promising for being able to tune particle properties for a desired response, such as high efficacy while avoiding resistance.”

Here’s a link to and a citation for the paper,

Role of bacterial motility in differential resistance mechanisms of silver nanoparticles and silver ions by Lisa M. Stabryla, Kathryn A. Johnston, Nathan A. Diemler, Vaughn S. Cooper, Jill E. Millstone, Sarah-Jane Haig & Leanne M. Gilbertson. Nature Nanotechnology (2021) DOI: https://doi.org/10.1038/s41565-021-00929-w Published: 21 June 2021

This paper appears to be open access.

The glorious glasswing butterfly and superomniphobic glass

This is not the first time the glasswing butterfly has inspired some new technology. Last time, it was an eye implant,

The clear wings make this South-American butterfly hard to see in flight, a successful defense mechanism. Credit: Eddy Van 3000 from in Flanders fields – B – United Tribes ov Europe – the wings-become-windows butterfly. [downloaded from https://commons.wikimedia.org/wiki/Category:Greta_oto#/media/File:South-American_butterfly.jpg]

You’ll find that image and more in my May 22, 2018 posting about the eye implant. Don’t miss scrolling down to the video which features the butterfly fluttering its wings in the first few seconds.

Getting back to the glasswing butterfly’s latest act of inspiration a July 11, 2019 news item on ScienceDaily announces the work,

Glass for technologies like displays, tablets, laptops, smartphones, and solar cells needs to pass light through, but could benefit from a surface that repels water, dirt, oil, and other liquids. Researchers from the University of Pittsburgh’s Swanson School of Engineering have created a nanostructured glass, inspired by the wings of the glasswing butterfly, that is not only very clear across a wide variety of wavelengths and angles but also antifogging.

A July 11, 2019 University of Pittsburgh news release (also on EurekAlert), which originated the news item, provides more technical detail about the new glass,

The nanostructured glass has random nanostructures, like the glasswing butterfly wing, that are smaller than the wavelengths of visible light. This allows the glass to have a very high transparency of 99.5% when the random nanostructures are on both sides of the glass. This high transparency can reduce the brightness and power demands on displays that could, for example, extend battery life. The glass is antireflective across higher angles, improving viewing angles. The glass also has low haze, less than 0.1%, which results in very clear images and text.

“The glass is superomniphobic, meaning it repels a wide variety of liquids such as orange juice, coffee, water, blood, and milk,” explains Sajad Haghanifar, lead author of the paper and doctoral candidate in industrial engineering at Pitt. “The glass is also anti-fogging, as water condensation tends to easily roll off the surface, and the view through the glass remains unobstructed. Finally, the nanostructured glass is durable from abrasion due to its self-healing properties–abrading the surface with a rough sponge damages the coating, but heating it restores it to its original function.”

Natural surfaces like lotus leaves, moth eyes and butterfly wings display omniphobic properties that make them self-cleaning, bacterial-resistant and water-repellant–adaptations for survival that evolved over millions of years. Researchers have long sought inspiration from nature to replicate these properties in a synthetic material, and even to improve upon them. While the team could not rely on evolution to achieve these results, they instead utilized machine learning.

“Something significant about the nanostructured glass research, in particular, is that we partnered with SigOpt to use machine learning to reach our final product,” says Paul Leu, PhD, associate professor of industrial engineering, whose lab conducted the research. Dr. Leu holds secondary appointments in mechanical engineering and materials science and chemical engineering. “When you create something like this, you don’t start with a lot of data, and each trial takes a great deal of time. We used machine learning to suggest variables to change, and it took us fewer tries to create this material as a result.”

“Bayesian optimization and active search are the ideal tools to explore the balance between transparency and omniphobicity efficiently, that is, without needing thousands of fabrications, requiring hundreds of days.” said Michael McCourt, PhD, research engineer at SigOpt. Bolong Cheng, PhD, fellow research engineer at SigOpt, added, “Machine learning and AI strategies are only relevant when they solve real problems; we are excited to be able to collaborate with the University of Pittsburgh to bring the power of Bayesian active learning to a new application.”
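The idea behind the Bayesian optimization the researchers describe is to use a cheap surrogate model of past trials to pick the next fabrication, balancing exploitation of good regions against exploration of untried ones. The sketch below is not SigOpt's API and not the team's actual pipeline: it uses a deliberately crude nearest-neighbour surrogate (a stand-in for a Gaussian process) on an invented 1-D merit score representing the transparency-vs-omniphobicity trade-off.

```python
import random

# Minimal Bayesian-style active search on a toy 1-D objective, standing in
# for the transparency-vs-omniphobicity trade-off. NOT SigOpt's API; the
# surrogate (nearest-neighbour mean + distance as uncertainty) is a crude
# stand-in for a Gaussian process.

def objective(x):
    # Hypothetical merit score with a single peak at x = 0.6.
    return 1.0 - (x - 0.6) ** 2

def acquisition(x, observed, kappa=0.5):
    nearest = min(observed, key=lambda p: abs(p[0] - x))
    mean, dist = nearest[1], abs(nearest[0] - x)
    return mean + kappa * dist   # favour regions that look good AND unexplored

random.seed(0)
observed = [(x, objective(x)) for x in (0.1, 0.9)]  # two initial "fabrications"
for _ in range(10):                                 # ten more budgeted trials
    candidates = [random.random() for _ in range(100)]
    x_next = max(candidates, key=lambda x: acquisition(x, observed))
    observed.append((x_next, objective(x_next)))

best = max(observed, key=lambda p: p[1])
print(round(best[0], 2), round(best[1], 2))  # best recipe found and its score
```

The point of the technique, as the quote notes, is sample efficiency: each "fabrication" is expensive, so the acquisition function concentrates the handful of trials where they are most informative rather than sweeping thousands of parameter combinations.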

Here’s an image illustrating the work from the researchers,

Courtesy: University of Pittsburgh

Here’s a link to and a citation for the paper,

Creating glasswing butterfly-inspired durable antifogging superomniphobic supertransmissive, superclear nanostructured glass through Bayesian learning and optimization by Sajad Haghanifar, Michael McCourt, Bolong Cheng, Jeffrey Wuenschell, Paul Ohodnicki, and Paul W. Leu. Mater. Horiz., 2019, Advance Article DOI: 10.1039/C9MH00589G first published on 10 Jun 2019

This paper is behind a paywall. One more thing, here’s SigOpt, the company the scientists partnered with.

If only AI had a brain (a Wizard of Oz reference?)

The title, which I’ve borrowed from the news release, is the only Wizard of Oz reference that I can find but it works so well, you don’t really need anything more.

Moving onto the news, a July 23, 2018 news item on phys.org announces new work on developing an artificial synapse (Note: A link has been removed),

Digital computation has rendered nearly all forms of analog computation obsolete since as far back as the 1950s. However, there is one major exception that rivals the computational power of the most advanced digital devices: the human brain.

The human brain is a dense network of neurons. Each neuron is connected to tens of thousands of others, and they use synapses to fire information back and forth constantly. With each exchange, the brain modulates these connections to create efficient pathways in direct response to the surrounding environment. Digital computers live in a world of ones and zeros. They perform tasks sequentially, following each step of their algorithms in a fixed order.

A team of researchers from Pitt’s [University of Pittsburgh] Swanson School of Engineering has developed an “artificial synapse” that does not process information like a digital computer but rather mimics the analog way the human brain completes tasks. Led by Feng Xiong, assistant professor of electrical and computer engineering, the researchers published their results in the recent issue of the journal Advanced Materials (DOI: 10.1002/adma.201802353). His Pitt co-authors include Mohammad Sharbati (first author), Yanhao Du, Jorge Torres, Nolan Ardolino, and Minhee Yun.

A July 23, 2018 University of Pittsburgh Swanson School of Engineering news release (also on EurekAlert), which originated the news item, provides further information,

“The analog nature and massive parallelism of the brain are partly why humans can outperform even the most powerful computers when it comes to higher order cognitive functions such as voice recognition or pattern recognition in complex and varied data sets,” explains Dr. Xiong.

An emerging field called “neuromorphic computing” focuses on the design of computational hardware inspired by the human brain. Dr. Xiong and his team built graphene-based artificial synapses in a two-dimensional honeycomb configuration of carbon atoms. Graphene’s conductive properties allowed the researchers to finely tune its electrical conductance, which is the strength of the synaptic connection or the synaptic weight. The graphene synapse demonstrated excellent energy efficiency, just like biological synapses.
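Tuning a device's conductance to serve as a synaptic weight can be pictured with a toy model. The sketch below is conceptual only; the step size, bounds, and class name are invented, and a real electrochemical synapse has much richer (and less ideal) dynamics than these uniform steps.

```python
# Toy model of an electrochemically tunable synapse: programming pulses
# nudge the conductance (the synaptic weight) up or down between fixed
# bounds. Step size and bounds are invented for illustration.

class ToySynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.05):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def potentiate(self):   # "strengthen" pulse raises conductance
        self.g = min(self.g_max, self.g + self.step)

    def depress(self):      # "weaken" pulse lowers conductance
        self.g = max(self.g_min, self.g - self.step)

s = ToySynapse()
for _ in range(3):          # three potentiating pulses
    s.potentiate()
print(round(s.g, 2))        # → 0.65
```

In a learning system, a training rule decides when to issue potentiating or depressing pulses, so the array of conductances gradually encodes the network's learned weights.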

In the recent resurgence of artificial intelligence, computers can already replicate the brain in certain ways, but it takes about a dozen digital devices to mimic one analog synapse. The human brain has hundreds of trillions of synapses for transmitting information, so building a brain with digital devices is seemingly impossible, or at the very least, not scalable. Xiong Lab’s approach provides a possible route for the hardware implementation of large-scale artificial neural networks.

According to Dr. Xiong, artificial neural networks based on the current CMOS (complementary metal-oxide semiconductor) technology will always have limited functionality in terms of energy efficiency, scalability, and packing density. “It is really important we develop new device concepts for synaptic electronics that are analog in nature, energy-efficient, scalable, and suitable for large-scale integrations,” he says. “Our graphene synapse seems to check all the boxes on these requirements so far.”

With graphene’s inherent flexibility and excellent mechanical properties, these graphene-based neural networks can be employed in flexible and wearable electronics to enable computation at the “edge of the internet”–places where computing devices such as sensors make contact with the physical world.

“By empowering even a rudimentary level of intelligence in wearable electronics and sensors, we can track our health with smart sensors, provide preventive care and timely diagnostics, monitor plants growth and identify possible pest issues, and regulate and optimize the manufacturing process–significantly improving the overall productivity and quality of life in our society,” Dr. Xiong says.

The development of an artificial brain that functions like the analog human brain still requires a number of breakthroughs. Researchers need to find the right configurations to optimize these new artificial synapses. They will need to make them compatible with an array of other devices to form neural networks, and they will need to ensure that all of the artificial synapses in a large-scale neural network behave in the same exact manner. Despite the challenges, Dr. Xiong says he’s optimistic about the direction they’re headed.

“We are pretty excited about this progress since it can potentially lead to the energy-efficient, hardware implementation of neuromorphic computing, which is currently carried out in power-intensive GPU clusters. The low-power trait of our artificial synapse and its flexible nature make it a suitable candidate for any kind of A.I. device, which would revolutionize our lives, perhaps even more than the digital revolution we’ve seen over the past few decades,” Dr. Xiong says.

There is a visual representation of this artificial synapse,

Caption: Pitt engineers built a graphene-based artificial synapse in a two-dimensional, honeycomb configuration of carbon atoms that demonstrated excellent energy efficiency comparable to biological synapses Credit: Swanson School of Engineering

Here’s a link to and a citation for the paper,

Low‐Power, Electrochemically Tunable Graphene Synapses for Neuromorphic Computing by Mohammad Taghi Sharbati, Yanhao Du, Jorge Torres, Nolan D. Ardolino, Minhee Yun, Feng Xiong. Advanced Materials DOI: https://doi.org/10.1002/adma.201802353 First published [online]: 23 July 2018

This paper is behind a paywall.

I did look at the paper and if I understand it rightly, this approach is different from the memristor-based approaches that I have so often featured here. More than that I cannot say.

Finally, the Wizard of Oz song ‘If I Only Had a Brain’,

Yes! Art, genetic modifications, gene editing, and xenotransplantation at the Vancouver Biennale (Canada)

Patricia Piccinini’s Curious Imaginings Courtesy: Vancouver Biennale [downloaded from http://dailyhive.com/vancouver/vancouver-biennale-unsual-public-art-2018/]

Up to this point, I’ve been a little jealous of the Art/Sci Salon’s (Toronto, Canada) January 2018 workshops for artists and discussions about CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 and its social implications. (See my January 10, 2018 posting for more about the events.) Now, it seems Vancouver may be in line for its ‘own’ discussion about CRISPR and the implications of gene editing. The image you saw (above) represents one of the installations being hosted by the 2018 – 2020 edition of the Vancouver Biennale.

While this posting is mostly about the Biennale and Piccinini’s work, there is a ‘science’ subsection featuring the science of CRISPR and xenotransplantation. Getting back to the Biennale and Piccinini: A major public art event since 1988, the Vancouver Biennale has hosted over 91 outdoor sculptures and new media works by more than 78 participating artists from over 25 countries and four continents.

Quickie description of the 2018 – 2020 Vancouver Biennale

The latest edition of the Vancouver Biennale was featured in a June 6, 2018 news item on the Daily Hive (Vancouver),

The Vancouver Biennale will be bringing new —and unusual— works of public art to the city beginning this June.

The theme for this season’s Vancouver Biennale exhibition is “re-IMAGE-n” and it kicks off on June 20 [2018] in Vanier Park with Saudi artist Ajlan Gharem’s Paradise Has Many Gates.

Gharem’s architectural chain-link sculpture resembles a traditional mosque. The piece is meant to challenge notions of religious orthodoxy and encourages individuals to imagine a space free of Islamophobia.

Melbourne artist Patricia Piccinini’s Curious Imaginings is expected to be one of the most talked about installations of the exhibit. Her style of “oddly captivating, somewhat grotesque, human-animal hybrid creature” is meant to be shocking and thought-provoking.

Piccinini’s interactive [emphasis mine] experience will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

Piccinini’s work will be displayed in the 105-year-old Patricia Hotel in Vancouver’s Strathcona neighbourhood. The 90-day ticketed exhibition [emphasis mine] is scheduled to open this September [2018].

Given that this blog is focused on nanotechnology and other emerging technologies such as CRISPR, I’m focusing on Piccinini’s work and its art/science or sci-art status. This image from the GOMA Gallery where Piccinini’s ‘Curious Affection‘ installation is being shown from March 24 – Aug. 5, 2018 in Brisbane, Queensland, Australia may give you some sense of what one of her installations is like,

Courtesy: Queensland Art Gallery | Gallery of Modern Art (QAGOMA)

I spoke with Serena at the Vancouver Biennale office and asked about the ‘interactive’ aspect of Piccinini’s installation. She suggested the term ‘immersive’ as an alternative. In other words, you won’t be playing with the sculptures or pressing buttons and interacting with computer screens or robots. She also noted that the ticket prices have not been set yet and that they are currently developing events focused on the issues raised by the installation. She knew that 2018 is the 200th anniversary of the publication of Mary Shelley’s Frankenstein but I’m not sure how the Biennale folks plan (or don’t plan) to integrate any recognition of the novel’s impact on the discussions about ‘new’ technologies. They expect Piccinini will visit Vancouver. (Note 1: Piccinini’s work can also be seen in a group exhibition titled Frankenstein’s Birthday Party at the Hosfelt Gallery in San Francisco (California, US) from June 23 – August 11, 2018. Note 2: I featured a number of international events commemorating the 200th anniversary of the publication of Mary Shelley’s novel, Frankenstein, in my Feb. 26, 2018 posting. Note 3: The term ‘Frankenfoods’ helped to shape the discussion of genetically modified organisms and the food supply on this planet. It was a wildly successful campaign for activists, affecting legislation in some areas of research. Scientists have not been as enthusiastic about its effects. My January 15, 2009 posting briefly traces a history of the term.)

The 2018 – 2020 Vancouver Biennale and science

A June 7, 2018 Vancouver Biennale news release provides more detail about the current series of exhibitions,

The Biennale is also committed to presenting artwork at the cutting edge of discussion and in keeping with the STEAM (science, technology, engineering, arts, math[ematics]) approach to integrating the arts and sciences. In August [2018], Colombian/American visual artist Jessica Angel will present her monumental installation Dogethereum Bridge at Hinge Park in Olympic Village. Inspired by blockchain technology, the artwork’s design was created through the integration of scientific algorithms, new developments in technology, and the arts. This installation, which will serve as an immersive space and collaborative hub for artists and technologists, will host a series of activations with blockchain as the inspirational jumping-off point.

In what is expected to become one of North America’s most talked-about exhibitions of the year, Melbourne artist Patricia Piccinini’s Curious Imaginings will see the intersection of art, science, and ethics. For the first time in the Biennale’s fifteen years of creating transformative experiences, and in keeping with the 2018-2020 theme of “re-IMAGE-n,” the Biennale will explore art in unexpected places by exhibiting in unconventional interior spaces. The hyperrealist “world of oddly captivating, somewhat grotesque, human-animal hybrid creatures” will be the artist’s first exhibit in a non-museum setting, transforming a wing of the 105-year-old Patricia Hotel. Situated in Vancouver’s oldest neighbourhood of Strathcona, Piccinini’s interactive experience will “challenge us to explore the social impacts of emerging bio-technology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.” In this intimate hotel setting located in a neighborhood continually undergoing its own change, Curious Imaginings will empower visitors to personally consider questions posed by the exhibition, including the promises and consequences of genetic research and human interference. …

There are other pieces being presented at the Biennale but my special interest is in the art/sci pieces and, at this point, CRISPR.

Piccinini in more depth

You can find out more about Patricia Piccinini in her biography on the Vancouver Biennale website but I found this Char Larsson April 7, 2018 article for the Independent (UK) more informative (Note: A link has been removed),

Patricia Piccinini’s sculptures are deeply disquieting. Walking through Curious Affection, her new solo exhibition at Brisbane’s Gallery of Modern Art, is akin to entering a science laboratory full of DNA experiments. Made from silicone, fibreglass and even human hair, her sculptures are breathtakingly lifelike, however, we can’t be sure what life they are like. The artist creates an exuberant parallel universe where transgenic experiments flourish and human evolution has given way to genetic engineering and DNA splicing.

Curious Affection is a timely and welcome recognition of Piccinini’s enormous contribution to contemporary art, reaching back to the mid-1990s. Working across a variety of mediums including photography, video and drawing, she is perhaps best known for her hyperreal creations.

As a genre, hyperrealism depends on the skill of the artist to create the illusion of reality. To be truly successful, it must convince the spectator of its realness. Piccinini acknowledges this demand, but with a delightful twist. The excruciating attention to detail deliberately solicits our desire to look, only to generate unease, as her sculptures are imbued with a fascinating otherness. Part human, part animal, the works are uncannily familiar, but also alarmingly “other”.

Inspired by advances in genetically modified pigs to generate replacement organs for humans [also known as xenotransplantation], we are reminded that Piccinini has always been at the forefront of debates concerning the possibilities of science, technology and DNA cloning. She does so, however, with a warm affection and sense of humour, eschewing the hysterical anxiety frequently accompanying these scientific developments.

Beyond the astonishing level of detail achieved by working with silicone and fibreglass, there is an ethics at work here. Piccinini is asking us not to avert our gaze from the other, and in doing so, to develop empathy and understanding through the encounter.

I encourage anyone who’s interested to read Larsson’s entire piece (April 7, 2018 article).

According to her Wikipedia entry, Piccinini works in a variety of media including video, sound, sculpture, and more. She also has her own website.

Gene editing and xenotransplantation

Sarah Zhang’s June 8, 2018 article for The Atlantic provides a peek at the extraordinary degree of interest and competition in the field of gene editing and CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 research (Note: A link has been removed),

China Is Genetically Engineering Monkeys With Brain Disorders

Guoping Feng applied to college the first year that Chinese universities reopened after the Cultural Revolution. It was 1977, and more than a decade’s worth of students—5.7 million—sat for the entrance exams. Feng was the only one in his high school to get in. He was assigned—by chance, essentially—to medical school. Like most of his contemporaries with scientific ambitions, he soon set his sights on graduate studies in the United States. “China was really like 30 to 50 years behind,” he says. “There was no way to do cutting-edge research.” So in 1989, he left for Buffalo, New York, where for the first time he saw snow piled several feet high. He completed his Ph.D. in genetics at the State University of New York at Buffalo.

Feng is short and slim, with a monk-like placidity and a quick smile, and he now holds an endowed chair in neuroscience at MIT, where he focuses on the genetics of brain disorders. His 45-person lab is part of the McGovern Institute for Brain Research, which was established in 2000 with the promise of a $350 million donation, the largest ever received by the university. In short, his lab does not lack for much.

Yet Feng now travels to China several times a year, because there, he can pursue research he has not yet been able to carry out in the United States. [emphasis mine] …

Feng had organized a symposium at SIAT [Shenzhen Institutes of Advanced Technology], and he was not the only scientist who traveled all the way from the United States to attend: He invited several colleagues as symposium speakers, including a fellow MIT neuroscientist interested in tree shrews, a tiny mammal related to primates and native to southern China, and Chinese-born neuroscientists who study addiction at the University of Pittsburgh and SUNY Upstate Medical University. Like Feng, they had left China in the ’80s and ’90s, part of a wave of young scientists in search of better opportunities abroad. Also like Feng, they were back in China to pursue a type of cutting-edge research too expensive and too impractical—and maybe too ethically sensitive—in the United States.

Here’s what precipitated Feng’s work in China, (from Zhang’s article; Note: Links have been removed)

At MIT, Feng’s lab worked on genetically engineering a monkey species called marmosets, which are very small and genuinely bizarre-looking. They are cheaper to keep due to their size, but they are a relatively new lab animal, and they can be difficult to train on lab tasks. For this reason, Feng also wanted to study Shank3 on macaques in China. Scientists have been cataloging the social behavior of macaques for decades, making it an obvious model for studies of disorders like autism that have a strong social component. Macaques are also more closely related to humans than marmosets, making their brains a better stand-in for those of humans.

The process of genetically engineering a macaque is not trivial, even with the advanced tools of CRISPR. Researchers begin by dosing female monkeys with the same hormones used in human in vitro fertilization. They then collect and fertilize the eggs, and inject the resulting embryos with CRISPR proteins using a long, thin glass needle. Monkey embryos are far more sensitive than mice embryos, and can be affected by small changes in the pH of the injection or the concentration of CRISPR proteins. Only some of the embryos will have the desired mutation, and only some will survive once implanted in surrogate mothers. It takes dozens of eggs to get to just one live monkey, so making even a few knockout monkeys required the support of a large breeding colony.

The first Shank3 macaque was born in 2015. Four more soon followed, bringing the total to five.

To visit his research animals, Feng now has to fly 8,000 miles across 12 time zones. It would be a lot more convenient to carry out his macaque research in the United States, of course, but so far, he has not been able to.

He originally inquired about making Shank3 macaques at the New England Primate Research Center, one of eight national primate research centers then funded by the National Institutes of Health in partnership with a local institution (Harvard Medical School, in this case). The center was conveniently located in Southborough, Massachusetts, just 20 miles west of the MIT campus. But in 2013, Harvard decided to shutter the center.

The decision came as a shock to the research community, and it was widely interpreted as a sign of waning interest in primate research in the United States. While the national primate centers have been important hubs of research on HIV, Zika, Ebola, and other diseases, they have also come under intense public scrutiny. Animal-rights groups like the Humane Society of the United States have sent investigators to work undercover in the labs, and the media has reported on monkey deaths in grisly detail. Harvard officially made its decision to close for “financial” reasons. But the announcement also came after the high-profile deaths of four monkeys from improper handling between 2010 and 2012. The deaths sparked a backlash; demonstrators showed up at the gates. The university gave itself two years to wind down their primate work, officially closing the center in 2015.

“They screwed themselves,” Michael Halassa, the MIT neuroscientist who spoke at Feng’s symposium, told me in Shenzhen. Wei-Dong Yao, another one of the speakers, chimed in, noting that just two years later CRISPR has created a new wave of interest in primate research. Yao was one of the researchers at Harvard’s primate center before it closed; he now runs a lab at SUNY Upstate Medical University that uses genetically engineered mouse and human stem cells, and he had come to Shenzhen to talk about restarting his addiction research on primates.
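As an aside, the yield Zhang describes earlier (“dozens of eggs to get to just one live monkey”) is what you would expect when several low-probability steps multiply. A toy calculation makes the point; the per-stage success rates below are invented for illustration and are not from the article or the underlying research:

```python
# Toy yield model for a multi-stage pipeline like the one Zhang describes:
# egg -> injected embryo -> embryo with the desired mutation -> live birth.
# The per-stage success rates are invented for illustration only.
stages = {
    "egg fertilized and successfully injected": 0.70,
    "embryo carries the desired mutation": 0.15,
    "implanted embryo survives to birth": 0.25,
}

overall = 1.0
for stage, p in stages.items():
    overall *= p  # roughly independent stages multiply

eggs_per_monkey = 1.0 / overall
print(f"overall success per egg: {overall:.4f}")
print(f"expected eggs per live knockout monkey: {eggs_per_monkey:.0f}")
```

With these invented rates the pipeline needs roughly 38 eggs per live knockout monkey, i.e., “dozens,” and a small change to any single stage moves that number a lot.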

Here comes the competition (from Zhang’s article; Note: Links have been removed),

While the U.S. government’s biomedical research budget has been largely flat, both national and local governments in China are eager to raise their international scientific profiles, and they are shoveling money into research. A long-rumored, government-sponsored China Brain Project is supposed to give neuroscience research, and primate models in particular, a big funding boost. Chinese scientists may command larger salaries, too: Thanks to funding from the Shenzhen local government, a new principal investigator returning from overseas can get 3 million yuan—almost half a million U.S. dollars—over his or her first five years. China is even finding success in attracting foreign researchers from top U.S. institutions like Yale.

In the past few years, China has seen a miniature explosion of genetic engineering in monkeys. In Kunming, Shanghai, and Guangzhou, scientists have created monkeys engineered to show signs of Parkinson’s, Duchenne muscular dystrophy, autism, and more. And Feng’s group is not even the only one in China to have created Shank3 monkeys. Another group—a collaboration primarily between researchers at Emory University and scientists in China—has done the same.

Chinese scientists’ enthusiasm for CRISPR also extends to studies of humans, which are moving much more quickly, and in some cases under less oversight, than in the West. The first studies to edit human embryos and first clinical trials for cancer therapies using CRISPR have all happened in China. [emphases mine]

Some ethical issues are also covered (from Zhang’s article),

Parents with severely epileptic children had asked him if it would be possible to study the condition in a monkey. Feng told them what he thought would be technically possible. “But I also said, ‘I’m not sure I want to generate a model like this,’” he recalled. Maybe if there were a drug to control the monkeys’ seizures, he said: “I cannot see them seizure all the time.”

But is it ethical, he continued, to let these babies die without doing anything? Is it ethical to generate thousands or millions of mutant mice for studies of brain disorders, even when you know they will not elucidate much about human conditions?

Primates should only be used if other models do not work, says Feng, and only if a clear path forward is identified. The first step in his work, he says, is to use the Shank3 monkeys to identify the changes the mutations cause in the brain. Then, researchers might use that information to find targets for drugs, which could be tested in the same monkeys. He’s talking with the Oregon National Primate Research Center about carrying out similar work in the United States. ….[Note: I have a three-part series about CRISPR and germline editing* in the US, precipitated by research coming out of Oregon, Part 1, which links to the other parts, is here.]

Zhang’s June 8, 2018 article is excellent and I highly recommend reading it.

I touched on the topic of xenotransplantation in a commentary on a book about the science of the television series Orphan Black in a January 31, 2018 posting (Note: A chimera is what you use to incubate a ‘human’ organ for transplantation or, more accurately, xenotransplantation),

On the subject of chimeras, the Canadian Broadcasting Corporation (CBC) featured a January 26, 2017 article about the pig-human chimeras on its website along with a video,

The end

I am very excited to see Piccinini’s work come to Vancouver. There have been a number of wonderful art and art/science installations and discussions here but this is the first one (I believe) to tackle the emerging gene editing technologies and the issues they raise. (It also fits in rather nicely with the 200th anniversary of the publication of Mary Shelley’s Frankenstein which continues to raise issues and stimulate discussion.)

In addition to the ethical issues raised in Zhang’s article, there are some other philosophical questions:

  • what does it mean to be human?
  • if we are going to edit genes to create human/animal hybrids, what are they and how do they fit into our current animal/human schema?
  • are you still human if you’ve had an organ transplant where the organ was incubated in a pig?

There are also going to be legal issues. In addition to any questions about legal status, there are fights about intellectual property, such as the one involving Harvard and MIT’s [Massachusetts Institute of Technology] Broad Institute vs. the University of California at Berkeley (March 15, 2017 posting).

While I’m thrilled about the Piccinini installation, it should be noted that the issues raised by the other artworks hosted in this edition of the Biennale are also important. Happily, they have been broached here in Vancouver before and I suspect this will result in more nuanced ‘conversations’ than are possible when a ‘new’ issue is introduced.

Bravo 2018 – 2020 Vancouver Biennale!

* Germline editing is when your gene editing will affect subsequent generations as opposed to editing out a mutated gene for the lifetime of a single individual.

Art/sci and CRISPR links

This art/science posting may prove of some interest:

The connectedness of living things: an art/sci project in Saskatchewan: evolutionary biology (February 16, 2018)

A selection of my CRISPR posts:

CRISPR and editing the germline in the US (part 1 of 3): In the beginning (August 15, 2017)

NOTE: An introductory CRISPR video describing how CRISPR/Cas9 works was embedded in part 1.

Why don’t you CRISPR yourself? (January 25, 2018)

Editing the genome with CRISPR (clustered regularly interspaced short palindromic repeats)-carrying nanoparticles (January 26, 2018)

Immune to CRISPR? (April 10, 2018)

Cosmopolitanism and the Local in Science and Nature (a three year Canadian project nearing its end date)

Working on a grant from Canada’s Social Sciences and Humanities Research Council (SSHRC), the Cosmopolitanism and the Local in Science and Nature project has been establishing a ‘cosmopolitanism’ research network that critiques the eurocentric approach so beloved of Canadian academics and has set up nodes across Canada and in India and Southeast Asia.

I first wrote about the project in a Dec. 12, 2014 posting which also featured a job listing. It seems I was there for the beginning and now for the end. For one of the project’s blog postings in its final months, they’re profiling one of their researchers (Dr. Letitia Meynell, Sept. 6, 2017 posting),

1. What is your current place of research?

I am an associate professor in philosophy at Dalhousie University, cross-appointed with gender and women’s studies.

2. Could you give us some details about your education background?

My 1st degree was in Theater, which I did at York University. I did, however, minor in Philosophy and I have always had a particular interest in philosophy of science. So, my minor was perhaps a little anomalous, comprising courses on philosophy of physics, philosophy of nature, and the philosophy of Karl Popper along with courses on aesthetics and existentialism. After taking a few more courses in philosophy at the University of Calgary, I enrolled there for a Master’s degree, writing a thesis on conceptualization, with a view to its role in aesthetics and epistemology. From there I moved to the University of Western Ontario where I brought these three interests together, writing a thesis on the epistemology of pictures in science. Throughout these studies I maintained a keen interest in feminist philosophy, especially the politics of knowledge, and I have always seen my work on pictures in science as fitting into broader feminist commitments.

3. What projects are you currently working on and what are some projects you’ve worked on in the past?

4. What’s one thing you particularly enjoy about working in your field?

5. How do you relate your work to the broader topic of ‘cosmopolitanism and the local’?

As feminist philosophers have long realized, having perspectives on a topic that are quite different to your own is incredibly powerful for critically assessing both your own views and those of others. So, for instance, if you want to address the exploitation of nonhuman animals in our society it is incredibly powerful to consider how people from, say, South Asian traditions have thought about the differences, similarities, and relationships between humans and other animals. Keeping non-western perspectives in mind, even as one works in a western philosophical tradition, helps one to be both more rigorous in one’s analyses and less dogmatic. Rigor and critical openness are, in my opinion, central virtues of philosophy and, indeed, science.

Dr. Meynell will be speaking at the ‘Bridging the Gap: Scientific Imagination Meets Aesthetic Imagination‘ conference Oct. 5-6, 2017 at the London School of Economics,

On 5–6 October, this 2-day conference aims to connect work on artistic and scientific imagination, and to advance our understanding of the epistemic and heuristic roles that imagination can play.

Why, how, and when do scientists imagine, and what epistemological roles does the imagination play in scientific progress? Over the past few years, many philosophical accounts have emerged that are relevant to these questions. Roman Frigg, Arnon Levy, and Adam Toon have developed theories of scientific models that place imagination at the heart of modelling practice. And James R. Brown, Tamar Gendler, James McAllister, Letitia Meynell, and Nancy Nersessian have developed theories that recognize the indispensable role of the imagination in the performance of thought experiments. On the other hand, philosophers like Michael Weisberg dismiss imagination-based views of scientific modelling as mere “folk ontology”, and John D. Norton seems to claim that thought experiments are arguments whose imaginary components are epistemologically irrelevant.

In this conference we turn to aesthetics for help in addressing issues concerning scientific imagination-use. Aesthetics is said to have begun in 1717 with an essay called “The Pleasures of the Imagination” by Joseph Addison, and ever since imagination has been what Michael Polanyi called “the cornerstone of aesthetic theory”. In recent years Kendall Walton has fruitfully explored the fundamental relevance of imagination for understanding literary, visual and auditory fictions. And many others have been inspired to do the same, including Greg Currie, David Davies, Peter Lamarque, Stein Olsen, and Kathleen Stock.

This conference aims to connect work on artistic and scientific imagination, and to advance our understanding of the epistemic and heuristic roles that imagination can play. Specific topics may include:

  • What kinds of imagination are involved in science?
  • What is the relation between scientific imagination and aesthetic imagination?
  • What are the structure and limits of knowledge and understanding acquired through imagination?
  • From a methodological point of view, how can aesthetic considerations about imagination play a role in philosophical accounts of scientific reasoning?
  • What can considerations about scientific imagination contribute to our understanding of aesthetic imagination?

The conference will include eight invited talks and four contributed papers. Two of the four slots for contributed papers are being reserved for graduate students, each of whom will receive a travel bursary of £100.

Invited speakers

Margherita Arcangeli (Humboldt University, Berlin)

Andrej Bicanski (Institute of Cognitive Neuroscience, University College London)

Gregory Currie (University of York)

Jim Faeder (University of Pittsburgh School of Medicine)

Tim de Mey (Erasmus University of Rotterdam)

Letitia Meynell (Dalhousie University, Canada)

Adam Toon (University of Exeter)

Margot Strohminger (Humboldt University, Berlin)

This event is organised by LSE’s Centre for Philosophy of Natural and Social Science and it is co-sponsored by the British Society of Aesthetics, the Mind Association, the Aristotelian Society and the Marie Skłodowska-Curie grant agreement No 654034.

I wonder if they’ll be rubbing shoulders with Angelina Jolie? She is slated to be teaching there in Fall 2017 according to a May 23, 2016 news item in the Guardian (Note: Links have been removed),

The Hollywood actor and director has been appointed a visiting professor at the London School of Economics, teaching a course on the impact of war on women.

From 2017, Jolie will join the former foreign secretary William Hague as a “professor in practice”, the university announced on Monday, as part of a new MSc course on women, peace and security, which LSE says is the first of its kind in the world.

The course, it says, is intended to “[develop] strategies to promote gender equality and enhance women’s economic, social and political participation and security”, with visiting professors playing an active part in giving lectures, participating in workshops and undertaking their own research.

Getting back to ‘Cosmopolitanism’, some of the principals organized a summer 2017 event (from a Sept. 6, 2017 posting titled: Summer Events – 25th International Congress of History of Science and Technology),

CosmoLocal partners Lesley Cormack (University of Alberta, Canada), Gordon McOuat (University of King’s College, Halifax, Canada), and Dhruv Raina (Jawaharlal Nehru University, India) organized a symposium “Cosmopolitanism and the Local in Science and Nature” as part of the 25th International Congress of History of Science and Technology.  The conference was held July 23-29, 2017, in Rio de Janeiro, Brazil.  The abstract of the CosmoLocal symposium is below, and a pdf version can be found here.

Science, and its associated technologies, is typically viewed as “universal”. At the same time we were also assured that science can trace its genealogy to Europe in a period of rising European intellectual and imperial global force, ‘going outwards’ towards the periphery. As such, it is strikingly parochial. In a kind of sad irony, the ‘subaltern’ was left to retell that tale as one of centre-universalism dominating a traditionalist periphery. Self-described ‘modernity’ and ‘the west’ (two intertwined concepts of recent and mutually self-supporting origin) have erased much of the local engagement and as such represent science as emerging sui generis, moving in one direction. This story is now being challenged within sociology, political theory and history.

… Significantly, scholars who study the history of science in Asia and India have been examining different trajectories for the origin and meaning of science. It is now time for a dialogue between these approaches. Grounding the dialogue is the notion of a “cosmopolitical” science. “Cosmopolitics” is a term borrowed from Kant’s notion of perpetual peace and modern civil society, imagining shared political, moral and economic spaces within which trade, politics and reason get conducted.  …

The abstract is a little ‘high falutin’ but I’m glad to see more efforts being made in Canada to understand science and its history as a global affair.

Building metal nanoparticles: one step closer

University of Pittsburgh scientists have researched why metal nanoparticles form, a necessary first step before developing techniques for synthesizing them commercially. From a July 10, 2017 news item on ScienceDaily,

Although scientists have for decades been able to synthesize nanoparticles in the lab, the process is mostly trial and error, and how the formation actually takes place is obscure. A new study explains how metal nanoparticles form.

Caption: This is a structure of a ligand-protected Au25 nanocluster. Credit: Computer-Aided Nano and Energy Lab (C.A.N.E.LA.)

A July 10, 2017 University of Pittsburgh news release (also on EurekAlert), which originated the news item, expands on the theme (Note: A link has been removed),

“Even though there is extensive research into metal nanoparticle synthesis, there really isn’t a rational explanation why a nanoparticle is formed,” Dr. Mpourmpakis [Giannis Mpourmpakis, assistant professor of chemical and petroleum engineering] said. “We wanted to investigate not just the catalytic applications of nanoparticles, but to make a step further and understand nanoparticle stability and formation. This new thermodynamic stability theory explains why ligand-protected metal nanoclusters are stabilized at specific sizes.”

A ligand is a molecule that binds to metal atoms; in these nanoclusters, a metal core is stabilized by a shell of ligands, so understanding how ligands contribute to stabilization is essential to any nanoparticle application. Dr. Mpourmpakis explained that previous theories describing why nanoclusters stabilize at specific sizes were based on empirical electron counting rules (the number of electrons that form a closed-shell electronic structure), but these rules have limitations, since metal nanoclusters have been experimentally synthesized that do not follow them.

“The novelty of our contribution is that we revealed that for experimentally synthesizable nanoclusters there has to be a fine balance between the average bond strength of the nanocluster’s metal core, and the binding strength of the ligands to the metal core,” he said. “We could then relate this to the structural and compositional characteristic of the nanoclusters, like size, number of metal atoms, and number of ligands.

“Now that we have a more complete understanding of this stability, we can better tailor the nanoparticle morphologies and in turn properties, to applications from biolabeling of individual cells and targeted drug delivery to catalytic reactions, thereby creating more efficient and sustainable production processes.”
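For readers who like to see an idea in code, the balance Mpourmpakis describes can be caricatured in a few lines. This is a purely illustrative toy criterion with invented numbers and an invented tolerance; the paper's actual thermodynamic model is considerably more involved.

```python
# Toy illustration of the balance described above: a cluster is flagged as
# a plausible stable size when the average metal-metal cohesive energy of
# the core roughly matches the average ligand binding energy. All numbers
# and the tolerance are made up for illustration only.

def roughly_balanced(core_cohesion_ev, ligand_binding_ev, tol=0.15):
    """True if the two energies (eV per atom) agree within a tolerance."""
    return abs(core_cohesion_ev - ligand_binding_ev) <= tol

# A hypothetical ligand-protected cluster: cohesive core, strong ligand shell.
print(roughly_balanced(2.0, 1.95))  # balanced: candidate stable size
print(roughly_balanced(2.0, 1.2))   # ligands bind too weakly: not flagged
```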

Here’s a link to and a citation for the paper,

Thermodynamic stability of ligand-protected metal nanoclusters by Michael G. Taylor & Giannis Mpourmpakis. Nature Communications 8, Article number: 15988 (2017) doi:10.1038/ncomms15988 Published online: 07 July 2017

This paper is open access.

Centralized depot (Wikipedia style) for data on neurons

The decades worth of data that has been collected about the billions of neurons in the brain is astounding. To help scientists make sense of this “brain big data,” researchers at Carnegie Mellon University have used data mining to create http://www.neuroelectro.org, a publicly available website that acts like Wikipedia, indexing physiological information about neurons.

opens a March 30, 2015 news item on ScienceDaily (Note: A link has been removed),

The site will help to accelerate the advance of neuroscience research by providing a centralized resource for collecting and comparing data on neuronal function. A description of the data available and some of the analyses that can be performed using the site are published online by the Journal of Neurophysiology.

A March 30, 2015 Carnegie Mellon University news release on EurekAlert, which originated the news item, describes the endeavour, and what the scientists hope to achieve, in more detail,

The neurons in the brain can be divided into approximately 300 different types based on their physical and functional properties. Researchers have been studying the function and properties of many different types of neurons for decades. The resulting data is scattered across tens of thousands of papers in the scientific literature. Researchers at Carnegie Mellon turned to data mining to collect and organize these data in a way that will make possible, for the first time, new methods of analysis.

“If we want to think about building a brain or re-engineering the brain, we need to know what parts we’re working with,” said Nathan Urban, interim provost and director of Carnegie Mellon’s BrainHubSM neuroscience initiative. “We know a lot about neurons in some areas of the brain, but very little about neurons in others. To accelerate our understanding of neurons and their functions, we need to be able to easily determine whether what we already know about some neurons can be applied to others we know less about.”

Shreejoy J. Tripathy, who worked in Urban’s lab when he was a graduate student in the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition (CNBC) Program in Neural Computation, selected more than 10,000 published papers that contained physiological data describing how neurons responded to various inputs. He used text mining algorithms to “read” each of the papers. The text mining software found the portions of each paper that identified the type of neuron studied and then isolated the electrophysiological data related to the properties of that neuronal type. It also retrieved information about how each of the experiments in the literature was completed, and corrected the data to account for any differences that might be caused by the format of the experiment. Overall, Tripathy, who is now a postdoc at the University of British Columbia, was able to collect and standardize data for approximately 100 different types of neurons, which he published on the website http://www.neuroelectro.org.
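The extraction step described above can be sketched very roughly in code. This miniature version, with invented neuron names, regular expressions, and unit handling, is not the actual NeuroElectro pipeline; it only illustrates the idea of pulling a neuron type and a standardized measurement out of free text.

```python
import re

# Hypothetical miniature of the text-mining step: find a neuron-type mention
# and a nearby resting-potential value, then standardize the units (papers
# report in mV or V). Patterns and names are illustrative only.

NEURON_TYPES = ["hippocampus CA1 pyramidal cell", "midbrain dopamine neuron"]

def extract_record(text):
    """Return (neuron_type, property, value_in_mV) or None."""
    neuron = next((n for n in NEURON_TYPES if n.lower() in text.lower()), None)
    if neuron is None:
        return None
    m = re.search(
        r"resting (?:membrane )?potential (?:of|was) (-?\d+(?:\.\d+)?)\s*(mV|V)\b",
        text, re.IGNORECASE)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2)
    if unit == "V":  # standardize units across papers
        value *= 1000.0
    return (neuron, "resting membrane potential", value)

print(extract_record(
    "In hippocampus CA1 pyramidal cell recordings, the resting potential was -65 mV."))
```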

Since the data on the website was collected using text mining, the researchers realized that it was likely to contain errors related to extraction and standardization. Urban and his group validated much of the data, but they also created a mechanism that allows site users to flag data for further evaluation. Users also can contribute new data with minimal intervention from site administrators, similar to Wikipedia.

“It’s a dynamic environment in which people can collect, refine and add data,” said Urban, who is the Dr. Frederick A. Schwertz Distinguished Professor of Life Sciences and a member of the CNBC. “It will be a useful resource to people doing neuroscience research all over the world.”

Ultimately, the website will help researchers find groups of neurons that share the same physiological properties, which could provide a better understanding of how a neuron functions. For example, if a researcher finds that a type of neuron in the brain’s neocortex fires spontaneously, they can look up other neurons that fire spontaneously and access research papers that address this type of neuron. Using that information, they can quickly form hypotheses about whether or not the same mechanisms are at play in both the newly discovered and previously studied neurons.
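The lookup described in the paragraph above, finding other neurons that share a neuron's physiological profile, amounts to a similarity search. Here is a hypothetical sketch; the feature names and values are invented, and the real site works over the mined literature data rather than a hand-typed table.

```python
import math

# Invented example data: two electrophysiological features per neuron type.
NEURONS = {
    "neocortex pyramidal cell":  {"resting_mV": -70.0, "spontaneous_Hz": 0.1},
    "midbrain dopamine neuron":  {"resting_mV": -55.0, "spontaneous_Hz": 4.0},
    "hippocampus CA1 pyramidal": {"resting_mV": -65.0, "spontaneous_Hz": 0.2},
}

def most_similar(query, table):
    """Return neuron types sorted by Euclidean distance to the query."""
    def dist(props):
        return math.sqrt(sum((props[k] - query[k]) ** 2 for k in query))
    return sorted(table, key=lambda name: dist(table[name]))

# A newly measured neuron: which known types does it resemble most?
ranking = most_similar({"resting_mV": -66.0, "spontaneous_Hz": 0.15}, NEURONS)
print(ranking[0])  # nearest match first
```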

To demonstrate how neuroelectro.org could be used, the researchers compared the electrophysiological data from more than 30 neuron types that had been most heavily studied in the literature. These included pyramidal neurons in the hippocampus, which are responsible for memory, and dopamine neurons in the midbrain, thought to be responsible for reward-seeking behaviors and addiction, among others. The site was able to find many expected similarities between the different types of neurons, and some similarities that were a surprise to researchers. Those surprises represent promising areas for future research.

In ongoing work, the Carnegie Mellon researchers are comparing the data on neuroelectro.org with other kinds of data, including data on neurons’ patterns of gene expression. For example, Urban’s group is using another publicly available resource, the Allen Brain Atlas, to find whether groups of neurons with similar electrical function have similar gene expression.

“It would take a lot of time, effort and money to determine both the physiological properties of a neuron and its gene expression,” Urban said. “Our website will help guide this research, making it much more efficient.”

The researchers have produced a brief video describing neurons and their project.

Here’s a link to and a citation for the researchers’ paper,

Brain-wide analysis of electrophysiological diversity yields novel categorization of mammalian neuron types by Shreejoy J Tripathy, Shawn D. Burton, Matthew Geramita, Richard C. Gerkin, and Nathaniel N. Urban. Journal of Neurophysiology Published 25 March 2015 DOI: 10.1152/jn.00237.2015

This paper is behind a paywall.