Tag Archives: University of Michigan

Dr. Wei Lu and bio-inspired ‘memristor’ chips

It’s been a while since I’ve featured Dr. Wei Lu’s work here. (This April 15, 2010 posting features Lu’s most relevant previous work.) Here’s his latest ‘memristor’ work, from a May 22, 2017 news item on Nanowerk (Note: A link has been removed),

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology (“Sparse coding with memristor networks”).

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

A May 22, 2017 University of Michigan news release (also on EurekAlert), which originated the news item, provides more information about memristors and about the research,

Memristors are electrical resistors with memory—advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when ‘stored’ in the appropriate category in our heads.”

Image of a memristor chip

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently—and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. After training on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings, photos and other test patterns.
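For readers who like to experiment, here’s a minimal sketch of the sparse-coding idea itself in plain Python/NumPy—an iterative soft-thresholding (ISTA) solver that represents a signal as a sparse combination of dictionary elements. It’s my own illustration of the general technique (the dictionary, signal and parameters are made up), not the memristor-crossbar implementation described in the paper,

```python
import numpy as np

def sparse_code_ista(x, D, lam=0.1, n_iter=200):
    """Find sparse coefficients a such that D @ a approximates x (ISTA)."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                  # gradient of 0.5 * ||x - D a||^2
        a = a - step * grad                       # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))                # made-up dictionary: 128 atoms for 8x8 patches
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
true_a = rng.standard_normal(128) * (rng.random(128) < 0.05)  # ~5% active coefficients
x = D @ true_a                                    # synthetic "image patch"

a = sparse_code_ista(x, D)
print("active coefficients:", int(np.count_nonzero(np.abs(a) > 1e-3)))
print("reconstruction error:", float(np.linalg.norm(D @ a - x)))
```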

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

The project is titled “Sparse Adaptive Local Learning for Sensing and Analytics.” Other collaborators are Zhengya Zhang and Michael Flynn of the U-M Department of Electrical Engineering and Computer Science, Garrett Kenyon of the Los Alamos National Lab and Christof Teuscher of Portland State University.

The work is part of a $6.9 million Unconventional Processing of Signals for Intelligent Data Exploitation project that aims to build a computer chip based on self-organizing, adaptive neural networks. It is funded by the [US] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Sparse coding with memristor networks by Patrick M. Sheridan, Fuxi Cai, Chao Du, Wen Ma, Zhengya Zhang, & Wei D. Lu. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.83 Published online 22 May 2017

This paper is behind a paywall.

For the interested, there are a number of postings featuring memristors here (just use ‘memristor’ as your search term in the blog search engine). You might also want to check out ‘neuromorphic engineering’ and ‘neuromorphic computing’ and ‘artificial brain’.

Patent Politics: a June 23, 2017 book launch at the Wilson Center (Washington, DC)

I received a June 12, 2017 notice (via email) from the Wilson Center (also known as the Woodrow Wilson International Center for Scholars) about a book examining patents and policies in the United States and in Europe and its upcoming launch,

Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe

Over the past thirty years, the world’s patent systems have experienced pressure from civil society like never before. From farmers to patient advocates, new voices are arguing that patents impact public health, economic inequality, morality—and democracy. These challenges, to domains that we usually consider technical and legal, may seem surprising. But in Patent Politics, Shobita Parthasarathy argues that patent systems have always been deeply political and social.

To demonstrate this, Parthasarathy takes readers through a particularly fierce and prolonged set of controversies over patents on life forms linked to important advances in biology and agriculture and potentially life-saving medicines. Comparing battles over patents on animals, human embryonic stem cells, human genes, and plants in the United States and Europe, she shows how political culture, ideology, and history shape patent system politics. Clashes over whose voices and which values matter in the patent system, as well as what counts as knowledge and whose expertise is important, look quite different in these two places. And through these debates, the United States and Europe are developing very different approaches to patent and innovation governance. Not just the first comprehensive look at the controversies swirling around biotechnology patents, Patent Politics is also the first in-depth analysis of the political underpinnings and implications of modern patent systems, and provides a timely analysis of how we can reform these systems around the world to maximize the public interest.

Join us on June 23 [2017] from 4-6 pm [elsewhere the time is listed at 4-7 pm] for a discussion on the role of the patent system in governing emerging technologies, on the launch of Shobita Parthasarathy’s Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017).

You can find more information such as this on the Patent Politics event page,

Speakers

Keynote


  • Shobita Parthasarathy

    Fellow
    Associate Professor of Public Policy and Women’s Studies, and Director of the Science, Technology, and Public Policy Program, at University of Michigan

Moderator


  • Eleonore Pauwels

    Senior Program Associate and Director of Biology Collectives, Science and Technology Innovation Program
    Formerly European Commission, Directorate-General for Research and Technological Development, Directorate on Science, Economy and Society

Panelists


  • Daniel Sarewitz

    Co-Director, Consortium for Science, Policy & Outcomes; Professor of Science and Society, School for the Future of Innovation in Society

  • Richard Harris

    Award-Winning Journalist, National Public Radio; Author of “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions”

For those who cannot attend in person, there will be a live webcast. If you can be there in person, you can RSVP here (Note: The time frame for the event is listed in some places as 4-7 pm.) I cannot find any reason for the time frame disparity. My best guess is that the discussion is scheduled for two hours with a one-hour reception afterwards for those who can attend in person.

Transparent silver

This March 21, 2017 news item on Nanowerk is the first I’ve heard of transparent silver; it’s usually transparent aluminum (Note: A link has been removed),

The thinnest, smoothest layer of silver that can survive air exposure has been laid down at the University of Michigan, and it could change the way touchscreens and flat or flexible displays are made (Advanced Materials, “High-performance Doped Silver Films: Overcoming Fundamental Material Limits for Nanophotonic Applications”).

It could also help improve computing power, affecting both the transfer of information within a silicon chip and the patterning of the chip itself through metamaterial superlenses.

A March 21, 2017 University of Michigan news release, which originated the news item, provides details about the research and features a mention about aluminum,

By combining the silver with a little bit of aluminum, the U-M researchers found that it was possible to produce exceptionally thin, smooth layers of silver that are resistant to tarnishing. They applied an anti-reflective coating to make one thin metal layer up to 92.4 percent transparent.

The team showed that the silver coating could guide light about 10 times as far as other metal waveguides—a property that could make it useful for faster computing. And they layered the silver films into a metamaterial hyperlens that could be used to create dense patterns with feature sizes a fraction of what is possible with ordinary ultraviolet methods, on silicon chips, for instance.

Screens of all stripes need transparent electrodes to control which pixels are lit up, but touchscreens are particularly dependent on them. A modern touch screen is made of a transparent conductive layer covered with a nonconductive layer. It senses electrical changes where a conductive object—such as a finger—is pressed against the screen.

“The transparent conductor market has been dominated to this day by one single material,” said L. Jay Guo, professor of electrical engineering and computer science.

This material, indium tin oxide, is projected to become expensive as demand for touch screens continues to grow; there are relatively few known sources of indium, Guo said.

“Before, it was very cheap. Now, the price is rising sharply,” he said.

The ultrathin film could make silver a worthy successor.

Usually, it’s impossible to make a continuous layer of silver less than 15 nanometers thick, or roughly 100 silver atoms. Silver has a tendency to cluster together in small islands rather than extend into an even coating, Guo said.

By adding about 6 percent aluminum, the researchers coaxed the metal into a film of less than half that thickness—seven nanometers. What’s more, when they exposed it to air, it didn’t immediately tarnish as pure silver films do. After several months, the film maintained its conductive properties and transparency. And it was firmly stuck on, whereas pure silver comes off glass with Scotch tape.

In addition to their potential to serve as transparent conductors for touch screens, the thin silver films offer two more tricks, both having to do with silver’s unparalleled ability to transport visible and infrared light waves along its surface. The light waves shrink and travel as so-called surface plasmon polaritons, showing up as oscillations in the concentration of electrons on the silver’s surface.

Those oscillations encode the frequency of the light, preserving it so that it can emerge on the other side. While optical fibers can’t scale down to the size of copper wires on today’s computer chips, plasmonic waveguides could allow information to travel in optical rather than electronic form for faster data transfer. As a waveguide, the smooth silver film could transport the surface plasmons over a centimeter—enough to get by inside a computer chip.

Here’s a link to and a citation for the paper,

High-Performance Doped Silver Films: Overcoming Fundamental Material Limits for Nanophotonic Applications by Cheng Zhang, Nathaniel Kinsey, Long Chen, Chengang Ji, Mingjie Xu, Marcello Ferrera, Xiaoqing Pan, Vladimir M. Shalaev, Alexandra Boltasseva, and Jay Guo. Advanced Materials DOI: 10.1002/adma.201605177 Version of Record online: 20 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Using open-source software for a 3D look at nanomaterials

A 3-D view of a hyperbranched nanoparticle with complex structure, made possible by Tomviz 1.0, a new open-source software platform developed by researchers at the University of Michigan, Cornell University and Kitware Inc. Image credit: Robert Hovden, Michigan Engineering

An April 3, 2017 news item on ScienceDaily describes this new and freely available software,

Now it’s possible for anyone to see and share 3-D nanoscale imagery with a new open-source software platform developed by researchers at the University of Michigan, Cornell University and open-source software company Kitware Inc.

Tomviz 1.0 is the first open-source tool that enables researchers to easily create 3-D images from electron tomography data, then share and manipulate those images in a single platform.

A March 31, 2017 University of Michigan news release, which originated the news item, expands on the theme,

The world of nanoscale materials—things 100 nanometers and smaller—is an important place for scientists and engineers who are designing the stuff of the future: semiconductors, metal alloys and other advanced materials.

Seeing in 3-D how nanoscale flecks of platinum arrange themselves in a car’s catalytic converter, for example, or how spiky dendrites can cause short circuits inside lithium-ion batteries, could spur advances like safer, longer-lasting batteries; lighter, more fuel efficient cars; and more powerful computers.

“3-D nanoscale imagery is useful in a variety of fields, including the auto industry, semiconductors and even geology,” said Robert Hovden, U-M assistant professor of materials science and engineering and one of the creators of the program. “Now you don’t have to be a tomography expert to work with these images in a meaningful way.”

Tomviz solves a key challenge: the difficulty of interpreting data from the electron microscopes that examine nanoscale objects in 3-D. The machines shoot electron beams through nanoparticles from different angles. The beams form projections as they travel through the object, a bit like nanoscale shadow puppets.

Once the machine does its work, it’s up to researchers to piece hundreds of shadows into a single three-dimensional image. It’s as difficult as it sounds—an art as well as a science. Like staining a traditional microscope slide, researchers often add shading or color to 3-D images to highlight certain attributes.
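To get a feel for what piecing the “shadows” back together involves, here’s a small sketch using scikit-image’s Radon transform on a standard 2-D test image: forward-project it at many angles, then reconstruct it by filtered back-projection. This is only a toy 2-D analogue of electron tomography, not Tomviz itself or the algorithms it ships with,

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# A known 2-D object stands in for the nanoparticle; its projections at many
# tilt angles are the "shadows" the microscope records.
image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 90, endpoint=False)  # tilt angles in degrees

sinogram = radon(image, theta=theta)                  # forward projections
reconstruction = iradon(sinogram, theta=theta)        # filtered back-projection

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.3f}")
```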

A 3-D view of a particle used in a hydrogen fuel cell powered vehicle. The gray structure is carbon; the red and blue particles are nanoscale flecks of platinum. The image is made possible by Tomviz 1.0. Image credit: Elliot Padgett, Cornell University

Traditionally, they’ve had to rely on a hodgepodge of proprietary software to do the heavy lifting. The work is expensive and time-consuming; so much so that even big companies like automakers struggle with it. And once a 3-D image is created, it’s often impossible for other researchers to reproduce it or to share it with others.

Tomviz dramatically simplifies the process and reduces the amount of time and computing power needed to make it happen, its designers say. It also enables researchers to readily collaborate by sharing all the steps that went into creating a given image and enabling them to make tweaks of their own.

“These images are far different from the 3-D graphics you’d see at a movie theater, which are essentially cleverly lit surfaces,” Hovden said. “Tomviz explores both the surface and the interior of a nanoscale object, with detailed information about its density and structure. In some cases, we can see individual atoms.”

Key to making Tomviz happen was getting tomography experts and software developers together to collaborate, Hovden said. Their first challenge was gaining access to a large volume of high-quality tomography. The team rallied experts at Cornell, Berkeley Lab and UCLA to contribute their data, and also created their own using U-M’s microscopy center. To turn raw data into code, Hovden’s team worked with open-source software maker Kitware.

With the release of Tomviz 1.0, Hovden is looking toward the next stages of the project, where he hopes to integrate the software directly with microscopes. He believes that U-M’s atom probe tomography facilities and expertise could help him design a version that could ultimately uncover the chemistry of all atoms in 3-D.

“We are unlocking access to see new 3D nanomaterials that will power the next generation of technology,” Hovden said. “I’m very interested in pushing the boundaries of understanding materials in 3-D.”

There is a video about Tomviz,

You can download Tomviz from here and you can find Kitware here. Happy 3D nanomaterial viewing!

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
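For anyone who wants to see that compare-correct-repeat loop in the smallest possible form, here’s a toy two-layer network written in plain NumPy and trained on the classic XOR problem. It’s my own illustration of the forward pass, the predictive error and the weight corrections the release describes—real deep neural networks have many more layers and far more sophisticated training procedures,

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network learning XOR: each layer refines the representation,
# and repeated error correction (gradient descent) trains the weights.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each layer produces a more abstract view of the input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare actual output to expected output (the "predictive error").
    err = out - y

    # Backward pass: correct the error by nudging the weights.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3))   # should approach [0, 1, 1, 0] after training
```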

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is, and will be, part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These were the panels that are of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

The physics of melting in two-dimensional systems

You might want to skip over the reference to snow as it doesn’t have much relevance to this story about ‘melting’, from a Feb. 1, 2017 news item on Nanowerk (Note: A link has been removed),

Snow falls in winter and melts in spring, but what drives the phase change in between?

Although melting is a familiar phenomenon encountered in everyday life, playing a part in many industrial and commercial processes, much remains to be discovered about this transformation at a fundamental level.

In 2015, a team led by the University of Michigan’s Sharon Glotzer used high-performance computing at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory [ORNL] to study melting in two-dimensional (2-D) systems, a problem that could yield insights into surface interactions in materials important to technologies like solar panels, as well as into the mechanism behind three-dimensional melting. The team explored how particle shape affects the physics of a solid-to-fluid melting transition in two dimensions.

Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility, the team’s [latest?] work revealed that the shape and symmetry of particles can dramatically affect the melting process (“Shape and symmetry determine two-dimensional melting transitions of hard regular polygons”). This fundamental finding could help guide researchers in search of nanoparticles with desirable properties for energy applications.

There is a video of the ‘melting’ process but I have to confess to finding it a bit enigmatic,

A Feb. 1, 2017 ORNL news release (also on EurekAlert), which originated the news item, provides more detail about the research,

To tackle the problem, Glotzer’s team needed a supercomputer capable of simulating systems of up to 1 million hard polygons, simple particles used as stand-ins for atoms, ranging from triangles to 14-sided shapes. Unlike traditional molecular dynamics simulations that attempt to mimic nature, hard polygon simulations give researchers a pared-down environment in which to evaluate shape-influenced physics.

“Within our simulated 2-D environment, we found that the melting transition follows one of three different scenarios depending on the shape of the systems’ polygons,” University of Michigan research scientist Joshua Anderson said. “Notably, we found that systems made up of hexagons perfectly follow a well-known theory for 2-D melting, something that hasn’t been described until now.”

Shifting Shape Scenarios

In 3-D systems such as a thinning icicle, melting takes the form of a first-order phase transition. This means that collections of molecules within these systems exist in either solid or liquid form, with no in-between, in the presence of latent heat, the energy that fuels a solid-to-fluid phase change. In 2-D systems, such as thin-film materials used in batteries and other technologies, melting can be more complex, sometimes exhibiting an intermediate phase known as the hexatic phase.

The hexatic phase, a state characterized as a halfway point between an ordered solid and a disordered liquid, was first theorized in the 1970s by researchers John Kosterlitz, David Thouless, Burt Halperin, David Nelson, and Peter Young. The phase is a principal feature of the KTHNY theory, a 2-D melting theory posited by the researchers (and named based on the first letters of their last names). In 2016 Kosterlitz and Thouless were awarded the Nobel Prize in Physics, along with physicist Duncan Haldane, for their contributions to 2-D materials research.

At the molecular level, solid, hexatic, and liquid systems are defined by the arrangement of their atoms. In a crystalline solid, two types of order are present: translational and orientational. Translational order describes the well-defined paths between atoms over distances, like blocks in a carefully constructed Jenga tower. Orientational order describes the relational and clustered order shared between atoms and groups of atoms over distances. Think of that same Jenga tower turned askew after several rounds of play. The general shape of the tower remains, but its order is now fragmented.

The hexatic phase has no translational order but possesses orientational order. (A liquid has neither translational nor orientational order but exhibits short-range order, meaning any atom will have some average number of neighbors nearby but with no predictable order.)
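As an aside, the orientational order described above is usually quantified with the hexatic (bond-orientational) order parameter ψ6, whose magnitude is close to 1 for a hexagonal crystal and close to 0 for a disordered fluid. Here’s a minimal NumPy sketch of that calculation for a 2-D set of particle positions—my own illustration, not the analysis code used in the study,

```python
import numpy as np

def psi6(points, n_neighbors=6):
    """Per-particle hexatic order parameter.

    psi6_j = (1/N) * sum_k exp(i * 6 * theta_jk) over the nearest neighbors k,
    where theta_jk is the angle of the bond from particle j to neighbor k.
    """
    psi = np.zeros(len(points), dtype=complex)
    for j, p in enumerate(points):
        d = points - p
        dist = np.hypot(d[:, 0], d[:, 1])
        nbr = np.argsort(dist)[1:n_neighbors + 1]   # nearest neighbors, skipping j itself
        theta = np.arctan2(d[nbr, 1], d[nbr, 0])
        psi[j] = np.exp(1j * 6 * theta).mean()
    return psi

# Triangular lattice: mean |psi6| should be high (edge effects aside).
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
pts = np.array([i * a1 + j * a2 for i in range(10) for j in range(10)])
print("mean |psi6| (lattice):", np.abs(psi6(pts)).mean())

# Random points: orientational order largely washes out.
pts_rand = np.random.default_rng(0).random((100, 2)) * 10
print("mean |psi6| (random):", np.abs(psi6(pts_rand)).mean())
```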

Deducing the presence of a hexatic phase requires a leadership-class computer that can calculate large hard-particle systems. Glotzer’s team gained access to the OLCF’s 27-petaflop Titan through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, running its GPU-accelerated HOOMD-blue code to maximize time on the machine.

On Titan, HOOMD-blue used 64 GPUs for each massively parallel Monte Carlo simulation of up to 1 million particles. Researchers explored 11 different shape systems, applying an external pressure to push the particles together. Each system was simulated at 21 different densities, with the lowest densities representing a fluid state and the highest densities a solid state.
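For a sense of what a single hard-particle Monte Carlo step actually does, here’s a deliberately tiny, pure-Python toy: hard disks (rather than polygons) in a periodic 2-D box, where random trial moves are accepted only if they create no overlaps. It’s a conceptual stand-in only—nothing like the GPU-scale HOOMD-blue runs described above, and it makes no attempt to reproduce the team’s protocol,

```python
import numpy as np

rng = np.random.default_rng(2)

N, L, sigma = 64, 10.0, 1.0               # particles, box edge, disk diameter
grid = (np.arange(8) + 0.5) * (L / 8)     # 8x8 square lattice: overlap-free start
pos = np.array([[x, y] for x in grid for y in grid])

def overlaps(i, trial, pos):
    """True if moving particle i to 'trial' would overlap any other disk."""
    d = pos - trial
    d -= L * np.round(d / L)              # minimum-image convention (periodic box)
    dist2 = np.einsum('ij,ij->i', d, d)
    dist2[i] = np.inf                     # ignore the particle being moved
    return bool(np.any(dist2 < sigma ** 2))

accepted, attempts = 0, 0
for sweep in range(200):                  # 200 Monte Carlo sweeps
    for i in range(N):
        trial = (pos[i] + (rng.random(2) - 0.5) * 0.2) % L   # small random move
        attempts += 1
        if not overlaps(i, trial, pos):   # hard-particle rule: reject any overlap
            pos[i] = trial
            accepted += 1

print("acceptance ratio:", accepted / attempts)
```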

The simulations demonstrated multiple melting scenarios hinging on the polygons’ shape. Systems with polygons of seven sides or more closely followed the melting behavior of hard disks, or circles, exhibiting a continuous phase transition from the solid to the hexatic phase and a first-order phase transition from the hexatic to the liquid phase. A continuous phase transition means a constantly changing area in response to a changing external pressure. A first-order phase transition is characterized by a discontinuity in which the volume jumps across the phase transition in response to the changing external pressure. The team found pentagons and fourfold pentilles, irregular pentagons with two different edge lengths, exhibit a first-order solid-to-liquid phase transition.

The most significant finding, however, emerged from hexagon systems, which perfectly followed the phase transition described by the KTHNY theory. In this scenario, the particles shift from solid to hexatic and from hexatic to fluid in a perfectly continuous phase transition pattern.

“It was actually sort of surprising that no one else has found that until now,” Anderson said, “because it seems natural that the hexagon, with its six sides, and the honeycomb-like hexagonal arrangement would be a perfect match for this theory” in which the hexatic phase generally contains sixfold orientational order.

Glotzer’s team, which recently received a 2017 INCITE allocation, is now applying its leadership-class computing prowess to tackle phase transitions in 3-D. The team is focusing on how fluid particles crystallize into complex colloids—mixtures in which particles are suspended throughout another substance. Common examples of colloids include milk, paper, fog, and stained glass.

“We’re planning on using Titan to study how complexity can arise from these simple interactions, and to do that we’re actually going to look at how the crystals grow and study the kinetics of how that happens,” said Anderson.

There is a paper on arXiv,

Shape and symmetry determine two-dimensional melting transitions of hard regular polygons by Joshua A. Anderson, James Antonaglia, Jaime A. Millan, Michael Engel, Sharon C. Glotzer
(Submitted on 2 Jun 2016 (v1), last revised 23 Dec 2016 (this version, v2)) arXiv:1606.00687 [cond-mat.soft] (or arXiv:1606.00687v2)

This paper is open access and open to public peer review.

Creating multiferroic material at room temperature

A Sept. 23, 2016 news item on ScienceDaily describes some research from Cornell University (US),

Multiferroics — materials that exhibit both magnetic and electric order — are of interest for next-generation computing but difficult to create because the conditions conducive to each of those states are usually mutually exclusive. And in most multiferroics found to date, their respective properties emerge only at extremely low temperatures.

Two years ago, researchers in the labs of Darrell Schlom, the Herbert Fisk Johnson Professor of Industrial Chemistry in the Department of Materials Science and Engineering, and Dan Ralph, the F.R. Newman Professor in the College of Arts and Sciences, in collaboration with professor Ramamoorthy Ramesh at UC Berkeley, published a paper announcing a breakthrough in multiferroics involving the only known material in which magnetism can be controlled by applying an electric field at room temperature: the multiferroic bismuth ferrite.

Schlom’s group has partnered with David Muller and Craig Fennie, professors of applied and engineering physics, to take that research a step further: The researchers have combined two non-multiferroic materials, using the best attributes of both to create a new room-temperature multiferroic.

Their paper, “Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic,” was published — along with a companion News & Views piece — Sept. 22 [2016] in Nature. …

A Sept. 22, 2016 Cornell University news release by Tom Fleischman, which originated the news item, details more about the work (Note: A link has been removed),

The group engineered thin films of hexagonal lutetium iron oxide (LuFeO3), a material known to be a robust ferroelectric but not strongly magnetic. The LuFeO3 consists of alternating single monolayers of lutetium oxide and iron oxide, and differs from a strong ferrimagnetic oxide (LuFe2O4), which consists of alternating monolayers of lutetium oxide with double monolayers of iron oxide.

The researchers found, however, that they could combine these two materials at the atomic scale to create a new compound that was not only multiferroic but had better properties than either of the individual constituents. In particular, they found they need to add just one extra monolayer of iron oxide to every 10 atomic repeats of the LuFeO3 to dramatically change the properties of the system.

That precision engineering was done via molecular-beam epitaxy (MBE), a specialty of the Schlom lab. A technique Schlom likens to “atomic spray painting,” MBE let the researchers design and assemble the two different materials in layers, a single atom at a time.

The combination of the two materials produced a strongly ferrimagnetic layer near room temperature. They then tested the new material at the Lawrence Berkeley National Laboratory (LBNL) Advanced Light Source in collaboration with co-author Ramesh to show that the ferrimagnetic atoms followed the alignment of their ferroelectric neighbors when switched by an electric field.

“It was when our collaborators at LBNL demonstrated electrical control of magnetism in the material that we made that things got super exciting,” Schlom said. “Room-temperature multiferroics are exceedingly rare and only multiferroics that enable electrical control of magnetism are relevant to applications.”

In electronic devices, the advantages of multiferroics include their reversible polarization in response to low-power electric fields – as opposed to heat-generating and power-sapping electrical currents – and their ability to hold their polarized state without the need for continuous power. High-performance memory chips make use of ferroelectric or ferromagnetic materials.

“Our work shows that an entirely different mechanism is active in this new material,” Schlom said, “giving us hope for even better – higher-temperature and stronger – multiferroics for the future.”

Collaborators hailed from the University of Illinois at Urbana-Champaign, the National Institute of Standards and Technology, the University of Michigan and Penn State University.

Here is a link and a citation to the paper and to a companion piece,

Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic by Julia A. Mundy, Charles M. Brooks, Megan E. Holtz, Jarrett A. Moyer, Hena Das, Alejandro F. Rébola, John T. Heron, James D. Clarkson, Steven M. Disseler, Zhiqi Liu, Alan Farhan, Rainer Held, Robert Hovden, Elliot Padgett, Qingyun Mao, Hanjong Paik, Rajiv Misra, Lena F. Kourkoutis, Elke Arenholz, Andreas Scholl, Julie A. Borchers, William D. Ratcliff, Ramamoorthy Ramesh, Craig J. Fennie, Peter Schiffer et al. Nature 537, 523–527 (22 September 2016) doi:10.1038/nature19343 Published online 21 September 2016

Condensed-matter physics: Multitasking materials from atomic templates by Manfred Fiebig. Nature 537, 499–500  (22 September 2016) doi:10.1038/537499a Published online 21 September 2016

Both the paper and its companion piece are behind a paywall.

eXXpedition Great Lakes 2016: hunting for plastic debris

An all-woman expedition set sail on Aug. 20, 2016 for a journey across the five Great Lakes in search of plastic debris, according to an Aug. 16, 2016 news item on phys.org,

Female scientists from the U.S. and Canada will set sail Aug. 20 [2016] on all five Great Lakes and connecting waterways to sample plastic debris pollution and to raise public awareness about the issue.

Event organizers say eXXpedition Great Lakes 2016 will include the largest number of simultaneous samplings for aquatic plastic debris in history. [emphasis mine] The all-female crew members on the seven lead research vessels also aim to inspire young women to pursue careers in science and engineering.

An Aug. 15, 2016 University of Michigan news release, which originated the news item, provides more details,

Teams of researchers will collect plastic debris on the five Great Lakes, as well as Lake St. Clair-Detroit River and the Saint Lawrence River. Data collected will contribute to growing open-source databases documenting plastic and toxic pollution and their impacts on biodiversity and waterway health, according to event organizers.

Two University of Michigan faculty members, biologist Melissa Duhaime and Laura Alford of the Department of Naval Architecture and Marine Engineering, will lead the Lake St. Clair-Detroit River team, aboard a 30-foot sailboat.

The crew of up to eight people will include an Ann Arbor middle school teacher, an artist and student at the Great Lakes Boat Building School, and girls from Detroit-area schools. Onboard activities will include water sampling and trawling for plastic debris using protocols developed by the 5 Gyres Institute.

“There is a place for scientists in this type of public outreach, and it is a complement to the research that we do,” said Duhaime, an assistant research scientist in the U-M Department of Ecology and Evolutionary Biology.

“In a single day through an event like this, we can potentially reach orders of magnitude more people than we do when we publish our scientific papers, which are read mainly by other scientists. And greater public awareness about this topic, rooted in rigorously collected and interpreted data, can certainly lead to changed behavior in our relationships with plastics.”

Duhaime’s lab studies the sources of Great Lakes plastics, as well as how they are transported within the lakes and where they end up. The work has involved a summer on three of the Great Lakes, trips to Detroit-area wastewater treatment plants, and the sampling of fish and mussels.

The group’s first Great Lakes project included multiple U-M labs, one of which analyzed the stomach contents of fish and mussels, looking for tiny plastic beads, fibers and fragments. They found no plastic “microbeads”—spheres typically less than 1 millimeter in diameter—but plastic fibers were present in a third of the zebra and quagga mussels and at various levels in all the fish species they checked: 15 percent of emerald shiners and bloaters, 20 percent of round gobies, and 26 percent of rainbow smelt, according to Duhaime.

The stomach-content study, which will be submitted for publication in a peer-reviewed scientific journal, was based on work done in lakes Erie and Huron and was led by Larissa Sano, who is now at U-M’s Sweetland Center for Writing.

For years, plastic microbeads were added as abrasives to beauty and health products like exfoliating facial scrubs and toothpaste. But the federal Microbead-Free Waters Act of 2015, signed into law by President Obama on Dec. 28, bans the manufacture of microbeads beginning next year.

Sources of tiny plastic fibers that make it into the Great Lakes include fleece jackets and other types of synthetic clothing. These microfibers are released during laundering, then slip through wastewater treatment plants and into waterways. Fibers found in common household textiles such as carpets, upholstered furniture and curtains also make their way into the environment and can end up in the lakes.

“Microbeads were just the tip of the iceberg,” Duhaime said. “I think fibers are the future of this research and a much more important issue than microbeads, because of the prevalence and the pervasiveness of these plastic textiles in our lives.”

Researchers like Duhaime are also investigating the possibility that tiny bits of Great Lakes plastics can transfer toxic pollutants from the water into fish and other aquatic organisms. It is unclear what level of human health risk, if any, these microplastics pose to people who eat Great Lakes fish; it is a topic of active research.

On Aug. 20 [2016], the team led by Duhaime and Alford will sail up the Detroit River to Lake St. Clair, sampling water and trawling for plastics along the way. Throughout the day at Detroit’s Belle Isle, members of their team will host a beach cleanup and data collection. In addition, a free public-awareness event will be held throughout the day outside Belle Isle Aquarium, followed by a plastic-free community picnic with live music.

Members of the general public are also encouraged to collect Great Lakes water samples and to participate in shoreline cleanup events on the 20th [Aug. 20, 2016].

The mission leaders for eXXpedition Great Lakes 2016 event are two women who met during an all-female voyage across the Atlantic Ocean in 2014. Jennifer Pate is a filmmaker, and Elaine McKinnon is a clinical neuropsychologist. Pate plans to use video footage and photographs gathered during the Aug. 20 [2016] event to create a film called “Love Your Greats.”

“In parts of the Great Lakes, we have a higher density of microplastics than in any of the ocean gyres,” Pate said. “So the problem isn’t just out there in the oceans. It’s right here in our backyard, in our lakes and on our dinner plates.

“We are all a part of the problem, but that means we are also all part of the solution. That’s why we are holding this event, to give people an opportunity to change the story and create a healthier future.”

Partners in the eXXpedition Great Lakes 2016 event include Adventurers & Scientists for Conservation, the Great Canadian Shoreline Cleanup in Canada and the Alliance for the Great Lakes’ Adopt-a-Beach program in the United States.

What a great idea! I wonder if this might inspire an annual event.

D-PLACE: an open access database of places, language, culture, and environment

In an attempt to be a bit more broad in my interpretation of the ‘society’ part of my commentary I’m including this July 8, 2016 news item on ScienceDaily (Note: A link has been removed),

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE — the Database of Places, Language, Culture and Environment — is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

A July 8, 2016 University of Toronto news release (also on EurekAlert), which originated the news item, expands on the theme,

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g. elevation, mean annual temperature), language family (e.g. Indo-European, Austronesian), or region (e.g. Siberia). The search results can be displayed on a map, a language tree or in a table, and can also be downloaded for further analysis.
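As a rough illustration of the “download for further analysis” step, here’s a short, hypothetical pandas snippet; the file name and column names are placeholders and will depend on the variables you actually select and export from d-place.org,

```python
import pandas as pd

# Hypothetical workflow: "dplace_export.csv" and the column names below are
# placeholders standing in for whatever you export from the D-PLACE site.
df = pd.read_csv("dplace_export.csv")

# e.g., keep one language family and summarise a cultural variable
# against an environmental one.
subset = df[df["language_family"] == "Austronesian"]
summary = (
    subset.groupby("marriage_system")["mean_annual_temperature"]
    .agg(["count", "mean"])
    .sort_values("count", ascending=False)
)
print(summary)
```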

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from Max Planck Institute for the Science of Human History in Jena, Germany, University of Auckland, Colorado State University, University of Toronto, University of Bristol, Yale, Human Relations Area Files, Washington University in Saint Louis, University of Michigan, American Museum of Natural History, and City University of New York.

The diverse team included: linguists; anthropologists; biogeographers; data scientists; ethnobiologists; and evolutionary ecologists, who employ a variety of research methods including field-based primary data collection; compilation of cross-cultural data sources; and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”

Here’s a link to and a citation for the paper,

D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity by Kathryn R. Kirby, Russell D. Gray, Simon J. Greenhill, Fiona M. Jordan, Stephanie Gomes-Ng, Hans-Jörg Bibiko, Damián E. Blasi, Carlos A. Botero, Claire Bowern, Carol R. Ember, Dan Leehr, Bobbi S. Low, Joe McCarter, William Divale, Michael C. Gavin. PLOS ONE 11(7): e0158391 (2016). DOI: 10.1371/journal.pone.0158391 Published July 8, 2016.

This paper is open access.

You can find D-PLACE here.

While it might not seem that there would be a close link between anthropology and physics in the 19th and early 20th centuries, that information can be mined for more contemporary applications. For example, someone who wants to make a case for a more diverse scientific community may want to develop a social science approach to the discussion. The situation in my June 16, 2016 post titled: Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons, could be extended into a discussion and educational process using data from D-PLACE and other sources to make the point,

Science literacy may not be just for the public; it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then-current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission. While I think the arguments in the case will likely revolve around factors far removed from science and/or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisors on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher of how people’s backgrounds inform their practice of science, and how two different people may use the same scientific method but think about the problem differently.

I’m guessing that Scalia and Roberts, and possibly others, believe that science is the discovery and accumulation of facts. In this worldview, science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that much of science is a collection of provisional beliefs that may be influenced by personal perspectives. For example, we believe we’ve proved the existence of the Higgs boson, but no one associated with the research has ever stated unequivocally that it exists.

More generally, with D-PLACE and the recently announced Trans-Atlantic Platform (see my July 15, 2016 post about it), it seems Canada’s humanities and social science communities are taking strides toward greater international collaboration and a deeper investment in digital scholarship.

Inhaling buckyballs (C60 fullerenes)

Carbon nanotubes (also known as buckytubes) have attracted most of the attention where carbon nanomaterials and health and safety are concerned. But University of Michigan researchers opted for a change of pace and focused their health and safety research on buckyballs (also known as C60 or fullerenes), according to a Feb. 24, 2016 news item on Nanowerk,

Scientists at the University of Michigan have found evidence that some carbon nanomaterials can enter into immune cell membranes, seemingly going undetected by the cell’s built-in mechanisms for engulfing and disposing of foreign material, and then escape through some unidentified pathway. [emphasis mine]

The researchers from the School of Public Health and College of Engineering say their finding of a more passive entry of the materials into cells is the first research to show that the normal process of endocytosis-phagocytosis isn’t always activated when cells are confronted with tiny Carbon 60 (C60) molecules.

A Feb. 23, 2016 University of Michigan news release (also on EurekAlert but dated Feb. 24, 2016), which originated the news item, provides more detail about the research,

… This study examined nanomaterials known as carbon fullerenes, in this case C60, which has a distinct spherical shape.

Over the last decade, scientists have found these carbon-based materials useful in a number of commercial products, including drugs, medical devices, cosmetics, lubricants, antimicrobial agents and more. Fullerenes also are produced in nature through events like volcanic eruptions and wildfires.

The concern is that, however the exposure occurs, commercially or naturally, little is known about how inhaling these materials impacts health.

“It’s entirely possible that even tiny amounts of some nanomaterials could cause altered cellular signaling,” said Martin Philbert, dean and professor of toxicology at the U-M School of Public Health.

Philbert said much of the previously published research bombarded cells with large amounts of particle clusters, unlike a normal environmental exposure.

The U-M researchers examined various mechanisms of cell entry through a combination of classical biological, biophysical and newer computational techniques, using models developed by a team led by Angela Violi to determine how C60 molecules find their way into living immune cells of mice.

They found that the C60 particles in low concentrations were entering the membrane individually, without perturbing the structure of the cell enough to trigger its normal response.

“Computational modeling of C60 interacting with lipid bilayers, representative of the cellular membrane, shows that particles readily diffuse into biological membranes and find a thermodynamically stable equilibrium in an eccentric position within the bilayer,” said Violi, U-M professor of mechanical engineering, chemical engineering, biomedical engineering, and macromolecular science and engineering.

“The surprising contribution of passive modes of cellular entry provides new avenues for toxicological research, as we still don’t know exactly what are the mechanisms that cause this crossing.”
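
For readers wondering what a ‘thermodynamically stable equilibrium in an eccentric position’ looks like in practice, here is a toy illustration. It is not the team’s model; it simply uses an invented free-energy profile across a bilayer to show that the stable resting place of a particle is the minimum of that profile, and that this minimum can sit off-centre within the membrane.

# Toy illustration only: an invented free-energy profile for a particle
# crossing a lipid bilayer, used to show how the thermodynamically stable
# position is found as the minimum of the profile. The functional form and
# numbers are made up for illustration and are not taken from the paper.
import numpy as np

# z = position across the bilayer in nanometres; z = 0 is the bilayer midplane.
z = np.linspace(-3.0, 3.0, 601)

# Invented profile (kJ/mol): a barrier near the headgroups (|z| ~ 2 nm),
# a small bump at the midplane, and wells off-centre in each leaflet (|z| ~ 1 nm).
free_energy = (
    5.0 * np.exp(-((np.abs(z) - 2.0) ** 2) / 0.2)
    + 2.0 * np.exp(-(z ** 2) / 0.3)
    - 8.0 * np.exp(-((np.abs(z) - 1.0) ** 2) / 0.3)
)

# The thermodynamically stable equilibrium is the global minimum of the profile,
# which in this toy profile lies off-centre ("eccentric") within the bilayer.
z_min = z[np.argmin(free_energy)]
print(f"Stable position: z = {z_min:+.2f} nm from the midplane")

# Boltzmann weights (RT ~ 2.5 kJ/mol at room temperature) show how strongly
# the particle is expected to favour positions near that minimum.
RT = 2.5
weights = np.exp(-(free_energy - free_energy.min()) / RT)
near_min = np.abs(z - z_min) < 0.25
print(f"Fraction of Boltzmann weight within 0.25 nm of the minimum: {weights[near_min].sum() / weights.sum():.2f}")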

So, while the buckyballs enter cells, they also escape from them somehow. I wonder if the mechanisms that allow them to enter the cells are similar to the ones that allow them to escape. Regardless, here’s a link to and a citation for the paper,

C60 fullerene localization and membrane interactions in RAW 264.7 immortalized mouse macrophages by K. A. Russ, P. Elvati, T. L. Parsonage, A. Dews, J. A. Jarvis, M. Ray, B. Schneider, P. J. S. Smith, P. T. F. Williamson, A. Violi and M. A. Philbert. Nanoscale, 2016, 8, 4134-4144. DOI: 10.1039/C5NR07003A. First published online 25 Jan 2016.

This paper is behind a paywall.