I’ve quickly read Michael Edgeworth McIntyre’s paper on multi-level thinking and find that it provides fascinating insights and some fine writing (I’ve provided a few excerpts from the paper further down in this posting).
An unusual paper “On multi-level thinking and scientific understanding” appears in the October issue of Advances in Atmospheric Sciences. The author is Professor Michael Edgeworth McIntyre from the University of Cambridge, whose work in atmospheric dynamics is well known. He has also had longstanding interests in astrophysics, music, perception psychology, and biological evolution.
The paper touches on a range of deep questions within and outside the atmospheric sciences. They include insights into the nature of science itself, and of scientific understanding — what it means to understand a scientific problem in depth — and into the communication skills necessary to convey that understanding and to mediate collaboration across specialist disciplines.
The paper appears in a Special Issue arising from last year’s Symposium held in Nanjing to commemorate the life of Professor Duzheng YE, who was well known as a national and international scientific leader and for his own wide range of interests, within and outside the atmospheric sciences. The symposium was organized by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, where Prof. YE had worked for nearly 70 years before he passed away. At the invitation of Prof. Jiang ZHU, Director General of IAP and Editor-in-Chief of Advances in Atmospheric Sciences (AAS), Prof. McIntyre agreed to contribute a review paper to an AAS special issue commemorating the centenary of Duzheng YE’s birth. Prof. YE was also the founding Editor-in-Chief of this journal.
One of Professor McIntyre’s themes is that we all have unconscious mathematics, including Euclidean geometry and the calculus of variations. This is easy to demonstrate and is key to understanding not only how science works but also, for instance, how music works. Indeed, it reveals some of the deepest connections between music and mathematics, going beyond the usual remarks about number-patterns. All this revolves around the biological significance of what Professor McIntyre calls the “organic-change principle”.
Further themes include the scientific value of looking at a problem from more than one viewpoint, and the need to use more than one level of description. Many scientific and philosophical controversies stem from confusing one level of description with another, for instance applying arguments to one level that belong on another. This confusion can be especially troublesome when it comes to questions about human biology and human nature, and about what Professor YE called multi-level “orderly human activities”.
Related to all these points are the contrasting modes of perception and understanding offered by the brain’s left and right hemispheres. Our knowledge of their functioning has progressed far beyond the narrow clichés of popular culture, thanks to recent work in the neurosciences. The two hemispheres automatically give us different levels of description, and complementary views of a problem. Good science takes advantage of this. When the two hemispheres cooperate, with each playing to its own strengths, our problem-solving is at its most powerful.
The paper ends with three examples of unconscious assumptions that have impeded scientific progress in the past. Two of them are taken from Professor McIntyre’s main areas of research; the third is from biology.
To give you a sense of his writing and imagination, I’ve excerpted a few paragraphs from p. 1153, but first you need to see this .gif (he provides a number of ways to watch it in his text, but I think it’s easiest to watch the copy he has on his website),
Now for the excerpt,
Here is an example to show what I mean. It is a classic in experimental psychology, from the work of Professor Gunnar JOHANSSON in the 1970s. …
As soon as the twelve dots start moving, everyone with normal vision sees a person walking. This immediately illustrates several things. First, it illustrates that we all make unconscious assumptions. Here, we unconsciously assume a particular kind of three-dimensional motion. In this case the unconscious assumption is completely involuntary. We cannot help seeing a person walking, despite knowing that it is only twelve moving dots.
The animation also shows that we have unconscious mathematics, Euclidean geometry in this case. In order to generate the percept of a person walking, your brain has to fit a mathematical model to the incoming visual data, in this case a mathematical model based on Euclidean geometry. (And the model-fitting process is an active, and highly complex, predictive process most of which is inaccessible to conscious introspection.)
This brings me to the most central point in our discussion. Science does essentially the same thing. It fits models to data. So science is, in the most fundamental possible sense, an extension of ordinary perception. That is a simple way of saying what was said many decades ago by great thinkers such as Professor Sir Karl POPPER….
I love that phrase “unconscious mathematics” for the way it includes even those of us who would never dream of thinking we had any kind of mathematics. I encourage you to read his paper in its entirety; it does include a little technical language in a few spots, but the overall thesis is clear and easily understood.
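To make McIntyre’s “fits models to data” point concrete, here is a minimal sketch of my own (not from the paper): fitting a straight line to noisy measurements by ordinary least squares, about the simplest model-fitting there is.

```python
# Toy illustration of "fitting a model to data": ordinary least squares.
# A straight line y = a*x + b plays the role of the "model"; the noisy
# points play the role of the incoming data.

def fit_line(points):
    """Return slope a and intercept b minimizing the squared error."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Measurements" scattered around the line y = 2x + 1
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
a, b = fit_line(data)
print(round(a, 2), round(b, 2))  # recovers roughly a = 2, b = 1
```

Perception, on McIntyre’s account, does something like this continuously and unconsciously, just with far richer models than a straight line.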
It takes a lot more imagination than I have to describe the object on the right as resembling the candy cane on the left, assuming that’s what was intended when the image was used to illustrate the university’s press release. I like being pushed to see resemblances that are not immediately apparent to me. This may never look like a candy cane to me, but I appreciate that someone finds it to be so. An August 16, 2017 news item on ScienceDaily announces the ‘candy cane’ supercapacitor,
Supercapacitors promise to recharge phones and other devices in seconds or minutes, as opposed to hours for batteries. But current technologies are usually not flexible, have insufficient capacities, and, in many cases, see their performance degrade quickly with charging cycles.
Researchers at Queen Mary University of London (QMUL) and the University of Cambridge have found a way to address all three problems in one stroke.
Their prototype polymer electrode, which resembles a candy cane of the sort usually hung on a Christmas tree, achieves energy storage close to the theoretical limit while also demonstrating flexibility and resilience to charge/discharge cycling.
The technique could be applied to many types of materials for supercapacitors and enable fast charging of mobile phones, smart clothes and implantable devices.
Pseudocapacitance is a property of polymer and composite supercapacitors that allows ions to enter the bulk of the material, packing in much more charge than carbon supercapacitors, which mostly store charge as concentrated ions (in the so-called double layer) near the surface.
The problem with polymer supercapacitors, however, is that the ions necessary for these chemical reactions can only access the top few nanometers below the material surface, leaving the rest of the electrode as dead weight. Growing polymers as nano-structures is one way to increase the amount of accessible material near the surface, but this can be expensive, hard to scale up, and often results in poor mechanical stability.
The researchers, however, have developed a way to interweave nanostructures within a bulk material, thereby achieving the benefits of conventional nanostructuring without using complex synthesis methods or sacrificing material toughness.
Project leader Stoyan Smoukov explained: “Our supercapacitors can store a lot of charge very quickly, because the thin active material (the conductive polymer) is always in contact with a second polymer which contains ions, just like the red thin regions of a candy cane are always in close proximity to the white parts. But this is on a much smaller scale.
“This interpenetrating structure enables the material to bend more easily, as well as swell and shrink without cracking, leading to greater longevity. This one method is like killing not just two, but three birds with one stone.”
The Smoukov group had previously pioneered a combinatorial route to multifunctionality using interpenetrating polymer networks (IPNs), in which each component has its own function, rather than using trial-and-error chemistry to fit all functions into one molecule.
This time they applied the method to energy storage, specifically supercapacitors, because of the known problem of poor material utilization deep beneath the electrode surface.
This interpenetration technique drastically increases the material’s surface area, or more accurately the interfacial area between the different polymer components.
Interpenetration also happens to solve two other major problems in supercapacitors. It brings flexibility and toughness because the interfaces stop growth of any cracks that may form in the material. It also allows the thin regions to swell and shrink repeatedly without developing large stresses, so they are electrochemically resistant and maintain their performance over many charging cycles.
The researchers are currently rationally designing and evaluating a range of materials that can be adapted into the interpenetrating polymer system for even better supercapacitors.
In an upcoming review, accepted for publication in the journal Sustainable Energy and Fuels, they survey the different techniques that have been used to improve the multiple parameters required for novel supercapacitors.
Such devices could be made in soft and flexible freestanding films, which could power electronics embedded in smart clothing, wearable and implantable devices, and soft robotics. The developers hope to contribute to providing ubiquitous power for emerging Internet of Things (IoT) devices, which remains a significant challenge.
This summarizes some of what’s happening in nanomedicine and provides a plug (boost) for the University of Cambridge’s nanotechnology programmes (from a June 26, 2017 news item on Nanowerk),
Nanotechnology is creating new opportunities for fighting disease – from delivering drugs in smart packaging to nanobots powered by the world’s tiniest engines.
Chemotherapy benefits a great many patients but the side effects can be brutal.
When a patient is injected with an anti-cancer drug, the idea is that the molecules will seek out and destroy rogue tumour cells. However, relatively large amounts need to be administered to reach the target in high enough concentrations to be effective. As a result of this high drug concentration, healthy cells may be killed as well as cancer cells, leaving many patients weak, nauseated and vulnerable to infection.
One way that researchers are attempting to improve the safety and efficacy of drugs is to use a relatively new area of research known as nanotherapeutics to target drug delivery just to the cells that need it.
Professor Sir Mark Welland is Head of the Electrical Engineering Division at Cambridge. In recent years, his research has focused on nanotherapeutics, working in collaboration with clinicians and industry to develop better, safer drugs. He and his colleagues don’t design new drugs; instead, they design and build smart packaging for existing drugs.
The University of Cambridge has produced a video interview (referencing the 1966 movie ‘Fantastic Voyage’ in its title) with Sir Mark Welland,
Nanotherapeutics come in many different configurations, but the easiest way to think about them is as small, benign particles filled with a drug. They can be injected in the same way as a normal drug, and are carried through the bloodstream to the target organ, tissue or cell. At this point, a change in the local environment, such as pH, or the use of light or ultrasound, causes the nanoparticles to release their cargo.
Nano-sized tools are increasingly being looked at for diagnosis, drug delivery and therapy. “There are a huge number of possibilities right now, and probably more to come, which is why there’s been so much interest,” says Welland. Using clever chemistry and engineering at the nanoscale, drugs can be ‘taught’ to behave like a Trojan horse, or to hold their fire until just the right moment, or to recognise the target they’re looking for.
“We always try to use techniques that can be scaled up – we avoid using expensive chemistries or expensive equipment, and we’ve been reasonably successful in that,” he adds. “By keeping costs down and using scalable techniques, we’ve got a far better chance of making a successful treatment for patients.”
In 2014, he and collaborators demonstrated that gold nanoparticles could be used to ‘smuggle’ chemotherapy drugs into cancer cells in glioblastoma multiforme, the most common and aggressive type of brain cancer in adults, which is notoriously difficult to treat. The team engineered nanostructures containing gold and cisplatin, a conventional chemotherapy drug. A coating on the particles made them attracted to tumour cells from glioblastoma patients, so that the nanostructures bound and were absorbed into the cancer cells.
Once inside, these nanostructures were exposed to radiotherapy. This caused the gold to release electrons that damaged the cancer cell’s DNA and its overall structure, enhancing the impact of the chemotherapy drug. The process was so effective that 20 days later, the cell culture showed no evidence of any revival, suggesting that the tumour cells had been destroyed.
While the technique is still several years away from use in humans, tests have begun in mice. Welland’s group is working with MedImmune, the biologics R&D arm of pharmaceutical company AstraZeneca, to study the stability of drugs and to design ways to deliver them more effectively using nanotechnology.
“One of the great advantages of working with MedImmune is they understand precisely what the requirements are for a drug to be approved. We would shut down lines of research where we thought it was never going to get to the point of approval by the regulators,” says Welland. “It’s important to be pragmatic about it so that only the approaches with the best chance of working in patients are taken forward.”
The researchers are also targeting diseases like tuberculosis (TB). With funding from the Rosetrees Trust, Welland and postdoctoral researcher Dr Íris da luz Batalha are working with Professor Andres Floto in the Department of Medicine to improve the efficacy of TB drugs.
Their solution has been to design and develop nontoxic, biodegradable polymers that can be ‘fused’ with TB drug molecules. As polymer molecules have a long, chain-like shape, drugs can be attached along the length of the polymer backbone, meaning that very large amounts of the drug can be loaded onto each polymer molecule. The polymers are stable in the bloodstream and release the drugs they carry when they reach the target cell. Inside the cell, the pH drops, which causes the polymer to release the drug.
In fact, the polymers worked so well for TB drugs that another of Welland’s postdoctoral researchers, Dr Myriam Ouberaï, has formed a start-up company, Spirea, which is raising funding to develop the polymers for use with oncology drugs. Ouberaï is hoping to establish a collaboration with a pharma company in the next two years.
“Designing these particles, loading them with drugs and making them clever so that they release their cargo in a controlled and precise way: it’s quite a technical challenge,” adds Welland. “The main reason I’m interested in the challenge is I want to see something working in the clinic – I want to see something working in patients.”
Could nanotechnology move beyond therapeutics to a time when nanomachines keep us healthy by patrolling, monitoring and repairing the body?
Nanomachines have long been a dream of scientists and public alike. But working out how to make them move has meant they’ve remained in the realm of science fiction.
Last year, however, Professor Jeremy Baumberg and colleagues at Cambridge and the University of Bath developed the world’s tiniest engine – just a few billionths of a metre [nanometre] in size. It’s biocompatible, cost-effective to manufacture, fast to respond and energy efficient.
The forces exerted by these ‘ANTs’ (for ‘actuating nano-transducers’) are nearly a hundred times larger than those for any known device, motor or muscle. To make them, tiny charged particles of gold, bound together with a temperature-responsive polymer gel, are heated with a laser. As the polymer coatings expel water from the gel and collapse, a large amount of elastic energy is stored in a fraction of a second. On cooling, the particles spring apart and release energy.
The researchers hope to use this ability of ANTs to produce very large forces relative to their weight to develop three-dimensional machines that swim, have pumps that take on fluid to sense the environment and are small enough to move around our bloodstream.
Working with Cambridge Enterprise, the University’s commercialisation arm, the team in Cambridge’s Nanophotonics Centre hopes to commercialise the technology for microfluidics bio-applications. The work is funded by the Engineering and Physical Sciences Research Council and the European Research Council.
“There’s a revolution happening in personalised healthcare, and for that we need sensors not just on the outside but on the inside,” explains Baumberg, who leads an interdisciplinary Strategic Research Network and Doctoral Training Centre focused on nanoscience and nanotechnology.
“Nanoscience is driving this. We are now building technology that allows us to even imagine these futures.”
I have featured Welland and his work here before and noted his penchant for wanting to insert nanodevices into humans as per this excerpt from an April 30, 2010 posting,
Getting back to the Cambridge University video, do go and watch it on the Nanowerk site. It is fun, very informative, and approximately 17 minutes long. I noticed that they reused part of their Nokia morph animation (last mentioned on this blog here) and offered some thoughts from Professor Mark Welland, the team leader on that project. Interestingly, Welland was talking about yet another possibility. (Sometimes I think nano goes too far!) He was suggesting that we could have chips/devices in our brains that would allow us to think about phoning someone and have an immediate connection made to that person. Bluntly—no. Just think what would happen if the marketers got access, and I don’t even want to think what a person who suffers psychotic breaks (i.e., hearing voices) would do with even more input. Welland starts to talk at the 11-minute mark (I think). For an alternative take on the video and more details, visit Dexter Johnson’s blog, Nanoclast, for this posting. Hint: he likes the idea of a phone in the brain much better than I do.
I’m not sure what could have occasioned this latest press release and related video featuring Welland and nanotherapeutics other than guessing that it was a slow news period.
An image of spectacular swirling graphene ink in alcohol, which can be used to print electrical circuits onto paper, has won the overall prize in a national science photography competition organised by the Engineering and Physical Sciences Research Council (EPSRC).
‘Graphene – IPA Ink’, by James Macleod, from the University of Cambridge, shows powdered graphite in alcohol which produces a conductive ink. The ink is forced at high pressure through micrometre-scale capillaries made of diamond. This rips the layers apart resulting in a smooth, conductive material in solution.
The image came first in two categories, Innovation and Equipment and Facilities, as well as winning overall against many other stunning pictures featuring research in action, in the EPSRC’s competition – now in its fourth year.
James Macleod explained how the photograph came about: “We are working to create conductive inks for printing flexible electronics and are currently focused on optimising our recipe for use in different printing methods and for printing onto different surfaces. This was the first time we had used alcohol to create our ink and I was struck by how mesmerising it looked while mixing.”
The competition’s five categories were: Eureka and Discovery, Equipment and Facilities, People and Skills, Innovation, and Weird and Wonderful. Other winning images feature:
A 3D printed gripper, programmed to lift delicate, geometrically complex objects like a lightbulb pneumatically rather than using sensors.
A scanning electron microscope image showing the surface of a silicon chip, patterned to create a one-metre ultra-thin optical wire, just one millionth of a metre wide, made into a spiral and wrapped into an area the size of a square millimetre.
Researcher Michael Coto with a local student in Vingunguti, Dar es Salaam, Tanzania, testing and purifying polluted water using new solar active catalysts.
An image captured on an iPhone 4s through an optical microscope that shows the variety of textures appearing on the surface of a silicon solar cell, not dissimilar to pyramids surrounded by a sea of dunes in a desert, but the size of a human hair.
Tiny biodegradable polymer particles resembling golf balls being developed to target infectious diseases and cancers. Only 0.04mm across, they form part of scaffolds which are being studied to see if they can support the growth of healthy new cells.
One of the judges was physicist, oceanographer and broadcaster Dr Helen Czerski, a lecturer at UCL. She said: “Scientists and engineers are often so busy focusing on the technical details of their research that they can be blind to what everyone else sees first: the aesthetics of their work. Science is a part of our culture, and it can contribute in many different ways. This competition is a wonderful reminder of the emotional and artistic aspects of science, and it’s great that EPSRC researchers have found this richness in their own work.”
Congratulating the winners and entrants, Professor Tom Rodden, EPSRC’s Deputy Chief Executive, said: “The quality of entries into our competition demonstrates that EPSRC-funded researchers are keen to show the world how beautiful and interesting science and engineering can be. I’d like to thank everyone who entered; judging was really difficult.

“These stunning images are a great way to engage the public with the research they fund, and inspire everyone to take an interest in science and engineering.”
The competition received over 100 entries, drawn from researchers in receipt of EPSRC funding.
The judges were:
Martin Keene – Group Picture Editor – Press Association
Dr Helen Czerski – Lecturer at the Department of Mechanical Engineering, University College London
Professor Tom Rodden – EPSRC‘s Deputy Chief Executive
I have three news bits about legal issues that are arising as a consequence of emerging technologies.
Deep neural networks, art, and copyright
Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka
Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,
In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”
With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.
Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.
For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.
These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.
DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.
Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.
The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
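As a concrete (and deliberately tiny) illustration of the loop just described, here is a toy network of my own sketching, not anything from the research discussed: a single hidden layer trained to learn the XOR function by repeatedly comparing actual outputs to expected ones and correcting the predictive error.

```python
import numpy as np

# Minimal two-layer network: inputs pass through a hidden layer, the
# output is compared with the expected value, and the weights are
# adjusted by gradient descent -- the error-correction loop above.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # forward pass: each layer refines the representation of the input
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # compare actual outputs to expected ones (the predictive error)
    err = out - y
    # backward pass: propagate the error and correct the weights
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # close to [0, 1, 1, 0] after training
```

Deep networks stack many such layers, which is where the higher levels of abstraction described above come from; the mechanics of each layer, though, are just this forward pass and error correction.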
Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.
The originality of DNNs is a combined product of technological automation on the one hand and human inputs and decisions on the other.
DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.
Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claims to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.
Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.
Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.
Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.
The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.
In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.
DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.
The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics was held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017.
Call for Abstracts – Now Closed
The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.
Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics USC [University of Southern California] Gould School of Law
Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan
Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence
Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)
Innovation – Responsible and/or Permissionless
Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences
Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University
Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University
Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University
Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law
Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence
George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/); Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University
Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge
Responsible Development of AI
Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University
John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University
Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics
*Current Student / ASU Law Alumni Registration: $50.00
^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)
There you have it.
Neuro-techno future laws
I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,
New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.
The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.
Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”
Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.
Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”
The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.
International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.
Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”
I think it’s time to give this suggestion again. Always hold a little doubt about the science information you read and hear. Everybody makes mistakes.
Here’s an example of what can happen. George Tulevski who gave a talk about nanotechnology in Nov. 2016 for TED@IBM is an accomplished scientist who appears to have made an error during his TED talk. From Tulevski’s The Next Step in Nanotechnology talk transcript page,
When I was a graduate student, it was one of the most exciting times to be working in nanotechnology. There were scientific breakthroughs happening all the time. The conferences were buzzing, there was tons of money pouring in from funding agencies. And the reason is when objects get really small, they’re governed by a different set of physics that govern ordinary objects, like the ones we interact with. We call this physics quantum mechanics. [emphases mine] And what it tells you is that you can precisely tune their behavior just by making seemingly small changes to them, like adding or removing a handful of atoms, or twisting the material. It’s like this ultimate toolkit. You really felt empowered; you felt like you could make anything.
In September 2016, scientists at Cambridge University (UK) announced they had concrete proof that the physics governing materials at the nanoscale is unique, i.e., it does not follow the rules of either classical or quantum physics. From my Oct. 27, 2016 posting,
In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.
Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
It is very, very easy to miss new developments no matter how tirelessly you scan for information.
Tulevski is a good, interesting, and informed speaker but I do have one other hesitation regarding his talk. He seems to think that over the last 15 years there should have been more practical applications arising from the field of nanotechnology. There are two aspects here. First, he seems to date the ‘nanotechnology’ effort from the beginning of the US National Nanotechnology Initiative, and there are many scientists who would object to that as the starting point. Second, 15 or even 30 or more years is a brief period of time, especially when you are investigating that which hasn’t been investigated before. For example, you might want to check out “Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life” (published 1985) by Steven Shapin and Simon Schaffer (Wikipedia entry for the book). The amount of time (years) spent on how to make just the glue which held the various experimental apparatuses together was a revelation to me. Of course, it makes perfect sense that if you’re trying something new, you’re going to have to figure out everything.
By the way, I include my blog as one of the sources of information that can be faulty despite efforts to make corrections and to keep up with the latest. Even the scientists at Cambridge University can run into some problems as I noted in my Jan. 28, 2016 posting.
ETA Jan. 24, 2017: For some insight into how uncertain, tortuous, and expensive commercializing technology can be read Dexter Johnson’s Jan. 23, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website). Here’s an excerpt (Note: Links have been removed),
The brief description of this odyssey includes US $78 million in financing over 15 years and $50 million in revenues over that period through licensing of its technology and patents. That revenue includes a back-against-the-wall sell-off of a key business unit to Lockheed Martin in 2008. Another key moment occurred back in 2012 when Belgian-based nanoelectronics powerhouse Imec took on the job of further developing Nantero’s carbon-nanotube-based memory. Despite the money and support from major electronics players, the big commercial breakout of their NRAM technology seemed ever less likely to happen with the passage of time.
Slate.com is dedicating a month (January 2017) to Frankenstein. This means there will be one or more essays each week on one aspect or another of Frankenstein and science. These essays are one of a series of initiatives jointly supported by Slate, Arizona State University, and an organization known as New America. It gets confusing since these essays are listed as part of two initiatives: Futurography and Future Tense.
The really odd part, as far as I’m concerned, is that there is no mention of Arizona State University’s (ASU) The Frankenstein Bicentennial Project (mentioned in my Oct. 26, 2016 posting). Perhaps they’re concerned that people will think ASU is advertising the project?
Getting back to the essays, a Jan. 3, 2017 article by Jacob Brogan explains, by means of a ‘Question and Answer’ format article, why the book and the monster maintain popular interest after two centuries (Note: We never do find out who or how many people are supplying the answers),
OK, fine. I get that this book is important, but why are we talking about it in a series about emerging technology?
Though people still tend to weaponize it as a simple anti-scientific screed, Frankenstein, which was first published in 1818, is much richer when we read it as a complex dialogue about our relationship to innovation—both our desire for it and our fear of the changes it brings. Mary Shelley was just a teenager when she began to compose Frankenstein, but she was already grappling with our complex relationship to new forces. Almost two centuries on, the book is just as propulsive and compelling as it was when it was first published. That’s partly because it’s so thick with ambiguity—and so resistant to easy interpretation.
Is it really ambiguous? I mean, when someone calls something frankenfood, they aren’t calling it “ethically ambiguous food.”
It’s a fair point. For decades, Frankenstein has been central to discussions in and about bioethics. Perhaps most notably, it frequently crops up as a reference point in discussions of genetically modified organisms, where the prefix Franken- functions as a sort of convenient shorthand for human attempts to meddle with the natural order. Today, the most prominent flashpoint for those anxieties is probably the clustered regularly interspaced short palindromic repeats, or CRISPR, gene-editing technique [emphasis mine]. But it’s really oversimplifying to suggest Frankenstein is a cautionary tale about monkeying with life.
As we’ll see throughout this month on Futurography, it’s become a lens for looking at the unintended consequences of things like synthetic biology, animal experimentation, artificial intelligence, and maybe even social networking. Facebook, for example, has arguably taken on a life of its own, as its algorithms seem to influence the course of elections. Mark Zuckerberg, who’s sometimes been known to disavow the power of his own platform, might well be understood as a Frankensteinian figure, amplifying his creation’s monstrosity by neglecting its practical needs.
But this book is almost 200 years old! Surely the actual science in it is bad.
Shelley herself would probably be the first to admit that the science in the novel isn’t all that accurate. Early in the novel, Victor Frankenstein meets with a professor who castigates him for having read the wrong works of “natural philosophy.” Shelley’s protagonist has mostly been studying alchemical tomes and otherwise fantastical works, the sort of things that were recognized as pseudoscience, even by the standards of the day. Near the start of the novel, Frankenstein attends a lecture in which the professor declaims on the promise of modern science. He observes that where the old masters “promised impossibilities and performed nothing,” the new scientists achieve far more in part because they “promise very little; they know that metals cannot be transmuted and that the elixir of life is a chimera.”
Is it actually about bad science, though?
Not exactly, but it has been read as a story about bad scientists.
Ultimately, Frankenstein outstrips his own teachers, of course, and pulls off the very feats they derided as mere fantasy. But Shelley never seems to confuse fact and fiction, and, in fact, she largely elides any explanation of how Frankenstein pulls off the miraculous feat of animating dead tissue. We never actually get a scene of the doctor awakening his creature. The novel spends far more time dwelling on the broader reverberations of that act, showing how his attempt to create one life destroys countless others. Read in this light, Frankenstein isn’t telling us that we shouldn’t try to accomplish new things, just that we should take care when we do.
This speaks to why the novel has stuck around for so long. It’s not about particular scientific accomplishments but the vagaries of scientific progress in general.
Does that make it into a warning against playing God?
It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.
Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.
I encourage you to read Brogan’s piece in its entirety and perhaps supplement the reading. Mary Shelley has a pretty interesting history. In 1814, at the age of seventeen, she ran off with Percy Bysshe Shelley, who was married to another woman. Her parents were both well known and respected intellectuals and philosophers, William Godwin and Mary Wollstonecraft. By the time Mary Shelley wrote her book, her first baby had died and she had given birth to a second child, a boy. Percy Shelley was to die a few years later, as were her son and a third child she’d given birth to. (Her fourth child, born in 1819, did survive.) I mention the births because one analysis I read suggests the novel is also a commentary on childbirth. In fact, the Frankenstein narrative has been examined from many perspectives (other than science), including feminism and LGBTQ studies.
Getting back to the science fiction end of things, the next part of the Futurography series is titled “A Cheat-Sheet Guide to Frankenstein” and that too is written by Jacob Brogan with a publication date of Jan. 3, 2017,
Marilyn Butler: Butler, a literary critic and English professor at the University of Cambridge, authored the seminal essay “Frankenstein and Radical Science.”
Jennifer Doudna: A professor of chemistry and biology at the University of California, Berkeley, Doudna helped develop the CRISPR gene-editing technique [emphasis mine].
Stephen Jay Gould: Gould was an evolutionary biologist who wrote in defense of Frankenstein’s scientific ambitions, arguing that hubris wasn’t the doctor’s true fault.
Seán Ó hÉigeartaigh: As executive director of the Center for Existential Risk at the University of Cambridge, Ó hÉigeartaigh leads research into technologies that threaten the existence of our species.
Jim Hightower: This columnist and activist helped popularize the term frankenfood to describe genetically modified crops.
Mary Shelley: Shelley, the author of Frankenstein, helped create science fiction as we now know it.
J. Craig Venter: A leading genomic researcher, Venter has pursued a variety of human biotechnology projects.
‘Franken’ and CRISPR
The first essay is a Jan. 6, 2017 article by Kay Waldman focusing on the ‘franken’ prefix (Note: Links have been removed),
In a letter to the New York Times on June 2, 1992, an English professor named Paul Lewis lopped off the top of Victor Frankenstein’s surname and sewed it onto a tomato. Railing against genetically modified crops, Lewis put a new generation of natural philosophers on notice: “If they want to sell us Frankenfood, perhaps it’s time to gather the villagers, light some torches and head to the castle,” he wrote.
William Safire, in a 2000 New York Times column, tracked the creation of the franken- prefix to this moment: an academic channeling popular distrust of science by invoking the man who tried to improve upon creation and ended up disfiguring it. “There’s no telling where or how it will end,” he wrote wryly, referring to the spread of the construction. “It has enhanced the sales of the metaphysical novel that Ms. Shelley’s husband, the poet Percy Bysshe Shelley, encouraged her to write, and has not harmed sales at ‘Frank’n’Stein,’ the fast-food chain whose hot dogs and beer I find delectably inorganic.” Safire went on to quote the American Dialect Society’s Laurence Horn, who lamented that despite the ’90s flowering of frankenfruits and frankenpigs, people hadn’t used Frankensense to describe “the opposite of common sense,” as in “politicians’ motivations for a creatively stupid piece of legislation.”
A year later, however, Safire returned to franken- in dead earnest. In an op-ed for the Times avowing the ethical value of embryonic stem cell research, the columnist suggested that a White House conference on bioethics would salve the fears of Americans concerned about “the real dangers of the slippery slope to Frankenscience.”
All of this is to say that franken-, the prefix we use to talk about human efforts to interfere with nature, flips between “funny” and “scary” with ease. Like Shelley’s monster himself, an ungainly patchwork of salvaged parts, it can seem goofy until it doesn’t—until it taps into an abiding anxiety that technology raises in us, a fear of overstepping.
Waldman’s piece hints at how language can shape discussions while retaining a rather playful quality.
Since its publication nearly 200 years ago, Shelley’s gothic novel has been read as a cautionary tale of the dangers of creation and experimentation. James Whale’s 1931 film took the message further, explicitly assigning the hubris of playing God to the mad scientist. As his monster comes to life, Dr. Frankenstein, played by Colin Clive, triumphantly exclaims: “Now I know what it feels like to be God!”
The admonition against playing God has since been ceaselessly invoked as a rhetorical bogeyman. Secular and religious, critic and journalist alike have summoned the term to deride and outright dismiss entire areas of research and technology, including stem cells, genetically modified crops, recombinant DNA, geoengineering, and gene editing. As we near the two-century commemoration of Shelley’s captivating story, we would be wise to shed this shorthand lesson—and to put this part of the Frankenstein legacy to rest in its proverbial grave.
The trouble with the term arises first from its murkiness. What exactly does it mean to play God, and why should we find it objectionable on its face? All but zealots would likely agree that it’s fine to create new forms of life through selective breeding and grafting of fruit trees, or to use in-vitro fertilization to conceive life outside the womb to aid infertile couples. No one objects when people intervene in what some deem “acts of God,” such as earthquakes, to rescue victims and provide relief. People get fully behind treating patients dying of cancer with “unnatural” solutions like chemotherapy. Most people even find it morally justified for humans to mete out decisions as to who lives or dies in the form of organ transplant lists that prize certain people’s survival over others.
So what is it—if not the imitation of a deity or the creation of life—that inspires people to invoke the idea of “playing God” to warn against, or even stop, particular technologies? A presidential commission charged in the early 1980s with studying the ethics of genetic engineering of humans, in the wake of the recombinant DNA revolution, sheds some light on underlying motivations. The commission sought to understand the concerns expressed by leaders of three major religious groups in the United States—representing Protestants, Jews, and Catholics—who had used the phrase “playing God” in a 1980 letter to President Jimmy Carter urging government oversight. Scholars from the three faiths, the commission concluded, did not see a theological reason to flat-out prohibit genetic engineering. Their concerns, it turned out, weren’t exactly moral objections to scientists acting as God. Instead, they echoed those of the secular public; namely, they feared possible negative effects from creating new human traits or new species. In other words, the religious leaders who called recombinant DNA tools “playing God” wanted precautions taken against bad consequences but did not inherently oppose the use of the technology as an act of human hubris.
She presents an interesting argument and offers this as a solution,
The lesson for contemporary science, then, is not that we should cease creating and discovering at the boundaries of current human knowledge. It’s that scientists and technologists ought to steward their inventions into society, and to more rigorously participate in public debate about their work’s social and ethical consequences. Frankenstein’s proper legacy today would be to encourage researchers to address the unsavory implications of their technologies, whether it’s the cognitive and social effects of ubiquitous smartphone use or the long-term consequences of genetically engineered organisms on ecosystems and biodiversity.
Some will undoubtedly argue that this places an undue burden on innovators. Here, again, Shelley’s novel offers a lesson. Scientists who cloister themselves as Dr. Frankenstein did—those who do not fully contemplate the consequences of their work—risk later encounters with the horror of their own inventions.
At a guess, Venkataraman seems to be assuming that if scientists communicate and make their case, the public will cease to panic over moralistic and other concerns. My understanding is that social scientists have found this is not the case. Someone may understand the technology quite well and still oppose it.
Frankenstein and anti-vaxxers
The Jan. 16, 2017 essay by Charles Kenny is the weakest of the lot, so far (Note: Links have been removed),
In 1780, University of Bologna physician Luigi Galvani found something peculiar: When he applied an electric current to the legs of a dead frog, they twitched. Thirty-seven years later, Mary Shelley had Galvani’s experiments in mind as she wrote her fable of Faustian overreach, wherein Dr. Victor Frankenstein plays God by reanimating flesh.
And a little less than halfway between those two dates, English physician Edward Jenner demonstrated the efficacy of a vaccine against smallpox—one of the greatest killers of the age. Given the suspicion with which Romantic thinkers like Shelley regarded scientific progress, it is no surprise that many at the time damned the procedure as against the natural order. But what is surprising is how that suspicion continues to endure, even after two centuries of spectacular successes for vaccination. This anti-vaccination stance—which now infects even the White House—demonstrates the immense harm that can be done by excessive distrust of technological advance.
Kenny employs history as a framing device. Crudely, Galvani’s experiments led to Mary Shelley’s Frankenstein which is a fable about ‘playing God’. (Kenny seems unaware there are many other readings of and perspectives on the book.) As for his statement ” … the suspicion with which Romantic thinkers like Shelley regarded scientific progress … ,” I’m not sure how he arrived at his conclusion about Romantic thinkers. According to Richard Holmes (in his book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science), their relationship to science was more complex. Percy Bysshe Shelley ran ballooning experiments and wrote poetry about science, which included footnotes for the literature and concepts he was referencing; John Keats was a medical student prior to his establishment as a poet; and Samuel Taylor Coleridge (The Rime of the Ancient Mariner, etc.) maintained a healthy correspondence with scientists of the day sometimes influencing their research. In fact, when you analyze the matter, you realize even scientists are, on occasion, suspicious of science.
As for the anti-vaccination wars, I wish this essay had been more thoughtful. Yes, Andrew Wakefield’s research showing a link between MMR (measles, mumps, and rubella) vaccinations and autism is a sham. However, having concerns and suspicions about technology does not render you a fool who hasn’t progressed from 18th/19th Century concerns and suspicions about science and technology. For example, vaccines are being touted for all kinds of things, the latest being a possible antidote to opiate addiction (see Susan Gados’ June 28, 2016 article for ScienceNews). Are we going to be vaccinated for everything? What happens when you keep piling vaccination on top of vaccination? Instead of a debate, the discussion has devolved to: “I’m right and you’re wrong.”
For the record, I’m grateful for the vaccinations I’ve had and the diminishment of diseases that were devastating and seem to be making a comeback with this current anti-vaccination fever. That said, I think there are some important questions about vaccines.
Kenny’s essay could have been a nuanced discussion of vaccines that have clearly raised the bar for public health and some of the concerns regarding the current pursuit of yet more vaccines. Instead, he’s been quite dismissive of anyone who questions vaccination orthodoxy.
The end of this piece
There will be more essays in Slate’s Frankenstein series but I don’t have time to digest and write commentary for all of them.
Please use this piece as a critical counterpoint to some of the series and, if I’ve done my job, you’ll critique this critique. Please do let me know if you find any errors or want to add an opinion or add your own critique in the Comments of this blog.
ETA Jan. 25, 2017: Here’s the Frankenstein webspace on Slate’s Futurography which lists all the essays in this series. It’s well worth looking at the list. There are several that were not covered here.
The conversion of bacteria from an enemy to be vanquished at all costs to a ‘frenemy’, a friendly enemy supplying possible solutions for problems is fascinating. An Oct. 26, 2016 news item on Nanowerk falls into the ‘frenemy’ camp,
A new prototype of a lithium-sulphur battery – which could have five times the energy density of a typical lithium-ion battery – overcomes one of the key hurdles preventing their commercial development by mimicking the structure of the cells which allow us to absorb nutrients.
Researchers have developed a prototype of a next-generation lithium-sulphur battery which takes its inspiration in part from the cells lining the human intestine. The batteries, if commercially developed, would have five times the energy density of the lithium-ion batteries used in smartphones and other electronics.
The new design, by researchers from the University of Cambridge, overcomes one of the key technical problems hindering the commercial development of lithium-sulphur batteries, by preventing the degradation of the battery caused by the loss of material within it. The results are reported in the journal Advanced Functional Materials.
Working with collaborators at the Beijing Institute of Technology, the Cambridge researchers based in Dr Vasant Kumar’s team in the Department of Materials Science and Metallurgy developed and tested a lightweight nanostructured material which resembles villi, the finger-like protrusions which line the small intestine. In the human body, villi are used to absorb the products of digestion and increase the surface area over which this process can take place.
In the new lithium-sulphur battery, a layer of material with a villi-like structure, made from tiny zinc oxide wires, is placed on the surface of one of the battery’s electrodes. This can trap fragments of the active material when they break off, keeping them electrochemically accessible and allowing the material to be reused.
“It’s a tiny thing, this layer, but it’s important,” said study co-author Dr Paul Coxon from Cambridge’s Department of Materials Science and Metallurgy. “This gets us a long way through the bottleneck which is preventing the development of better batteries.”
A typical lithium-ion battery is made of three separate components: an anode (negative electrode), a cathode (positive electrode) and an electrolyte in the middle. The most common materials for the anode and cathode are graphite and lithium cobalt oxide respectively, which both have layered structures. Positively-charged lithium ions move back and forth from the cathode, through the electrolyte and into the anode.
The crystal structure of the electrode materials determines how much energy can be squeezed into the battery. For example, due to the atomic structure of graphite, it takes six carbon atoms to bind a single lithium ion, limiting the maximum capacity of the battery.
Sulphur and lithium react differently, via a multi-electron transfer mechanism, meaning that elemental sulphur can offer a much higher theoretical capacity, resulting in a lithium-sulphur battery with much higher energy density. However, when the battery discharges, the lithium and sulphur interact and the ring-like sulphur molecules transform into chain-like structures, known as poly-sulphides. As the battery undergoes several charge-discharge cycles, bits of the poly-sulphide can go into the electrolyte, so that over time the battery gradually loses active material.
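The capacity gap between the two chemistries can be sanity-checked with Faraday’s law. Here’s a minimal sketch; the stoichiometries used (LiC6 for lithiated graphite, Li2S for fully discharged sulphur) are standard textbook values, not figures taken from the article:

```python
# Theoretical specific capacity from Faraday's law:
#   Q [mAh/g] = n * F / (3.6 * M)
# where n = electrons transferred per formula unit,
# F = Faraday constant (C/mol), M = molar mass of the host (g/mol).
F = 96485.0  # C/mol

def capacity_mAh_per_g(n_electrons, molar_mass):
    return n_electrons * F / (3.6 * molar_mass)

# Graphite stores one lithium ion (one electron) per six carbons (LiC6)
graphite = capacity_mAh_per_g(1, 6 * 12.011)

# Sulphur accepts two electrons per atom (S + 2Li+ + 2e- -> Li2S)
sulphur = capacity_mAh_per_g(2, 32.06)

print(f"graphite: {graphite:.0f} mAh/g")  # ~372 mAh/g
print(f"sulphur:  {sulphur:.0f} mAh/g")   # ~1672 mAh/g
```

That roughly 4.5-fold theoretical capacity advantage is where headline figures like “five times the energy density” come from, before the poly-sulphide losses described above eat into it.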
The Cambridge researchers have created a functional layer which lies on top of the cathode and fixes the active material to a conductive framework so the active material can be reused. The layer is made up of tiny, one-dimensional zinc oxide nanowires grown on a scaffold. The concept was trialled using commercially-available nickel foam for support. After successful results, the foam was replaced by a lightweight carbon fibre mat to reduce the battery’s overall weight.
“Changing from stiff nickel foam to flexible carbon fibre mat makes the layer mimic the way small intestine works even further,” said study co-author Dr Yingjun Liu.
This functional layer, like the intestinal villi it resembles, has a very high surface area. The material has a very strong chemical bond with the poly-sulphides, allowing the active material to be used for longer, greatly increasing the lifespan of the battery.
“This is the first time a chemically functional layer with a well-organised nano-architecture has been proposed to trap and reuse the dissolved active materials during battery charging and discharging,” said the study’s lead author Teng Zhao, a PhD student from the Department of Materials Science & Metallurgy. “By taking our inspiration from the natural world, we were able to come up with a solution that we hope will accelerate the development of next-generation batteries.”
For the time being, the device is a proof of principle, so commercially-available lithium-sulphur batteries are still some years away. Additionally, while the number of times the battery can be charged and discharged has been improved, it is still not able to go through as many charge cycles as a lithium-ion battery. However, since a lithium-sulphur battery does not need to be charged as often as a lithium-ion battery, it may be the case that the increase in energy density cancels out the lower total number of charge-discharge cycles.
“This is a way of getting around one of those awkward little problems that affects all of us,” said Coxon. “We’re all tied in to our electronic devices – ultimately, we’re just trying to make those devices work better, hopefully making our lives a little bit nicer.”
I hadn’t realized this still needed to be proved but it’s always good to have your misconceptions adjusted. Here’s more about the work from the University of Cambridge in a Sept. 30, 2016 news item on phys.org,
Scientists have long suspected that the way materials behave on the nanoscale – that is when particles have dimensions of about 1–100 nanometres – is different from how they behave on any other scale. A new paper in the journal Chemical Science provides concrete proof that this is the case.
The laws of thermodynamics govern the behaviour of materials in the macro world, while quantum mechanics describes behaviour of particles at the other extreme, in the world of single atoms and electrons.
In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.
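The scaling behind that claim is simple to check. Here is a quick back-of-envelope sketch (my own illustration, not from the paper): for a sphere, surface area over volume works out to 3/r, so shrinking a particle from millimetre to nanometre size raises the ratio by five orders of magnitude.

```python
import math

def surface_to_volume(radius_m):
    """Surface-area-to-volume ratio of a sphere, in 1/m (equals 3/r)."""
    return (4 * math.pi * radius_m**2) / ((4 / 3) * math.pi * radius_m**3)

macro = surface_to_volume(1e-3)   # a 1 mm particle
nano = surface_to_volume(10e-9)   # a 10 nm particle

print(f"1 mm particle:  {macro:.1e} per metre")
print(f"10 nm particle: {nano:.1e} per metre")
print(f"increase: {nano / macro:.0e}x")  # five orders of magnitude
```

That hundred-thousand-fold increase is why surface energetics, negligible for a bulk crystal, come to dominate at the nanoscale.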
Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
And because there are so many particles doing different things at the same time, it’s difficult to simulate all their interactions using a computer. It’s also hard to gather much experimental information, because we haven’t yet developed the capacity to measure behaviour on such a tiny scale.
This conundrum becomes particularly acute when we’re trying to understand crystallisation, the process by which particles, randomly distributed in a solution, can form highly ordered crystal structures, given the right conditions.
Chemists don’t really understand how this works. How do around 10^18 molecules, moving around in solution at random, come together to form a micro- to millimetre-sized ordered crystal? Most remarkable perhaps is the fact that in most cases every crystal is ordered in the same way every time the crystal is formed.
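That figure is easy to sanity-check. A rough estimate (the molecular volume here is an assumed typical value for a small organic molecule, not a number from the article): a millimetre-scale crystal packed with molecules of roughly 0.3 cubic nanometres each does indeed contain on the order of 10^18 molecules.

```python
# Back-of-envelope count of molecules in a 1 mm cube crystal.
crystal_volume_m3 = (1e-3) ** 3      # a 1 mm cube
molecule_volume_m3 = 0.3e-27         # ~0.3 nm^3 per molecule (assumed)
n_molecules = crystal_volume_m3 / molecule_volume_m3
print(f"~{n_molecules:.0e} molecules")  # on the order of 10^18
```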
However, it turns out that different conditions can sometimes yield different crystal structures. These are known as polymorphs, and they’re important in many branches of science including medicine – a drug can behave differently in the body depending on which polymorph it’s crystallised in.
What we do know so far about the process, at least according to one widely accepted model, is that particles in solution can come together to form a nucleus, and once a critical mass is reached we see crystal growth. The structure of the nucleus determines the structure of the final crystal, that is, which polymorph we get.
What we have not known until now is what determines the structure of the nucleus in the first place, and that happens on the nanoscale.
In this paper, the authors have used mechanochemistry – that is, milling and grinding – to obtain nanosized particles, small enough that surface effects become significant. In other words, the chemistry of the nanoworld – which structures are the most stable at this scale, and what conditions affect their stability – has been studied for the first time with carefully controlled experiments.
And by changing the milling conditions, for example by adding a small amount of solvent, the authors have been able to control which polymorph is the most stable. Professor Jeremy Sanders of the University of Cambridge’s Department of Chemistry, who led the work, said “It is exciting that these simple experiments, when carried out with great care, can unexpectedly open a new door to understanding the fundamental question of how surface effects can control the stability of nanocrystals.”
Joel Bernstein, Global Distinguished Professor of Chemistry at NYU Abu Dhabi, and an expert in crystal growth and structure, explains: “The authors have elegantly shown how to experimentally measure and simulate situations where you have two possible nuclei, say A and B, and determine that A is more stable. And they can also show what conditions are necessary in order for these stabilities to invert, and for B to become more stable than A.”
“This is really news, because you can’t make those predictions using classical thermodynamics, and nor is this the quantum effect. But by doing these experiments, the authors have started to gain an understanding of how things do behave on this size regime, and how we can predict and thus control it. The elegant part of the experiment is that they have been able to nucleate A and B selectively and reversibly.”
One of the key words of chemical synthesis is ‘control’. Chemists are always trying to control the properties of materials, whether that’s to make a better dye or plastic, or a drug that’s more effective in the body. So if we can learn to control how molecules in a solution come together to form solids, we can gain a great deal. This work is a significant first step in gaining that control.
A new prize is being inaugurated, the US$100,000 Nine Dots Prize for creative thinking, and it’s open to anyone anywhere in the world. Here’s more from an Oct. 21, 2016 article by Jane Tinkler for the Guardian (Note: Links have been removed),
In the debate over this year’s surprise award to Bob Dylan, it is easy to lose sight of the long history of prizes being used to recognise great writing (in whatever form), great research and other outstanding achievements.
The use of prizes dates back furthest in the sciences. In 1714, the British government famously offered an award of £20,000 (about £2.5 million at today’s value) to the person who could find a way of determining a ship’s longitude. British clockmaker John Harrison won the Longitude Prize and, by doing so, improved the safety of long-distance sea travel.
Prizes are now proliferating. Since 2000, more than sixty prizes of more than $100,000 (US dollars) have been created, and the field of philanthropic prize-giving is estimated to exceed £1 billion each year. Prizes are seen as ways to reward excellence, build networks, support collaboration and direct efforts towards practical and social goals. Those awarding them include philanthropists, governments and companies.
Today [Oct. 21, 2016] sees the launch of the newest kid on the prize-giving block. Drawing its name from a puzzle that can be solved only by lateral thinking, the Nine Dots prize wants to encourage creative thinking and writing that can help to tackle social problems. It is sponsored by the Kadas Prize Foundation, with the support of the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) at the University of Cambridge, and Cambridge University Press.
The Nine Dots prize is a hybrid of [three types of prizes]. There is a recognition [emphasis mine] aspect, but it doesn’t require an extensive back catalogue. The prize will be judged by a board of twelve renowned scholars, thinkers and writers. They will assess applications on an anonymised basis, so whoever wins will have done so not because of past work, but because of the strength of their ideas, and ability to communicate them effectively.
It is an incentive [emphasis mine] prize in that we ask applicants to respond to a defined question. The inaugural question is: “Are digital technologies making politics impossible?” [emphasis mine]. This is not prescriptive: applicants are encouraged to define what the question means to them, and to respond to that. We expect the submissions to be wildly varied. A new question will be set every two years, always with a focus on pressing issues that affect society. The prize’s disciplinary heartland lies in the social sciences, but responses from all fields, sectors and life experiences are welcome.
Finally, it is a resource [emphasis mine] prize in that it does not expect all the answers at the point of application. Applicants need to provide a 3,000-word summary of how they would approach the question. Board members will assess these, and the winner will then be invited to write their ideas up into a short, accessible book that will be published by Cambridge University Press. A prize award of $100,000 (£82,000) will support the winner to take time out to think and write over a nine-month period. The winner will also have the option of a term’s visiting fellowship at the University of Cambridge, to help with the writing process.
With this mix of elements, we hope the Nine Dots prize will encourage creative thinking about some of today’s most pressing issues. The winner’s book will be made freely accessible online; we hope it will capture the public’s imagination and spark a real debate.
The submission deadline is Jan. 31, 2017, and the winner will be announced in May 2017. The winner’s book is to be published in May 2018.