An image of spectacular swirling graphene ink in alcohol, which can be used to print electrical circuits onto paper, has won the overall prize in a national science photography competition organised by the Engineering and Physical Sciences Research Council (EPSRC).
‘Graphene – IPA Ink’, by James Macleod, from the University of Cambridge, shows powdered graphite in alcohol, which produces a conductive ink. The ink is forced at high pressure through micrometre-scale capillaries made of diamond. This rips the layers apart, resulting in a smooth, conductive material in solution.
The image came first in two categories, Innovation and Equipment and Facilities, as well as winning overall against many other stunning pictures featuring research in action, in the EPSRC’s competition – now in its fourth year.
James Macleod explained how the photograph came about: “We are working to create conductive inks for printing flexible electronics and are currently focused on optimising our recipe for use in different printing methods and for printing onto different surfaces. This was the first time we had used alcohol to create our ink and I was struck by how mesmerising it looked while mixing.”
The competition’s five categories were: Eureka and Discovery, Equipment and Facilities, People and Skills, Innovation, and Weird and Wonderful. Other winning images feature:
A 3D printed gripper which was programmed to lift delicate, geometrically complex objects, like a lightbulb, pneumatically rather than using sensors.
A scanning electron microscope image showing the surface of a silicon chip, patterned to create a one-metre ultra-thin optical wire, just one millionth of a metre wide, made into a spiral and wrapped into an area the size of a square millimetre.
Researcher Michael Coto with a local student in Vingunguti, Dar es Salaam, Tanzania, testing and purifying polluted water using new solar active catalysts.
An image captured on an iPhone 4s through an optical microscope that shows the variety of textures appearing on the surface of a silicon solar cell, not dissimilar to pyramids surrounded by a sea of dunes in a desert, but on the scale of a human hair.
Tiny biodegradable polymer particles resembling golf balls being developed to target infectious diseases and cancers. Only 0.04mm across, they form part of scaffolds which are being studied to see if they can support the growth of healthy new cells.
One of the judges was physicist, oceanographer and broadcaster Dr Helen Czerski, a lecturer at UCL. She said: “Scientists and engineers are often so busy focusing on the technical details of their research that they can be blind to what everyone else sees first: the aesthetics of their work. Science is a part of our culture, and it can contribute in many different ways. This competition is a wonderful reminder of the emotional and artistic aspects of science, and it’s great that EPSRC researchers have found this richness in their own work.”
Congratulating the winners and entrants, Professor Tom Rodden, EPSRC’s Deputy Chief Executive, said: “The quality of entries into our competition demonstrates that EPSRC-funded researchers are keen to show the world how beautiful and interesting science and engineering can be. I’d like to thank everyone who entered; judging was really difficult.
“These stunning images are a great way to engage the public with the research they fund, and inspire everyone to take an interest in science and engineering.”
The competition received over 100 entries from researchers in receipt of EPSRC funding.
The judges were:
Martin Keene – Group Picture Editor – Press Association
Dr Helen Czerski – Lecturer at the Department of Mechanical Engineering, University College London
Professor Tom Rodden – EPSRC’s Deputy Chief Executive
I have three news bits about legal issues that are arising as a consequence of emerging technologies.
Deep neural networks, art, and copyright
Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka
Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,
In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”
With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.
Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.
For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.
These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.
DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.
Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.
The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
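The training cycle described above – feed inputs through layered transformations, compare actual outputs to expected ones, and correct the predictive error through repetition – can be sketched in a few lines. This is a minimal toy illustration of that loop (a tiny two-layer network learning XOR), not the deep architectures used to generate art:

```python
# Minimal sketch of the training cycle described in the text: a small
# two-layer network compares its actual outputs to expected ones and
# corrects the predictive error through repeated optimization steps.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which cannot be represented by a single layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: each layer refines the representation of the inputs.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: inputs flow through the layers.
    h = sigmoid(X @ W1)        # hidden-layer activations
    out = sigmoid(h @ W2)      # the network's actual output

    # Compare actual output to the expected output (the predictive error).
    err = out - y

    # Backward pass: propagate the error and adjust the weights.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

# After repeated correction, the outputs move toward the expected values.
predictions = (out > 0.5).astype(int)  # thresholded outputs after training
```

Deeper networks stack many more such layers, which is what allows the higher layers to reach the more abstract representations the article refers to.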
Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.
The originality of DNNs is a combined product of technological automation on the one hand and human inputs and decisions on the other.
DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to their work – copyright protection.
Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.
Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.
Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.
Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.
The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.
In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.
DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.
The Fifth Annual Conference on Governance of Emerging Technologies:
Law, Policy and Ethics held at the new
Beus Center for Law & Society in Phoenix, AZ
May 17-19, 2017!
Call for Abstracts – Now Closed
The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.
Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law
Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan
Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence
Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)
Innovation – Responsible and/or Permissionless
Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences
Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University
Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University
Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University
Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law
Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence
George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University
Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge
Responsible Development of AI
Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University
John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University
Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics
*Current Student / ASU Law Alumni Registration: $50.00
^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)
There you have it.
Neuro-techno future laws
I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,
New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.
The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.
Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”
Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.
Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”
The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.
International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.
Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”
I think it’s time to give this suggestion again. Always hold a little doubt about the science information you read and hear. Everybody makes mistakes.
Here’s an example of what can happen. George Tulevski, who gave a talk about nanotechnology in Nov. 2016 for TED@IBM, is an accomplished scientist who appears to have made an error during his TED talk. From Tulevski’s The Next Step in Nanotechnology talk transcript page,
When I was a graduate student, it was one of the most exciting times to be working in nanotechnology. There were scientific breakthroughs happening all the time. The conferences were buzzing, there was tons of money pouring in from funding agencies. And the reason is when objects get really small, they’re governed by a different set of physics that govern ordinary objects, like the ones we interact with. We call this physics quantum mechanics. [emphases mine] And what it tells you is that you can precisely tune their behavior just by making seemingly small changes to them, like adding or removing a handful of atoms, or twisting the material. It’s like this ultimate toolkit. You really felt empowered; you felt like you could make anything.
In September 2016, scientists at Cambridge University (UK) announced they had concrete proof that the physics governing materials at the nanoscale is unique, i.e., it does not follow the rules of either classical or quantum physics. From my Oct. 27, 2016 posting,
In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.
Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
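The surface-area-to-volume point in that excerpt can be made concrete with a quick back-of-the-envelope calculation. For a sphere the ratio works out to 3/r, so shrinking a particle by five orders of magnitude raises the ratio by the same factor (a sketch of the scaling argument only; the crossover behaviour the Cambridge researchers describe is more subtle than this):

```python
# Surface-area-to-volume ratio for a sphere: SA/V = 3/r, so the ratio
# grows rapidly as particles shrink, and surface effects come to dominate.
import math

def surface_to_volume(radius_m: float) -> float:
    """SA/V for a sphere of the given radius (in metres)."""
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume  # algebraically equal to 3 / radius_m

millimetre = surface_to_volume(1e-3)    # a macroscopic 1 mm grain
nanoparticle = surface_to_volume(1e-8)  # a 10 nm nanoparticle
ratio = nanoparticle / millimetre       # the nanoparticle's ratio is 100,000x larger
```

That hundred-thousandfold jump is why the energetics of the surface, negligible for everyday objects, become so important at the nanoscale.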
It is very, very easy to miss new developments no matter how tirelessly you scan for information.
Tulevski is a good, interesting, and informed speaker but I do have one other hesitation regarding his talk. He seems to think that over the last 15 years there should have been more practical applications arising from the field of nanotechnology. There are two aspects here. First, he seems to be dating the ‘nanotechnology’ effort from the beginning of the US National Nanotechnology Initiative, and there are many scientists who would object to that as the starting point. Second, 15 or even 30 or more years is a brief period of time, especially when you are investigating that which hasn’t been investigated before. For example, you might want to check out “Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life” (published 1985), a book by Steven Shapin and Simon Schaffer (Wikipedia entry for the book). The amount of time (years) spent on how to make just the glue which held the various experimental apparatuses together was a revelation to me. Of course, it makes perfect sense that if you’re trying something new, you’re going to have to figure out everything.
By the way, I include my blog as one of the sources of information that can be faulty despite efforts to make corrections and to keep up with the latest. Even the scientists at Cambridge University can run into some problems as I noted in my Jan. 28, 2016 posting.
ETA Jan. 24, 2017: For some insight into how uncertain, tortuous, and expensive commercializing technology can be read Dexter Johnson’s Jan. 23, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website). Here’s an excerpt (Note: Links have been removed),
The brief description of this odyssey includes US $78 million in financing over 15 years and $50 million in revenues over that period through licensing of its technology and patents. That revenue includes a back-against-the-wall sell-off of a key business unit to Lockheed Martin in 2008. Another key moment occurred in 2012, when Belgian-based nanoelectronics powerhouse Imec took on the job of further developing Nantero’s carbon-nanotube-based memory. Despite the money and support from major electronics players, the big commercial breakout of their NRAM technology seemed ever less likely to happen with the passage of time.
Slate.com is dedicating a month (January 2017) to Frankenstein. This means there will be one or more essays each week on one aspect or another of Frankenstein and science. These essays are one of a series of initiatives jointly supported by Slate, Arizona State University, and an organization known as New America. It gets confusing since these essays are listed as part of two initiatives: Futurography and Future Tense.
The really odd part, as far as I’m concerned, is that there is no mention of Arizona State University’s (ASU) The Frankenstein Bicentennial Project (mentioned in my Oct. 26, 2016 posting). Perhaps they’re concerned that people will think ASU is advertising the project?
Getting back to the essays, a Jan. 3, 2017 article by Jacob Brogan explains, by means of a ‘Question and Answer’ format, why the book and the monster maintain popular interest after two centuries (Note: We never do find out who or how many people are supplying the answers),
OK, fine. I get that this book is important, but why are we talking about it in a series about emerging technology?
Though people still tend to weaponize it as a simple anti-scientific screed, Frankenstein, which was first published in 1818, is much richer when we read it as a complex dialogue about our relationship to innovation—both our desire for it and our fear of the changes it brings. Mary Shelley was just a teenager when she began to compose Frankenstein, but she was already grappling with our complex relationship to new forces. Almost two centuries on, the book is just as propulsive and compelling as it was when it was first published. That’s partly because it’s so thick with ambiguity—and so resistant to easy interpretation.
Is it really ambiguous? I mean, when someone calls something frankenfood, they aren’t calling it “ethically ambiguous food.”
It’s a fair point. For decades, Frankenstein has been central to discussions in and about bioethics. Perhaps most notably, it frequently crops up as a reference point in discussions of genetically modified organisms, where the prefix Franken- functions as a sort of convenient shorthand for human attempts to meddle with the natural order. Today, the most prominent flashpoint for those anxieties is probably the clustered regularly interspaced short palindromic repeats, or CRISPR, gene-editing technique [emphasis mine]. But it’s really oversimplifying to suggest Frankenstein is a cautionary tale about monkeying with life.
As we’ll see throughout this month on Futurography, it’s become a lens for looking at the unintended consequences of things like synthetic biology, animal experimentation, artificial intelligence, and maybe even social networking. Facebook, for example, has arguably taken on a life of its own, as its algorithms seem to influence the course of elections. Mark Zuckerberg, who’s sometimes been known to disavow the power of his own platform, might well be understood as a Frankensteinian figure, amplifying his creation’s monstrosity by neglecting its practical needs.
But this book is almost 200 years old! Surely the actual science in it is bad.
Shelley herself would probably be the first to admit that the science in the novel isn’t all that accurate. Early in the novel, Victor Frankenstein meets with a professor who castigates him for having read the wrong works of “natural philosophy.” Shelley’s protagonist has mostly been studying alchemical tomes and otherwise fantastical works, the sort of things that were recognized as pseudoscience, even by the standards of the day. Near the start of the novel, Frankenstein attends a lecture in which the professor declaims on the promise of modern science. He observes that where the old masters “promised impossibilities and performed nothing,” the new scientists achieve far more in part because they “promise very little; they know that metals cannot be transmuted and that the elixir of life is a chimera.”
Is it actually about bad science, though?
Not exactly, but it has been read as a story about bad scientists.
Ultimately, Frankenstein outstrips his own teachers, of course, and pulls off the very feats they derided as mere fantasy. But Shelley never seems to confuse fact and fiction, and, in fact, she largely elides any explanation of how Frankenstein pulls off the miraculous feat of animating dead tissue. We never actually get a scene of the doctor awakening his creature. The novel spends far more time dwelling on the broader reverberations of that act, showing how his attempt to create one life destroys countless others. Read in this light, Frankenstein isn’t telling us that we shouldn’t try to accomplish new things, just that we should take care when we do.
This speaks to why the novel has stuck around for so long. It’s not about particular scientific accomplishments but the vagaries of scientific progress in general.
Does that make it into a warning against playing God?
It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.
Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.
I encourage you to read Brogan’s piece in its entirety and perhaps supplement the reading. Mary Shelley has a pretty interesting history. She ran off with Percy Bysshe Shelley, who was married to another woman, in 1814 at the age of seventeen. Her parents were both well known and respected intellectuals and philosophers, William Godwin and Mary Wollstonecraft. By the time Mary Shelley wrote her book, her first baby had died and she had given birth to a second child, a boy. Percy Shelley was to die a few years later, as were her son and a third child she’d given birth to. (Her fourth child, born in 1819, did survive.) I mention the births because one analysis I read suggests the novel is also a commentary on childbirth. In fact, the Frankenstein narrative has been examined from many perspectives (other than science) including feminism and LGBTQ studies.
Getting back to the science fiction end of things, the next part of the Futurography series is titled “A Cheat-Sheet Guide to Frankenstein” and that too is written by Jacob Brogan with a publication date of Jan. 3, 2017,
Marilyn Butler: Butler, a literary critic and English professor at the University of Cambridge, authored the seminal essay “Frankenstein and Radical Science.”
Jennifer Doudna: A professor of chemistry and biology at the University of California, Berkeley, Doudna helped develop the CRISPR gene-editing technique [emphasis mine].
Stephen Jay Gould: Gould is an evolutionary biologist and has written in defense of Frankenstein’s scientific ambitions, arguing that hubris wasn’t the doctor’s true fault.
Seán Ó hÉigeartaigh: As executive director of the Center for Existential Risk at the University of Cambridge, Ó hÉigeartaigh leads research into technologies that threaten the existence of our species.
Jim Hightower: This columnist and activist helped popularize the term frankenfood to describe genetically modified crops.
Mary Shelley: Shelley, the author of Frankenstein, helped create science fiction as we now know it.
J. Craig Venter: A leading genomic researcher, Venter has pursued a variety of human biotechnology projects.
‘Franken’ and CRISPR
The first essay is in a Jan. 6, 2017 article by Kay Waldman focusing on the ‘franken’ prefix (Note: links have been removed),
In a letter to the New York Times on June 2, 1992, an English professor named Paul Lewis lopped off the top of Victor Frankenstein’s surname and sewed it onto a tomato. Railing against genetically modified crops, Lewis put a new generation of natural philosophers on notice: “If they want to sell us Frankenfood, perhaps it’s time to gather the villagers, light some torches and head to the castle,” he wrote.
William Safire, in a 2000 New York Times column, tracked the creation of the franken- prefix to this moment: an academic channeling popular distrust of science by invoking the man who tried to improve upon creation and ended up disfiguring it. “There’s no telling where or how it will end,” he wrote wryly, referring to the spread of the construction. “It has enhanced the sales of the metaphysical novel that Ms. Shelley’s husband, the poet Percy Bysshe Shelley, encouraged her to write, and has not harmed sales at ‘Frank’n’Stein,’ the fast-food chain whose hot dogs and beer I find delectably inorganic.” Safire went on to quote the American Dialect Society’s Laurence Horn, who lamented that despite the ’90s flowering of frankenfruits and frankenpigs, people hadn’t used Frankensense to describe “the opposite of common sense,” as in “politicians’ motivations for a creatively stupid piece of legislation.”
A year later, however, Safire returned to franken- in dead earnest. In an op-ed for the Times avowing the ethical value of embryonic stem cell research, the columnist suggested that a White House conference on bioethics would salve the fears of Americans concerned about “the real dangers of the slippery slope to Frankenscience.”
All of this is to say that franken-, the prefix we use to talk about human efforts to interfere with nature, flips between “funny” and “scary” with ease. Like Shelley’s monster himself, an ungainly patchwork of salvaged parts, it can seem goofy until it doesn’t—until it taps into an abiding anxiety that technology raises in us, a fear of overstepping.
Waldman’s piece hints at how language can shape discussions while retaining a rather playful quality.
Since its publication nearly 200 years ago, Shelley’s gothic novel has been read as a cautionary tale of the dangers of creation and experimentation. James Whale’s 1931 film took the message further, assigning explicitly the hubris of playing God to the mad scientist. As his monster comes to life, Dr. Frankenstein, played by Colin Clive, triumphantly exclaims: “Now I know what it feels like to be God!”
The admonition against playing God has since been ceaselessly invoked as a rhetorical bogeyman. Secular and religious, critic and journalist alike have summoned the term to deride and outright dismiss entire areas of research and technology, including stem cells, genetically modified crops, recombinant DNA, geoengineering, and gene editing. As we near the two-century commemoration of Shelley’s captivating story, we would be wise to shed this shorthand lesson—and to put this part of the Frankenstein legacy to rest in its proverbial grave.
The trouble with the term arises first from its murkiness. What exactly does it mean to play God, and why should we find it objectionable on its face? All but zealots would likely agree that it’s fine to create new forms of life through selective breeding and grafting of fruit trees, or to use in-vitro fertilization to conceive life outside the womb to aid infertile couples. No one objects when people intervene in what some deem “acts of God,” such as earthquakes, to rescue victims and provide relief. People get fully behind treating patients dying of cancer with “unnatural” solutions like chemotherapy. Most people even find it morally justified for humans to mete out decisions as to who lives or dies in the form of organ transplant lists that prize certain people’s survival over others.
So what is it—if not the imitation of a deity or the creation of life—that inspires people to invoke the idea of “playing God” to warn against, or even stop, particular technologies? A presidential commission charged in the early 1980s with studying the ethics of genetic engineering of humans, in the wake of the recombinant DNA revolution, sheds some light on underlying motivations. The commission sought to understand the concerns expressed by leaders of three major religious groups in the United States—representing Protestants, Jews, and Catholics—who had used the phrase “playing God” in a 1980 letter to President Jimmy Carter urging government oversight. Scholars from the three faiths, the commission concluded, did not see a theological reason to flat-out prohibit genetic engineering. Their concerns, it turned out, weren’t exactly moral objections to scientists acting as God. Instead, they echoed those of the secular public; namely, they feared possible negative effects from creating new human traits or new species. In other words, the religious leaders who called recombinant DNA tools “playing God” wanted precautions taken against bad consequences but did not inherently oppose the use of the technology as an act of human hubris.
Venkataraman presents an interesting argument and offers this as a solution,
The lesson for contemporary science, then, is not that we should cease creating and discovering at the boundaries of current human knowledge. It’s that scientists and technologists ought to steward their inventions into society, and to more rigorously participate in public debate about their work’s social and ethical consequences. Frankenstein’s proper legacy today would be to encourage researchers to address the unsavory implications of their technologies, whether it’s the cognitive and social effects of ubiquitous smartphone use or the long-term consequences of genetically engineered organisms on ecosystems and biodiversity.
Some will undoubtedly argue that this places an undue burden on innovators. Here, again, Shelley’s novel offers a lesson. Scientists who cloister themselves as Dr. Frankenstein did—those who do not fully contemplate the consequences of their work—risk later encounters with the horror of their own inventions.
At a guess, Venkataraman seems to assume that if scientists communicate and make their case, the public will cease to panic over moralistic and other concerns. My understanding is that social scientists have found this is not the case: someone may understand a technology quite well and still oppose it.
Frankenstein and anti-vaxxers
The Jan. 16, 2017 essay by Charles Kenny is the weakest of the lot, so far (Note: Links have been removed),
In 1780, University of Bologna physician Luigi Galvani found something peculiar: When he applied an electric current to the legs of a dead frog, they twitched. Thirty-seven years later, Mary Shelley had Galvani’s experiments in mind as she wrote her fable of Faustian overreach, wherein Dr. Victor Frankenstein plays God by reanimating flesh.
And a little less than halfway between those two dates, English physician Edward Jenner demonstrated the efficacy of a vaccine against smallpox—one of the greatest killers of the age. Given the suspicion with which Romantic thinkers like Shelley regarded scientific progress, it is no surprise that many at the time damned the procedure as against the natural order. But what is surprising is how that suspicion continues to endure, even after two centuries of spectacular successes for vaccination. This anti-vaccination stance—which now infects even the White House—demonstrates the immense harm that can be done by excessive distrust of technological advance.
Kenny employs history as a framing device. Crudely, Galvani’s experiments led to Mary Shelley’s Frankenstein, which is a fable about ‘playing God’. (Kenny seems unaware there are many other readings of and perspectives on the book.) As for his statement “… the suspicion with which Romantic thinkers like Shelley regarded scientific progress …,” I’m not sure how he arrived at his conclusion about Romantic thinkers. According to Richard Holmes (in his book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science), their relationship to science was more complex. Percy Bysshe Shelley ran ballooning experiments and wrote poetry about science, which included footnotes for the literature and concepts he was referencing; John Keats was a medical student before establishing himself as a poet; and Samuel Taylor Coleridge (The Rime of the Ancient Mariner, etc.) maintained a healthy correspondence with scientists of the day, sometimes influencing their research. In fact, when you analyze the matter, you realize even scientists are, on occasion, suspicious of science.
As for the anti-vaccination wars, I wish this essay had been more thoughtful. Yes, Andrew Wakefield’s research showing a link between MMR (measles, mumps, and rubella) vaccinations and autism is a sham. However, having concerns and suspicions about technology does not render you a fool who hasn’t progressed from 18th/19th Century concerns and suspicions about science and technology. For example, vaccines are being touted for all kinds of things, the latest being a possible antidote to opiate addiction (see Susan Gados’ June 28, 2016 article for ScienceNews). Are we going to be vaccinated for everything? What happens when you keep piling vaccination on top of vaccination? Instead of a debate, the discussion has devolved to: “I’m right and you’re wrong.”
For the record, I’m grateful for the vaccinations I’ve had and the diminishment of diseases that were devastating and seem to be making a comeback with this current anti-vaccination fever. That said, I think there are some important questions about vaccines.
Kenny’s essay could have been a nuanced discussion of vaccines that have clearly raised the bar for public health and some of the concerns regarding the current pursuit of yet more vaccines. Instead, he’s been quite dismissive of anyone who questions vaccination orthodoxy.
The end of this piece
There will be more essays in Slate’s Frankenstein series but I don’t have time to digest and write commentary for all of them.
Please use this piece as a critical counterpoint to some of the series and, if I’ve done my job, you’ll critique this critique. Please do let me know if you find any errors or want to add an opinion or add your own critique in the Comments of this blog.
ETA Jan. 25, 2017: Here’s the Frankenstein webspace on Slate’s Futurography which lists all the essays in this series. It’s well worth looking at the list. There are several that were not covered here.
Battery researchers looking for design inspiration have turned to an unexpected source: the human gut. An Oct. 26, 2016 news item on Nanowerk describes the work,
A new prototype of a lithium-sulphur battery – which could have five times the energy density of a typical lithium-ion battery – overcomes one of the key hurdles preventing their commercial development by mimicking the structure of the cells which allow us to absorb nutrients.
Researchers have developed a prototype of a next-generation lithium-sulphur battery which takes its inspiration in part from the cells lining the human intestine. The batteries, if commercially developed, would have five times the energy density of the lithium-ion batteries used in smartphones and other electronics.
The new design, by researchers from the University of Cambridge, overcomes one of the key technical problems hindering the commercial development of lithium-sulphur batteries, by preventing the degradation of the battery caused by the loss of material within it. The results are reported in the journal Advanced Functional Materials.
Working with collaborators at the Beijing Institute of Technology, the Cambridge researchers based in Dr Vasant Kumar’s team in the Department of Materials Science and Metallurgy developed and tested a lightweight nanostructured material which resembles villi, the finger-like protrusions which line the small intestine. In the human body, villi are used to absorb the products of digestion and increase the surface area over which this process can take place.
In the new lithium-sulphur battery, a layer of material with a villi-like structure, made from tiny zinc oxide wires, is placed on the surface of one of the battery’s electrodes. This can trap fragments of the active material when they break off, keeping them electrochemically accessible and allowing the material to be reused.
“It’s a tiny thing, this layer, but it’s important,” said study co-author Dr Paul Coxon from Cambridge’s Department of Materials Science and Metallurgy. “This gets us a long way through the bottleneck which is preventing the development of better batteries.”
A typical lithium-ion battery is made of three separate components: an anode (negative electrode), a cathode (positive electrode) and an electrolyte in the middle. The most common materials for the anode and cathode are graphite and lithium cobalt oxide respectively, which both have layered structures. Positively-charged lithium ions move back and forth from the cathode, through the electrolyte and into the anode.
The crystal structure of the electrode materials determines how much energy can be squeezed into the battery. For example, because of graphite’s atomic structure, six carbon atoms are needed to hold a single lithium ion, limiting the maximum capacity of the battery.
Sulphur and lithium react via a multi-electron transfer mechanism, meaning that elemental sulphur can offer a much higher theoretical capacity, resulting in a lithium-sulphur battery with much higher energy density. However, when the battery discharges, the lithium and sulphur interact and the ring-like sulphur molecules transform into chain-like structures known as poly-sulphides. As the battery undergoes several charge-discharge cycles, bits of the poly-sulphide escape into the electrolyte, so that over time the battery gradually loses active material.
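To get a feel for why sulphur is so attractive, here is a back-of-the-envelope sketch (mine, not the researchers’) of the theoretical capacities implied by the two reactions just described. The numbers that fall out, roughly 372 mAh/g for graphite and 1,672 mAh/g for sulphur, are the standard textbook figures:

```python
# Back-of-the-envelope theoretical capacities (mAh per gram of active material).
# capacity = n_electrons * Faraday / (molar_mass * 3.6), where dividing by 3.6
# converts coulombs per gram into milliamp-hours per gram.
F = 96485.0  # Faraday constant, C/mol

def capacity_mAh_per_g(n_electrons, molar_mass_g):
    return n_electrons * F / (molar_mass_g * 3.6)

# Graphite anode: six carbons host one lithium (LiC6), i.e. 1 electron per 6 C atoms.
graphite = capacity_mAh_per_g(1, 6 * 12.011)   # ~372 mAh/g
# Sulphur cathode: S8 + 16 Li -> 8 Li2S, i.e. 2 electrons per S atom.
sulphur = capacity_mAh_per_g(2, 32.06)         # ~1672 mAh/g

print(f"graphite: {graphite:.0f} mAh/g, sulphur: {sulphur:.0f} mAh/g")
```

The roughly 4.5-fold gap in electrode capacity is what translates, at the level of a full cell, into the press release’s “five times the energy density” claim.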
The Cambridge researchers have created a functional layer which lies on top of the cathode and fixes the active material to a conductive framework so the active material can be reused. The layer is made up of tiny, one-dimensional zinc oxide nanowires grown on a scaffold. The concept was trialled using commercially-available nickel foam for support. After successful results, the foam was replaced by a lightweight carbon fibre mat to reduce the battery’s overall weight.
“Changing from stiff nickel foam to flexible carbon fibre mat makes the layer mimic the way small intestine works even further,” said study co-author Dr Yingjun Liu.
This functional layer, like the intestinal villi it resembles, has a very high surface area. The material has a very strong chemical bond with the poly-sulphides, allowing the active material to be used for longer, greatly increasing the lifespan of the battery.
“This is the first time a chemically functional layer with a well-organised nano-architecture has been proposed to trap and reuse the dissolved active materials during battery charging and discharging,” said the study’s lead author Teng Zhao, a PhD student from the Department of Materials Science & Metallurgy. “By taking our inspiration from the natural world, we were able to come up with a solution that we hope will accelerate the development of next-generation batteries.”
For the time being, the device is a proof of principle, so commercially-available lithium-sulphur batteries are still some years away. Additionally, while the number of times the battery can be charged and discharged has been improved, it is still not able to go through as many charge cycles as a lithium-ion battery. However, since a lithium-sulphur battery does not need to be charged as often as a lithium-ion battery, it may be the case that the increase in energy density cancels out the lower total number of charge-discharge cycles.
“This is a way of getting around one of those awkward little problems that affects all of us,” said Coxon. “We’re all tied in to our electronic devices – ultimately, we’re just trying to make those devices work better, hopefully making our lives a little bit nicer.”
I hadn’t realized this still needed to be proved but it’s always good to have your misconceptions adjusted. Here’s more about the work from the University of Cambridge in a Sept. 30, 2016 news item on phys.org,
Scientists have long suspected that the way materials behave on the nanoscale – that is, when particles have dimensions of about 1–100 nanometres – is different from how they behave on any other scale. A new paper in the journal Chemical Science provides concrete proof that this is the case.
The laws of thermodynamics govern the behaviour of materials in the macro world, while quantum mechanics describes behaviour of particles at the other extreme, in the world of single atoms and electrons.
In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.
Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
And because there are so many particles doing different things at the same time, it’s difficult to simulate all their interactions using a computer. It’s also hard to gather much experimental information, because we haven’t yet developed the capacity to measure behaviour on such a tiny scale.
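The surface-area-to-volume point above is easy to see with a one-line calculation: for a sphere the ratio works out to 3/r, so it grows enormously as particles shrink. A quick illustrative sketch (sizes chosen arbitrarily by me):

```python
# Surface-area-to-volume ratio of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r.
# Shrinking a particle a thousand-fold boosts the ratio a thousand-fold,
# which is why surface energetics dominate at the nanoscale.
import math

def sa_to_vol(radius_m):
    return (4 * math.pi * radius_m**2) / ((4 / 3) * math.pi * radius_m**3)

for r in (1e-3, 1e-6, 50e-9):  # 1 mm grain, 1 micron particle, 50 nm nanoparticle
    print(f"r = {r:9.1e} m  ->  SA/V = {sa_to_vol(r):.2e} per metre")
```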
This conundrum becomes particularly acute when we’re trying to understand crystallisation, the process by which particles, randomly distributed in a solution, can form highly ordered crystal structures, given the right conditions.
Chemists don’t really understand how this works. How do around 10^18 molecules, moving around in solution at random, come together to form a micro- to millimetre-sized ordered crystal? Most remarkable, perhaps, is the fact that in most cases every crystal is ordered in the same way every time it is formed.
However, it turns out that different conditions can sometimes yield different crystal structures. These are known as polymorphs, and they’re important in many branches of science including medicine – a drug can behave differently in the body depending on which polymorph it’s crystallised in.
What we do know so far about the process, at least according to one widely accepted model, is that particles in solution can come together to form a nucleus, and once a critical mass is reached we see crystal growth. The structure of the nucleus determines the structure of the final crystal, that is, which polymorph we get.
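The “critical mass” in the widely accepted model above is usually formalised as classical nucleation theory: forming a spherical nucleus costs surface energy but gains bulk energy, and the balance defines a critical radius beyond which growth wins. A minimal sketch with made-up parameter values (the paper itself reports no such numbers):

```python
# Classical nucleation theory, illustrative values only.
# Free energy of a nucleus of radius r:  dG(r) = (4/3)*pi*r^3*dg + 4*pi*r^2*gamma,
# with dg < 0 for a supersaturated solution. The barrier peaks at the critical
# radius r* = -2*gamma/dg; nuclei larger than r* grow, smaller ones redissolve.
import math

gamma = 0.05   # interfacial energy, J/m^2 (hypothetical)
dg = -5e7      # bulk free-energy change per unit volume, J/m^3 (hypothetical)

r_crit = -2 * gamma / dg                            # metres
barrier = (16 * math.pi * gamma**3) / (3 * dg**2)   # J, nucleation barrier height

print(f"critical radius ~ {r_crit * 1e9:.1f} nm, barrier ~ {barrier:.2e} J")
```

Whichever polymorph nucleates past its critical radius first then templates the final crystal, which is why conditions at the nanoscale matter so much.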
What we have not known until now is what determines the structure of the nucleus in the first place, and that happens on the nanoscale.
In this paper, the authors have used mechanochemistry – that is, milling and grinding – to obtain nanosized particles, small enough that surface effects become significant. In other words, the chemistry of the nanoworld – which structures are the most stable at this scale, and what conditions affect their stability – has been studied for the first time with carefully controlled experiments.
And by changing the milling conditions, for example by adding a small amount of solvent, the authors have been able to control which polymorph is the most stable. Professor Jeremy Sanders of the University of Cambridge’s Department of Chemistry, who led the work, said “It is exciting that these simple experiments, when carried out with great care, can unexpectedly open a new door to understanding the fundamental question of how surface effects can control the stability of nanocrystals.”
Joel Bernstein, Global Distinguished Professor of Chemistry at NYU Abu Dhabi, and an expert in crystal growth and structure, explains: “The authors have elegantly shown how to experimentally measure and simulate situations where you have two possible nuclei, say A and B, and determine that A is more stable. And they can also show what conditions are necessary in order for these stabilities to invert, and for B to become more stable than A.”
“This is really news, because you can’t make those predictions using classical thermodynamics, and nor is this the quantum effect. But by doing these experiments, the authors have started to gain an understanding of how things do behave on this size regime, and how we can predict and thus control it. The elegant part of the experiment is that they have been able to nucleate A and B selectively and reversibly.”
One of the key words of chemical synthesis is ‘control’. Chemists are always trying to control the properties of materials, whether that’s to make a better dye or plastic, or a drug that’s more effective in the body. So if we can learn to control how molecules in a solution come together to form solids, we can gain a great deal. This work is a significant first step in gaining that control.
A new prize is being inaugurated: the US$100,000 Nine Dots Prize for creative thinking, open to anyone anywhere in the world. Here’s more from an Oct. 21, 2016 article by Jane Tinkler for the Guardian (Note: Links have been removed),
In the debate over this year’s surprise award to Bob Dylan, it is easy to lose sight of the long history of prizes being used to recognise great writing (in whatever form), great research and other outstanding achievements.
The use of prizes dates back furthest in the sciences. In 1714, the British government famously offered an award of £20,000 (about £2.5 million at today’s value) to the person who could find a way of determining a ship’s longitude. British clockmaker John Harrison won the Longitude Prize and, by doing so, improved the safety of long-distance sea travel.
Prizes are now proliferating. Since 2000, more than sixty prizes of more than $100,000 (US dollars) have been created, and the field of philanthropic prize-giving is estimated to exceed £1 billion each year. Prizes are seen as ways to reward excellence, build networks, support collaboration and direct efforts towards practical and social goals. Those awarding them include philanthropists, governments and companies.
Today [Oct. 21, 2016] sees the launch of the newest kid on the prize-giving block. Drawing its name from a puzzle that can be solved only by lateral thinking, the Nine Dots prize wants to encourage creative thinking and writing that can help to tackle social problems. It is sponsored by the Kadas Prize Foundation, with the support of the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) at the University of Cambridge, and Cambridge University Press.
The Nine Dots prize is a hybrid of [three types of prizes]. There is a recognition [emphasis mine] aspect, but it doesn’t require an extensive back catalogue. The prize will be judged by a board of twelve renowned scholars, thinkers and writers. They will assess applications on an anonymised basis, so whoever wins will have done so not because of past work, but because of the strength of their ideas, and ability to communicate them effectively.
It is an incentive [emphasis mine] prize in that we ask applicants to respond to a defined question. The inaugural question is: “Are digital technologies making politics impossible?” [emphasis mine]. This is not prescriptive: applicants are encouraged to define what the question means to them, and to respond to that. We expect the submissions to be wildly varied. A new question will be set every two years, always with a focus on pressing issues that affect society. The prize’s disciplinary heartland lies in the social sciences, but responses from all fields, sectors and life experiences are welcome.
Finally, it is a resource [emphasis mine] prize in that it does not expect all the answers at the point of application. Applicants need to provide a 3,000-word summary of how they would approach the question. Board members will assess these, and the winner will then be invited to write their ideas up into a short, accessible book that will be published by Cambridge University Press. A prize award of $100,000 (£82,000) will support the winner to take time out to think and write over a nine-month period. The winner will also have the option of a term’s visiting fellowship at the University of Cambridge, to help with the writing process.
With this mix of elements, we hope the Nine Dots prize will encourage creative thinking about some of today’s most pressing issues. The winner’s book will be made freely accessible online; we hope it will capture the public’s imagination and spark a real debate.
The submission deadline is Jan. 31, 2017 and the winner announcement is May 2017. The winner’s book is to be published May 2018.
In a new study, researchers from the Cambridge Crystallographic Data Centre (CCDC) and the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory have teamed up to capture neon within a porous crystalline framework. Neon is well known for being the most unreactive element and is a key component in semiconductor manufacturing, but neon has never been studied within an organic or metal-organic framework until now.
The results (Chemical Communications, “Capturing neon – the first experimental structure of neon trapped within a metal–organic environment”), which include the critical studies carried out at the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne, also point the way towards a more economical and greener industrial process for neon production.
Neon is an element that is well-known to the general public due to its iconic use in neon signs, especially in city centres in the United States from the 1920s to the 1960s. In recent years, the industrial use of neon has become dominated by use in excimer lasers to produce semiconductors. Despite being the fifth most abundant element in the atmosphere, the cost of pure neon gas has risen significantly over the years, increasing the demand for better ways to separate and isolate the gas.
During 2015, CCDC scientists presented a talk at the annual American Crystallographic Association (ACA) meeting on the array of elements that have been studied within an organic or metal-organic environment, challenging the crystallographic community to find the next and possibly last element to be added to the Cambridge Structural Database (CSD). A chance encounter at that meeting with Andrey Yakovenko, a beamline scientist at the Advanced Photon Source, resulted in a collaborative project to capture neon – the 95th element to be observed in the CSD.
Neon’s low reactivity, along with the weak scattering of X-rays due to its relatively low number of electrons, means that conclusive experimental observation of neon captured within a crystalline framework is very challenging. In situ high pressure gas flow experiments performed at X-Ray Science Division beamline 17-BM at the APS using the X-ray powder diffraction technique at low temperatures managed to elucidate the structure of two different metal-organic frameworks with neon gas captured within the materials.
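For readers unfamiliar with powder diffraction, the technique rests on Bragg’s law: a peak at scattering angle 2θ corresponds to a lattice-plane spacing d = nλ/(2 sin θ). A toy example with hypothetical numbers (not the actual 17-BM experiment parameters):

```python
# Bragg's law: n * lambda = 2 * d * sin(theta).
# Illustrative synchrotron wavelength only; real beamline settings differ.
import math

wavelength = 0.45  # X-ray wavelength in angstroms (hypothetical)

def d_spacing(two_theta_deg, n=1):
    """Lattice-plane spacing (angstroms) for a diffraction peak at angle 2-theta."""
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength / (2 * math.sin(theta))

print(f"peak at 2theta = 10 deg -> d = {d_spacing(10.0):.2f} angstroms")
```

Fitting the full pattern of such peaks, rather than one, is what lets the powder method recover where weakly scattering neon atoms sit in the framework.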
“This is a really exciting moment representing the latest new element to be added to the CSD and quite possibly the last given the experimental and safety challenges associated with the other elements yet to be studied” said Peter Wood, Senior Research Scientist at CCDC and lead author on the paper published in Chemical Communications. “More importantly, the structures reported here show the first observation of a genuine interaction between neon and a transition metal, suggesting the potential for future design of selective neon capture frameworks”.
The structure of neon captured within the framework known as NiMOF-74, a porous framework built from nickel metal centres and organic linkers, shows clear nickel-to-neon interactions forming at low temperatures, at distances significantly shorter than would be expected from a typical weak contact.
Andrey Yakovenko said “These fascinating results show the great capabilities of the scientific program at 17-BM and the Advanced Photon Source. Previously we have been doing experiments at our beamline using other much heavier, and therefore easily detectable, noble gases such as xenon and krypton. However, after meeting co-authors Pete, Colin, Amy and Suzanna at the ACA meeting, we decided to perform these much more complicated experiments using the very light and inert gas – neon. In fact, only by using a combination of in situ X-ray powder diffraction measurements, low temperature and high pressure have we been able to conclusively identify the neon atom positions beyond reasonable doubt”.
Summarising the findings, Chris Cahill, Past President of the ACA and Professor of Chemistry, George Washington University said “This is a really elegant piece of in situ crystallography research and it is particularly pleasing to see the collaboration coming about through discussions at an annual ACA meeting”.
According to Dr. Roman Hovorka and Dr. Hood Thabit of the University of Cambridge, UK, an artificial pancreas will become available, assuming issues such as cybersecurity are resolved. From a June 30, 2016 Diabetologia press release on EurekAlert,
The artificial pancreas — a device which monitors blood glucose in patients with type 1 diabetes and then automatically adjusts levels of insulin entering the body — is likely to be available by 2018, conclude authors of a paper in Diabetologia (the journal of the European Association for the Study of Diabetes). Issues such as speed of action of the forms of insulin used, reliability, convenience and accuracy of glucose monitors plus cybersecurity to protect devices from hacking, are among the issues that are being addressed.
The press release describes the current technology available for diabetes type 1 patients and alternatives other than an artificial pancreas,
Currently available technology allows insulin pumps to deliver insulin to people with diabetes after taking a reading or readings from glucose meters, but these two components are separate. It is the joining together of both parts into a ‘closed loop’ that makes an artificial pancreas, explain authors Dr Roman Hovorka and Dr Hood Thabit of the University of Cambridge, UK. “In trials to date, users have been positive about how use of an artificial pancreas gives them ‘time off’ or a ‘holiday’ from their diabetes management, since the system is managing their blood sugar effectively without the need for constant monitoring by the user,” they say.
One part of the clinical need for the artificial pancreas is the variability of insulin requirements between and within individuals — on one day a person could use one third of their normal requirements, and on another 3 times what they normally would. This is dependent on the individual, their diet, their physical activity and other factors. The combination of all these factors together places a burden on people with type 1 diabetes to constantly monitor their glucose levels, to ensure they don’t end up with too much blood sugar (hyperglycaemic) or more commonly, too little (hypoglycaemic). Both of these complications can cause significant damage to blood vessels and nerve endings, making complications such as cardiovascular problems more likely.
There are alternatives to the artificial pancreas, with improvements in technology in both whole pancreas transplantation and also transplants of just the beta cells from the pancreas which produce insulin. However, recipients of these transplants require drugs to suppress their immune systems just as in other organ transplants. In the case of whole pancreas transplantation, major surgery is required; and in beta cell islet transplantation, the body’s immune system can still attack the transplanted cells and kill off a large proportion of them (80% in some cases). The artificial pancreas of course avoids the need for major surgery and immunosuppressant drugs.
Researchers are working to solve one of the major problems with an artificial pancreas according to the press release,
Researchers globally continue to work on a number of challenges faced by artificial pancreas technology. One such challenge is that even fast-acting insulin analogues do not reach their peak levels in the bloodstream until 0.5 to 2 hours after injection, with their effects lasting 3 to 5 hours. So this may not be fast enough for effective control in, for example, conditions of vigorous exercise. Use of the even faster acting ‘insulin aspart’ analogue may remove part of this problem, as could use of other forms of insulin such as inhaled insulin. Work also continues to improve the software in closed loop systems to make it as accurate as possible in blood sugar management.
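To make the “closed loop” concrete, here is a deliberately minimal sketch of the control idea, with hypothetical gains and readings; real artificial-pancreas software uses far more sophisticated model-predictive control plus safety layers, and nothing here reflects any actual device:

```python
# Toy closed-loop glucose controller: a proportional-integral rule that
# converts sensor readings into insulin doses (all numbers hypothetical).

TARGET = 6.0  # target blood glucose, mmol/L

def insulin_dose(glucose, integral_error, kp=0.05, ki=0.001, dt=5.0):
    """Return (insulin units to deliver now, updated integral error)."""
    error = glucose - TARGET
    integral_error += error * dt
    dose = kp * error + ki * integral_error
    return max(dose, 0.0), integral_error  # never deliver negative insulin

# Simulated sensor readings every five minutes:
acc = 0.0
for g in [9.2, 8.7, 8.1, 7.4, 6.8]:
    dose, acc = insulin_dose(g, acc)
    print(f"glucose {g:.1f} mmol/L -> dose {dose:.3f} U")
```

Even in this toy form, two of the article’s themes are visible: the controller must act on insulin whose effect lags the reading, and anything wirelessly adjustable (the gains, the target) is exactly what cybersecurity protections must lock down.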
The press release also provides a brief outline of some of the studies being run on one artificial pancreas or another, offers an abbreviated timeline for when the medical device may be found on the market, and notes specific cybersecurity issues,
A number of clinical studies have been completed using the artificial pancreas in its various forms, in various settings such as diabetes camps for children, and real life home testing. Many of these trials have shown as good or better glucose control than existing technologies (with success defined by time spent in a target range of ideal blood glucose concentrations and reduced risk of hypoglycaemia). A number of other studies are ongoing. The authors say: “Prolonged 6- to 24-month multinational closed-loop clinical trials and pivotal studies are underway or in preparation including adults and children. As closed loop devices may be vulnerable to cybersecurity threats such as interference with wireless protocols and unauthorised data retrieval, implementation of secure communications protocols is a must.”
The actual timeline to availability of the artificial pancreas, as with other medical devices, encompasses regulatory approvals with reassuring attitudes of regulatory agencies such as the US Food and Drug Administration (FDA), which is currently reviewing one proposed artificial pancreas with approval possibly as soon as 2017. And a recent review by the UK National Institute of Health Research (NIHR) reported that automated closed-loop systems may be expected to appear in the (European) market by the end of 2018. The authors say: “This timeline will largely be dependent upon regulatory approvals and ensuring that infrastructures and support are in place for healthcare professionals providing clinical care. Structured education will need to continue to augment efficacy and safety.”
The authors say: “Cost-effectiveness of closed-loop is to be determined to support access and reimbursement. In addition to conventional endpoints such as blood sugar control, quality of life is to be included to assess burden of disease management and hypoglycaemia. Future research may include finding out which sub-populations may benefit most from using an artificial pancreas. Research is underway to evaluate these closed-loop systems in the very young, in pregnant women with type 1 diabetes, and in hospital in-patients who are suffering episodes of hyperglycaemia.”
They conclude: “Significant milestones moving the artificial pancreas from laboratory to free-living unsupervised home settings have been achieved in the past decade. Through inter-disciplinary collaboration, teams worldwide have accelerated progress and real-world closed-loop applications have been demonstrated. Given the challenges of beta-cell transplantation, closed-loop technologies are, with continuing innovation potential, destined to provide a viable alternative for existing insulin pump therapy and multiple daily insulin injections.”
“Getting into”, as used in the headline, is slang for exploring a topic in more depth, which is what an international team of researchers did when they ‘got into’ cellulose. From a June 9, 2016 news item on phys.org (Note: Links have been removed),
In the search for low emission plant-based fuels, new research may help avoid having to choose between growing crops for food or fuel.
Scientists have identified new steps in the way plants produce cellulose, the component of plant cell walls that provides strength, and forms insoluble fibre in the human diet.
The findings could lead to improved production of cellulose and guide plant breeding for specific uses such as wood products and ethanol fuel, which are sustainable alternatives to fossil fuel-based products.
Published in the journal Nature Communications today, the work was conducted by an international team of scientists, led by the University of Cambridge and the University of Melbourne.
“Our research identified several proteins that are essential in the assembly of the protein machinery that makes cellulose”, said Melbourne’s Prof Staffan Persson.
“We found that these assembly factors control how much cellulose is made, and so plants without them cannot produce cellulose very well; the defect substantially impairs plant biomass production. The ultimate aim of this research would be to breed plants that have altered activity of these proteins so that cellulose production can be improved for the range of applications that use cellulose, including paper, timber and ethanol fuels.”
The newly discovered proteins are located in an intracellular compartment called the Golgi where proteins are sorted and modified.
“If the function of this protein family is abolished, the cellulose synthesizing complexes become stuck in the Golgi and have problems reaching the cell surface where they normally are active,” said the lead authors of the study, Drs. Yi Zhang (Max-Planck Institute for Molecular Plant Physiology) and Nino Nikolovski (University of Cambridge).
“We therefore named the new proteins STELLO, which is Greek for ‘to set in place, and deliver’.”
“The findings are important to understand how plants produce their biomass”, said Professor Paul Dupree from the University of Cambridge’s Department of Biochemistry.
“Greenhouse-gas emissions from cellulosic ethanol, which is derived from the biomass of plants, are estimated to be roughly 85 percent less than from fossil fuel sources. Research to understand cellulose production in plants is therefore an important part of climate change mitigation.”
“In addition, by using cellulosic plant materials we get around the food-versus-fuel problem that arises when using corn as a basis for bioethanol.”
“It is therefore of great importance to find genes and mechanisms that can improve cellulose production in plants so that we can tailor cellulose production for various needs.”
Previous studies by Prof Persson’s and Prof Dupree’s research groups, together with other scientists, have identified many proteins that are important for cellulose synthesis and for other cell wall polymers.
The newly presented research substantially increases our understanding of how the bulk of a plant’s biomass is produced, and is therefore of great importance for industrial applications.