Monthly Archives: January 2017

Nanotechnology cracks Wall Street (Daily)

David Dittman’s Jan. 11, 2017 article for wallstreetdaily.com conveys a great deal of excitement about nanotechnology and its possibilities (I’m highlighting the article because it showcases Dexter Johnson’s Nanoclast blog),

When we talk about next-generation aircraft, next-generation wearable biomedical devices, and next-generation fiber-optic communication, the consistent theme is nano: nanotechnology, nanomaterials, nanophotonics.

For decades, manufacturers have used carbon fiber to make lighter sports equipment, stronger aircraft, and better textiles.

Now, as Dexter Johnson of IEEE [Institute of Electrical and Electronics Engineers] Spectrum reports [on his Nanoclast blog], carbon nanotubes will help make aerospace composites more efficient:

Now researchers at the University of Surrey’s Advanced Technology Institute (ATI), the University of Bristol’s Advanced Composite Centre for Innovation and Science (ACCIS), and aerospace company Bombardier [headquartered in Montréal, Canada] have collaborated on the development of a carbon nanotube-enabled material set to replace the polymer sizing. The reinforced polymers produced with this new material have enhanced electrical and thermal conductivity, opening up new functional possibilities. It will be possible, say the British researchers, to embed gadgets such as sensors and energy harvesters directly into the material.

When it comes to flight, lighter is better, so building sensors and energy harvesters into the body of aircraft marks a significant leap forward.

Johnson also reports for IEEE Spectrum on a “novel hybrid nanomaterial” based on oscillations of electrons — a major advance in nanophotonics:

Researchers at the University of Texas at Austin have developed a hybrid nanomaterial that enables the writing, erasing and rewriting of optical components. The researchers believe that this nanomaterial and the techniques used in exploiting it could create a new generation of optical chips and circuits.

Of course, the concept of rewritable optics is not altogether new; it forms the basis of optical storage mediums like CDs and DVDs. However, CDs and DVDs require bulky light sources, optical media and light detectors. The advantage of the rewritable integrated photonic circuits developed here is that it all happens on a 2-D material.

“To develop rewritable integrated nanophotonic circuits, one has to be able to confine light within a 2-D plane, where the light can travel in the plane over a long distance and be arbitrarily controlled in terms of its propagation direction, amplitude, frequency and phase,” explained Yuebing Zheng, a professor at the University of Texas who led the research… “Our material, which is a hybrid, makes it possible to develop rewritable integrated nanophotonic circuits.”
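
Zheng’s list of control knobs (propagation direction, amplitude, frequency, and phase) corresponds to the parameters of the standard textbook expression for a plane wave confined to a 2-D plane. Here’s a minimal Python sketch of that expression (my own illustration; it is not the UT Austin team’s code, and the parameter values are arbitrary):

import numpy as np

def plane_wave_2d(x, y, t, amplitude, wavelength, phase, direction_deg):
    # Complex field of a plane wave travelling in a 2-D plane. Each of the
    # four knobs Zheng mentions (direction, amplitude, frequency via the
    # wavelength, and phase) appears as an explicit parameter.
    c = 3.0e8                                   # speed of light, m/s
    k = 2 * np.pi / wavelength                  # wavenumber, 1/m
    omega = 2 * np.pi * c / wavelength          # angular frequency, rad/s
    theta = np.deg2rad(direction_deg)           # propagation direction
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    return amplitude * np.exp(1j * (kx * x + ky * y - omega * t + phase))

# Sample the field over a 2-micron patch of the plane at t = 0.
x, y = np.meshgrid(np.linspace(0, 2e-6, 200), np.linspace(0, 2e-6, 200))
field = plane_wave_2d(x, y, t=0.0, amplitude=1.0,
                      wavelength=633e-9, phase=0.0, direction_deg=30.0)
print(field.shape)  # (200, 200) complex samples

Rewritable photonics amounts to being able to rewrite, in the material itself and after fabrication, the structures that set parameters like these.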

Who knew that mixing graphene with homemade Silly Putty would create a potentially groundbreaking new material that could make “wearables” actually useful?

Next-generation biomedical devices will undoubtedly include some of this stuff:

A dash of graphene can transform the stretchy goo known as Silly Putty into a pressure sensor able to monitor a human pulse or even track the dainty steps of a small spider.

The material, dubbed G-putty, could be developed into a device that continuously monitors blood pressure, its inventors hope.

The guys who made G-putty often rely on “household stuff” in their research.

It’s nice to see a blogger’s work being highlighted. Congratulations, Dexter.

G-putty was mentioned here in a Dec. 30, 2016 posting which also includes a link to Dexter’s piece on the topic.

Nanotech business news from Turkey and from Northern Ireland

I have two nanotech business news bits, one from Turkey and one from Northern Ireland.

Turkey

A Turkish company has sold one of its microscopes to the US National Aeronautics and Space Administration (NASA), according to a Jan. 20, 2017 news item on dailysabah.com,

Turkish nanotechnology company Nanomanyetik has begun selling a powerful microscope to the U.S. space agency NASA, the company’s general director told Anadolu Agency on Thursday [Jan. 19, 2017].

Dr. Ahmet Oral, who also teaches physics at Middle East Technical University, said Nanomanyetik developed a microscope that is able to map surfaces on the nanometric and atomic levels, or extremely small particles.

Nanomanyetik’s foreign customers are drawn to the microscope because of its higher quality yet cheaper price compared to its competitors.

“There are almost 30 firms doing this work,” according to Oral. “Ten of them are active and we are among these active firms. Our aim is to be in the top three,” he said, adding that Nanomanyetik jumps to the head of the line because of its after-sales service.

In addition to sales to NASA, the Ankara-based firm exports the microscope to Brazil, Chile, France, Iran, Israel, Italy, Japan, Poland, South Korea and Spain.

Electronics giant Samsung is also a customer.

“Where does Samsung use this product? There are pixels in the smartphones’ displays. These pixels are getting smaller each year. Now the smallest pixel is 15X10 microns,” he said. Human hair is between 10 and 100 microns in diameter.

“They are figuring inner sides of pixels so that these pixels can operate much better. These patterns are on the nanometer level. They are using these microscopes to see the results of their works,” Oral said.

Nanomanyetik’s microscopes produce good-quality, high-resolution images and can even display an object’s atoms and individual DNA fibers, according to Oral.

You can find the English language version of the Nanomanyetik (NanoMagnetics Instruments) website here. For those with the language skills, there is the Turkish language version here.

Northern Ireland

A Jan. 22, 2017 news article by Dominic Coyle for The Irish Times (Note: Links have been removed) shares this business news and a mention of a world first,

MOF Technologies has raised £1.5 million (€1.73 million) from London-based venture capital group Excelsa Ventures and Queen’s University Belfast’s Qubis research commercialisation group.

MOF Technologies chief executive Paschal McCloskey welcomed the Excelsa investment.

Established in part by Qubis in 2012 in partnership with inventor Prof Stuart James, MOF Technologies began life in a lab at the School of Chemistry and Chemical Engineering at Queen’s.

Its metal organic framework (MOF) technology is seen as having significant potential in areas including gas storage, carbon capture, transport, drug delivery and heat transformation. Though still in its infancy, the market is forecast to grow to £2.2 billion by 2022, the company says.

MOF Technologies last year became the first company worldwide to successfully commercialise MOFs when it agreed a deal with US fruit and vegetable storage provider Decco Worldwide to commercialise MOFs for use in a food application.

TruPick, designed by Decco and using MOF Technologies’ environmentally friendly technology, enables nanomaterials to control the effects of ethylene on fruit produce so it maintains freshness in storage or transport.

MOFs are crystalline, sponge-like materials composed of two components – metal ions and organic molecules known as linkers.

“We very quickly recognised the market potential of MOFs in terms of their unmatched ability for gas storage,” said Moritz Bolle from Excelsa Ventures. “This technology will revolutionise traditional applications and open countless new opportunities for industry. We are confident MOF Technologies is the company that will lead this seismic shift in materials science.”

You can find MOF Technologies here.

Keeping up with science is impossible: ruminations on a nanotechnology talk

I think it’s time to give this suggestion again. Always hold a little doubt about the science information you read and hear. Everybody makes mistakes.

Here’s an example of what can happen. George Tulevski, who gave a talk about nanotechnology in Nov. 2016 for TED@IBM, is an accomplished scientist who appears to have made an error during his TED talk. From Tulevski’s The Next Step in Nanotechnology talk transcript page,

When I was a graduate student, it was one of the most exciting times to be working in nanotechnology. There were scientific breakthroughs happening all the time. The conferences were buzzing, there was tons of money pouring in from funding agencies. And the reason is when objects get really small, they’re governed by a different set of physics that govern ordinary objects, like the ones we interact with. We call this physics quantum mechanics. [emphases mine] And what it tells you is that you can precisely tune their behavior just by making seemingly small changes to them, like adding or removing a handful of atoms, or twisting the material. It’s like this ultimate toolkit. You really felt empowered; you felt like you could make anything.

In September 2016, scientists at Cambridge University (UK) announced they had concrete proof that the physics governing materials at the nanoscale is unique, i.e., it does not follow the rules of either classical or quantum physics. From my Oct. 27, 2016 posting,

A Sept. 29, 2016 University of Cambridge press release, which originated the news item, homes in on the peculiarities of the nanoscale,

In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.

Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
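
The surface-area-to-volume argument is easy to check for yourself. For a sphere, SA/V = (4πr^2) / ((4/3)πr^3) = 3/r, so the ratio climbs steeply as particles shrink. A quick Python sketch (my own illustration, not part of the Cambridge work):

# Surface-area-to-volume ratio of a sphere:
#   SA / V = (4 * pi * r**2) / ((4/3) * pi * r**3) = 3 / r
# so the ratio grows without bound as the radius shrinks, which is why
# surface energetics start to rival bulk behaviour at the nanoscale.
for r_nm in (1_000_000, 1_000, 100, 10, 1):   # radii in nanometres
    r_m = r_nm * 1e-9                         # convert to metres
    print(f"r = {r_nm:>9,} nm  ->  SA/V = {3.0 / r_m:.2e} per metre")

A 1 nm particle has a million times the surface-area-to-volume ratio of a 1 mm particle, which is why what happens at the surface starts to dominate.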

It is very, very easy to miss new developments no matter how tirelessly you scan for information.

Tulevski is a good, interesting, and informed speaker but I do have one other hesitation regarding his talk. He seems to think that over the last 15 years there should have been more practical applications arising from the field of nanotechnology. There are two aspects here. First, he seems to be dating the ‘nanotechnology’ effort from the beginning of the US National Nanotechnology Initiative, and there are many scientists who would object to that as the starting point. Second, 15 or even 30 or more years is a brief period of time, especially when you are investigating that which hasn’t been investigated before. For example, you might want to check out “Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life” (published 1985) by Steven Shapin and Simon Schaffer (Wikipedia entry for the book). The amount of time (years) spent on how to make just the glue which held the various experimental apparatuses together was a revelation to me. Of course, it makes perfect sense that if you’re trying something new, you’re going to have to figure out everything.

By the way, I include my blog as one of the sources of information that can be faulty despite efforts to make corrections and to keep up with the latest. Even the scientists at Cambridge University can run into some problems as I noted in my Jan. 28, 2016 posting.

Getting back to Tulevski, here’s a link to his lively, informative talk:
https://www.ted.com/talks/george_tulevski_the_next_step_in_nanotechnology#t-562570

ETA Jan. 24, 2017: For some insight into how uncertain, tortuous, and expensive commercializing technology can be, read Dexter Johnson’s Jan. 23, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website). Here’s an excerpt (Note: Links have been removed),

The brief description of this odyssey includes US $78 million in financing over 15 years and $50 million in revenues over that period through licensing of its technology and patents. That revenue includes a back-against-the-wall sell-off of a key business unit to Lockheed Martin in 2008. Another key moment occurred back in 2012 when Belgian-based nanoelectronics powerhouse Imec took on the job of further developing Nantero’s carbon-nanotube-based memory. Despite the money and support from major electronics players, the big commercial breakout of their NRAM technology seemed ever less likely to happen with the passage of time.

‘Smart’ fabric that’s bony

Researchers at Australia’s University of New South Wales (UNSW) have devised a means of ‘weaving’ a material that mimics *bone tissue, the periosteum, according to a Jan. 11, 2017 news item on ScienceDaily,

For the first time, UNSW [University of New South Wales] biomedical engineers have woven a ‘smart’ fabric that mimics the sophisticated and complex properties of one of nature’s ingenious materials, the bone tissue periosteum.

Having achieved proof of concept, the researchers are now ready to produce fabric prototypes for a range of advanced functional materials that could transform the medical, safety and transport sectors. Patents for the innovation are pending in Australia, the United States and Europe.

Potential future applications range from protective suits that stiffen under high impact for skiers, racing-car drivers and astronauts, through to ‘intelligent’ compression bandages for deep-vein thrombosis that respond to the wearer’s movement and safer steel-belt radial tyres.

A Jan. 11, 2017 UNSW press release on EurekAlert, which originated the news item, expands on the theme,

Many animal and plant tissues exhibit ‘smart’ and adaptive properties. One such material is the periosteum, a soft tissue sleeve that envelops most bony surfaces in the body. The complex arrangement of collagen, elastin and other structural proteins gives periosteum amazing resilience and provides bones with added strength under high impact loads.

Until now, a lack of scalable ‘bottom-up’ approaches by researchers has stymied their ability to use smart tissues to create advanced functional materials.

UNSW’s Paul Trainor Chair of Biomedical Engineering, Professor Melissa Knothe Tate, said her team had for the first time mapped the complex tissue architectures of the periosteum, visualised them in 3D on a computer, scaled up the key components and produced prototypes using weaving loom technology.

“The result is a series of textile swatch prototypes that mimic periosteum’s smart stress-strain properties. We have also demonstrated the feasibility of using this technique to test other fibres to produce a whole range of new textiles,” Professor Knothe Tate said.

In order to understand the functional capacity of the periosteum, the team used an incredibly high fidelity imaging system to investigate and map its architecture.

“We then tested the feasibility of rendering periosteum’s natural tissue weaves using computer-aided design software,” Professor Knothe Tate said.

The computer modelling allowed the researchers to scale up nature’s architectural patterns to weave periosteum-inspired, multidimensional fabrics using a state-of-the-art computer-controlled jacquard loom. The loom is known as the original rudimentary computer, first unveiled in 1801.

“The challenge with using collagen and elastin is that their fibres are too small to fit into the loom. So we used elastic material that mimics elastin and silk that mimics collagen,” Professor Knothe Tate said.

In a first test of the scaled-up tissue weaving concept, a series of textile swatch prototypes were woven, using specific combinations of collagen and elastin in a twill pattern designed to mirror periosteum’s weave. Mechanical testing of the swatches showed they exhibited similar properties found in periosteum’s natural collagen and elastin weave.

First author and biomedical engineering PhD candidate, Joanna Ng, said the technique had significant implications for the development of next-generation advanced materials and mechanically functional textiles.

While the materials produced by the jacquard loom have potential manufacturing applications – one tyremaker believes a titanium weave could spawn a new generation of thinner, stronger and safer steel-belt radials – the UNSW team is ultimately focused on the machine’s human potential.

“Our longer term goal is to weave biological tissues – essentially human body parts – in the lab to replace and repair our failing joints that reflect the biology, architecture and mechanical properties of the periosteum,” Ms Ng said.

An NHMRC development grant received in November [2016] will allow the team to take its research to the next phase. The researchers will work with the Cleveland Clinic and the University of Sydney’s Professor Tony Weiss to develop and commercialise prototype bone implants for pre-clinical research, using the ‘smart’ technology, within three years.
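
An aside for the technically inclined: a jacquard loom’s ‘program’ is essentially a binary matrix, with each entry telling a hook whether to lift a warp thread over the current weft pick. Here’s a rough Python sketch of the kind of lift plan behind the twill pattern mentioned in the press release (my own illustration; the UNSW team’s actual weaving algorithms are described in the paper cited at the end of this post and are far more sophisticated):

import numpy as np

def twill_lift_plan(n_warp, n_weft, over=2, under=1):
    # Binary lift plan: rows are weft picks, columns are warp threads,
    # 1 = warp thread lifted over the weft, 0 = weft passes over the warp.
    # Shifting the (over, under) repeat by one thread per pick creates the
    # diagonal rib that characterizes a twill weave.
    repeat = [1] * over + [0] * under
    plan = np.zeros((n_weft, n_warp), dtype=int)
    for pick in range(n_weft):
        for warp in range(n_warp):
            plan[pick, warp] = repeat[(warp - pick) % len(repeat)]
    return plan

print(twill_lift_plan(8, 8))  # an 8 x 8 repeat of a 2/1 twill

Substitute fibre choices (say, silk in some positions and elastomer in others) for the 1s and 0s and you can begin to see how a weave could encode a pattern of mechanical properties rather than a pattern of colours.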

In searching for more information about this work, I found a Winter 2015 article (PDF; pp. 8-11) by Amy Coopes and Steve Offner for UNSW Magazine about Knothe Tate and her work (Note: In Australia, winter would be what we in the Northern Hemisphere consider summer),

Tucked away in a small room in UNSW’s Graduate School of Biomedical Engineering sits a 19th century–era weaver’s wooden loom. Operated by punch cards and hooks, the machine was the first rudimentary computer when it was unveiled in 1801. While on the surface it looks like a standard Jacquard loom, it has been enhanced with motherboards integrated into each of the loom’s five hook modules and connected to a computer. This state-of-the-art technology means complex algorithms control each of the 5,000 feed-in fibres with incredible precision.

That capacity means the loom can weave with an extraordinary variety of substances, from glass and titanium to rayon and silk, a development that has attracted industry attention around the world.

The interest lies in the natural advantage woven materials have over other manufactured substances. Instead of manipulating material to create new shades or hues as in traditional weaving, the fabrics’ mechanical properties can be modulated, to be stiff at one end, for example, and more flexible at the other.

“Instead of a pattern of colours we get a pattern of mechanical properties,” says Melissa Knothe Tate, UNSW’s Paul Trainor Chair of Biomedical Engineering. “Think of a rope; it’s uniquely good in tension and in bending. Weaving is naturally strong in that way.”


The interface of mechanics and physiology is the focus of Knothe Tate’s work. In March [2015], she travelled to the United States to present another aspect of her work at a meeting of the international Orthopedic Research Society in Las Vegas. That project – which has been dubbed “Google Maps for the body” – explores the interaction between cells and their environment in osteoporosis and other degenerative musculoskeletal conditions such as osteoarthritis.

Using previously top-secret semiconductor technology developed by optics giant Zeiss, and the same approach used by Google Maps to locate users with pinpoint accuracy, Knothe Tate and her team have created “zoomable” anatomical maps from the scale of a human joint down to a single cell.

She has also spearheaded a groundbreaking partnership that includes the Cleveland Clinic, and Brown and Stanford universities to help crunch terabytes of data gathered from human hip studies – all processed with the Google technology. Analysis that once took 25 years can now be done in a matter of weeks, bringing researchers ever closer to a set of laws that govern biological behaviour. [p. 9]

I gather she was recruited from the US to work at the University of New South Wales, and this article was meant to highlight why they recruited her and to promote the university’s biomedical engineering department, where she holds the Paul Trainor Chair.

Getting back to 2017, here’s a link to and citation for the paper,

Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials by Joanna L. Ng, Lillian E. Knothe, Renee M. Whan, Ulf Knothe & Melissa L. Knothe Tate. Scientific Reports 7, Article number: 40396 (2017). doi:10.1038/srep40396. Published online: 11 January 2017.

This paper is open access.

One final comment: that’s a lot of people (three out of five) with the last name Knothe in the authors’ list for the paper.

*‘the bone tissue’ changed to ‘bone tissue’ on July 17, 2017.

Canadian Science Policy Conference inaugurates Lecture Series: Science Advice in a Troubled World

The Canadian Science Policy Centre (CSPC) launched a lecture series on Monday, Jan. 16, 2017 with Sir Peter Gluckman as the first speaker in a talk titled, Science Advice in a Troubled World. From a Jan. 18, 2017 CSPC announcement (received via email),

The inaugural session of the Canadian Science Policy Lecture Series was hosted by ISSP [University of Ottawa’s Institute for Science Society and Policy (ISSP)] on Monday January 16th [2017] at the University of Ottawa. Sir Peter Gluckman, Chief Science Advisor to the Prime Minister of New Zealand gave a presentation titled “Science Advise [sic] in a troubled world”. For a summary of the event, video and pictures please visit the event page.  

The session started with speeches by Monica Gattinger, Director of the Institute for Science, Society and Policy, Jacques Frémont, President of the University of Ottawa, as well as Mehrdad Hariri, CEO and President of the Canadian Science Policy Centre (CSPC).

The talk itself is about 50 mins. but there are lengthy introductions, including a rather unexpected (by me) reference to the recent US election from Jacques Frémont, president of the University of Ottawa, where the talk was held (he was formerly the head of Québec’s Human Rights Commission). There were also a number of questions after the talk, so the running time for the video is 1 hr. 12 mins.

Here’s a bit more information about Sir Peter, from the Science Advice in a Troubled World event page on the CSPC website,

Sir Peter Gluckman ONZ FRS is the first Chief Science Advisor to the Prime Minister of New Zealand, having been appointed in 2009. He is also science envoy and advisor to the Ministry of Foreign Affairs and Trade. He is chair of the International Network of Government Science Advice (INGSA), which operates under the aegis of the International Council for Science (ICSU). He chairs the APEC Chief Science Advisors and Equivalents group and is the coordinator of the secretariat of the Small Advanced Economies Initiative. In 2016 he received the AAAS award in Science Diplomacy. He trained as a pediatric and biomedical scientist and holds a Distinguished University Professorship at the Liggins Institute of the University of Auckland. He has published over 700 scientific papers and several technical and popular science books. He has received the highest scientific (Rutherford medal) and civilian (Order of New Zealand, limited to 20 living persons) honours in NZ and numerous international scientific awards. He is a Fellow of the Royal Society of London, a member of the National Academy of Medicine (USA) and a fellow of the Academy of Medical Sciences (UK).

I listened to the entire video and Gluckman presented a thoughtful, nuanced lecture in which he also mentioned Calestous Juma and his 2016 book, Innovation and Its Enemies (btw, I will be writing a commentary about Juma’s extraordinary effort). He also referenced the concepts of post-truth and post-trust, and made an argument for viewing evidence-based science as part of the larger policymaking process rather than the dominant or only factor. From the Science Advice in a Troubled World event page,

Lecture Introduction

The world is facing many challenges from environmental degradation and climate change to global health issues, and many more.  Societal relationships are changing; sources of information, reliable and otherwise, and their transmission are affecting the nature of public policy.

Within this context the question arises: how can scientific advice to governments help address these emerging issues in a more unstable and uncertain world?
The relationship between science and politics is complex and the challenges at their interface are growing. What does scientific advice mean within this context?
How can science better inform policy where decision making is increasingly made against a background of post-truth polemic?

I’m not in perfect agreement with Gluckman with regard to post-truth as I have been influenced by an essay of Steve Fuller’s suggesting that science too can be post-truth. (Fuller’s essay was highlighted in my Jan. 6, 2017 posting.)

Gluckman seems to be wielding a fair amount of influence on the Canadian scene. This is his second CSPC visit in the last few months. He was an invited speaker at the Eighth Annual CSPC conference in November 2016 and, while he’s here in Jan. 2017, he’s chairing the Canadian Institutes of Health Research (CIHR) International Panel on Peer Review. (The CIHR is one of Canada’s three major government funding agencies for the sciences.)

In other places too, he’s going to be a member of a panel at the University of Oxford’s Oxford Martin School later in January 2017. From the “Is a post-truth world a post-expert world?” event page on the Oxford Martin webspace,

Winston Churchill advised that “experts should be on tap but never on top”. In 2017, is a post-truth world a post-expert world? What does this mean for future debates on difficult policy issues? And what place can researchers usefully occupy in an academic landscape that emphasises policy impact but a political landscape that has become wary of experts? Join us for a lively discussion on academia and the provision of policy advice, examining the role of evidence and experts and exploring how gaps with the public and politicians might be bridged.

This event will be chaired by Achim Steiner, Director of the Oxford Martin School and former Executive Director of the United Nations Environment Programme, with panellists including Oxford Martin Visiting Fellow Professor Sir Peter Gluckman, Chief Science Advisor to the Prime Minister of New Zealand and Chair of the International Network for Government Science Advice; Dr Gemma Harper, Deputy Director for Marine Policy and Evidence and Chief Social Scientist in the Department for Environment, Food and Rural Affairs (Defra), and Professor Stefan Dercon, Chief Economist of the Department for International Development (DFID) and Professor of Economic Policy at the Blavatnik School of Government.

This discussion will be followed by a drinks reception, all welcome.

Here are the logistics should you be lucky enough to be able to attend (from the event page),

25 January 2017 17:00 – 18:15

Lecture Theatre, Oxford Martin School

34 Broad Street (corner of Holywell and Catte Streets)
Oxford
OX1 3BD

Registration (right-hand column) is free.

Finally, Gluckman published a paper on the digital economy in Nov. 2016, which can be found here (PDF).

Investigating nanoparticles and their environmental impact for industry?

It seems the Center for the Environmental Implications of Nanotechnology (CEINT) at Duke University (North Carolina, US) is adjusting its focus and opening the door to industry research, as well as government research. For some years (my first post about the CEINT at Duke University is an Aug. 15, 2011 post about its mesocosms), it has focused on examining the impact of nanoparticles (also called nanomaterials) on plant life and aquatic systems. This Jan. 9, 2017 US National Science Foundation (NSF) news release (h/t Jan. 9, 2017 Nanotechnology Now news item) provides a general description of the work,

We can’t see them, but nanomaterials, both natural and manmade, are literally everywhere, from our personal care products to our building materials–we’re even eating and drinking them.

At the NSF-funded Center for Environmental Implications of Nanotechnology (CEINT), headquartered at Duke University, scientists and engineers are researching how some of these nanoscale materials affect living things. One of CEINT’s main goals is to develop tools that can help assess possible risks to human health and the environment. A key aspect of this research happens in mesocosms, which are outdoor experiments that simulate the natural environment – in this case, wetlands. These simulated wetlands in Duke Forest serve as a testbed for exploring how nanomaterials move through an ecosystem and impact living things.

CEINT is a collaborative effort bringing together researchers from Duke, Carnegie Mellon University, Howard University, Virginia Tech, University of Kentucky, Stanford University, and Baylor University. CEINT academic collaborations include on-going activities coordinated with faculty at Clemson, North Carolina State and North Carolina Central universities, with researchers at the National Institute of Standards and Technology and the Environmental Protection Agency labs, and with key international partners.

The research in this episode was supported by NSF award #1266252, Center for the Environmental Implications of NanoTechnology.

The mention of industry is in this video by O’Brien and Kellan, which describes CEINT’s latest work,

Somewhat similar in approach, although without a direct reference to industry, Canada’s Experimental Lakes Area (ELA) is being used as a test site for silver nanoparticles. Here’s more from the Distilling Science at the Experimental Lakes Area: Nanosilver project page,

Water researchers are interested in nanotechnology, and one of its most commonplace applications: nanosilver. Today these tiny particles with anti-microbial properties are being used in a wide range of consumer products. The problem with nanoparticles is that we don’t fully understand what happens when they are released into the environment.

The research at the IISD-ELA [International Institute for Sustainable Development Experimental Lakes Area] will look at the impacts of nanosilver on ecosystems. What happens when it gets into the food chain? And how does it affect plants and animals?

Here’s a video describing the Nanosilver project at the ELA,

You may have noticed a certain tone to the video and it is due to some political shenanigans, which are described in this Aug. 8, 2016 article by Bartley Kives for the Canadian Broadcasting Corporation’s (CBC) online news.

Prawn (shrimp) shopping bags and saving the earth

Using a material (shrimp shells) that is disposed of as waste to create a biodegradable product (shopping bags) can only be described as a major win. A Jan. 10, 2017 news item on Nanowerk makes the announcement,

Bioengineers at The University of Nottingham are trialling how to use shrimp shells to make biodegradable shopping bags, as a ‘green’ alternative to oil-based plastic, and as a new food packaging material to extend product shelf life.

The new material for these affordable ‘eco-friendly’ bags is being optimised for Egyptian conditions, as effective waste management is one of the country’s biggest challenges.

An expert in testing the properties of materials, Dr Nicola Everitt from the Faculty of Engineering at Nottingham, is leading the research together with academics at Nile University in Egypt.

“Non-degradable plastic packaging is causing environmental and public health problems in Egypt, including contamination of water supplies which particularly affects living conditions of the poor,” explains Dr Everitt.

Natural biopolymer products made from plant materials are a ‘green’ alternative growing in popularity, but with competition for land with food crops, it is not a viable solution in Egypt.

A Jan. 10, 2017 University of Nottingham press release, which originated the news item, expands on the theme,

This new project aims to turn shrimp shells, which are a part of the country’s waste problem into part of the solution.

Dr Everitt said: “Use of a degradable biopolymer made of prawn shells for carrier bags would lead to lower carbon emissions and reduce food and packaging waste accumulating in the streets or at illegal dump sites. It could also make exports more acceptable to a foreign market within a 10-15-year time frame. All priorities at a national level in Egypt.”

Degradable nanocomposite material

The research is being undertaken to produce an innovative biopolymer nanocomposite material which is degradable, affordable and suitable for shopping bags and food packaging.

Chitosan is a man-made polymer derived from the organic compound chitin, which is extracted from shrimp shells, first using acid (to remove the calcium carbonate “backbone” of the crustacean shell) and then alkali (to produce the long molecular chains which make up the biopolymer).

The dried chitosan flakes can then be dissolved into solution and polymer film made by conventional processing techniques.

Chitosan was chosen because it is a promising biodegradable polymer already used in pharmaceutical packaging due to its antimicrobial, antibacterial and biocompatible properties. The second strand of the project is to develop an active polymer film that absorbs oxygen.

Enhancing food shelf life and cutting food waste

This future generation food packaging could have the ability to enhance food shelf life with high efficiency and low energy consumption, making a positive impact on food wastage in many countries.

If successful, Dr Everitt plans to approach UK packaging manufacturers with the product.

Additionally, the research aims to identify a production route by which these degradable biopolymer materials for shopping bags and food packaging could be manufactured.

I also found the funding for this project to be of interest (from the press release),

The project is sponsored by the Newton Fund and the Newton-Mosharafa Fund grant and is one of 13 Newton-funded collaborations for The University of Nottingham.

The collaborations are designed to tackle community issues through science and innovation, with links formed with countries such as Brazil, Egypt, the Philippines and Indonesia.

Since the Newton Fund was established in 2014, the University has been awarded a total of £4.5m in funding. It also boasts the highest number of institutional-led collaborations.

Professor Nick Miles, Pro-Vice-Chancellor for Global Engagement, said: “The University of Nottingham has a long and established record in global collaboration and research.

The Newton Fund plays to these strengths and enables us to work with institutions around the world to solve some of the most pressing issues facing communities.”

From a total of 68 universities, The University of Nottingham has emerged as the top awardee of British Council Newton Fund Institutional Links grants (13) and is joint top awardee from a total of 160 institutions competing for British Council Newton Fund Researcher Links Workshop awards (6).

Professor Miles added: “This is testament to the incredible research taking place across the University – both here in the UK and in the campuses in Malaysia and China – and underlines the strength of our research partnerships around the world.”

That’s it!

Panasonic and its next generation makeup mirror

Before leaping to Panasonic’s latest makeup mirror news, here’s an earlier iteration of their product at the 2016 Consumer Electronics Show (CES),

That was posted on Jan. 10, 2016 by Makeup University.

Panasonic has come back in 2017 to hype its “Snow Beauty Mirror,” a product which builds on its predecessor’s abilities by allowing the mirror to create a makeup look which it then produces for the user. At least, they hope it will—in 2020. From a Jan. 8, 2017 article by Shusuke Murai about the mirror and Japan’s evolving appliances market for The Japan Times,

Panasonic Corp. is developing a “magic” mirror for 2020 that will use nanotechnology for high-definition TVs to offer advice on how to become more beautiful.

The aim of the Snow Beauty Mirror is “to let people become what they want to be,” said Panasonic’s Sachiko Kawaguchi, who is in charge of the product’s development.

“Since 2012 or 2013, many female high school students have taken advantage of blogs and other platforms to spread their own messages,” Kawaguchi said. “Now the trend is that, in this digital era, they change their faces (on a photo) as they like to make them appear as they want to be.”

When one sits in front of the computerized mirror, a camera and sensors start scanning the face to check the skin. It then shines a light to analyze reflection and absorption rates, find flaws like dark spots, wrinkles and large pores, and offer tips on how to improve appearances.

But this is when the real “magic” begins.

Tap print on the results screen and a special printer for the mirror churns out an ultrathin, 100-nanometer makeup-coated patch that is tailor-made for the person examined.

The patch is made of a safe material often used for surgery so it can be directly applied to the face. Once the patch settles, it is barely noticeable and resists falling off unless sprayed with water.

The technologies behind the patch involve Panasonic’s know-how in organic light-emitting diodes (OLED), Kawaguchi said. By using the company’s technology to spray OLED material precisely onto display substrates, the printer connected to the computerized mirror prints a makeup ink that is made of material similar to that used in foundation, she added.

Though the product is still in the early stages of development, Panasonic envisions the mirror allowing users to download their favorite makeups from a database and apply them. It also believes the makeup sheet can be used to cover blemishes and birthmarks.

Before coming up with the smart mirror, Panasonic conducted a survey about makeup habits and demands involving more than 50 middle- to upper-class women, ranging in age from their 20s to 40s, from six major Asian cities.

Some respondents said they were not sure how to care for their skin to make it look its best, while others said they were hesitant to visit makeup counters in department stores.

“As consumer needs are becoming increasingly diverse, the first thing to do is to offer a tailor-made solution to answer each individual’s needs,” Kawaguchi said.

Panasonic aims to introduce the smart mirror and cosmetics sheets at department stores and beauty salons by 2020.

But Kawaguchi said there are many technological and marketing hurdles that must first be overcome — including how to mass-produce the ultrathin sheets.

“We are still at about 30 percent of overall progress,” she said, adding that the company hopes to market the makeup sheet at a price as low as foundation and concealer combined.

“I hope that, by 2020, applying facial sheets will become a major way to do makeup,” she said.

For anyone interested in Japan’s appliances market, please read Murai’s article in its entirety.

US Environmental Protection Agency finalizes its one-time reporting requirements for nanomaterials

The US Environmental Protection Agency (EPA) has announced its one-time reporting requirement for nanomaterials. From a Jan. 12, 2017 news item on Nanowerk,

The U.S. Environmental Protection Agency (EPA) is requiring one-time reporting and recordkeeping requirements on nanoscale chemical substances in the marketplace. These substances are nano-sized versions of chemicals that are already in the marketplace.
EPA seeks to facilitate innovation while ensuring safety of the substances. EPA currently reviews new chemical substances manufactured or processed as nanomaterials prior to introduction into the marketplace to ensure that they are safe.

For the first time, EPA is using [the] TSCA [Toxic Substances Control Act] to collect existing exposure and health and safety information on chemicals currently in the marketplace when manufactured or processed as nanoscale materials.

The companies will notify EPA of certain information:
– specific chemical identity;
– production volume;
- methods of manufacture;
- processing, use, exposure, and release information; and
- available health and safety data.

Reactions

David Stegon writes about the requirement in a Jan. 12, 2017 posting on Chemical Watch,

The US EPA has finalised its nanoscale materials reporting rule, completing a process that began more than 11 years ago.

The US position contrasts with that of the European Commission, which has rejected the idea of a specific mandatory reporting obligation for nanomaterials. Instead it insists such data can be collected under REACH’s registration rules for substances in general. It has told Echa [ECHA {European Chemicals Agency}] to develop ‘nano observatory’ pages on its website with existing nanomaterial information. Meanwhile, Canada set its reporting requirements in 2015.

The US rule, which comes under section 8(a) of TSCA, will take effect 120 days after publication in the Federal Register.

It defines nanomaterials as chemical substances that are:

  • solids at 25 degrees Celsius at standard atmospheric pressure;
  • manufactured or processed in a form where any particles, including aggregates and agglomerates, are between 1 and 100 nanometers (nm) in at least one dimension; and
  • manufactured or processed to exhibit one or more unique and novel properties.

The rule does not apply to chemical substances manufactured or processed in forms that contain less than 1% by weight of any particles between 1 and 100nm.

Taking account of comments received on the rulemaking, the EPA made three changes to the proposed definition:

  • it added the definition of unique and novel properties to help identify substances that act differently at nano sizes;
  • it clarified that a substance is not a nanomaterial if it fits the specified size range, but does not have a size-dependent property that differs from the same chemical at sizes greater than 100nm; and
  • it eliminated part of the nanomaterial definition that had said a reportable chemical may not include a substance that only has trace amounts of primary particles, aggregates, or agglomerates in the size range of 1 to 100nm.
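
Read together, the definition and the three clarifications amount to a fairly simple decision rule. Here’s a hedged Python sketch of my reading of Stegon’s summary (the is_reportable helper is hypothetical and mine alone; the actual rule text in the Federal Register contains exemptions and nuances this ignores):

def is_reportable(solid_at_25C, smallest_dimension_nm,
                  weight_percent_in_1_to_100nm, has_size_dependent_property):
    # Rough paraphrase of the TSCA section 8(a) trigger as summarized above.
    # Illustrative only: the actual rule contains exemptions and nuances
    # that this sketch deliberately ignores.
    in_size_range = 1.0 <= smallest_dimension_nm <= 100.0
    above_trace = weight_percent_in_1_to_100nm >= 1.0   # <1% by weight exempt
    return (solid_at_25C and in_size_range and above_trace
            and has_size_dependent_property)

# A substance in the size range that behaves like its bulk form is exempt:
print(is_reportable(True, 50.0, 5.0, False))  # False
print(is_reportable(True, 50.0, 5.0, True))   # True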

The EPA has added the new information gathering rule (scroll down about 50% of the way) on its Control of Nanoscale Materials under the Toxic Substances Control Act webpage.

There’s also this Jan. 17, 2017 article by Meagan Parrish for ChemInfo, which provides an alternative perspective and includes what appears to be some misinformation (Note: A link has been removed),

It was several years in the making, but in the final stages of its rule-making process for nanomaterial reporting, the Environmental Protection Agency declined to consider feedback from the industry.

Now, with the final language published and the rule set to go into effect in May, some in the industry are concerned that the agency is requiring an unnecessary amount of costly reporting that isn’t likely to reveal potential hazards. The heightened regulations could also hamper the pace of innovation underway in the industry.

“The poster child for nanotechnology is carbon nanotubes,” says James Votaw, a partner with Manatt, Phelps & Phillips, of the form of carbon that is 10,000 times smaller than a human hair but stronger than steel. “It can be used to make very strong materials and as an additive in plastics to make them electrically conductive or stiffer.”

The EPA has been attempting to define nanomaterials since 2004 and to assess the potential for environmental or human health risks associated with their use. In 2008, the EPA launched an effort to collect voluntarily submitted information from key players in the industry, but after a few years, the agency wasn’t happy with the number of responses. The effort to create a mandatory reporting requirement was launched in 2010.

Yet, according to Votaw, after a 2015 proposal of the rule was extensively criticized by the industry for being overly ambiguous and overly inclusive of its coverage, the industry asked the EPA to reopen a dialogue on the rule. The EPA declined.

The new reporting requirement is expected to cost companies about $27.79 million during the first year and $3.09 million in subsequent years. [emphasis mine]

As far as I’m aware, this is a one-time reporting requirement, although I’m sure many would like to see that change.

As for the Canadian situation, I mentioned the nanomaterials mandatory survey noted in Stegon’s piece in a July 29, 2015 posting. It was one of a series of mandatory surveys (currently, a survey on asbestos is underway) issued as part of Canada’s Chemicals Management Plan. You can find more information about the nanomaterials notice and the approach to the survey, although there doesn’t appear to have been a report made public yet; perhaps it’s too soon. From the Nanomaterials Mandatory Survey page,

The Government of Canada is undertaking a stepwise approach to address nanoscale forms of substances on the DSL. The proposed approach consists of three phases:

  • Establishment of a list of existing nanomaterials in Canada (this includes the section 71 Notice);
  • Prioritization of existing nanomaterials for action; and
  • Action on substances identified for further work.

The overall approach was first described in a consultation document entitled Proposed Approach to Address Nanoscale Forms of Substances on the Domestic Substances List, published on March 18, 2015. This consultation document was open for a 60-day public comment period to solicit feedback from stakeholders, particularly on the first phase of the approach.

A second consultation document entitled Proposed Prioritization Approach for Nanoscale Forms of Substances on the Domestic Substances List was published on July 27, 2016. In this document, the approach proposed for prioritization of existing nanomaterials on the DSL is described, taking into consideration the results of the section 71 Notice.  Comments on this consultation document may be submitted prior to September 25, 2016 …

I look forward to discovering a report on the Canadian nanomaterials survey should one be made public.

Essays on Frankenstein

Slate.com is dedicating a month (January 2017) to Frankenstein. This means there will be one or more essays each week on one aspect or another of Frankenstein and science. These essays are one of a series of initiatives jointly supported by Slate, Arizona State University, and an organization known as New America. It gets confusing since these essays are listed as part of two initiatives: Futurography and Future Tense.

The really odd part, as far as I’m concerned, is that there is no mention of Arizona State University’s (ASU) The Frankenstein Bicentennial Project (mentioned in my Oct. 26, 2016 posting). Perhaps they’re concerned that people will think ASU is advertising the project?

Introductions

Getting back to the essays, a Jan. 3, 2017 article by Jacob Brogan explains, in a question-and-answer format, why the book and the monster maintain popular interest after two centuries (Note: We never do find out who or how many people are supplying the answers),

OK, fine. I get that this book is important, but why are we talking about it in a series about emerging technology?

Though people still tend to weaponize it as a simple anti-scientific screed, Frankenstein, which was first published in 1818, is much richer when we read it as a complex dialogue about our relationship to innovation—both our desire for it and our fear of the changes it brings. Mary Shelley was just a teenager when she began to compose Frankenstein, but she was already grappling with our complex relationship to new forces. Almost two centuries on, the book is just as propulsive and compelling as it was when it was first published. That’s partly because it’s so thick with ambiguity—and so resistant to easy interpretation.

Is it really ambiguous? I mean, when someone calls something frankenfood, they aren’t calling it “ethically ambiguous food.”

It’s a fair point. For decades, Frankenstein has been central to discussions in and about bioethics. Perhaps most notably, it frequently crops up as a reference point in discussions of genetically modified organisms, where the prefix Franken- functions as a sort of convenient shorthand for human attempts to meddle with the natural order. Today, the most prominent flashpoint for those anxieties is probably the clustered regularly interspaced short palindromic repeats, or CRISPR, gene-editing technique [emphasis mine]. But it’s really oversimplifying to suggest Frankenstein is a cautionary tale about monkeying with life.

As we’ll see throughout this month on Futurography, it’s become a lens for looking at the unintended consequences of things like synthetic biology, animal experimentation, artificial intelligence, and maybe even social networking. Facebook, for example, has arguably taken on a life of its own, as its algorithms seem to influence the course of elections. Mark Zuckerberg, who’s sometimes been known to disavow the power of his own platform, might well be understood as a Frankensteinian figure, amplifying his creation’s monstrosity by neglecting its practical needs.

But this book is almost 200 years old! Surely the actual science in it is bad.

Shelley herself would probably be the first to admit that the science in the novel isn’t all that accurate. Early in the novel, Victor Frankenstein meets with a professor who castigates him for having read the wrong works of “natural philosophy.” Shelley’s protagonist has mostly been studying alchemical tomes and otherwise fantastical works, the sort of things that were recognized as pseudoscience, even by the standards of the day. Near the start of the novel, Frankenstein attends a lecture in which the professor declaims on the promise of modern science. He observes that where the old masters “promised impossibilities and performed nothing,” the new scientists achieve far more in part because they “promise very little; they know that metals cannot be transmuted and that the elixir of life is a chimera.”

Is it actually about bad science, though?

Not exactly, but it has been read as a story about bad scientists.

Ultimately, Frankenstein outstrips his own teachers, of course, and pulls off the very feats they derided as mere fantasy. But Shelley never seems to confuse fact and fiction, and, in fact, she largely elides any explanation of how Frankenstein pulls off the miraculous feat of animating dead tissue. We never actually get a scene of the doctor awakening his creature. The novel spends far more time dwelling on the broader reverberations of that act, showing how his attempt to create one life destroys countless others. Read in this light, Frankenstein isn’t telling us that we shouldn’t try to accomplish new things, just that we should take care when we do.

This speaks to why the novel has stuck around for so long. It’s not about particular scientific accomplishments but the vagaries of scientific progress in general.

Does that make it into a warning against playing God?

It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.

Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.

I encourage you to read Brogan’s piece in its entirety and perhaps supplement the reading. Mary Shelley has a pretty interesting history. She ran off with Percy Bysshe Shelley, who was married to another woman, in 1814 at the age of seventeen. Her parents were both well known and respected intellectuals and philosophers, William Godwin and Mary Wollstonecraft. By the time Mary Shelley wrote her book, her first baby had died and she had given birth to a second child, a boy. Percy Shelley was to die a few years later, as were her son and a third child she’d given birth to. (Her fourth child, born in 1819, did survive.) I mention the births because one analysis I read suggests the novel is also a commentary on childbirth. In fact, the Frankenstein narrative has been examined from many perspectives (other than science), including feminism and LGBTQ studies.

Getting back to the science fiction end of things, the next part of the Futurography series is titled “A Cheat-Sheet Guide to Frankenstein” and that too is written by Jacob Brogan with a publication date of Jan. 3, 2017,

Key Players

Marilyn Butler: Butler, a literary critic and English professor at the University of Cambridge, authored the seminal essay “Frankenstein and Radical Science.”

Jennifer Doudna: A professor of chemistry and biology at the University of California, Berkeley, Doudna helped develop the CRISPR gene-editing technique [emphasis mine].

Stephen Jay Gould: Gould was an evolutionary biologist who wrote in defense of Frankenstein’s scientific ambitions, arguing that hubris wasn’t the doctor’s true fault.

Seán Ó hÉigeartaigh: As executive director of the Centre for the Study of Existential Risk at the University of Cambridge, Ó hÉigeartaigh leads research into technologies that threaten the existence of our species.

Jim Hightower: This columnist and activist helped popularize the term frankenfood to describe genetically modified crops.

Mary Shelley: Shelley, the author of Frankenstein, helped create science fiction as we now know it.

J. Craig Venter: A leading genomic researcher, Venter has pursued a variety of human biotechnology projects.

Lingo

….

Debates

Popular Culture

Further Reading

….

‘Franken’ and CRISPR

The first essay is a Jan. 6, 2017 article by Katy Waldman focusing on the ‘franken’ prefix (Note: links have been removed),

In a letter to the New York Times on June 2, 1992, an English professor named Paul Lewis lopped off the top of Victor Frankenstein’s surname and sewed it onto a tomato. Railing against genetically modified crops, Lewis put a new generation of natural philosophers on notice: “If they want to sell us Frankenfood, perhaps it’s time to gather the villagers, light some torches and head to the castle,” he wrote.

William Safire, in a 2000 New York Times column, tracked the creation of the franken- prefix to this moment: an academic channeling popular distrust of science by invoking the man who tried to improve upon creation and ended up disfiguring it. “There’s no telling where or how it will end,” he wrote wryly, referring to the spread of the construction. “It has enhanced the sales of the metaphysical novel that Ms. Shelley’s husband, the poet Percy Bysshe Shelley, encouraged her to write, and has not harmed sales at ‘Frank’n’Stein,’ the fast-food chain whose hot dogs and beer I find delectably inorganic.” Safire went on to quote the American Dialect Society’s Laurence Horn, who lamented that despite the ’90s flowering of frankenfruits and frankenpigs, people hadn’t used Frankensense to describe “the opposite of common sense,” as in “politicians’ motivations for a creatively stupid piece of legislation.”

A year later, however, Safire returned to franken- in dead earnest. In an op-ed for the Times avowing the ethical value of embryonic stem cell research, the columnist suggested that a White House conference on bioethics would salve the fears of Americans concerned about “the real dangers of the slippery slope to Frankenscience.”

All of this is to say that franken-, the prefix we use to talk about human efforts to interfere with nature, flips between “funny” and “scary” with ease. Like Shelley’s monster himself, an ungainly patchwork of salvaged parts, it can seem goofy until it doesn’t—until it taps into an abiding anxiety that technology raises in us, a fear of overstepping.

Waldman’s piece hints at how language can shape discussions while retaining a rather playful quality.

This series looks to be a good introduction while being a bit problematic in spots, which roughly sums up my conclusion about their ‘nano’ series in my Oct. 7, 2016 posting titled: Futurography’s nanotechnology series: a digest.

By the way, I noted the mention of CRISPR as it brought up an issue that they don’t appear to be addressing in this series (perhaps they will do this elsewhere?): intellectual property.

There’s a patent dispute over CRISPR as noted in this American Chemical Society’s Chemistry and Engineering News Jan. 9, 2017 video,

Playing God

This series on Frankenstein is taking on other contentious issues. A perennial favourite is ‘playing God’ as noted in Bina Venkataraman’s Jan. 11, 2017 essay on the topic,

Since its publication nearly 200 years ago, Shelley’s gothic novel has been read as a cautionary tale of the dangers of creation and experimentation. James Whale’s 1931 film took the message further, assigning explicitly the hubris of playing God to the mad scientist. As his monster comes to life, Dr. Frankenstein, played by Colin Clive, triumphantly exclaims: “Now I know what it feels like to be God!”

The admonition against playing God has since been ceaselessly invoked as a rhetorical bogeyman. Secular and religious, critic and journalist alike have summoned the term to deride and outright dismiss entire areas of research and technology, including stem cells, genetically modified crops, recombinant DNA, geoengineering, and gene editing. As we near the two-century commemoration of Shelley’s captivating story, we would be wise to shed this shorthand lesson—and to put this part of the Frankenstein legacy to rest in its proverbial grave.

The trouble with the term arises first from its murkiness. What exactly does it mean to play God, and why should we find it objectionable on its face? All but zealots would likely agree that it’s fine to create new forms of life through selective breeding and grafting of fruit trees, or to use in-vitro fertilization to conceive life outside the womb to aid infertile couples. No one objects when people intervene in what some deem “acts of God,” such as earthquakes, to rescue victims and provide relief. People get fully behind treating patients dying of cancer with “unnatural” solutions like chemotherapy. Most people even find it morally justified for humans to mete out decisions as to who lives or dies in the form of organ transplant lists that prize certain people’s survival over others.

So what is it—if not the imitation of a deity or the creation of life—that inspires people to invoke the idea of “playing God” to warn against, or even stop, particular technologies? A presidential commission charged in the early 1980s with studying the ethics of genetic engineering of humans, in the wake of the recombinant DNA revolution, sheds some light on underlying motivations. The commission sought to understand the concerns expressed by leaders of three major religious groups in the United States—representing Protestants, Jews, and Catholics—who had used the phrase “playing God” in a 1980 letter to President Jimmy Carter urging government oversight. Scholars from the three faiths, the commission concluded, did not see a theological reason to flat-out prohibit genetic engineering. Their concerns, it turned out, weren’t exactly moral objections to scientists acting as God. Instead, they echoed those of the secular public; namely, they feared possible negative effects from creating new human traits or new species. In other words, the religious leaders who called recombinant DNA tools “playing God” wanted precautions taken against bad consequences but did not inherently oppose the use of the technology as an act of human hubris.

She presents an interesting argument and offers this as a solution,

The lesson for contemporary science, then, is not that we should cease creating and discovering at the boundaries of current human knowledge. It’s that scientists and technologists ought to steward their inventions into society, and to more rigorously participate in public debate about their work’s social and ethical consequences. Frankenstein’s proper legacy today would be to encourage researchers to address the unsavory implications of their technologies, whether it’s the cognitive and social effects of ubiquitous smartphone use or the long-term consequences of genetically engineered organisms on ecosystems and biodiversity.

Some will undoubtedly argue that this places an undue burden on innovators. Here, again, Shelley’s novel offers a lesson. Scientists who cloister themselves as Dr. Frankenstein did—those who do not fully contemplate the consequences of their work—risk later encounters with the horror of their own inventions.

At a guess, Venkataraman seems to be assuming that if scientists communicate and make their case, the public will cease to panic over moralistic and other concerns. My understanding is that social scientists have found this is not the case: someone may understand a technology quite well and still oppose it.

Frankenstein and anti-vaxxers

The Jan. 16, 2017 essay by Charles Kenny is the weakest of the lot, so far (Note: Links have been removed),

In 1780, University of Bologna physician Luigi Galvani found something peculiar: When he applied an electric current to the legs of a dead frog, they twitched. Thirty-seven years later, Mary Shelley had Galvani’s experiments in mind as she wrote her fable of Faustian overreach, wherein Dr. Victor Frankenstein plays God by reanimating flesh.

And a little less than halfway between those two dates, English physician Edward Jenner demonstrated the efficacy of a vaccine against smallpox—one of the greatest killers of the age. Given the suspicion with which Romantic thinkers like Shelley regarded scientific progress, it is no surprise that many at the time damned the procedure as against the natural order. But what is surprising is how that suspicion continues to endure, even after two centuries of spectacular successes for vaccination. This anti-vaccination stance—which now infects even the White House—demonstrates the immense harm that can be done by excessive distrust of technological advance.

Kenny employs history as a framing device. Crudely, Galvani’s experiments led to Mary Shelley’s Frankenstein, which is a fable about ‘playing God’. (Kenny seems unaware that there are many other readings of and perspectives on the book.) As for his statement, “… the suspicion with which Romantic thinkers like Shelley regarded scientific progress …,” I’m not sure how he arrived at his conclusion about Romantic thinkers. According to Richard Holmes (in his book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science), their relationship to science was more complex. Percy Bysshe Shelley ran ballooning experiments and wrote poetry about science, complete with footnotes for the literature and concepts he was referencing; John Keats was a medical student before establishing himself as a poet; and Samuel Taylor Coleridge (The Rime of the Ancient Mariner, etc.) maintained a healthy correspondence with scientists of the day, sometimes influencing their research. In fact, when you analyze the matter, you realize that even scientists are, on occasion, suspicious of science.

As for the anti-vaccination wars, I wish this essay had been more thoughtful. Yes, Andrew Wakefield’s research purporting to show a link between MMR (measles, mumps, and rubella) vaccinations and autism was a sham. However, having concerns and suspicions about a technology does not render you a fool who hasn’t progressed beyond 18th/19th-century suspicions of science and technology. For example, vaccines are being touted for all kinds of things, the latest being a possible antidote to opiate addiction (see Susan Gaidos’ June 28, 2016 article for Science News). Are we going to be vaccinated for everything? What happens when you keep piling vaccination on top of vaccination? Instead of a debate, the discussion has devolved to: “I’m right and you’re wrong.”

For the record, I’m grateful for the vaccinations I’ve had and for the diminishment of devastating diseases, some of which seem to be making a comeback amid the current anti-vaccination fever. That said, I think there are some important questions to ask about vaccines.

Kenny’s essay could have been a nuanced discussion of vaccines, which have clearly raised the bar for public health, alongside the concerns regarding the current pursuit of yet more vaccines. Instead, he is quite dismissive of anyone who questions vaccination orthodoxy.

The end of this piece

There will be more essays in Slate’s Frankenstein series but I don’t have time to digest and write commentary for all of them.

Please use this piece as a critical counterpoint to some of the series and, if I’ve done my job, you’ll critique this critique in turn. Do let me know if you find any errors, or add an opinion or a critique of your own in the Comments of this blog.

ETA Jan. 25, 2017: Here’s the Frankenstein webspace on Slate’s Futurography, which lists all the essays in this series. It’s well worth a look; there are several essays that were not covered here.