Tag Archives: National Science Foundation

Documentary “NNI Retrospective Video: Creating a National Initiative” celebrates the US National Nanotechnology Initiative (NNI) and a lipid nanoparticle question

I stumbled across an August 4, 2022 tvworldwide.com news release about a video celebrating the US National Nanotechnology Initiative’s (NNI) more than 20 years of operation, (Note: A link has been removed),

TV Worldwide, a pioneering web-based global TV network since 1999, announced that it was releasing a video trailer highlighting a previously released documentary on the NNI over the past 20 years, entitled ‘NNI Retrospective Video: Creating a National Initiative’.

The video and its trailer were produced in cooperation with the National Nanotechnology Initiative (NNI), the National Science Foundation and the University of North Carolina Greensboro.

Video Documentary Synopsis

Nanotechnology is a megatrend in science and technology at the beginning of the 21st century. The National Nanotechnology Initiative (NNI) has played a key role in advancing the field after it was announced by President Clinton in January 2000, when Neil Lane was the presidential science advisor. Mike Roco proposed the initiative at the White House in March 1999 on behalf of the Interagency Working Group on Nanotechnology and was named the founding chair of the Nanoscale Science, Engineering, and Technology (NSET) Subcommittee to implement the NNI beginning in October 2000. NSF led the preparation of this initiative together with other agencies including NIH, DoD, DOE, NASA, and EPA. Jim Murday was named the first director of the National Nanotechnology Coordination Office (NNCO) to support NSET. The scientific and societal success of the NNI has been recognized in the professional communities, the National Academies, PCAST, and Congress. Nanoscale science, engineering, and technology are strongly connected and collectively called nanotechnology.

This video documentary was made after the 20th NNI grantees conference at NSF. It focuses on the creation and implementation of the NNI, told through video interviews. The interviews centered on three questions: (a) the motivation and how the NNI started; (b) the process and reasons for the success in creating the NNI; and (c) the outcomes of the NNI after 20 years, and how the initial vision has been realized.

About the National Nanotechnology Initiative (NNI)

The National Nanotechnology Initiative (NNI) is a U.S. Government research and development (R&D) initiative. Over thirty Federal departments, independent agencies, and commissions work together toward the shared vision of a future in which the ability to understand and control matter at the nanoscale leads to ongoing revolutions in technology and industry that benefit society. The NNI enhances interagency coordination of nanotechnology R&D, supports a shared infrastructure, enables leveraging of resources while avoiding duplication, and establishes shared goals, priorities, and strategies that complement agency-specific missions and activities.

The NNI participating agencies work together to advance discovery and innovation across the nanotechnology R&D enterprise. The NNI portfolio encompasses efforts along the entire technology development pathway, from early-stage fundamental science through applications-driven activities. Nanoscience and nanotechnology are prevalent across the R&D landscape, with an ever-growing list of applications that includes nanomedicine, nanoelectronics, water treatment, precision agriculture, transportation, and energy generation and storage. The NNI brings together representatives from multiple agencies to leverage knowledge and resources and to collaborate with academia and the private sector, as appropriate, to promote technology transfer and facilitate commercialization. The breadth of NNI-supported infrastructure enables not only the nanotechnology community but also researchers from related disciplines.

In addition to R&D efforts, the NNI is helping to build the nanotechnology workforce of the future, with focused efforts from K–12 through postgraduate research training. The responsible development of nanotechnology has been an integral pillar of the NNI since its inception, and the initiative proactively considers potential implications and technology applications at the same time. Collectively, these activities ensure that the United States remains not only the place where nanoscience discoveries are made, but also where these discoveries are translated and manufactured into products to benefit society.

I’m embedding the trailer here and a lipid nanoparticle question follows (The origin story told in Vancouver [Canada] is that the work was started at the University of British Columbia by Pieter Cullis.),

I was curious about what involvement the US NNI had with the development of lipid nanoparticles (LNPs) and found a possible answer to that question on Wikipedia. The LNP Wikipedia entry certainly gives the bulk of the credit to Cullis, but there was work done prior to his involvement (Note: Links have been removed),

A significant obstacle to using LNPs as a delivery vehicle for nucleic acids is that in nature, lipids and nucleic acids both carry a negative electric charge—meaning they do not easily mix with each other.[19] While working at Syntex in the mid-1980s,[20] Philip Felgner [emphasis mine] pioneered the use of artificially-created cationic lipids (positively-charged lipids) to bind lipids to nucleic acids in order to transfect the latter into cells.[21] However, by the late 1990s, it was known from in vitro experiments that this use of cationic lipids had undesired side effects on cell membranes.[22]

During the late 1990s and 2000s, Pieter Cullis of the University of British Columbia [emphasis mine] developed ionizable cationic lipids which are “positively charged at an acidic pH but neutral in the blood.”[8] Cullis also led the development of a technique involving careful adjustments to pH during the process of mixing ingredients in order to create LNPs which could safely pass through the cell membranes of living organisms.[19][23] As of 2021, the current understanding of LNPs formulated with such ionizable cationic lipids is that they enter cells through receptor-mediated endocytosis and end up inside endosomes.[8] The acidity inside the endosomes causes LNPs’ ionizable cationic lipids to acquire a positive charge, and this is thought to allow LNPs to escape from endosomes and release their RNA payloads.[8]

From 2005 into the early 2010s, LNPs were investigated as a drug delivery system for small interfering RNA (siRNA) drugs.[8] In 2009, Cullis co-founded a company called Acuitas Therapeutics to commercialize his LNP research [emphasis mine]; Acuitas worked on developing LNPs for Alnylam Pharmaceuticals’s siRNA drugs.[24] In 2018, the FDA approved Alnylam’s siRNA drug Onpattro (patisiran), the first drug to use LNPs as the drug delivery system.[3][8]

By that point in time, siRNA drug developers like Alnylam were already looking at other options for future drugs like chemical conjugate systems, but during the 2010s, the earlier research into using LNPs for siRNA became a foundation for new research into using LNPs for mRNA.[8] Lipids intended for short siRNA strands did not work well for much longer mRNA strands, which led to extensive research during the mid-2010s into the creation of novel ionizable cationic lipids appropriate for mRNA.[8] As of late 2020, several mRNA vaccines for SARS-CoV-2 use LNPs as their drug delivery system, including both the Moderna COVID-19 vaccine and the Pfizer–BioNTech COVID-19 vaccines.[3] Moderna uses its own proprietary ionizable cationic lipid called SM-102, while Pfizer and BioNTech licensed an ionizable cationic lipid called ALC-0315 from Acuitas.[8] [emphases mine]

You can find out more about Philip Felgner here on his University of California at Irvine (UCI) profile page.

I wish they had been a little more careful about some of the claims Thomas Kalil made about lipid nanoparticles in both the trailer and the video. Still, both the trailer (approx. 3 mins.) and the full video (approx. 25 mins.) provide insight into a quite extraordinary effort.

Bravo to the US NNI!

Automated science writing?

It seems that automated science writing is not ready—yet. Still, an April 18, 2019 news item on ScienceDaily suggests that progress is being made,

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT [Massachusetts Institute of Technology] and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

An April 17, 2019 MIT news release, which originated the news item, delves into the research and its implications,

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists [emphasis mine] scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
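The published architecture is more elaborate, but the geometric intuition (a memory vector that each incoming word “swings” toward a new direction) is easy to sketch. Here is a toy NumPy illustration of a norm-preserving rotational update; it is my own sketch, not the authors’ code, and the function names, fixed rotation angle, and random embeddings are assumptions for demonstration rather than the published RUM parameterization.

```python
# Toy sketch of a rotation-based memory update (not the authors' RUM code).
# Each incoming word embedding "swings" the memory vector by a fixed angle
# within the 2D plane spanned by the memory and that embedding.
import numpy as np

def rotate_in_plane(m, x, theta):
    """Rotate m by angle theta in the plane span{m, x}; directions
    orthogonal to that plane are left untouched."""
    u = m / np.linalg.norm(m)
    w = x - (x @ u) * u              # Gram-Schmidt: part of x orthogonal to m
    if np.linalg.norm(w) < 1e-12:    # x parallel to m: no rotation plane
        return m
    v = w / np.linalg.norm(w)
    # Planar rotation matrix: acts as a 2D rotation on span{u, v},
    # identity everywhere else.
    R = (np.eye(len(m))
         + (np.cos(theta) - 1.0) * (np.outer(u, u) + np.outer(v, v))
         + np.sin(theta) * (np.outer(v, u) - np.outer(u, v)))
    return R @ m

rng = np.random.default_rng(0)
dim = 8
words = ["raccoons", "carry", "roundworm"]
embeddings = {w: rng.standard_normal(dim) for w in words}  # stand-in vectors

memory = rng.standard_normal(dim)    # initial memory vector
for word in words:
    memory = rotate_in_plane(memory, embeddings[word], theta=0.3)

# Rotations are orthogonal transforms, so the memory's length never changes:
print(round(float(np.linalg.norm(memory)), 6))
```

That length preservation is part of the appeal of rotation-style units: because the update never shrinks or inflates the stored vector, the memory is less prone to the fading signals that make long-range dependencies hard for conventional recurrent networks.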

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić, recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company DeepMind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

As usual, this ‘automated writing system’ is framed as a ‘helper,’ not a usurper of anyone’s job. However, its potential for changing the nature of the work is there. About five years ago I featured another ‘automated writing’ story in a July 16, 2014 posting titled: ‘Writing and AI or is a robot writing this blog?’ You may have been reading ‘automated’ news stories for years; at the time, the focus was on sports and business.

Getting back to 2019 and science writing, here’s a link to and a citation for the paper,

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications by Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović and Marin Soljačić. Transactions of the Association for Computational Linguistics, Volume 07, 2019, pp. 121-138. DOI: https://doi.org/10.1162/tacl_a_00258 Posted online 2019.

© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

This paper is open access.

Thin-film electronic stickers for the Internet of Things (IoT)

This research from Purdue University (Indiana, US) and the University of Virginia (US) increases and improves the interactivity between objects in what’s called the Internet of Things (IoT).

Caption: Electronic stickers can turn ordinary toy blocks into high-tech sensors within the ‘internet of things.’ Credit: Purdue University image/Chi Hwan Lee

From a July 16, 2018 news item on ScienceDaily,

Billions of objects ranging from smartphones and watches to buildings, machine parts and medical devices have become wireless sensors of their environments, expanding a network called the “internet of things.”

As society moves toward connecting all objects to the internet — even furniture and office supplies — the technology that enables these objects to communicate and sense each other will need to scale up.

Researchers at Purdue University and the University of Virginia have developed a new fabrication method that makes tiny, thin-film electronic circuits peelable from a surface. The technique not only eliminates several manufacturing steps and the associated costs, but also allows any object to sense its environment or be controlled through the application of a high-tech sticker.

Eventually, these stickers could also facilitate wireless communication. …

A July 16, 2018 Purdue University news release (also on EurekAlert), which originated the news item, explains more,

“We could customize a sensor, stick it onto a drone, and send the drone to dangerous areas to detect gas leaks, for example,” said Chi Hwan Lee, Purdue assistant professor of biomedical engineering and mechanical engineering.

Most of today’s electronic circuits are individually built on their own silicon “wafer,” a flat and rigid substrate. The silicon wafer can then withstand the high temperatures and chemical etching that are used to remove the circuits from the wafer.

But high temperatures and etching damage the silicon wafer, forcing the manufacturing process to accommodate an entirely new wafer each time.

Lee’s new fabrication technique, called “transfer printing,” cuts down manufacturing costs by using a single wafer to build a nearly infinite number of thin films holding electronic circuits. Instead of high temperatures and chemicals, the film can peel off at room temperature with the energy-saving help of nothing more than water.

“It’s like the red paint on San Francisco’s Golden Gate Bridge – paint peels because the environment is very wet,” Lee said. “So in our case, submerging the wafer and completed circuit in water significantly reduces the mechanical peeling stress and is environmentally-friendly.”

A ductile metal layer, such as nickel, inserted between the electronic film and the silicon wafer, makes the peeling possible in water. These thin-film electronics can then be trimmed and pasted onto any surface, granting that object electronic features.

Putting one of the stickers on a flower pot, for example, made that flower pot capable of sensing temperature changes that could affect the plant’s growth.

Lee’s lab also demonstrated that the components of electronic integrated circuits work just as well before and after they were made into a thin film peeled from a silicon wafer. The researchers used one film to turn on and off an LED light display.

“We’ve optimized this process so that we can delaminate electronic films from wafers in a defect-free manner,” Lee said.

This technology holds a non-provisional U.S. patent. The work was supported by the Purdue Research Foundation, the Air Force Research Laboratory (AFRL-S-114-054-002), the National Science Foundation (NSF-CMMI-1728149) and the University of Virginia.

The researchers have provided a video,

Here’s a link to and a citation for the paper,

Wafer-recyclable, environment-friendly transfer printing for large-scale thin-film nanoelectronics by Dae Seung Wie, Yue Zhang, Min Ku Kim, Bongjoong Kim, Sangwook Park, Young-Joon Kim, Pedro P. Irazoqui, Xiaolin Zheng, Baoxing Xu, and Chi Hwan Lee.
PNAS July 16, 2018 201806640 DOI: https://doi.org/10.1073/pnas.1806640115
published ahead of print July 16, 2018

This paper is behind a paywall.

Dexter Johnson provides some context in his July 25, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: A link has been removed,

The Internet of Things (IoT), the interconnection of billions of objects and devices that will be communicating with each other, has been the topic of many futurists’ projections. However, getting the engineering sorted out with the aim of fully realizing the myriad visions for IoT is another story. One key issue to address: How do you get the electronics onto these devices efficiently and economically?

A team of researchers from Purdue University and the University of Virginia has developed a new manufacturing process that could make equipping a device with all the sensors and other electronics that will make it Internet capable as easily as putting a piece of tape on it.

… this new approach makes use of a water environment at room temperature to control the interfacial debonding process. This allows clean, intact delamination of prefabricated thin film devices when they’re pulled away from the original wafer.

The use of mechanical peeling in water rather than etching solution provides a number of benefits in the manufacturing scheme. Among them are simplicity, controllability, and cost effectiveness, says Chi Hwan Lee, assistant professor at Purdue University and coauthor of the paper chronicling the research.

If you have the time, do read Dexter’s piece. He always adds something that seems obvious in retrospect but wasn’t until he wrote it.

2015 daguerreotype exhibit follows problematic 2005 show

In 2005, curators had a horrifying experience when historical images (daguerreotypes) deteriorated as the 150-year-old images were being displayed in an exhibit titled “Young America.” Some 25 of the photographs were affected, five of them sustaining critical damage. The debacle occasioned a research project involving conservators, physicists, and nanotechnology (see my Jan. 10, 2013 posting for more about the 2005 exhibit and resulting research project).

A new daguerreotype exhibit currently taking place showcases the results of that research according to a Nov. 13, 2015 University of Rochester news release,

In 1839, Louis-Jacques-Mandé Daguerre unveiled one of the world’s first successful photographic mediums: the daguerreotype. The process transformed the human experience by providing a means to capture light and record people, places, and events. The University of Rochester is leading groundbreaking nanotechnology research that explores the extraordinary qualities of this photographic process. A new exhibition in Rush Rhees Library showcases the results of this research, while bridging the gap between the sciences and the humanities. …

… From 2010-2014, a National Science Foundation grant supported nanotechnology research conducted by two University of Rochester scientists—Nicholas Bigelow, Lee A. DuBridge Professor of Physics, and Ralph Wiegandt, visiting research scientist and conservator—who explored how environment impacts the survival of these unique, non-reproducible images. In addition to conservation science and cultural research, Bigelow and Wiegandt are also investigating ways in which the chemical and physical processes used to create daguerreotypes can influence modern nanofabrication and nanotechnology.

“The daguerreotype should be considered one of humankind’s most disruptive technological advances,” Bigelow and Wiegandt said. “Not only was it the first successful imaging medium, it was also the first truly engineered nanotechnology. The daguerreotype was a prescient catalyst to the ensuing cascade of discoveries in physics and chemistry over the latter half of the 19th century and into the 20th.”

Blending the past with the future, the exhibition displays the first known daguerreotype of a Rochester graduating class (1853) alongside a 2015 daguerreotype of current University President Joel Seligman, created by Rochester daguerreotypist Irving Pobboravsky.

Both Bigelow and Wiegandt are mentioned in the 2013 posting describing the research project’s inception.

For anyone who’s in the area of New York state where the University of Rochester is located, the exhibit will run until February 29, 2016 in the Friedlander Lobby of Rush Rhees Library. Plus, there’s this from the news release,

A special presentation about the scientific advances surrounding the daguerreotype and their relationship to cultural preservation will be led by Bigelow, Wiegandt, and Jim Kuhn, assistant dean for Special Collections and Preservation, on December 14 from 7-9 p.m. in the Hawkins-Carlson Room of Rush Rhees Library. For more information visit: http://www.library.rochester.edu/event/daguerreotype-exhibition or call (585).

There’s no indication that the special presentation will be livestreamed or recorded and made available at a later date.

Reactions to Canada’s 2015 election Liberal majority and speculations about science and the new cabinet

The euphoria is dying down and, on balance, there was surprisingly little of it; the tone was more one of optimism laced with caution on the occasion of the Conservatives’ defeat at the hands of the Liberal party in the Oct. 19, 2015 Canadian federal election.

Of course the big question for me and other Canadian science bloggers is:

What about science in the wake of the 2015 Liberal majority government in Canada?

I’ve gathered bits and pieces from various published opinions on the topic. First, there’s Brian Owen, a freelance writer in St. Stephen, New Brunswick (there’s more about him in my Aug. 18, 2015 posting about the upcoming Canadian Science Policy Conference to be held Nov. 25-27, 2015 in Ottawa [Canada’s capital]), writing in an Oct. 20, 2015 opinion piece for ScienceInsider,

Many Canadian scientists are celebrating the result of yesterday’s federal election, which saw Stephen Harper’s Conservative government defeated after nearly 10 years in power.

The center-left Liberal Party under Justin Trudeau won an unexpected majority government, taking 184 of the 338 seats in the House of Commons. The Conservatives will form the opposition with 99 seats, while the left-leaning New Democratic Party (NDP) fell to third place with just 44 seats.

“Many scientists will be pleased with the outcome,” says Jim Woodgett, director of research at the Lunenfeld-Tanenbaum Research Institute at Mount Sinai Hospital in Toronto. “The Liberal party has a strong record in supporting science.” [emphasis mine]

I don’t think the Liberal record is that great. If I understand it rightly, the first muzzle placed on government scientists was applied by a then-Liberal government to Health Canada. That’s right, the Conservatives got the idea from the Liberals, and it’s not the only one they got from that source. Omnibus bills were also pioneered by the Liberal government.

However, hope still springs in my bosom and others’, as can be seen in an Oct. 21, 2015 essay in the Guardian (UK newspaper) by Michael Halpern of the Center for Science and Democracy at the US-based Union of Concerned Scientists (Note: Links have been removed),

There was a palpable outpouring of relief from Canadian scientists as the Liberal Party won a majority on Monday night [Oct. 19, 2015], bringing to an end nine years of escalating hostility by the Harper government towards its own research base. Drastic cuts to funding and constraints on scientific freedom have significantly damaged Canadian research and its capacity to develop science-based public health and environmental policies.

Eight hundred scientists from thirty-two countries wrote an open letter urging the prime minister to ease restrictions on scientists and data. In October 2014, a Ryerson University professor wrote in Science magazine that the election presented an “opportunity to reboot the federal government’s controversial approach to science policy and research.”

All of this advocacy worked. Science became a major campaign issue during the election. There were all-party debates on science policy and extensive media coverage. The Green, Liberal and NDP platforms included significant commitments to restore science to its rightful place in society and public policy.

“We’ll reverse the $40 million cut that Harper made to our federal ocean science and monitoring programs,” said Liberal leader Justin Trudeau at a September campaign stop. “The war on science ends with the liberal government.” In tweet after tweet after tweet, opposition candidates argued that they were best positioned to defend scientific integrity.

Now that it’s been elected with a healthy majority, the Liberal Party says it will make data openly available, unmuzzle scientists, bring back the long form census, appoint a chief science officer, and make the agency Statistics Canada fully independent.

In the United States, many celebrated the end of the Bush administration in 2008, thinking that its restrictions on science would evaporate the moment that the Obama administration took office. It wasn’t true. There has been significant progress in protecting scientists from political influence. But the public has still lacked access to scientific information on multiple environmental and public health issues.

So who will keep watch over the new government, as it’s forced to choose among its many priorities? Canadian unions, scientists, policy experts and activists need to continue to push for real change. It’s up to those who care most about science and democracy to keep Trudeau on his toes.

Returning to Owen’s article, there are more pledges from the new Liberal government,

… Trudeau has also said his party will embrace “evidence based policy” and “data-driven decision-making,”  do more to address climate change, protect endangered species, and review the environmental impact of major energy and development projects.

Woodgett welcomes those pledges, but warns that they would not address the larger issue of what he sees as the government’s neglect of basic research funding. “I hope we will see less short-term thinking and much greater support for discovery research going forward,” he says. “We are at serious risk of a lost generation of scientists and it’s critical that younger researchers are given a clear indication that Canada is open to their ideas and needs.”

Science advocates plan to watch the new government closely to ensure it lives up to its promises. “Great to see Harper gone, but another majority is an awfully big blank cheque,” wrote Michael Rennie, a freshwater ecologist at Lakehead University in Thunder Bay, on Twitter.

David Bruggeman in a cautionary Oct. 22, 2015 posting (on his Pasco Phronesis blog) sums things up in this title: Will New Canadian Government Be The Change Its Scientists Can Believe In? (Note: Links have been removed),

… Only one of the four party representatives at the recent science and technology debate managed to win a seat in the upcoming Parliament.  MP Marc Garneau will remain in Parliament, and his experience in the Canadian Space Agency means he may be able to better manage the changes sought in official government (as opposed to Parliamentary) policy.

The Conservatives will now shift to being the Official Opposition (the largest party not in power). However, the current cabinet minister responsible for science and technology, and at least two of his predecessors, lost their seats. The party that was the Official Opposition, the New Democratic Party (NDP), lost several seats, falling back to third-largest party in Parliament. (However, they appear to be a more natural ally for the Liberals than the Conservatives.) MP Kennedy Stewart, who has championed the establishment of a Parliamentary Science Officer, barely retained his seat. He will likely remain as the NDP science critic.

… While the policies on media access to government scientists are part of this trend, they may not be the first priority for Trudeau and his cabinet.  It may turn out to be something similar to the transition from the Bush to the Obama Administrations.  Changes to policies concerning so-called political interference with science were promised, but have not gotten the thorough commitment from the Obama Administration that some would have liked and/or expected.

As David notes, we lost significant critical voices when those Conservative MPs failed to get re-elected.

In a post-election Oct. 24, 2015 posting, Sarah Boon offers a call to action on her Watershed Moments blog (Note: Links have been removed),

I think it’s important to realize, however, that the work doesn’t end here.

Canadian scientists found their voice in the run up to the election, but they’d better not lose it now.

In a pre-election editorial on the Science Borealis Blog, Pascal Lapointe suggested that – after the election – the organizations that worked so hard to make science an election issue should join forces and keep pushing the government to keep science as a top priority. These groups include Evidence for Democracy, the Science Integrity Project, Get Science Right, Our Right to Know, the Professional Institute of the Public Service of Canada, and more.

Finally, there’s an Oct. 20, 2015 posting by Canadians Julia Whidden and Rachel Skubel on the Southern Fried Science blog explaining the Canadian election to American colleagues; it begins in a facetious style which, thankfully and quickly, switches to informative and opinionated (Note: They have nothing good to say about the Conservatives and science),

Up until this past year, the thought of Canadian politics had probably never crossed your mind. For some of you, your introduction to the topic may have been via the astute criticisms of John Oliver published this past weekend. His YouTube video, currently skyrocketing at just under 3 million views in less than 48 hours, may have even been the introduction to Canadian politics for some Canadians. Let’s face it: in comparison to the flashy and sometimes trashy race of our neighbors to the south (ahem, you Americans), Canadian politics are usually tame, boring, and dry. …

We present a few major issues related to marine science and conservation that Harper either dragged down or destroyed, and the complementary response by our new PM Trudeau from his platform. …

Based on the Liberal party’s platform and their statements throughout the last year, here’s a taste of the contrasts between old and new:

Marine Protected Areas

Harper/Conservatives: Committed in 2011 to protecting 10% of Canada’s marine and coastal areas by 2020 under the International Convention on Biodiversity, but is lagging at a meager 1.3% – and only 0.11% is fully closed to “extractive activities.” [A chart comparing MPA percentages appeared here.] Proposed MPAs have been stalled by inaction, failure to cooperate by the federal government or stakeholders, and overall a system which needs an infusion of resources – not cuts – to meet ambitious goals.

Trudeau/Liberals: “We will increase the amount of Canada’s marine and coastal areas that are protected from 1.3 percent to 5 percent by 2017, and 10 percent by 2020.” (Liberal Party’s Protecting our Oceans mandate)

There is a bit of misinformation in the Southern Fried Science posting,

The National Research Council (NRC) is Canada’s equivalent of America’s National Science Foundation (NSF).

The closest analogue to the US National Science Foundation is Canada’s Tri-Council Agencies comprised of the Natural Sciences and Engineering Research Council (NSERC), the Social Sciences and Humanities Research Council (SSHRC), and the Canadian Institutes of Health Research (CIHR).

Next step: appointing a cabinet

Oddly, I haven’t found anyone speculating as to what will happen to science when Justin Trudeau announces his cabinet. He has already stated that his cabinet will be significantly smaller than Stephen Harper’s cabinet of 39 ministers. Numbers for the new cabinet range from 25 to 28 to 30. The largest proposed Trudeau cabinet (30) is almost 25% smaller than the previous one. Clearly, some ministries will have to go or be combined with other ones.

I’m guessing that Science, which is considered a junior ministry, will be rolled into another ministry, possibly Industry, to be renamed Industry and Science. Alternatively, by appointing a Chief Science Advisor, Trudeau could trumpet the new importance of science with this special status while disbursing the Science Ministry’s responsibilities amongst a variety of ministries.

In any event, I look forward to finding out later this week (Nov. 2 – 6, 2015) whether either or neither of my predictions comes true.

*Canadian cabinet update: To see how I got it both wrong and right see my Nov.4, 2015 posting.

ETA Nov. 5, 2015: I found one more piece for this roundup, an Oct. 22, 2015 article by Helen Carmichael for Chemistry World published by the UK’s Royal Society of Chemistry (Note: Links have been removed),

There will likely be a shift in the Canadian government’s target research areas towards areas such as green energy and away from fossil fuels, observers say. In addition, they expect that the Trudeau government will be more hands off when it comes to the science that it funds – giving money to the granting councils and trusting them to disburse those funds via peer review. …

‘The way that science is funded – the politicisation of science – will be less of an issue for the next while,’ says John Brennan, a chemistry and chemical biology professor at McMaster University in Ontario, Canada, who directs the school’s Biointerfaces Institute.

Trudeau and his Liberal party have promised to appoint a chief science officer similar to the national science adviser position that the Harper government eliminated in 2008. Canada’s new chief science officer would report to the prime minister and ensure that government science is available to the public, that all the country’s scientists are able to speak freely about their work and that scientific analyses are considered when the Canadian government develops policy. The Trudeau government has also said that it will create a central online portal for government-funded scientific research to enable greater public access.

The Liberals offer quite a different vision for the Canadian economy than the Conservatives, planning to run short-term budget deficits to increase government spending on public infrastructure, and to return the country to a balanced budget in 2019–20. The party has committed to C$25 million (£12 million) in funding for National Parks and reversing budget cuts to government ocean science and monitoring programmes.

In addition to proposing initiatives to increase business investment in research and development, the Liberals want a tax credit, and will invest C$200 million annually to support innovation in the forestry, fisheries, mining, energy and agriculture sectors. Public science is particularly important in Canada, where the private sector funds a much lower proportion of research than most industrialised nations.

Provincial governments own Canada’s natural resources, with fossil fuel production largely in Alberta and Saskatchewan. Energy production is a major part of the Canadian economy. Trudeau has committed to set up a C$2 billion fund to help the country transition to a low carbon economy, but meanwhile he is not expected to withdraw support for the proposed Alberta to Texas Keystone XL oil pipeline.

Incoming president and chief executive of the Chemistry Industry Association of Canada (CIAC), Bob Masterson, recently told Chemistry World that rapid policy decisions by Canadian governments and retailers, without sufficient consultation with industry, are not advantageous or based on sound science. He described missed opportunities for the Canadian chemical industry to engage with regulators, coupled with a lack of coordination between various tiers of Canada’s national and regional regulations. On key issues, such as Canada’s Chemical Management Plan, global trade and maintaining competitive corporate tax rates, Masterson says the CIAC believes the liberal positions represent continuity rather than change from the previous government.

Carmichael’s piece offers a good overview and is one of only *three* analyses I’ve found (the others* being from David Bruggeman *and Michael Halpern*) written by people who are not navel-gazing.

*’two’ changed to ‘three’, ‘other’ changed to ‘others’, and ‘and Michael Halpern’ added 1250 PST on Nov. 5, 2015.

Funding trends for US synthetic biology efforts

Less than 1% of total US federal funding for synthetic biology is dedicated to risk research according to a Sept. 16, 2015 Woodrow Wilson International Center for Scholars news release on EurekAlert,

A new analysis by the Synthetic Biology Project at the Wilson Center finds the Defense Department and its Defense Advanced Research Projects Agency (DARPA) fund much of the U.S. government’s research in synthetic biology, with less than 1 percent of total federal funding going to risk research.

The report, U.S. Trends in Synthetic Biology Research, finds that between 2008 and 2014, the United States invested approximately $820 million in synthetic biology research. In that time period, the Defense Department became a key funder of synthetic biology research. DARPA’s investments, for example, increased from near zero in 2010 to more than $100 million in 2014 – more than three times the amount spent by the National Science Foundation (NSF).

The Wilson Center news release can also be found here on the Center’s report publication page where it goes on to provide more detail and where you can download the report,

“The increase in DARPA research spending comes as NSF is winding down its initial investment in the Synthetic Biology Engineering Research Center, or SynBERC,” says Dr. Todd Kuiken, senior program associate with the project. “After the SynBERC funding ends next year, it is unclear if there will be a dedicated synthetic biology research program outside of the Pentagon. There is also little investment addressing potential risks and ethical issues, which can affect public acceptance and market growth as the field advances.”

The new study found that less than one percent of the total U.S. funding is focused on synthetic biology risk research and approximately one percent addresses ethical, legal, and social issues.
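For a sense of scale, here is the back-of-envelope arithmetic those percentages imply; this is my own calculation from the figures quoted above, since the report excerpt does not give exact dollar splits.

```python
# Rough dollar amounts implied by the quoted figures (upper bounds only).
total_2008_2014 = 820e6        # approximately $820 million invested in total
risk_share_max = 0.01          # "less than one percent" on risk research
elsi_share = 0.01              # "approximately one percent" on ethical,
                               # legal, and social issues (ELSI)

print(f"risk research: under ${total_2008_2014 * risk_share_max / 1e6:.1f} million")
print(f"ELSI research: about ${total_2008_2014 * elsi_share / 1e6:.1f} million")
# => under $8.2 million and about $8.2 million, out of roughly $820 million
#    spent between 2008 and 2014
```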

Internationally, research funding is increasing. Last year, research investments by the European Commission and research agencies in the United Kingdom exceeded non-defense spending in the United States, the report finds.

The research spending comes at a time of growing interest in synthetic biology, particularly surrounding the potential presented by new gene-editing techniques. Recent research by the industry group SynBioBeta indicated that, so far in 2015, synthetic biology companies raised half a billion dollars – more than the total investments in 2013 and 2014 combined.

In a separate Woodrow Wilson International Center for Scholars Sept. 16, 2015 announcement about the report, an upcoming event notice was included,

Save the date: On Oct. 7, 2015, the Synthetic Biology Project will be releasing a new report on synthetic biology and federal regulations. More details will be forthcoming, but the report release will include a noon event [EST] at the Wilson Center in Washington, DC.

I haven’t been able to find any more information about this proposed report launch but you may want to check the Synthetic Biology Project website for details as they become available. ETA Oct. 1, 2015: The new report titled: Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? will be launched on Oct. 15, 2015 according to an Oct. 1, 2015 notice,

As more applications based on synthetic biology come to market, are the existing federal regulations adequate to address the risks posed by this emerging technology?

Please join us for the release of our new report, Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? Panelists will discuss how synthetic biology applications would be regulated by the U.S. Coordinated Framework for Regulation of Biotechnology, how this would affect the market pathway of these applications and whether the existing framework will protect human health and the environment.

A light lunch will be served.

Speakers

Lynn Bergeson, report author; Managing Partner, Bergeson & Campbell

David Rejeski, Director, Science and Technology Innovation Program

Thursday, October 15, 2015
12:00pm – 2:00pm

6th Floor Board Room

Directions

Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania Ave., NW
Washington, D.C. 20004

Phone: 202.691.4000


$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards run for up to five years and range from $500,000 to $1.6 million per site per year. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal to link the individual facilities’ websites to provide a unified entry point to the user community of overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.

Carbon nanotubes sense spoiled food

[Animated .gif illustrating the carbon nanotube spoiled-food sensor. Courtesy: MIT (Massachusetts Institute of Technology)]

I love this .gif; it says a lot without a word. However, for details you need words, and here’s what an April 15, 2015 news item on Nanowerk has to say about the research illustrated by the .gif,

MIT [Massachusetts Institute of Technology] chemists have devised an inexpensive, portable sensor that can detect gases emitted by rotting meat, allowing consumers to determine whether the meat in their grocery store or refrigerator is safe to eat.

The sensor, which consists of chemically modified carbon nanotubes, could be deployed in “smart packaging” that would offer much more accurate safety information than the expiration date on the package, says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT.

An April 14, 2015 MIT news release (also on EurekAlert), which originated the news item, offers more from Dr. Swager,

It could also cut down on food waste, he adds. “People are constantly throwing things out that probably aren’t bad,” says Swager, who is the senior author of a paper describing the new sensor this week in the journal Angewandte Chemie.

This latest study builds on previous work in Swager’s lab (Note: Links have been removed),

The sensor is similar to other carbon nanotube devices that Swager’s lab has developed in recent years, including one that detects the ripeness of fruit. All of these devices work on the same principle: Carbon nanotubes can be chemically modified so that their ability to carry an electric current changes in the presence of a particular gas.

In this case, the researchers modified the carbon nanotubes with metal-containing compounds called metalloporphyrins, which contain a central metal atom bound to several nitrogen-containing rings. Hemoglobin, which carries oxygen in the blood, is a metalloporphyrin with iron as the central atom.

For this sensor, the researchers used a metalloporphyrin with cobalt at its center. Metalloporphyrins are very good at binding to nitrogen-containing compounds called amines. Of particular interest to the researchers were the so-called biogenic amines, such as putrescine and cadaverine, which are produced by decaying meat.

When the cobalt-containing porphyrin binds to any of these amines, it increases the electrical resistance of the carbon nanotube, which can be easily measured.

“We use these porphyrins to fabricate a very simple device where we apply a potential across the device and then monitor the current. When the device encounters amines, which are markers of decaying meat, the current of the device will become lower,” says lead author Sophie Liu.
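Liu’s description boils down to a very simple readout protocol: apply a fixed potential, monitor the current, and flag spoilage when the inferred resistance climbs well above the fresh-meat baseline. Here is a minimal sketch of that logic; it is my own illustration, not MIT’s device firmware, and the applied voltage, threshold factor, and example currents are invented for demonstration.

```python
# Minimal sketch of the chemiresistive readout logic described above
# (illustrative only; the voltage and threshold are assumed values).

APPLIED_VOLTAGE = 0.1    # volts across the nanotube film (assumed)
SPOILAGE_FACTOR = 1.5    # flag if resistance rises 50% over the baseline

def resistance(current_amps: float) -> float:
    """Ohm's law: R = V / I for the nanotube film under a fixed potential."""
    return APPLIED_VOLTAGE / current_amps

def is_spoiled(baseline_current: float, current_now: float) -> bool:
    """Amines binding to the porphyrin raise the film's resistance, so the
    measured current drops; compare against the fresh-meat baseline."""
    return resistance(current_now) > SPOILAGE_FACTOR * resistance(baseline_current)

# Example: the current falls from 10 microamps (fresh) to 5 microamps.
print(is_spoiled(10e-6, 5e-6))   # True: the resistance has doubled
```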

In this study, the researchers tested the sensor on four types of meat: pork, chicken, cod, and salmon. They found that when refrigerated, all four types stayed fresh over four days. Left unrefrigerated, the samples all decayed, but at varying rates.

There are other sensors that can detect the signs of decaying meat, but they are usually large and expensive instruments that require expertise to operate. “The advantage we have is these are the cheapest, smallest, easiest-to-manufacture sensors,” Swager says.

“There are several potential advantages in having an inexpensive sensor for measuring, in real time, the freshness of meat and fish products, including preventing foodborne illness, increasing overall customer satisfaction, and reducing food waste at grocery stores and in consumers’ homes,” says Roberto Forloni, a senior science fellow at Sealed Air, a major supplier of food packaging, who was not part of the research team.

The new device also requires very little power and could be incorporated into a wireless platform Swager’s lab recently developed that allows a regular smartphone to read output from carbon nanotube sensors such as this one.

The funding sources are interesting, something I’ve been noticing with increasing frequency these days (from the news release),

The researchers have filed for a patent on the technology and hope to license it for commercial development. The research was funded by the National Science Foundation and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.

Here’s a link to and a citation for the paper,

Single-Walled Carbon Nanotube/Metalloporphyrin Composites for the Chemiresistive Detection of Amines and Meat Spoilage by Sophie F. Liu, Alexander R. Petty, Dr. Graham T. Sazama, and Timothy M. Swager. Angewandte Chemie International Edition DOI: 10.1002/anie.201501434 Article first published online: 13 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

There are other posts here about the quest to create food sensors, including a Sept. 26, 2013 piece featuring a critique (by another blogger) of food sensors that could cost more than the items they are meant to protect, a problem Swager claims to have overcome in an April 17, 2015 article by Ben Schiller for Fast Company (Note: Links have been removed),

Swager has set up a company to commercialize the technology and he expects to do the first demonstrations to interested clients this summer. The first applications are likely to be for food workers working with meat and fish, but there’s no reason why consumers shouldn’t get their own devices in due time.

There are efforts to create visual clues for food status. But Swager says his method is better because it doesn’t rely on perception: it produces hard data that can be logged and tracked. And it also has potential to be very cheap.

“The resistance method is a game-changer because it’s two to three orders of magnitude cheaper than other technology. It’s hard to imagine doing this cheaper,” he says.