Monthly Archives: August 2017

Sugar in your bones might be better for you than you think

These days sugar is often viewed as a source of health problems, but there is one instance where it may be useful: bone regeneration. From a June 19, 2017 news item on Nanowerk (Note: A link has been removed),

There hasn’t been a gold standard for how orthopaedic spine surgeons promote new bone growth in patients, but now Northwestern University scientists have designed a bioactive nanomaterial that is so good at stimulating bone regeneration it could become the method surgeons prefer.

While studied in an animal model of spinal fusion, the method for promoting new bone growth could translate readily to humans, the researchers say, where an aging but active population in the U.S. is increasingly receiving this surgery to treat pain due to disc degeneration, trauma and other back problems. Many other procedures could benefit from the nanomaterial, ranging from repair of bone trauma to treatment of bone cancer to bone growth for dental implants.

“Regenerative medicine can improve quality of life by offering less invasive and more successful approaches to promoting bone growth,” said Samuel I. Stupp, who developed the new nanomaterial. “Our method is very flexible and could be adapted for the regeneration of other tissues, including muscle, tendons and cartilage.”

Stupp is director of Northwestern’s Simpson Querrey Institute for BioNanotechnology and the Board of Trustees Professor of Materials Science and Engineering, Chemistry, Medicine and Biomedical Engineering.

For the interdisciplinary study, Stupp collaborated with Dr. Wellington K. Hsu, associate professor of orthopaedic surgery, and Erin L. K. Hsu, research assistant professor of orthopaedic surgery, both at Northwestern University Feinberg School of Medicine. The husband-and-wife team is working to improve clinically employed methods of bone regeneration.

Sugar molecules on the surface of the nanomaterial provide its regenerative power. The researchers studied in vivo the effect of the “sugar-coated” nanomaterial on the activity of a clinically used growth factor, called bone morphogenetic protein 2 (BMP-2). They found the amount of protein needed for a successful spinal fusion was reduced to an unprecedented level: 100 times less of BMP-2 was needed. This is very good news, because the growth factor is known to cause dangerous side effects when used in the amounts required to regenerate high-quality bone, and it is expensive as well.

A June 19, 2017 Northwestern University news release by Megan Fellman, which originated the news item, tells the rest of the story,

Stupp’s biodegradable nanomaterial functions as an artificial extracellular matrix, which mimics what cells in the body usually interact with in their surroundings. BMP-2 activates certain types of stem cells and signals them to become bone cells. The Northwestern matrix, which consists of tiny nanoscale filaments, binds the protein by molecular design in the way that natural sugars bind it in our bodies and then slowly releases it when needed, instead of in one early burst, which can contribute to side effects.

To create the nanostructures, the research team led by Stupp synthesized a specific type of sugar that closely resembles those used by nature to activate BMP-2 when cell signaling is necessary for bone growth. Rapidly moving flexible sugar molecules displayed on the surface of the nanostructures “grab” the protein in a specific spot that is precisely the same one used in biological systems when it is time to deploy the signal. This potentiates the bone-growing signals to a surprising level that surpasses even the naturally occurring sugar polymers in our bodies.

In nature, the sugar polymers are known as sulfated polysaccharides, which have super-complex structures impossible to synthesize at the present time with chemical techniques. Hundreds of proteins in biological systems are known to have specific domains to bind these sugar polymers in order to activate signals. Such proteins include those involved in the growth of blood vessels, cell recruitment and cell proliferation, all very important biologically in tissue regeneration. Therefore, the approach of the Stupp team could be extended to other regenerative targets.

Spinal fusion is a common surgical procedure that joins adjacent vertebrae together using a bone graft and growth factors to promote new bone growth, which stabilizes the spine. The bone used in the graft can come from the patient’s pelvis — an invasive procedure — or from a bone bank.

“There is a real need for a clinically efficacious, safe and cost-effective way to form bone,” said Wellington Hsu, a spine surgeon. “The success of this nanomaterial makes me excited that every spine surgeon may one day subscribe to this method for bone graft. Right now, if you poll an audience of spine surgeons, you will get 15 to 20 different answers on what they use for bone graft. We need to standardize choice and improve patient outcomes.”

In the in vivo portion of the study, the nanomaterial was delivered to the spine using a collagen sponge. This is the way surgeons currently deliver BMP-2 clinically to promote bone growth.

The Northwestern research team plans to seek approval from the Food and Drug Administration to launch a clinical trial studying the nanomaterial for bone regeneration in humans.

“We surgeons are looking for optimal carriers for growth factors and cells,” Wellington Hsu said. “With its numerous binding sites, the long filaments of this new nanomaterial are more successful than existing carriers in releasing the growth factor when the body is ready. Timing is critical for success in bone regeneration.”

In the new nanomaterial, the sugars are displayed in a scaffold built from self-assembling molecules known as peptide amphiphiles, first developed by Stupp 15 years ago. These synthetic molecules have been essential in his work on regenerative medicine.

“We focused on bone regeneration to demonstrate the power of the sugar nanostructure to provide a big signaling boost,” Stupp said. “With small design changes, the method could be used with other growth factors for the regeneration of all kinds of tissues. One day we may be able to fully do away with the use of growth factors made by recombinant biotechnology and instead empower the natural ones in our bodies.”

Here’s a link to and a citation for the paper,

Sulfated glycopeptide nanostructures for multipotent protein activation by Sungsoo S. Lee, Timmy Fyrner, Feng Chen, Zaida Álvarez, Eduard Sleep, Danielle S. Chun, Joseph A. Weiner, Ralph W. Cook, Ryan D. Freshman, Michael S. Schallmo, Karina M. Katchko, Andrew D. Schneider, Justin T. Smith, Chawon Yun, Gurmit Singh, Sohaib Z. Hashmi, Mark T. McClendon, Zhilin Yu, Stuart R. Stock, Wellington K. Hsu, Erin L. Hsu, & Samuel I. Stupp. Nature Nanotechnology 12, 821–829 (2017) doi:10.1038/nnano.2017.109 Published online 19 June 2017

This paper is behind a paywall.

Neuristors and brainlike computing

As you might suspect, a neuristor is based on a memristor. (For a description of a memristor there’s this Wikipedia entry, and you can search this blog with the tags ‘memristor’ and ‘neuromorphic engineering’ for more here.)

Being new to neuristors, I needed a little more information before reading the latest news and found this Dec. 24, 2012 article by John Timmer for Ars Technica (Note: Links have been removed),

Computing hardware is composed of a series of binary switches; they’re either on or off. The other piece of computational hardware we’re familiar with, the brain, doesn’t work anything like that. Rather than being on or off, individual neurons exhibit brief spikes of activity, and encode information in the pattern and timing of these spikes. The differences between the two have made it difficult to model neurons using computer hardware. In fact, the recent, successful generation of a flexible neural system required that each neuron be modeled separately in software in order to get the sort of spiking behavior real neurons display.

But researchers may have figured out a way to create a chip that spikes. The people at HP labs who have been working on memristors have figured out a combination of memristors and capacitors that can create a spiking output pattern. Although these spikes appear to be more regular than the ones produced by actual neurons, it might be possible to create versions that are a bit more variable than this one. And, more significantly, it should be possible to fabricate them in large numbers, possibly right on a silicon chip.

The key to making the devices is something called a Mott insulator. These are materials that would normally be able to conduct electricity, but are unable to because of interactions among their electrons. Critically, these interactions weaken with elevated temperatures. So, by heating a Mott insulator, it’s possible to turn it into a conductor. In the case of the material used here, NbO2, the heat is supplied by resistance itself. By applying a voltage to the NbO2 in the device, it becomes a resistor, heats up, and, when it reaches a critical temperature, turns into a conductor, allowing current to flow through. But, given the chance to cool off, the device will return to its resistive state. Formally, this behavior is described as a memristor.

To get the sort of spiking behavior seen in a neuron, the authors turned to a simplified model of neurons based on the proteins that allow them to transmit electrical signals. When a neuron fires, sodium channels open, allowing ions to rush into a nerve cell, and changing the relative charges inside and outside its membrane. In response to these changes, potassium channels then open, allowing different ions out, and restoring the charge balance. That shuts the whole thing down, and allows various pumps to start restoring the initial ion balance.
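Out of curiosity, I put together a small toy simulation in Python (my own sketch, not the model from the HP Labs paper) of the behaviour Timmer describes: a Mott-style element that flips from insulating to conducting once Joule heating pushes it past a critical temperature, sitting in parallel with a capacitor behind a series resistor. All of the numbers below (threshold temperature, resistances, time constants) are made-up, normalized values chosen only so that the spiking is easy to see,

# Toy relaxation oscillator loosely inspired by the Mott-memristor neuristor
# described above. All parameter values are arbitrary, normalized assumptions,
# not measurements of the NbO2 devices in the paper.

V_IN = 1.0      # supply voltage
R_S = 1.0       # series resistor between the supply and the device/capacitor node
R_INS = 10.0    # device resistance below the critical temperature (insulating)
R_MET = 0.05    # device resistance above the critical temperature (metallic)
C = 1.0         # capacitance at the device node
C_TH = 1.0      # thermal capacitance of the device
TAU_TH = 5.0    # thermal relaxation time (cooling toward ambient)
T_AMB = 0.0     # ambient temperature
T_CRIT = 0.3    # critical temperature of the insulator-to-metal transition

def simulate(t_total=100.0, dt=0.001):
    v, temp = 0.0, T_AMB          # capacitor voltage, device temperature
    spikes, was_metallic = 0, False
    for _ in range(int(t_total / dt)):
        metallic = temp > T_CRIT
        r_dev = R_MET if metallic else R_INS
        if metallic and not was_metallic:
            spikes += 1           # count each insulator-to-metal switching event
        was_metallic = metallic
        i_dev = v / r_dev                                     # current through the device
        dv = ((V_IN - v) / R_S - i_dev) / C                   # charge balance at the node
        dtemp = (v * i_dev) / C_TH - (temp - T_AMB) / TAU_TH  # Joule heating minus cooling
        v += dv * dt
        temp += dtemp * dt
    return spikes

if __name__ == "__main__":
    print("spikes in 100 time units:", simulate())

The point of the toy is simply that a threshold plus heating and cooling is enough to turn a steady supply voltage into repeated spikes, which is the qualitative behaviour the HP Labs team engineered far more carefully in actual hardware.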

Here’s a link to and a citation for the research paper described in Timmer’s article,

A scalable neuristor built with Mott memristors by Matthew D. Pickett, Gilberto Medeiros-Ribeiro, & R. Stanley Williams. Nature Materials 12, 114–117 (2013) doi:10.1038/nmat3510 Published online 16 December 2012

This paper is behind a paywall.

A July 28, 2017 news item on Nanowerk provides an update on neuristors,

A future android brain like that of Star Trek’s Commander Data might contain neuristors, multi-circuit components that emulate the firings of human neurons.

Neuristors already exist today in labs, in small quantities, and to fuel the quest to boost neuristors’ power and numbers for practical use in brain-like computing, the U.S. Department of Defense has awarded a $7.1 million grant to a research team led by the Georgia Institute of Technology. The researchers will mainly work on new metal oxide materials that buzz electronically at the nanoscale to emulate the way human neural networks buzz with electric potential on a cellular level.

A July 28, 2017 Georgia Tech news release, which originated the news item, delves further into neuristors and the proposed work leading to an artificial retina that can learn (!). This was not where I was expecting things to go,

But let’s walk expectations back from the distant sci-fi future into the scientific present: The research team is developing its neuristor materials to build an intelligent light sensor, and not some artificial version of the human brain, which would require hundreds of trillions of circuits.

“We’re not going to reach circuit complexities of that magnitude, not even a tenth,” said Alan Doolittle, a professor at Georgia Tech’s School of Electrical and Computer Engineering. “Also, currently science doesn’t really know yet very well how the human brain works, so we can’t duplicate it.”

Intelligent retina

But an artificial retina that can learn autonomously appears well within reach of the research team from Georgia Tech and Binghamton University. Despite the term “retina,” the development is not intended as a medical implant, but it could be used in advanced image recognition cameras for national defense and police work.

At the same time, it would significantly advance brain-mimicking, or neuromorphic, computing, the research field that takes its cues from what science already does know about how the brain computes in order to develop exponentially more powerful computing.

The retina would be comprised of an array of ultra-compact circuits called neuristors (a word combining “neuron” and “transistor”) that sense light, compute an image out of it and store the image. All three of the functions would occur simultaneously and nearly instantaneously.

“The same device senses, computes and stores the image,” Doolittle said. “The device is the sensor, and it’s the processor, and it’s the memory all at the same time.” A neuristor itself is comprised in part of devices called memristors inspired by the way human neurons work.

Brain vs. PC

That cuts out loads of processing and memory lag time that are inherent in traditional computing.

Take the device you’re reading this article on: Its microprocessor has to tap a separate memory component to get data, then do some processing, tap memory again for more data, process some more, etc. “That back-and-forth from memory to microprocessor has created a bottleneck,” Doolittle said.

A neuristor array breaks the bottleneck by emulating the extreme flexibility of biological nervous systems: When a brain computes, it uses a broad set of neural pathways that flash with enormous data. Then, later, to compute the same thing again, it will use quite different neural paths.

Traditional computer pathways, by contrast, are hardwired. For example, look at a present-day processor and you’ll see lines etched into it. Those are pathways that computational signals are limited to.

The new memristor materials at the heart of the neuristor are not etched, and signals flow through the surface very freely, more like they do through the brain, exponentially increasing the number of possible pathways computation can take. That helps the new intelligent retina compute powerfully and swiftly.

Terrorists, missing children

The retina’s memory could also store thousands of photos, allowing it to immediately match up what it sees with the saved images. The retina could pinpoint known terror suspects in a crowd, find missing children, or identify enemy aircraft virtually instantaneously, without having to trawl databases to correctly identify what is in the images.

Even if you take away the optics, the new neuristor arrays still advance artificial intelligence. Instead of light, a surface of neuristors could absorb massive data streams at once, compute them, store them, and compare them to patterns of other data, immediately. It could even autonomously learn to extrapolate further information, like calculating the third dimension out of data from two dimensions.

“It will work with anything that has a repetitive pattern like radar signatures, for example,” Doolittle said. “Right now, that’s too challenging to compute, because radar information is flying out at such a high data rate that no computer can even think about keeping up.”

Smart materials

The research project’s title acronym CEREBRAL may hint at distant dreams of an artificial brain, but what it stands for spells out the present goal in neuromorphic computing: Cross-disciplinary Electronic-ionic Research Enabling Biologically Realistic Autonomous Learning.

The intelligent retina’s neuristors are based on novel metal oxide nanotechnology materials, unique to Georgia Tech. They allow computing signals to flow flexibly across pathways that are electronic, which is customary in computing, and at the same time make use of ion motion, which is more commonly known from the way batteries and biological systems work.

The new materials have already been created, and they work, but the researchers don’t yet fully understand why.

Much of the project is dedicated to examining quantum states in the materials and how those states help create useful electronic-ionic properties. Researchers will view them by bombarding the metal oxides with extremely bright x-ray photons at the recently constructed National Synchrotron Light Source II.

Grant sub-awardee Binghamton University is located close by, and Binghamton physicists will run experiments and hone them via theoretical modeling.

‘Sea of lithium’

The neuristors are created mainly by the way the metal oxide materials are grown in the lab, which has advantages over building neuristors in a more wired way.

This materials-growing approach is conducive to mass production. Also, though neuristors in general free signals to take multiple pathways, Georgia Tech’s neuristors do it much more flexibly thanks to chemical properties.

“We also have a sea of lithium, and it’s like an infinite reservoir of computational ionic fluid,” Doolittle said. The lithium niobite imitates the way ionic fluid bathes biological neurons and allows them to flash with electric potential while signaling. In a neuristor array, the lithium niobite helps computational signaling move in myriad directions.

“It’s not like the typical semiconductor material, where you etch a line, and only that line has the computational material,” Doolittle said.

Commander Data’s brain?

“Unlike any other previous neuristors, our neuristors will adapt themselves in their computational-electronic pulsing on the fly, which makes them more like a neurological system,” Doolittle said. “They mimic biology in that we have ion drift across the material to create the memristors (the memory part of neuristors).”

Brains are far superior to computers at most things, but not all. Brains recognize objects and do motor tasks much better. But computers are much better at arithmetic and data processing.

Neuristor arrays can meld both types of computing, making them biological and algorithmic at once, a bit like Commander Data’s brain.

The research is being funded through the U.S. Department of Defense’s Multidisciplinary University Research Initiatives (MURI) Program under grant number FOA: N00014-16-R-FO05. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of those agencies.

Fascinating, non?

Canadian science policy news and doings (also: some US science envoy news)

I have a couple of notices from the Canadian Science Policy Centre (CSPC), a Twitter feed, and an article in an online magazine to thank for this bumper crop of news.

 Canadian Science Policy Centre: the conference

The 2017 Canadian Science Policy Conference, to be held Nov. 1 – 3, 2017 in Ottawa, Ontario for the third year in a row, has a super saver rate available until Sept. 3, 2017, according to an August 14, 2017 announcement (received via email).

Time is running out: you have until September 3rd before prices go up from the SuperSaver rate.

Savings off the regular price with the SuperSaver rate:
Up to 26% for General admission
Up to 29% for Academic/Non-Profit Organizations
Up to 40% for Students and Post-Docs

Before giving you the link to the registration page and assuming that you might want to check out what is on offer at the conference, here’s a link to the programme. They don’t seem to have any events celebrating Canada’s 150th anniversary although they do have a session titled, ‘The Next 150 years of Science in Canada: Embedding Equity, Delivering Diversity/Les 150 prochaine années de sciences au Canada:  Intégrer l’équité, promouvoir la diversité‘,

Enhancing equity, diversity, and inclusivity (EDI) in science, technology, engineering and math (STEM) has been described as being a human rights issue and an economic development issue by various individuals and organizations (e.g. OECD). Recent federal policy initiatives in Canada have focused on increasing participation of women (a designated under-represented group) in science through increased reporting, program changes, and institutional accountability. However, the Employment Equity Act requires employers to act to ensure the full representation of the three other designated groups: Aboriginal peoples, persons with disabilities and members of visible minorities. Significant structural and systemic barriers to full participation and employment in STEM for members of these groups still exist in Canadian institutions. Since data support the positive role of diversity in promoting innovation and economic development, failure to capture the full intellectual capacity of a diverse population limits provincial and national potential and progress in many areas. A diverse international panel of experts from designated groups will speak to the issue of accessibility and inclusion in STEM. In addition, the discussion will focus on evidence-based recommendations for policy initiatives that will promote full EDI in science in Canada to ensure local and national prosperity and progress for Canada over the next 150 years.

There’s also this list of speakers. Curiously, I don’t see Kirsty Duncan, Canada’s Minister of Science, on the list, nor do I see any other politicians in the banner for their conference website. This divergence from the CSPC’s usual approach to promoting the conference is interesting.

Moving on to the conference, the organizers have added two panels to the programme (from the announcement received via email),

Friday, November 3, 2017
10:30AM-12:00PM
Open Science and Innovation
Organizer: Tiberius Brastaviceanu
Organization: ACES-CAKE

10:30AM- 12:00PM
The Scientific and Economic Benefits of Open Science
Organizer: Arij Al Chawaf
Organization: Structural Genomics

I think this is the first time there’s been a ‘Tiberius’ on this blog and teamed with the organization’s name, well, I just had to include it.

Finally, here’s the link to the registration page and a page that details travel deals.

Canadian Science Policy Centre: a compendium of documents and articles on Canada’s Chief Science Advisor and Ontario’s Chief Scientist and the pre-2018 budget submissions

The deadline for applications for the Chief Science Advisor position was extended to Feb. 2017 and so far, there’s no word as to who it might be. Perhaps Minister of Science Kirsty Duncan wants to make a splash with a surprise announcement at the CSPC’s 2017 conference? As for Ontario’s Chief Scientist, this move will make the province the third (?) to have a chief scientist, after Québec and Alberta. There is apparently one in Alberta but there doesn’t seem to be a government webpage and his LinkedIn profile doesn’t include this title. In any event, Dr. Fred Wrona is mentioned as Alberta’s Chief Scientist in a May 31, 2017 Alberta government announcement. *ETA Aug. 25, 2017: I missed the Yukon, which has a Senior Science Advisor. The position is currently held by Dr. Aynslie Ogden.*

Getting back to the compendium, here’s the CSPC’s A Comprehensive Collection of Publications Regarding Canada’s Federal Chief Science Advisor and Ontario’s Chief Scientist webpage. Here’s a little background provided on the page,

On June 2nd, 2017, the House of Commons Standing Committee on Finance commenced the pre-budget consultation process for the 2018 Canadian Budget. These consultations provide Canadians the opportunity to communicate their priorities with a focus on Canadian productivity in the workplace and community in addition to entrepreneurial competitiveness. Organizations from across the country submitted their priorities on August 4th, 2017 to be selected as witness for the pre-budget hearings before the Committee in September 2017. The process will result in a report to be presented to the House of Commons in December 2017 and considered by the Minister of Finance in the 2018 Federal Budget.

NEWS & ANNOUNCEMENT

House of Commons- PRE-BUDGET CONSULTATIONS IN ADVANCE OF THE 2018 BUDGET

https://www.ourcommons.ca/Committees/en/FINA/StudyActivity?studyActivityId=9571255

CANADIANS ARE INVITED TO SHARE THEIR PRIORITIES FOR THE 2018 FEDERAL BUDGET

https://www.ourcommons.ca/DocumentViewer/en/42-1/FINA/news-release/9002784

The deadline for pre-2018 budget submissions was Aug. 4, 2017 and they haven’t yet scheduled any meetings although they are to be held in September. (People can meet with the Standing Committee on Finance in various locations across Canada to discuss their submissions.) I’m not sure where the CSPC got their list of ‘science’ submissions but it’s definitely worth checking as there are some odd omissions such as TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), Genome Canada, the Pan-Canadian Artificial Intelligence Strategy, CIFAR (Canadian Institute for Advanced Research), the Perimeter Institute, Canadian Light Source, etc.

Twitter and the Naylor Report under a microscope

This news came from University of British Columbia President Santa Ono’s twitter feed,

 I will join Jon [sic] Borrows and Janet Rossant on Sept 19 in Ottawa at a Mindshare event to discuss the importance of the Naylor Report

The Mindshare event Ono is referring to is being organized by Universities Canada (formerly the Association of Universities and Colleges of Canada) and the Institute for Research on Public Policy. It is titled, ‘The Naylor report under the microscope’. Here’s more from the event webpage,

Join Universities Canada and Policy Options for a lively discussion moderated by editor-in-chief Jennifer Ditchburn on the report from the Fundamental Science Review Panel and why research matters to Canadians.

Moderator

Jennifer Ditchburn
Editor-in-chief, Policy Options

Jennifer Ditchburn is the editor-in-chief of Policy Options, the online policy forum of the Institute for Research on Public Policy. An award-winning parliamentary correspondent, Jennifer began her journalism career at the Canadian Press in Montreal as a reporter-editor during the lead-up to the 1995 referendum. From 2001 to 2006 she was a national reporter with CBC TV on Parliament Hill, and in 2006 she returned to the Canadian Press. She is a three-time winner of a National Newspaper Award: twice in the politics category, and once in the breaking news category. In 2015 she was awarded the prestigious Charles Lynch Award for outstanding coverage of national issues. Jennifer has been a frequent contributor to television and radio public affairs programs, including CBC’s Power and Politics, the “At Issue” panel, and The Current. She holds a bachelor of arts from Concordia University, and a master of journalism from Carleton University.

@jenditchburn

Tuesday, September 19, 2017

12-2 pm

Fairmont Château Laurier, Laurier Room
1 Rideau Street, Ottawa

rsvp@univcan.ca

I can’t tell if they’re offering lunch or if there is a cost associated with this event so you may want to contact the organizers.

As for the Naylor report, I posted a three-part series on June 8, 2017, which features my comments and the other comments I was able to find on the report:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

One piece not mentioned in my three-part series is Paul Wells’ provocatively titled June 29, 2017 article for Maclean’s magazine, Why Canadian scientists aren’t happy (Note: Links have been removed),

Much hubbub this morning over two interviews Kirsty Duncan, the science minister, has given the papers. The subject is Canada’s Fundamental Science Review, commonly called the Naylor Report after David Naylor, the former University of Toronto president who was its main author.

Other authors include BlackBerry founder Mike Lazaridis, who has bankrolled much of the Waterloo renaissance, and Canadian Nobel physicist Arthur McDonald. It’s as blue-chip as a blue-chip panel could be.

Duncan appointed the panel a year ago. It’s her panel, delivered by her experts. Why does it not seem to be… getting anywhere? Why does it seem to have no champion in government? Therein lies a tale.

Note, first, that Duncan’s interviews—her first substantive comment on the report’s recommendations!—come nearly three months after its April release, which in turn came four months after Duncan asked Naylor to deliver his report, last December. (By March I had started to make fun of the Trudeau government in print for dragging its heels on the report’s release. That column was not widely appreciated in the government, I’m told.)

Anyway, the report was released, at an event attended by no representative of the Canadian government. Here’s the gist of what I wrote at the time:

 

Naylor’s “single most important recommendation” is a “rapid increase” in federal spending on “independent investigator-led research” instead of the “priority-driven targeted research” that two successive federal governments, Trudeau’s and Stephen Harper’s, have preferred in the last 8 or 10 federal budgets.

In English: Trudeau has imitated Harper in favouring high-profile, highly targeted research projects, on areas of study selected by political staffers in Ottawa, that are designed to attract star researchers from outside Canada so they can bolster the image of Canada as a research destination.

That’d be great if it wasn’t achieved by pruning budgets for the less spectacular research that most scientists do.

Naylor has numbers. “Between 2007-08 and 2015-16, the inflation-adjusted budgetary envelope for investigator-led research fell by 3 per cent while that for priority-driven research rose by 35 per cent,” he and his colleagues write. “As the number of researchers grew during this period, the real resources available per active researcher to do investigator-led research declined by about 35 per cent.”

And that’s not even taking into account the way two new programs—the $10-million-per-recipient Canada Excellence Research Chairs and the $1.5 billion Canada First Research Excellence Fund—are “further concentrating resources in the hands of smaller numbers of individuals and institutions.”

That’s the context for Duncan’s remarks. In the Globe, she says she agrees with Naylor on “the need for a research system that promotes equity and diversity, provides a better entry for early career researchers and is nimble in response to new scientific opportunities.” But she also “disagreed” with the call for a national advisory council that would give expert advice on the government’s entire science, research and innovation policy.

This is an asinine statement. When taking three months to read a report, it’s a good idea to read it. There is not a single line in Naylor’s overlong report that calls for the new body to make funding decisions. Its proposed name is NACRI, for National Advisory Council on Research and Innovation. A for Advisory. Its responsibilities, listed on Page 19 if you’re reading along at home, are restricted to “advice… evaluation… public reporting… advice… advice.”

Duncan also didn’t promise to meet Naylor’s requested funding levels: $386 million for research in the first year, growing to $1.3 billion in new money in the fourth year. That’s a big concern for researchers, who have been warning for a decade that two successive governments—Harper’s and Trudeau’s—have been more interested in building new labs than in ensuring there’s money to do research in them.

The minister has talking points. She gave the same answer to both reporters about whether Naylor’s recommendations will be implemented in time for the next federal budget. “It takes time to turn the Queen Mary around,” she said. Twice. I’ll say it does: She’s reacting three days before Canada Day to a report that was written before Christmas. Which makes me worry when she says elected officials should be in charge of being nimble.

Here’s what’s going on.

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

A government that consistently buys into the market for intellectual capital at the very top of the price curve is a factory for producing white elephants. But don’t take my word for it. Ask Geoffrey Hinton [University of Toronto’s Geoffrey Hinton, a Canadian leader in machine learning].

“There is a lot of pressure to make things more applied; I think it’s a big mistake,” he said in 2015. “In the long run, curiosity-driven research just works better… Real breakthroughs come from people focusing on what they’re excited about.”

I keep saying this, like a broken record. If you want the science that changes the world, ask the scientists who’ve changed it how it gets made. This government claims to be interested in what scientists think. We’ll see.

The article is incisive and acerbic; you may want to make time to read it in its entirety.

Getting back to the ‘The Naylor report under the microscope’ event, I wonder if anyone will be as tough and direct as Wells. Going back even further, I wonder if this is why there’s no mention of Duncan as a speaker at the conference. It could go either way: surprise announcement of a Chief Science Advisor, as I first suggested, or avoidance of a potentially angry audience.

For anyone curious about Geoffrey Hinton, there’s more here in my March 31, 2017 post (scroll down about 20% of the way) and for more about the 2017 budget and allocations for targeted science projects there’s my March 24, 2017 post.

US science envoy quits

An Aug. 23, 2017 article by Matthew Rozsa for salon.com notes the resignation of one of the US science envoys,

President Donald Trump’s infamous response to the Charlottesville riots — namely, saying that both sides were to blame and that there were “very fine people” marching as white supremacists — has prompted yet another high profile resignation from his administration.

Daniel M. Kammen, who served as a science envoy for the State Department and focused on renewable energy development in the Middle East and Northern Africa, submitted a letter of resignation on Wednesday. Notably, the first letter of each paragraph spelled out I-M-P-E-A-C-H. That followed a joint letter of resignation from the President’s Committee on Arts and Humanities earlier this month, in which writer Jhumpa Lahiri and actor Kal Penn similarly spelled out R-E-S-I-S-T.

Jeremy Berke’s Aug. 23, 2017 article for BusinessInsider.com provides a little more detail (Note: Links have been removed),

A State Department climate science envoy resigned Wednesday in a public letter posted on Twitter over what he says is President Donald Trump’s “attacks on the core values” of the United States with his response to violence in Charlottesville, Virginia.

“My decision to resign is in response to your attacks on the core values of the United States,” wrote Daniel Kammen, a professor of energy at the University of California, Berkeley, who was appointed as one of five science envoys in 2016. “Your failure to condemn white supremacists and neo-Nazis has domestic and international ramifications.”

“Your actions to date have, sadly, harmed the quality of life in the United States, our standing abroad, and the sustainability of the planet,” Kammen writes.

Science envoys work with the State Department to establish and develop energy programs in countries around the world. Kammen specifically focused on renewable energy development in the Middle East and North Africa.

That’s it.

Bubble physics could explain language patterns

According to University of Portsmouth physicist James Burridge, determining how linguistic dialects form is a question for physics and mathematics. Here’s more about Burridge and his latest work on the topic from a July 24, 2017 University of Portsmouth press release (also on EurekAlert),

Language patterns could be predicted by simple laws of physics, a new study has found.

Dr James Burridge from the University of Portsmouth has published a theory using ideas from physics to predict where and how dialects occur.

He said: “If you want to know where you’ll find dialects and why, a lot can be predicted from the physics of bubbles and our tendency to copy others around us.

“Copying causes large dialect regions where one way of speaking dominates. Where dialect regions meet, you get surface tension. Surface tension causes oil and water to separate out into layers, and also causes small bubbles in a bubble bath to merge into bigger ones.

“The bubbles in the bath are like groups of people – they merge into the bigger bubbles because they want to fit in with their neighbours.

“When people speak and listen to each other, they have a tendency to conform to the patterns of speech they hear others using, and therefore align their dialects. Since people typically remain geographically local in their everyday lives, they tend to align with those nearby.”

Dr Burridge from the University’s department of mathematics departs from the existing approaches in studying dialects to formulate a theory of how country shape and population distribution play an important role in how dialect regions evolve.

Traditional dialectologists use the term ‘isogloss’ to describe a line on a map marking an area which has a distinct linguistic feature.

Dr Burridge said: “These isoglosses are like the edges of bubbles – the maths used to describe bubbles can also describe dialects.

“My model shows that dialects tend to move outwards from population centres, which explains why cities have their own dialects. Big cities like London and Birmingham are pushing on the walls of their own bubbles.

“This is why many dialects have a big city at their heart – the bigger the city, the greater this effect. It’s also why new ways of speaking often spread outwards from a large urban centre.

“If people live near a town or city, we assume they experience more frequent interactions with people from the city than with those living outside it, simply because there are more city dwellers to interact with.

His model also shows that language boundaries get smoother and straighter over time, which stabilises dialects.

Dr Burridge’s research is driven by a long-held interest in spatial patterns and the idea that humans and animal behaviour can evolve predictably. His research has been funded by the Leverhulme Trust.

Here’s an image illustrating language distribution in the UK,

Caption: These maps show a simulation of three language variants that are initially distributed throughout Great Britain in a random pattern. As time passes (left to right), the boundaries between language variants tend to shorten in length. One can also see evidence of boundary lines fixing to river inlets and other coastal indentations. Credit: James Burridge, University of Portsmouth

Burridge has written an Aug. 2, 2017 essay for The Conversation which delves into the history of using physics and mathematics to understand social systems and further explains his own theory (Note: Links have been removed),

What do the physics of bubbles have in common with the way you and I speak? Not a lot, you might think. But my recently published research uses the physics of surface tension (the effect that determines the shape of bubbles) to explore language patterns – where and how dialects occur.

This connection between physical and social systems may seem surprising, but connections of this kind have a long history. The 19th century physicist Ludwig Boltzmann spent much of his life trying to explain how the physical world behaves based on some simple assumptions about the atoms from which it is made. His theories, which link atomic behaviour to the large scale properties of matter, are called “statistical mechanics”. At the time, there was considerable doubt that atoms even existed, so Boltzmann’s success is remarkable because the detailed properties of the systems he was studying were unknown.

The idea that details don’t matter when you are considering a very large number of interacting agents is tantalising for those interested in the collective behaviour of large groups of people. In fact, this idea can be traced back to another 19th century great, Leo Tolstoy, who argued in War and Peace:

“To elicit the laws of history we must leave aside kings, ministers, and generals, and select for study the homogeneous, infinitesimal elements which influence the masses.”

Mathematical history

Tolstoy was, in modern terms, advocating a statistical mechanics of history. But in what contexts will this approach work? If we are guided by what worked for Boltzmann, then the answer is quite simple. We need to look at phenomena which arise from large numbers of interactions between individuals rather than phenomena imposed from above by some mighty ruler or political movement.

To test a physical theory, one just needs a lab. But a mathematical historian must look for data that have already been collected, or can be extracted from existing sources. An ideal example is language dialects. For centuries, humans have been drawing maps of the spatial domains in which they live, creating records of their languages, and sometimes combining the two to create linguistic atlases. The geometrical picture which emerges is fascinating. As we travel around a country, the way that people use language, from their choices of words to their pronunciation of vowels, changes. Researchers quantify differences using “linguistic variables”.

For example, in 1950s England, the ulex shrub went by the name “gorse”, “furze”, “whim” or “broom” depending on where you were in the country. If we plot where these names are used on a map, we find large regions where one name is in common use, and comparatively narrow transition regions where the most common word changes. Linguists draw lines, called “isoglosses”, around the edges of regions where one word (or other linguistic variable) is common. As you approach an isogloss, you find people start to use a different word for the same thing.

A similar effect can be seen in sheets of magnetic metal where individual atoms behave like miniature magnets which want to line up with their neighbours. As a result, large regions appear in which the magnetic directions of all atoms are aligned. If we think of magnetic direction as an analogy for choice of linguistic variant – say up is “gorse” and down is “broom” – then aligning direction is like beginning to use the local word for ulex.

Linguistic maths

I made just one assumption about language evolution: that people tend to pick up ways of speaking which they hear in the geographical region where they spend most of their time. Typically, this region will be a few miles or tens of miles wide and centred on their home, but its shape may be skewed by the presence of a nearby city which they visit more often than the surrounding countryside.

My equations predict that isoglosses tend to get pushed away from cities, and drawn towards parts of the coast which are indented, like bays or river mouths. The city effect can be explained by imagining you live near an isogloss at the edge of a city. Because there are a lot more people on the city side of the isogloss, you will tend to have more conversations with them than with rural people living on the other side. For this reason, you will probably start using the linguistic variable used in the city. If lots of people do this, then the isogloss will move further out into the countryside.

My one simple assumption – that people pick up local ways of speaking – leading to equations which describe the physics of bubbles, allowed me to gain new insight into the formation of language patterns. Who knows what other linguistic patterns mathematics could explain?
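If you’d like to see the ‘miniature magnets’ analogy in action, here’s a little toy simulation I sketched in Python (an illustration of the local-copying idea only; it is not Burridge’s surface-tension model). Speakers sit on a grid and repeatedly adopt whichever variant is most common among their immediate neighbours, and random initial patches then coarsen into large dialect-like regions,

# Toy "local copying" model: speakers on a grid adopt the most common variant
# among their four neighbours. This is an illustrative sketch of the alignment
# idea discussed above, not the surface-tension model in Burridge's paper.
import random

SIZE = 40          # grid of SIZE x SIZE speakers
VARIANTS = 3       # e.g. "gorse", "furze", "broom"
STEPS = 30

def neighbours(i, j):
    return [((i - 1) % SIZE, j), ((i + 1) % SIZE, j),
            (i, (j - 1) % SIZE), (i, (j + 1) % SIZE)]

def step(grid):
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            counts = [0] * VARIANTS
            for (a, b) in neighbours(i, j):
                counts[grid[a][b]] += 1
            best = max(counts)
            # adopt a (randomly tie-broken) majority variant of the neighbours
            new[i][j] = random.choice([v for v in range(VARIANTS) if counts[v] == best])
    return new

def boundary_length(grid):
    # number of neighbouring pairs that disagree: a proxy for total isogloss length
    return sum(1 for i in range(SIZE) for j in range(SIZE)
               for (a, b) in [((i + 1) % SIZE, j), (i, (j + 1) % SIZE)]
               if grid[i][j] != grid[a][b])

if __name__ == "__main__":
    random.seed(0)
    grid = [[random.randrange(VARIANTS) for _ in range(SIZE)] for _ in range(SIZE)]
    for t in range(STEPS + 1):
        if t % 10 == 0:
            print(f"step {t:2d}: boundary length = {boundary_length(grid)}")
        grid = step(grid)

Burridge’s actual model works in continuous space with population density and boundary curvature, but even this crude version shows the two qualitative features described above: big uniform regions emerge, and the total length of the boundaries between them shrinks markedly as time passes.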

Burridge’s paper can be found here,

Spatial Evolution of Human Dialects by James Burridge. Phys. Rev. X 7, 031008 (2017) Published 17 July 2017

This paper is open access and it is quite readable as these things go. In other words, you may not understand all of the mathematics, physics, or linguistics but it is written so that a relatively well informed person should be able to understand the basics if not all the nuances.

Congratulate China on the world’s first quantum communication network

China has some exciting news about the world’s first quantum network; it’s due to open in late August 2017 so you may want to have your congratulations in order for later this month.

An Aug. 4, 2017 news item on phys.org makes the announcement,

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world’s first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

“We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world,” commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain’s Financial Times.

An Aug. 3, 2017 CORDIS (Community Research and Development Information Service [for the European Commission]) press release, which originated the news item, provides more detail about the technology,

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a Quantum Key Distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.

In the Jinan network, some 200 users from China’s military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world’s longest land-based quantum communications network, stretching over 2 000 km.

Also speaking to the ‘Financial Times’, quantum physicist Tim Byrnes, based at New York University’s (NYU) Shanghai campus, commented: ‘China has achieved staggering things with quantum research… It’s amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication.’

However, Europe is also determined to be at the forefront of the ‘quantum revolution’ which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million into quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China’s latest achievement (and a previous one already notched up from July 2017 when its quantum satellite – the world’s first – sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world’s foremost quantum power is well and truly underway…
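The CORDIS release doesn’t say which QKD protocol the Jinan network runs, so for a feel of how tamper-evident key distribution works in principle, here’s a toy, purely classical simulation of the textbook BB84 scheme that I put together (an illustrative sketch only, not a description of the Chinese system). The idea: when an eavesdropper measures the photons in transit, she unavoidably introduces errors that the two legitimate parties can detect by comparing a sample of their sifted key,

# Toy BB84 sketch: classical random numbers stand in for photon polarizations.
# Illustrative only; real QKD security comes from quantum mechanics, not from
# this simulation, and the Jinan network's actual protocol is not specified
# in the press release.
import random

N = 2000  # number of photons Alice sends

def bb84(eavesdrop):
    alice_bits = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.randint(0, 1) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
    bob_bases = [random.randint(0, 1) for _ in range(N)]

    bob_bits = []
    for bit, a_base, b_base in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_base = random.randint(0, 1)
            # Eve measures in a random basis; a wrong basis randomizes the bit
            bit = bit if e_base == a_base else random.randint(0, 1)
            a_base = e_base  # the photon Bob receives is now encoded in Eve's basis
        # Bob measures; a mismatched basis gives him a random result
        bob_bits.append(bit if b_base == a_base else random.randint(0, 1))

    # Alice and Bob keep only the positions where their chosen bases matched (sifting)
    sifted = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    errors = sum(1 for a, b in sifted if a != b)
    return len(sifted), errors

if __name__ == "__main__":
    random.seed(1)
    for eve in (False, True):
        kept, errors = bb84(eavesdrop=eve)
        print(f"eavesdropper={eve}: kept {kept} bits, error rate {errors / kept:.1%}")

In the no-eavesdropper run the sampled error rate is zero in this idealized toy, while the intercept-and-resend attack pushes it to roughly 25 per cent, which is the kind of disturbance the press release means when it says tampering is ‘instantly recognisable’.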

Prior to this latest announcement, Chinese scientists had published work about quantum satellite communications, a development that makes their imminent terrestrial quantum network possible. Gabriel Popkin wrote about the quantum satellite in a June 15, 2017 article for Science magazine,

Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China’s growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

“It’s a huge, major achievement,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada. “They started with this bold idea and managed to do it.”

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object’s quantum properties occupy multiple states at once: like Schrödinger’s cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as “spooky action at a distance.”

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be “quantum keys” that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. “Quantum repeaters” that rebroadcast quantum information could extend a network’s reach, but they aren’t yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. “Once you have satellites distributing your quantum signals throughout the globe, you’ve done it,” says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. …

Popkin goes on to detail the process for making the discovery in easily accessible (for the most part) writing and in a video and a graphic.

Russell Brandom, writing for The Verge in a June 15, 2017 article about the Chinese quantum satellite, adds detail about previous work and teams in other countries also working on the challenge (Note: Links have been removed),

Quantum networking has already shown promise in terrestrial fiber networks, where specialized routing equipment can perform the same trick over conventional fiber-optic cable. The first such network was a DARPA-funded connection established in 2003 between Harvard, Boston University, and a private lab. In the years since, a number of companies have tried to build more ambitious connections. The Swiss company ID Quantique has mapped out a quantum network that would connect many of North America’s largest data centers; in China, a separate team is working on a 2,000-kilometer quantum link between Beijing and Shanghai, which would rely on fiber to span an even greater distance than the satellite link. Still, the nature of fiber places strict limits on how far a single photon can travel.

According to ID Quantique, a reliable satellite link could connect the existing fiber networks into a single globe-spanning quantum network. “This proves the feasibility of quantum communications from space,” ID Quantique CEO Gregoire Ribordy tells The Verge. “The vision is that you have regional quantum key distribution networks over fiber, which can connect to each other through the satellite link.”

China isn’t the only country working on bringing quantum networks to space. A collaboration between the UK’s University of Strathclyde and the National University of Singapore is hoping to produce the same entanglement in cheap, readymade satellites called Cubesats. A Canadian team is also developing a method of producing entangled photons on the ground before sending them into space.

I wonder if there’s going to be an invitational event for scientists around the world to celebrate the launch.

3-D integration of nanotechnologies on a single computer chip

Researchers have developed a new technique that integrates nanomaterials into a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with the Massachusetts Institute of Technology's (MIT) science writing, it was a bit surprising to find that MIT had issued a news release (news item) that didn't follow the 'rules', i.e., cover as many of the journalistic questions (Who, What, Where, When, Why and, sometimes, How) as possible in the first sentence or paragraph. It is written more in the style of a magazine article, so the details take a while to emerge. From the July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.
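To put some rough numbers on that bottleneck claim, here's a purely illustrative sketch comparing the time to move a gigabyte over a single narrow chip-to-chip link with moving it through many vertical inter-layer wires. Every bandwidth figure below is an assumption I picked for illustration; none of them come from the paper or the news release:

```python
# Purely illustrative comparison: moving 1 GiB over one narrow chip-to-chip link
# versus moving it through many dense vertical inter-layer wires.
# Every bandwidth figure here is an assumption chosen for illustration.

DATA_BYTES = 1 * 1024**3            # 1 GiB of data to move
OFF_CHIP_LINK_BYTES_PER_S = 25e9    # assumed ~25 GB/s link between separate chips
PER_WIRE_BYTES_PER_S = 1e9          # assumed ~1 GB/s per vertical inter-layer wire
NUM_WIRES = 1000                    # assumed number of ultradense vertical wires

off_chip_time = DATA_BYTES / OFF_CHIP_LINK_BYTES_PER_S
inter_layer_time = DATA_BYTES / (PER_WIRE_BYTES_PER_S * NUM_WIRES)

print(f"Single off-chip link:          {off_chip_time * 1e3:6.1f} ms")
print(f"{NUM_WIRES} vertical wires in parallel: {inter_layer_time * 1e3:6.2f} ms")
print(f"Aggregate speed-up: {off_chip_time / inter_layer_time:.0f}x")
```

The point isn't the particular numbers but the shape of the argument: many short, dense vertical connections can move data in aggregate far faster than one long, narrow link between separate chips.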

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.
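As a software analogy of that measure-everything-in-parallel-and-write-it-straight-to-memory idea, here's a toy sketch. The readings, threshold and sizes are invented for illustration; the actual chip does its sensing and classification in hardware, not in NumPy:

```python
import numpy as np

# Toy software analogy: 1 million simulated gas-sensor readings processed in one
# vectorised ("parallel") pass and written straight into a memory array.
# The readings and threshold are invented for illustration only.

rng = np.random.default_rng(0)
NUM_SENSORS = 1_000_000

sensor_readings = rng.normal(loc=1.0, scale=0.3, size=NUM_SENSORS)  # arbitrary units
DETECTION_THRESHOLD = 1.5  # invented cut-off for "gas detected"

# One vectorised operation stands in for the chip's measure-and-store step.
memory = np.zeros(NUM_SENSORS, dtype=np.uint8)
memory[:] = sensor_readings > DETECTION_THRESHOLD  # 1 = detected, 0 = clear

print(f"{int(memory.sum())} of {NUM_SENSORS} simulated sensors flagged a detection")
```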

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s laws, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

‘Origami organs’ for tissue engineering

This is a different approach to tissue engineering and it's the consequence of a serendipitous accident. From an Aug. 7, 2017 Northwestern University news release (also on EurekAlert),

Northwestern Medicine scientists and engineers have invented a range of bioactive “tissue papers” made of materials derived from organs that are thin and flexible enough to even fold into an origami bird. The new biomaterials can potentially be used to support natural hormone production in young cancer patients and aid wound healing.

The tissue papers are made from structural proteins excreted by cells that give organs their form and structure. The proteins are combined with a polymer to make the material pliable.

In the study, individual types of tissue papers were made from ovarian, uterine, kidney, liver, muscle or heart proteins obtained by processing pig and cow organs. Each tissue paper had specific cellular properties of the organ from which it was made.

The article describing the tissue paper and its function will be published Aug. 7 in the journal Advanced Functional Materials.

“This new class of biomaterials has potential for tissue engineering and regenerative medicine as well as drug discovery and therapeutics,” corresponding author Ramille Shah said. “It’s versatile and surgically friendly.”

Shah is an assistant professor of surgery at the Feinberg School of Medicine and an assistant professor of materials science and engineering at McCormick School of Engineering. She also is a member of the Simpson Querrey Institute for BioNanotechnology.

For wound healing, Shah thinks the tissue paper could provide support and the cell signaling needed to help regenerate tissue to prevent scarring and accelerate healing.

The tissue papers are made from natural organs or tissues. The cells are removed, leaving the natural structural proteins – known as the extracellular matrix – that then are dried into a powder and processed into the tissue papers. Each type of paper contains residual biochemicals and protein architecture from its original organ that can stimulate cells to behave in a certain way.

In the lab of reproductive scientist Teresa Woodruff, the tissue paper made from a bovine ovary was used to grow ovarian follicles when they were cultured in vitro. The follicles (eggs and hormone-producing cells) grown on the tissue paper produced hormones necessary for proper function and maturation.

“This could provide another option to restore normal hormone function to young cancer patients who often lose their hormone function as a result of chemotherapy and radiation,” Woodruff, a study coauthor, said.

A strip of the ovarian paper with the follicles could be implanted under the arm to restore hormone production for cancer patients or even women in menopause.

Woodruff is the director of the Oncofertility Consortium and the Thomas J. Watkins Memorial Professor of Obstetrics and Gynecology at Feinberg.

In addition, the tissue paper made from various organs separately supported the growth of adult human stem cells. Scientists placed human bone marrow stem cells on the tissue paper, and all the stem cells attached and multiplied over four weeks.

“That’s a good sign that the paper supports human stem cell growth,” said first author Adam Jakus, who developed the tissue papers. “It’s an indicator that once we start using tissue paper in animal models it will be biocompatible.”

The tissue papers feel and behave much like standard office paper when they are dry, Jakus said. Jakus simply stacks them in a refrigerator or a freezer. He even playfully folded them into an origami bird.

“Even when wet, the tissue papers maintain their mechanical properties and can be rolled, folded, cut and sutured to tissue,” he said.

Jakus was a Hartwell postdoctoral fellow in Shah’s lab for the study and is now chief technology officer and cofounder of the startup company Dimension Inx, LLC, which was also cofounded by Shah. The company will develop, produce and sell 3-D printable materials primarily for medical applications. The Intellectual Property is owned by Northwestern University and will be licensed to Dimension Inx.

An Accidental Spill Sparked Invention

An accidental spill of 3-D printing ink in Shah’s lab by Jakus sparked the invention of the tissue paper. Jakus was attempting to make a 3-D printable ovary ink similar to the other 3-D printable materials he previously developed to repair and regenerate bone, muscle and nerve tissue. When he went to wipe up the spill, the ovary ink had already formed a dry sheet.

“When I tried to pick it up, it felt strong,” Jakus said. “I knew right then I could make large amounts of bioactive materials from other organs. The light bulb went on in my head. I could do this with other organs.”

“It is really amazing that meat and animal by-products like a kidney, liver, heart and uterus can be transformed into paper-like biomaterials that can potentially regenerate and restore function to tissues and organs,” Jakus said. “I’ll never look at a steak or pork tenderloin the same way again.”

For those who like their news in a video,

As someone who once made baklava, I can say that does not look like filo pastry, where an individual sheet is quite thin and rips easily. Enough said.

Here’s a link to and a citation for the paper,

“Tissue Papers” from Organ-Specific Decellularized Extracellular Matrices by Adam E. Jakus, Monica M. Laronda, Alexandra S. Rashedi, Christina M. Robinson, Chris Lee, Sumanas W. Jordan, Kyle E. Orwig, Teresa K. Woodruff, and Ramille N. Shah. Advanced Functional Materials DOI: 10.1002/adfm.201700992 Version of Record online: 7 AUG 2017


This paper is behind a paywall.

Masdar Institute and rainmaking

Water security, of course, is a key issue and of particular concern in many parts of the world, including the Middle East. (In the Pacific Northwest, an area described as a temperate rain forest, there tends to be less awareness, but even we are sometimes forced to ration water.) According to a July 5, 2017 posting by Bhok Thompson (on the Green Prophet website), scientists at the Masdar Institute of Science and Technology (in Abu Dhabi, United Arab Emirates [UAE]) have applied for a patent on a new technique for rainmaking,

Umbrella sales in the UAE may soon see a surge in pricing. Researchers at the Masdar Institute have filed for a provisional patent with the United States Patent and Trademark Office for their discovery – an innovative cloud seeding material that moves them closer to their goal of producing rain on demand. It appears to be a more practical approach than building artificial mountains.

Dr. Linda Zou is leading the project. A professor of chemical and environmental engineering, she is one of the first scientists to explore nanotechnology to enhance a cloud seeding material’s ability to produce rain. By filing a patent, the team is paving a way to commercialize their discovery, and aligning with Masdar Institute’s aim to position the UAE as a world leader in science and tech, specifically in the realm of environmental sustainability.

A January 31, 2017 posting by Erica Solomon for the Masdar Institute reveals more about the project,

The Masdar Institute research team that was one of the inaugural recipients of the US$ 5 million grant from the UAE Research Program for Rain Enhancement Science last year has made significant progress in their work, as evidenced by the filing of a provisional patent with the United States Patent and Trademark Office (USPTO).

By filing a patent on their innovative cloud seeding material, the research team is bringing the material in the pathway for commercialization, thereby supporting Masdar Institute’s goal of bolstering the United Arab Emirates’ local intellectual property, which is a key measure of the country’s innovation drive. It also signifies a milestone towards achieving greater water security in the UAE, as rainfall enhancement via cloud seeding can potentially increase rainfall between 10% to 30%, helping to refresh groundwater reserves, boost agricultural production, and reduce the country’s heavy reliance on freshwater produced by energy-intensive seawater desalination.

Masdar Institute Professor of Chemical and Environmental Engineering, Dr. Linda Zou, is the principal investigator of this research project, and one of the first scientists in the world to explore the use of nanotechnology to enhance a cloud seeding material’s ability to produce rain.

“Using nanotechnology to accelerate water droplet formation on a typical cloud seeding material has never been researched before. It is a new approach that could revolutionize the development of cloud seeding materials and make them significantly more efficient and effective,” Dr. Zou remarked.

Conventional cloud seeding materials are small particles such as pure salt crystals, dry ice and silver iodide. These tiny particles, which are a few microns (one-thousandth of a millimeter) in size, act as the core around which water condenses in the clouds, stimulating water droplet growth. Once the air in the cloud reaches a certain level of saturation, it can no longer hold in that moisture, and rain falls. Cloud seeding essentially mimics what naturally occurs in clouds, but enhances the process by adding particles that can stimulate and accelerate the condensation process.
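That 'level of saturation' idea is worth a quick worked example. The sketch below uses the standard Magnus/Bolton approximation for saturation vapour pressure (a textbook formula, nothing specific to the Masdar work) to show how sharply the amount of water vapour air can hold drops as the air cools; the excess has to condense onto something, and seeding particles supply that something:

```python
import math

# Saturation vapour pressure via the Bolton/Magnus approximation
# (temperature in deg C, result in hPa). A textbook formula, used here only to
# illustrate why cooling, moisture-laden air has to shed water.

def saturation_vapour_pressure_hpa(temp_c: float) -> float:
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

R_V = 461.5  # specific gas constant of water vapour, J/(kg*K)

def max_water_content_g_per_m3(temp_c: float) -> float:
    """Approximate mass of water vapour one cubic metre of saturated air holds."""
    e_s_pa = saturation_vapour_pressure_hpa(temp_c) * 100  # hPa -> Pa
    return 1000 * e_s_pa / (R_V * (temp_c + 273.15))       # kg -> g

for t in (20, 10, 0):
    print(f"{t:>3} deg C: about {max_water_content_g_per_m3(t):.1f} g of water vapour per cubic metre")
```

A parcel of air cooling from 20 °C to 10 °C can hold only about half as much vapour, so the rest must condense, and it condenses far more readily when particles such as the coated salt crystals are there to nucleate droplets.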

Dr. Zou and her collaborators, Dr. Mustapha Jouiad, Principal Research Scientist in Mechanical and Materials Engineering Department, postdoctoral researcher Dr. Nabil El Hadri and PhD student Haoran Liang, explored ways to improve the process of condensation on a pure salt crystal by layering it with a thin coating of titanium dioxide.

The extremely thin coating measures around 50 nanometers, which is more than one thousand times thinner than a human hair. Despite the coating’s miniscule size, the titanium dioxide’s effect on the salt’s condensation efficiency is significant. Titanium dioxide is a hydrophilic photocatalyst, which means that when in contact with water vapor in the cloud, it helps to initiate and sustain the water vapor adsorption and condensation on the nanoparticle’s surface. This important property of the cloud seeding material speeds up the formation of large water droplets for rainfall.

Dr. Zou’s team found that the titanium dioxide coating improved the salt’s ability to adsorb and condense water vapor over 100 times compared to a pure salt crystal. Such an increase in condensation efficiency could improve a cloud’s ability to produce more precipitation, making rain enhancement operations more efficient and effective. The research will now move to the next stage of simulated cloud and field testing in the future.

Dr. Zou’s research grant covers two more years of research. During this time, her team will continue to study different design concepts and structures for cloud seeding materials inspired by nanotechnology.

To give you a sense of the urgent need for these technologies, here's the title of my Aug. 24, 2015 posting: "The Gaza is running out of water by 2016 if the United Nations predictions are correct". I've not come across any updates on the situation in the Gaza Strip, but Israel and Palestine have recently signed a deal concerning water. Dalia Hatuqa's August 2017 feature on the water deal for Al Jazeera is critical primarily of Israel (as might be expected), but there are one or two subtle criticisms of Palestine too,

Critics have also warned that the plan does not address Israeli restrictions on Palestinian access to water and the development of infrastructure needed to address the water crisis in the occupied West Bank.

Palestinians in the West Bank consume only 70 litres of water per capita per day, well below what the World Health Organization recommends as a minimum (100).

In the most vulnerable communities in Area C – those not connected to the water network – that number further drops to 20, according to EWASH, a coalition of Palestinian and international organisations working on water and sanitation in the Palestinian territories.

The recent bilateral agreement, which does not increase the Palestinians’ quota of water in the Jordan River, makes an untenable situation permanent and guarantees Israel a lion’s share of its water, thus reinforcing the status quo, Buttu [Diana Buttu, a former adviser to the Palestinian negotiating team] said.

“They have moved away from the idea that water is a shared resource and instead adopted the approach that Israel controls and allocates water to Palestinians,” she added. “Israel has been selling water to Palestinians for a long time, but this is enshrining it even further by saying that this is the way to alleviate the water problem.”

Israeli officials say that water problems in the territories could have been addressed had the Palestinians attended the meetings of the joint committee. Palestinians attribute their refusal to conditions set by their counterparts, namely that they must support Israeli settlement water projects for any Palestinian water improvements to be approved.

According to Israeli foreign ministry spokesman Emmanuel Nahshon, “There are many things to be done together to upgrade the water infrastructure in the PA. We are talking about old, leaking pipes, and a more rational use of water.” He also pointed to the illegal tapping into pipes, which he maintained Palestinians did because they did not want to pay for water. “This is something we’ve been wanting to do over the years, and the new water agreement is one of the ways to deal with that. The new agreement … is not only about water quotas; it’s also about more coherent and better use of water, in order to address the needs of the Palestinians.”

But water specialists say that the root cause of the problem is not illegal activity, but the unavailability of water resources to Palestinians and the mismanagement and diversion of the Jordan River.

Access to water is going to become increasingly urgent should temperatures continue to rise as they have. In many parts of the world, potable water is not easy to find, and areas that once had some water security will lose it, hugely raising the potential for conflict. Palestine and Israel may be a harbinger of what's to come. As for the commodification of water, I have trouble accepting it; I think everyone has a right to water.

US Dept. of Agriculture announces its nanotechnology research grants

I don't always stumble across the US Department of Agriculture's nanotechnology research grant announcements, but I'm always grateful when I do, as it's good to find out about nanotechnology research taking place in the agricultural sector. From a July 21, 2017 news item on Nanowerk,

The U.S. Department of Agriculture’s (USDA) National Institute of Food and Agriculture (NIFA) today announced 13 grants totaling $4.6 million for research on the next generation of agricultural technologies and systems to meet the growing demand for food, fuel, and fiber. The grants are funded through NIFA’s Agriculture and Food Research Initiative (AFRI), authorized by the 2014 Farm Bill.

“Nanotechnology is being rapidly implemented in medicine, electronics, energy, and biotechnology, and it has huge potential to enhance the agricultural sector,” said NIFA Director Sonny Ramaswamy. “NIFA research investments can help spur nanotechnology-based improvements to ensure global nutritional security and prosperity in rural communities.”

A July 20, 2017 USDA news release, which originated the news item, lists this year’s grants and provides a brief description of a few of the newly and previously funded projects,

Fiscal year 2016 grants being announced include:

Nanotechnology for Agricultural and Food Systems

  • Kansas State University, Manhattan, Kansas, $450,200
  • Wichita State University, Wichita, Kansas, $340,000
  • University of Massachusetts, Amherst, Massachusetts, $444,550
  • University of Nevada, Las Vegas, Nevada, $150,000
  • North Dakota State University, Fargo, North Dakota, $149,000
  • Cornell University, Ithaca, New York, $455,000
  • Cornell University, Ithaca, New York, $450,200
  • Oregon State University, Corvallis, Oregon, $402,550
  • University of Pennsylvania, Philadelphia, Pennsylvania, $405,055
  • Gordon Research Conferences, West Kingston, Rhode Island, $45,000
  • The University of Tennessee, Knoxville, Tennessee, $450,200
  • Utah State University, Logan, Utah, $450,200
  • The George Washington University, Washington, D.C., $450,200

Project details can be found at the NIFA website (link is external).

Among the grants, a University of Pennsylvania project will engineer cellulose nanomaterials [emphasis mine] with high toughness for potential use in building materials, automotive components, and consumer products. A University of Nevada-Las Vegas project will develop a rapid, sensitive test to detect Salmonella typhimurium to enhance food supply safety.

Previously funded grants include an Iowa State University project in which a low-cost and disposable biosensor made out of nanoparticle graphene that can detect pesticides in soil was developed. The biosensor also has the potential for use in the biomedical, environmental, and food safety fields. University of Minnesota (link is external) researchers created a sponge that uses nanotechnology to quickly absorb mercury, as well as bacterial and fungal microbes from polluted water. The sponge can be used on tap water, industrial wastewater, and in lakes. It converts contaminants into nontoxic waste that can be disposed in a landfill.

NIFA invests in and advances agricultural research, education, and extension and promotes transformative discoveries that solve societal challenges. NIFA support for the best and brightest scientists and extension personnel has resulted in user-inspired, groundbreaking discoveries that combat childhood obesity, improve and sustain rural economic growth, address water availability issues, increase food production, find new sources of energy, mitigate climate variability and ensure food safety. To learn more about NIFA’s impact on agricultural science, visit www.nifa.usda.gov/impacts, sign up for email updates (link is external) or follow us on Twitter @USDA_NIFA (link is external), #NIFAImpacts (link is external).
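As a quick arithmetic check on the '$4.6 million' figure, the thirteen listed awards can simply be added up (amounts copied from the list above):

```python
# Quick check: do the 13 listed NIFA awards add up to roughly $4.6 million?
# Amounts are copied from the list above.
awards = [
    450_200,  # Kansas State University
    340_000,  # Wichita State University
    444_550,  # University of Massachusetts
    150_000,  # University of Nevada, Las Vegas
    149_000,  # North Dakota State University
    455_000,  # Cornell University
    450_200,  # Cornell University (second award)
    402_550,  # Oregon State University
    405_055,  # University of Pennsylvania
    45_000,   # Gordon Research Conferences
    450_200,  # The University of Tennessee, Knoxville
    450_200,  # Utah State University
    450_200,  # The George Washington University
]

print(f"{len(awards)} awards totalling ${sum(awards):,}")  # 13 awards totalling $4,642,155
```

That comes to $4,642,155, i.e., the roughly $4.6 million NIFA cites.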

Given my interest in nanocellulose materials (Canada was/is a leader in the production of cellulose nanocrystals [CNC] but there has been little news about Canadian research into CNC applications), I used the NIFA link to access the table listing the grants and clicked on 'brief' in the View column in the University of Pennsylvania row to find this description of the project,

ENGINEERING CELLULOSE NANOMATERIALS WITH HIGH TOUGHNESS

NON-TECHNICAL SUMMARY: Cellulose nanofibrils (CNFs) are natural materials with exceptional mechanical properties that can be obtained from renewable plant-based resources. CNFs are stiff, strong, and lightweight, thus they are ideal for use in structural materials. In particular, there is a significant opportunity to use CNFs to realize polymer composites with improved toughness and resistance to fracture. The overall goal of this project is to establish an understanding of fracture toughness enhancement in polymer composites reinforced with CNFs. A key outcome of this work will be process – structure – fracture property relationships for CNF-reinforced composites. The knowledge developed in this project will enable a new class of tough CNF-reinforced composite materials with applications in areas such as building materials, automotive components, and consumer products. The composite materials that will be investigated are at the convergence of nanotechnology and bio-sourced material trends. Emerging nanocellulose technologies have the potential to move biomass materials into high value-added applications and entirely new markets.

It's not the only nanocellulose material project being funded in this round; there's this one at North Dakota State University, from the NIFA 'brief' project description page,

NOVEL NANOCELLULOSE BASED FIRE RETARDANT FOR POLYMER COMPOSITES

NON-TECHNICAL SUMMARY: Synthetic polymers are quite vulnerable to fire. There are 2.4 million reported fires, resulting in 7.8 billion dollars of direct property loss, an estimated 30 billion dollars of indirect loss, 29,000 civilian injuries, 101,000 firefighter injuries and 6000 civilian fatalities annually in the U.S. There is an urgent need for a safe, potent, and reliable fire retardant (FR) system that can be used in commodity polymers to reduce their flammability and protect lives and properties. The goal of this project is to develop a novel, safe and biobased FR system using agricultural and woody biomass. The project is divided into three major tasks. The first is to manufacture zinc oxide (ZnO) coated cellulose nanoparticles and evaluate their morphological, chemical, structural and thermal characteristics. The second task will be to design and manufacture polymer composites containing nano sized zinc oxide and cellulose crystals. Finally, the third task will be to test the fire retardancy and mechanical properties of the composites. We believe that the presence of zinc oxide and cellulose nanocrystals in polymers will limit the oxygen supply by charring and shielding the surface, and that the cellulose nanocrystals will make the composites strong. The outcome of this project will help in developing a safe, reliable and biobased fire retardant for consumer goods, automotive and building products, and will help save human lives and reduce property damage due to fire.

One day, I hope to hear about Canadian research into applications for nanocellulose materials. (fingers crossed for good luck)

2017 Research as Art Awards at Swansea University (UK)

It's surprising that I haven't stumbled across Swansea University's (UK) Research as Art competitions before. Still, I'm happy to have done so now.

Picture: Research as Art winner 2017. “Bioblocks: building for nature”. How the tidal lagoon could be a habitat for marine creatures.

A July 14, 2017 news item on phys.org announces the results of the 2017 Research as Art competition,

Fifteen stunning images, and the fascinating stories behind them—such as how a barn owl’s pellets reveal which animals it has eaten, how data can save lives, and how Barbie breaks free—have today been revealed as the winners of the 2017 Research as Art awards.

The overall winner is Dr Ruth Callaway, a research officer from the College of Science. Her entry, Bioblocks: building for nature, illustrates how children and researchers have been exploring ways in which the tidal lagoon proposed for Swansea Bay could become a new habitat for marine creatures.

A July 14, 2017 Swansea University press release, which originated the news item, describes the competition in more detail (Note: Links have been removed),

Research as Art is the only competition of its kind, open to researchers from all subjects, and with an emphasis on telling the research story, as well as composing a striking image.

It offers an outlet for researchers’ creativity, and celebrates the diversity, beauty, and impact of research at Swansea University – a top 30 research university.

86 entries were received from researchers across all Colleges of the University.

A distinguished judging panel of senior figures selected a total of fifteen winners. Along with the overall winner, there were judges’ awards in four categories relating to engagement – imagination, inspiration, illumination, and the natural world – and 10 highly-commended entries.

Judging panel:

Prof. Gail Cardew – Professor of Science, Culture and Society at the Royal Institution
Dan Cressey, Reporter, Nature News
Flora Graham – Digital Editor of NewScientist
Barbara Kiser, Books and Arts Editor, Nature

Overall winner Dr Ruth Callaway described the image in her winning entry:

“Over 200 children used cubes of clay to sculpt ecologically attractive habitats for coastal creatures. These bioblocks demonstrate that humanmade structures can support marine life, while children and their families have gained a better understanding of the unique resilience of sea creatures.

It is hoped that the diverse and complex habitat will enable more species to use this new material as a living space: crevices and holes will provide shelter; variable textures and overhangs will allow animals and seaweed to cling to the material.”

Dr Ruth Callaway added:

“Innovative projects such as the Tidal Lagoon Swansea Bay are inspiring, but they also throw up lots of questions and complex environmental challenges.

For marine scientists, the project creates unprecedented research opportunities to explore how the construction process could reduce negative impact on the coastal environment.

The EU-funded SEACAMS project and the company Tidal Lagoon Power work in collaboration, and we explore novel ways of enhancing biodiversity. Discussing these ideas with the public both informs the wider community about our work and triggers new research ideas.”

Competition founder and Director Dr Richard Johnston, Associate Professor in materials science and engineering at Swansea University, said:

“Research as Art is an opportunity for researchers to reveal hidden aspects of their research to audiences they wouldn’t normally engage with. This may uncover their personal story, their humanity, their inspiration, and emotion.

It can also be a way of presenting their research process, and what it means to be a researcher; fostering dialogue, and dissolving barriers between universities and the wider world.”

You can find out more about the competition, which seems to date from 2012, on the Research as Art competition page and more about the SEACAMS project here.