Featured post

Brief note about changes

June 19, 2019: Hello! I apologize for this site’s unavailability over the last 10 days or so (June 7 – 18, 2019). Moving to a new web hosting service meant that the ‘law of unintended consequences’ came into play. Fingers crossed that all the problems have been resolved.

On another matter, I’ve accumulated quite a backlog of postings, which I will be releasing (publishing) over the next few months. I’ve been trying to bring that backlog down to a reasonable size for quite some time now but I see more drastic, focused action is required. I will continue posting some more recent news items along with my older pieces.

Toronto, Sidewalk Labs, smart cities, and timber

The ‘smart city’ initiatives continue to fascinate. During the summer, Toronto’s efforts were described in a June 24, 2019 article by Katharine Schwab for Fast Company (Note: Links have been removed),

Today, Google sister company Sidewalk Labs released a draft of its master plan to transform 12 acres on the Toronto waterfront into a smart city. The document details the neighborhood’s buildings, street design, transportation, and digital infrastructure—as well as how the company plans to construct it.

When a leaked copy of the plan popped up online earlier this year, we learned that Sidewalk Labs plans to build the entire development, called Quayside, out of mass timber. But today’s release of the official plan reveals the key to doing so: Sidewalk proposes investing $80 million to build a timber factory and supply chain that would support its fully timber neighborhood. The company says the factory, which would be focused on manufacturing prefabricated building pieces that could then be assembled into fully modular buildings on site, could reduce building time by 35% compared to more traditional building methods.

“We would fund the creation of [a factory] somewhere in the greater Toronto area that we think could play a role in catalyzing a new industry around mass timber,” says Sidewalk Labs CEO and chairman Dan Doctoroff.

However, the funding of the factory is dependent on Sidewalk Labs being able to expand its development plan to the entire riverfront district. … [emphasis mine].

Here’s where I think it gets very interesting,

Sidewalk proposes sourcing spruce and fir trees from the forests in Ontario, Quebec, and British Columbia. While Canada has 40% of the world’s sustainable forests, Sidewalk claims, the country has few factories that can turn these trees into the building material. That’s why the company proposes starting a factory to process two kinds of mass timber: Cross-laminated timber (CLT) and glulam beams. The latter is meant specifically to bear the weight of the 30-story buildings Sidewalk hopes to build. While Sidewalk says that 84% of the larger district would be handed over for development by local companies, the plan requires that these companies uphold the same sustainability standards when it comes to performance.

Sidewalk says companies wouldn’t be required to build with CLT and glulam, but since the company’s reason for building the mass timber factory is that there aren’t many existing manufacturers to meet the needs for a full-scale development, the company’s plan might ultimately push any third-party developers toward using its [Google] factory to source materials. … [emphasis mine]

If I understand this rightly, Google wants to expand its plan to Toronto’s entire waterfront to make building a factory to produce the type of wood products Google wants to use in its Quayside development financially feasible (profitable). And somehow, local developers will not be forced to build the same kinds of structures although Google will be managing the entire waterfront development. Hmmm.

Let’s take a look at one of Google’s other ‘city ventures’.

Louisville, Kentucky

First, Alphabet is the name of Google’s parent company and it was Alphabet that offered the city of Louisville an opportunity for cheap, abundant internet service known as Google Fiber. From a May 6, 2019 article by Alex Correa for The Edge (Note: Links have been removed),

In 2015, Alphabet chose several cities in Kentucky to host its Google Fiber project. Google Fiber is a service providing broadband internet and IPTV directly to a number of locations, and the initiative in Kentucky … . The tech giant dug up city streets to bury fibre optic cables of their own, touting a new technique that would only require the cables to be a few inches beneath the surface. However, after two years of delays and negotiations after the announcement, Google abandoned the project in Louisville, Kentucky.

Like an unwanted pest in a garden, signs of Google’s presence can be seen and felt in the city streets. Metro Councilman Brandon Coan criticized the state of the city’s infrastructure, pointing out that strands of errant, tar-like sealant, used to cover up the cables, are “everywhere.” Speaking outside of a Louisville coffee shop that ran Google Fiber lines before the departure, he said, “I’m confident that Google and the city are going to negotiate a deal… to restore the roads to as good a condition as they were when they got here. Frankly, I think they owe us more than that.”

Google’s disappearance did more than just damage roads [emphasis mine] in Louisville. Plans for promising projects were abandoned, including transformative economic development that could have provided the population with new jobs and vastly different career opportunities than what was available. Add to that the fact that media coverage of the aborted initiative cast Louisville as the site of a failed experiment, creating an impression of the city as an embarrassment. (Google has since announced plans to reimburse the city $3.84 million over 20 months to help repair the damage to the city’s streets and infrastructure.)

A February 22, 2019 article on CBC (Canadian Broadcasting Corporation) Radio news online offers images of the damaged roadways and a partial transcript of a Day 6 radio show hosted by Brent Bambury,

Shortly after it was installed, the sealant on the trenches Google Fiber cut into Louisville roads popped out. (WDRB Louisville) Courtesy: CBC Radio Day 6

Google’s Sidewalk Labs is facing increased pushback to its proposal to build a futuristic neighbourhood in Toronto, after leaked documents revealed the company’s plans are more ambitious than the public had realized.

One particular proposal — which would see Sidewalk Labs taking a cut of property taxes in exchange for building a light rail transit line along Toronto’s waterfront — is especially controversial.

The company has developed an impressive list of promises for its proposed neighbourhood, including mobile pre-built buildings and office towers that tailor themselves to occupants’ behaviour.

But Louisville, Kentucky-based business reporter Chris Otts says that when Google companies come to town, it doesn’t always end well.

What was the promise Google Fiber made to Louisville back in 2015?

Well, it was just to be included as one of their Fiber cities, which was a pretty serious deal for Louisville at the time. A big coup for the mayor, and his administration had been working for years to get Google to consider adding Louisville to that list.

So if the city was eager, what sorts of accommodations were made for Google to entice them to come to Louisville?

Basically, the city did everything it could from a streamlining red tape perspective to get Google here … in terms of, you know, awarding them a franchise, and allowing them to be in the rights of way with this innovative technique they had for burying their cables here.
And then also, they [the city] passed a policy, which, to be sure, they say is just good policy regardless of Google’s support for it. But it had to do with how new Internet companies like Google can access utility poles to install their networks.

And Louisville ended up spending hundreds of thousands of dollars to defend that new policy in court in lawsuits by AT&T and by the traditional cable company here.

When Google Fiber starts doing business, they’re offering cheaper high speed Internet access, and they start burying these cables in the ground.

When did things start to go sideways for this project?

I don’t know if I would say ‘almost immediately,’ but certainly the problems were evident fairly quickly.

So they started their work in 2017. If you picture it, [in] the streets you can see on either side there are these seams. They look like little strings … near the end of the streets on both sides. And there are cuts in the street where they buried the cable and they topped it off with this sealant.

And fairly early on — within months, I would say, of them doing that — you could see the sealant popping out. The conduit in there [was] visible or exposed. And so it was fairly evident that there were problems with it pretty quickly.

Was this the first time that they had used this system and the sealant that you’re describing?

It was the first time, according to them, that they had used such shallow trenches in the streets.

So these are as shallow as two inches below the pavement surface that they’d bury these cables. It’s the ultra-shallow version of this technique.

And what explanation did Google Fiber offer for their decision to leave Louisville?

That it was basically a business decision; that they were trying this construction method to see if it was sustainable and they just had too many problems with it.

And as they said directly in their … written statement about this, they decided that instead of doing things right and starting over, which they would have to do essentially to keep providing service in Louisville, that it was the better business decision for them to just pick up and leave.

Toronto’s Sidewalk Labs isn’t Google Fiber — but they’re both owned by Google’s parent company, Alphabet.

If Louisville could give Toronto a piece of advice about welcoming a Google infrastructure project to town, what do you think that advice would be?

The biggest lesson from this is that one day they can be next to you at the press conference saying what a great city you are and how happy they are to … provide new service in your market, and then the next day, with almost no notice, they can say, “You know what? This doesn’t make sense for us anymore. And by the way, see ya. Thanks for having us. Sorry it didn’t work out.”

Google’s promises to Toronto

Getting back to Katharine Schwab’s June 24, 2019 Fast Company article,

The factory is also key to another of Sidewalk’s promises: Jobs. According to Sidewalk, the factory itself would create 2,500 jobs [emphasis mine] along the entire supply chain over a 20-year period. But even if the Canadian government approves Sidewalk’s plan and commits to building out the entire waterfront district to take advantage of the mass timber factory’s economies of scale, there are other regulatory hurdles to overcome. Right now, the building code in Toronto doesn’t allow for timber buildings over six stories tall. All of Sidewalk’s proposed buildings are over six stories, and many of them go up to 30 stories. Doctoroff said he was optimistic that the company will be able to get regulations changed if the city decides to adopt the plan. There are several examples of timber buildings that are already under construction, with a planned skyscraper in Japan that will be 70 stories.

Sidewalk’s proposal is the result of 18 months of planning, which involved getting feedback from community members and prototyping elements like a building raincoat that the company hopes to include in the final development. It has come under fire from privacy advocates in particular, and the Canadian government is currently facing a lawsuit from a civil liberties group over its decision to allow a corporation to propose public privacy governance standards.

Now that the company has released the plan, it will be up to the Canadian government to decide whether to move forward. And the mass timber factory, in particular, will be dependent on the government adopting Sidewalk’s plan wholesale, far beyond the Quayside development—a reminder that Sidewalk is a corporation that’s here to make money, dangling investment dollars in front of the government to incentivize it to embrace Sidewalk as the developer for the entire area.

A few thoughts

Those folks in Louisville made a lot of accommodations for Google only to have the company abandon them. They will get some money in compensation, finally, but it doesn’t make up for the lost jobs and the national, if not international, loss of face.

I would think that should things go wrong, Google would do exactly the same thing to Toronto. As for the $80M promise, here’s exactly how it’s phrased in the June 24, 2019 Sidewalk Labs news release,

… Together with local partners, Sidewalk proposes to invest up to $80 million in a mass timber factory in Ontario to jumpstart this emerging industry.

So, Alphabet/Google/Sidewalk has proposed up to an $80M investment—with local partners. I wonder how much this factory is supposed to cost and what kinds of accommodations Alphabet/Google/Sidewalk will demand. Possibilities include policy changes, changes in municipal bylaws, and government money. In other words, Canadian taxpayers could end up footing part of the bill and/or local developers could be required to cover an outsized percentage of the costs for the factory as they jockey for the opportunity to develop part of Toronto’s waterfront.

Other than Louisville, what’s the company’s track record with regard to its partnerships with cities and municipalities? I haven’t found any success stories in my admittedly brief search. Unusually, the company doesn’t seem to be promoting any of its successful city partnerships.

Smart city

While my focus has been on the company’s failure with Louisville and the possible dangers inherent to Toronto in a partnership with this company, it shouldn’t be forgotten that all of this development is in the name of a ‘smart’ city and that means data-driven. My March 28, 2018 posting features some of the issues with the technology, 5G, that will be needed to make cities ‘smart’. There’s also my March 20, 2018 posting (scroll down about 30% of the way) which looks at ‘smart’ cities in Canada with a special emphasis on Vancouver.

You may want to check out David Skok’s February 15, 2019 Maclean’s article (Cracks in the Sidewalk) for a Torontonian’s perspective.

Should you wish to do some delving yourself, there’s the Sidewalk Labs website here and a June 24, 2019 article by Matt McFarland for CNN detailing some of the latest news about the backlash in Toronto concerning Sidewalk Labs.

A September 2019 update

Waterfront Toronto’s Digital Strategy Advisory Panel (DSAP) submitted a report to Google in August 2019 which was subsequently published as of September 10, 2019. To sum it up, the panel was not impressed with Google’s June 2019 draft master plan. From a September 11, 2019 news item on the Guardian (Note: Links have been removed),

A controversial smart city development in Canada has hit another roadblock after an oversight panel called key aspects of the proposal “irrelevant”, “unnecessary” and “frustratingly abstract” in a new report.

The project on Toronto’s waterfront, dubbed Quayside, is a partnership between the city and Google’s sister company Sidewalk Labs. It promises “raincoats” for buildings, autonomous vehicles and cutting-edge wood-frame towers, but has faced numerous criticisms in recent months.

A September 11, 2019 article by Ian Bick of Canadian Press published on the CBC (Canadian Broadcasting Corporation) website offers more detail,

Preliminary commentary from Waterfront Toronto’s digital strategy advisory panel (DSAP) released Tuesday said the plan from Google’s sister company Sidewalk is “frustratingly abstract” and that some of the innovations proposed were “irrelevant or unnecessary.”

“The document is somewhat unwieldy and repetitive, spreads discussions of topics across multiple volumes, and is overly focused on the ‘what’ rather than the ‘how,’ ” said the report on the panel’s comments.

Some on the 15-member panel, an arm’s-length body that gives expert advice to Waterfront Toronto, have also found the scope of the proposal to be unclear or “concerning.”

The report says that some members also felt the official Sidewalk plan did not appear to put the citizen at the centre of the design process for digital innovations, and raised issues with the way Sidewalk has proposed to manage data that is generated from the neighbourhood.

The panel’s early report is not official commentary from Waterfront Toronto, the multi-government body that is overseeing the Quayside development, but is meant to indicate areas that need improvement.

The panel, chaired by University of Ottawa law professor Michael Geist, includes executives, professors, and other experts on technology, privacy, and innovation.

Sidewalk Labs spokeswoman Keerthana Rang said the company appreciates the feedback and already intends to release more details in October on the digital innovations it hopes to implement at Quayside.

I haven’t been able to find the response to DSAP’s September 2019 critique but I did find this Toronto Sidewalk Labs report, Responsible Data Use Assessment Summary: Overview of Collab, dated October 16, 2019. Of course, there’s still another 10 days before October 2019 is past.

The latest ‘golden’ age for electronics

I don’t know the dates for the last ‘golden’ age of electronics but I can certainly understand why these Japanese researchers are excited about their work. In any event, I think the ‘golden age’ is more of a play on words. From a June 25, 2019 news item on Nanowerk (Note: A link has been removed),

One way that heat damages electronic equipment is it makes components expand at different rates, resulting in forces that cause micro-cracking and distortion. Plastic components and circuit boards are particularly prone to damage due to changes in volume during heating and cooling cycles. But if a material could be incorporated into the components that compensates for the expansion, the stresses would be reduced and their lifetime increased.

Everybody knows one material that behaves like this: liquid water expands when it freezes and ice contracts when it melts. But liquid water and electronics don’t mix well – instead, what’s needed is a solid with “negative thermal expansion” (NTE).

Although such materials have been known since the 1960s, a number of challenges had to be overcome before the concept would be broadly useful and commercially viable. In terms of both materials and function, these efforts have only had limited success.

The experimental materials had been produced under specialized laboratory conditions using expensive equipment; and even then, the temperature and pressure ranges in which they would exhibit NTE were well outside normal everyday conditions.

Moreover, the amount they expanded and contracted depended on the direction, which induced internal stresses that changed their structure, meaning that the NTE property would not last longer than a few heating and cooling cycles.

A research team led by Koshi Takenaka of Nagoya University has succeeded in overcoming these materials-engineering challenges (Applied Physics Letters, “Valence fluctuations and giant isotropic negative thermal expansion in Sm1–xRxS (R = Y, La, Ce, Pr, Nd)”).

A June 22, 2019 Nagoya University press release (also on EurekAlert but published on June 25, 2019), which originated the news item, provides more technical detail,

Inspired by the series of work by Noriaki Sato, also of Nagoya University – whose discovery last year of superconductivity in quasicrystals was considered one of the top ten physics discoveries of the year by Physics World magazine – Professor Takenaka took the rare earth element samarium and its sulfide, samarium monosulfide (SmS), which is known to change phase from the “black phase” to the smaller-volume “golden phase”. The problem was to tune the range of temperatures at which the phase transition occurs. The team’s solution was to replace a small proportion of samarium atoms with another rare earth element, giving Sm1-xRxS, where “R” is any one of the rare earth elements cerium (Ce), neodymium (Nd), praseodymium (Pr) or yttrium (Y). The fraction x the team used was typically 0.2, except for yttrium. These materials showed “giant negative thermal expansion” of up to 8% at ordinary room pressure and a useful range of temperatures (around 150 degrees) including at room temperature and above … . Cerium is the star candidate here because it is relatively cheap.

The nature of the phase transition is such that the materials can be powdered into very small crystal sizes around a micron on a side without losing their negative expansion property. This broadens the industrial applications, particularly within electronics.

While the Nagoya University group’s engineering achievement is impressive, how the negative expansion works is fascinating from a fundamental physics viewpoint. During the black-golden transition, the crystal structure stays the same but the atoms get closer together: the unit cell size becomes smaller because (as is very likely but perhaps not yet 100% certain) the electron structure of the samarium atoms changes and makes them smaller – a process of intra-atomic charge transfer called a “valence transition” or “valence fluctuation” within the samarium atoms … . “My impression,” says Professor Takenaka, “is that the correlation between the lattice volume and the electron structure of samarium is experimentally verified for this class of sulfides.”

More specifically, in the black (lower temperature) phase, the electron configuration of the samarium atoms is (4f)6, meaning that in their outermost shell they have 6 electrons in the f orbitals (with s, p and d orbitals filled); while in the golden phase the electronic configuration is (4f)5(5d)1 – an electron has moved out of a 4f orbital into a 5d orbital. Although a “higher” shell is starting to be occupied, it turns out – through a quirk of the Pauli Exclusion Principle – that the second case gives a smaller atom size, leading to a smaller crystal size and negative expansion.

But this is only part of the fundamental picture. In the black phase, samarium sulfide and its doped offshoots are insulators – they do not conduct electricity; while in the golden phase they turn into conductors (i.e. metals). This suggests that during the black-golden phase transition the band structure of the whole crystal is influencing the valence transition within the samarium atoms. Although nobody has done the theoretical calculations for the doped samarium sulfides made by Professor Takenaka’s group, a previous theoretical study has indicated that when electrons leave the samarium atoms’ f orbital, they leave behind a positively charged “hole” which itself interacts repulsively with holes in the crystal’s conduction band, affecting their exchange interaction. This becomes a cooperative effect that then drives the valence transition in the samarium atoms. The exact mechanism, though, is not well understood.

Nevertheless, the Nagoya University-led group’s achievement is one of engineering, not pure physics. “What is important for many engineers is the ability to use the material to reduce device failure due to thermal expansion,” explains Professor Takenaka. “In short, in a certain temperature range – the temperature range in which the intended device operates, typically an interval of dozens of degrees or more – the volume needs to gradually decrease with a rise in temperature and increase as the temperature falls. Of course, I also know that volume expansion on cooling during a phase transition [like water freezing] is a common case for many materials. However, if the volume changes in a very narrow temperature range, there is no engineering value. The present achievement is the result of material engineering, not pure physics.”
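A quick back-of-the-envelope calculation of my own (not from the press release): for an isotropic material, the reported 8% volume contraction works out to roughly a 2.7% linear contraction, since (1 + ΔL/L)³ = 1 + ΔV/V,

```python
# Back-of-the-envelope: relate volume strain to linear strain for an
# isotropic solid.  (My own arithmetic; only the 8% figure comes from
# the press release.)
# (1 + dL/L)**3 = 1 + dV/V  =>  dL/L = (1 + dV/V)**(1/3) - 1

dV_over_V = -0.08  # the reported "giant" 8% volume contraction

dL_over_L = (1 + dV_over_V) ** (1 / 3) - 1
print(f"Equivalent linear strain: {dL_over_L:.4f}")  # about -0.0274 (~2.7%)
```

That is an enormous dimensional change by the standards of ordinary solids, which typically expand or contract by fractions of a percent over dozens of degrees.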

Perhaps it even heralds a new “golden” age for electronics.

I worked for a data communications company that produced hardware and network management software. From a hardware perspective, heat was an enemy which distorted your circuit boards and cost you significant money not only for replacements but also when you included fans to keep the equipment cool (or as cool as possible).
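For what it’s worth, here is a hypothetical sketch (my own, with made-up coefficients, not values from the paper) of why a negative-expansion filler appeals to a hardware engineer: blended into an ordinary material, a modest volume fraction can cancel the net expansion, at least under a simple rule-of-mixtures approximation,

```python
# Hypothetical sketch: volume fraction f of a negative-thermal-expansion
# (NTE) filler needed so a composite shows roughly zero net expansion,
# using a simple rule of mixtures:
#   alpha_composite = f * alpha_filler + (1 - f) * alpha_host
# The coefficients below are made up for illustration; they are not
# measured values from the Nagoya University work.

alpha_host = 50e-6      # per kelvin, e.g. a typical polymer (assumed)
alpha_filler = -150e-6  # per kelvin, a strongly NTE filler (assumed)

f = alpha_host / (alpha_host - alpha_filler)  # solve alpha_composite = 0
print(f"Filler volume fraction for zero net expansion: {f:.2f}")  # 0.25
```

Real composites are messier than this linear mixing rule suggests, but it shows why a filler with a large, isotropic, durable negative coefficient is the engineering prize.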

Enough with the reminiscences, here’s a link to and a citation for the paper,

Valence fluctuations and giant isotropic negative thermal expansion in Sm1–xRxS (R = Y, La, Ce, Pr, Nd) by D. Asai, Y. Mizuno, H. Hasegawa, Y. Yokoyama, Y. Okamoto, N. Katayama, H. S. Suzuki, Y. Imanaka, and K. Takenaka. Appl. Phys. Lett. 114, 141902 (2019); https://doi.org/10.1063/1.5090546. Published online: 12 April 2019

This paper is behind a paywall.

Ethics of germline editing special CRISPR journal issue

Caption: The CRISPR Journal delivers groundbreaking multidisciplinary research, advances, and commentary on CRISPR, the extraordinary technology that gives scientists the power to cure disease and sculpt evolution. Credit: Mary Ann Liebert, Inc., publishers

The CRISPR Journal’s publisher, Mary Ann Liebert, Inc., released two notices about their special issue on ethics. I found this October 10, 2019 media alert on EurekAlert a little more informative than the other one,

Highlights from this Issue:

1. Human Germline Genome Editing: An Assessment
In the opening Perspective of the special issue on The Ethics of Human Genome Editing, Stanford Law professor Henry Greely argues that germline editing is not inherently bad or unethical, but the technology is unlikely to be particularly useful, at least in the near future. Greely takes issue with the notion that the human genome is “the heritage of humanity” – the equivalent of The Ark of the Covenant that “cannot be allowed to fall into the wrong hands.” He contrasts germline editing with the practical applications of preimplantation genetic testing and somatic gene therapy. Exceptions for germline editing might be found in the cases of rare couples where both partners have the same recessive disorder or one is homozygous for a dominant disease.

2. Pick Six: Democratic Governance of Germline Editing
Two international commissions, organized by the World Health Organization, the U.S. National Academies, and the Royal Society, have been launched to provide recommendations for the governance of human germline editing, prompted by the actions of He Jiankui and the 2018 CRISPR babies reports. In this Perspective, Jasanoff, Hurlbut, and Saha [Sheila Jasanoff, Harvard University {Cambridge, MA}, J. Benjamin Hurlbut, Arizona State University {Tempe, AZ}, and Krishanu Saha, University of Wisconsin-Madison] argue that such an approach is “premature and problematic.” Global democratic governance “demands a new mechanism for active, sustained reflection by scientists” in partnership with scholars from other disciplines and the public. The authors present six recommendations to promote democratic governance.

3. Just Say No to a Moratorium
In March 2019, Eric Lander, Françoise Baylis [emphasis mine], and colleagues issued a call for a temporary global moratorium on heritable genome editing. In this Perspective, Kerry Macintosh, author of Enhanced Beings, offers three reasons she opposes the imposition of a moratorium: the danger of a temporary ban becoming permanent; a disincentive to support appropriate research to make the technology safer and more effective; and the potential stigmatization of children born with edited genomes. Nations should regulate germline editing for safety and efficacy only, Macintosh says, without distinguishing between therapeutic applications and enhancement.

4. Who Speaks for Future Children?
Law professor Bartha Knoppers and Erika Kleiderman write that the recent calls for a moratorium on germline editing “may create an illusion of control over rogue science and stifle the necessary international debate surrounding an ethically responsible translational path forward.” Focusing efforts on enforcing current laws and fostering public dialogue is a better route, the authors suggest.

5. The Daunting Economics of Therapeutic Genome Editing
Ten years after the first gene editing clinical trial got underway, gene therapy is experiencing a renaissance. Recent approvals for some gene therapy drugs have been accompanied by exorbitant price tags, in one case exceeding $2 million. Looking ahead, Wilson [Ross C. Wilson, PhD, Innovative Genomics Institute, University of California, Berkeley] and Carroll [Dana Carroll, PhD, Department of Biochemistry, University of Utah School of Medicine] ask whether CRISPR can make good on its promise as “a great leveler” and “democratizing force in biomedicine”. They write: “Therapeutic genome editing must avoid several pitfalls that could substantially limit access to its transformative potential, especially in the developing world.” The costs of drug manufacture, testing, and delivery will have to come down to make the benefits of genome editing available to those most in need.

6. The Demand for Germline Editing: View from a Fertility Clinic
A common argument against human germline editing is that there is already a safe, proven technology to help couples have a healthy biological child — preimplantation genetic testing (PGT). In this Perspective, Manuel Viotti and colleagues from a leading IVF clinic in California strive to calculate the likely occurrence of cases where germline editing might offer couples opportunities to have a healthy biological child where PGT would not be applicable. The numbers are very small indeed.

7. Brave New World in the CRISPR Debate
In any discussion or warnings of designer babies and future dystopian societies based on genetic or reproductive technologies, exhibit A is invariably Aldous Huxley’s iconic 1932 novel, Brave New World. Indeed, David Baltimore referred to the novel at both of the international genome editing summits. In this Perspective, Derek So dissects the misuse of Brave New World, particularly regarding genome editing technology, enhancement, and eugenics. So even offers a few less celebrated, but potentially more appropriate, examples from the sci-fi literature.

I highlighted Françoise Baylis’ name as she has been mentioned on this blog a few times and, if you’re curious, there’s an opportunity to hear her speak in Toronto (Ontario) tonight, Thursday, October 17, 2019. You can find out where and exactly when in my October 14, 2019 posting, under the first subheading, ‘… on the future of life forms …’.

The October 15, 2019 news release on EurekAlert offers much the same information but also includes this link to the journal issue where you can read it for free,

The Ethics of Human Genome Editing is the subject of intensive discussion and debate in a special issue of The CRISPR Journal, a new peer-reviewed journal from Mary Ann Liebert, Inc., publishers. Click here to read the full-text issue free on The CRISPR Journal.

The issue contains 11 articles: nine Perspectives and two research articles on issues including human rights for the unborn, the economics of gene editing therapies, the pros and cons of a moratorium on genome editing, the real-world cases where germline editing could provide medical utility, and (on a lighter note) the use and misuse of “Brave New World.”

It looks like a very interesting and comprehensive lineup of topics related to ethics and editing the human germline. FYI, I covered the story about the CRISPR twins, Lulu and Nana, here in a November 28, 2018 posting, about the time the news first broke.

Creating nanofibres from your old clothing (cotton waste)

Researchers at the University of British Columbia (UBC; Canada) have discovered a way to turn cotton waste into a potentially higher value product. An October 15, 2019 UBC news release makes the announcement (Note: Links have been removed),

In the materials engineering labs at UBC, surrounded by Bunsen burners, microscopes and spinning machines, professor Frank Ko and research scientist Addie Bahi have developed a simple process for converting waste cotton into much higher-value nanofibres.

These fibres are the building blocks of advanced products like surgical implants, antibacterial wound dressings and fuel cell batteries.

“More than 28 million tonnes of cotton are produced worldwide each year, but very little of that is actually recycled after its useful life,” explains Bahi, a materials engineer who previously worked on recycling waste in the United Kingdom. “We wanted to find a viable way to break down waste cotton and convert it into a value-added product. This is one of the first successful attempts to make nanofibres from fabric scraps – previous research has focused on using a ready cellulose base to make nanofibres.”

Compared to conventional fibres, nanofibres are extremely thin (a nanofibre can be 500 times smaller than the width of a human hair) and so have a high surface-to-volume ratio. This makes them ideal for use in applications ranging from sensors and filtration (think gas sensors and water filters) to protective clothing, tissue engineering and energy storage.
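A rough back-of-envelope check makes the surface-to-volume claim concrete. This is my own hypothetical sketch, not from the news release: I assume a human hair is roughly 100 micrometres across (so a fibre “500 times smaller” is about 200 nanometres in diameter) and model each fibre as a long cylinder, for which surface area divided by volume is approximately 4 / diameter.

```python
# Back-of-envelope sketch (my assumed numbers, not from the UBC release):
# a human hair is taken to be ~100 micrometres wide, so a fibre
# "500 times smaller" is ~200 nanometres in diameter.

def surface_to_volume_ratio(diameter_m: float) -> float:
    """Approximate SA/V for a long cylinder, ignoring the end caps.

    For a cylinder of diameter d and length L:
      surface area ~ pi * d * L, volume = pi * (d/2)**2 * L,
    so SA/V ~ 4 / d.
    """
    return 4.0 / diameter_m

hair_width_fibre = surface_to_volume_ratio(100e-6)  # ~100 µm diameter
nanofibre = surface_to_volume_ratio(200e-9)         # ~200 nm diameter

# The ratio of the two SA/V values equals the ratio of the diameters,
# so a 500x thinner fibre has a 500x higher surface-to-volume ratio.
print(f"nanofibre SA/V is {nanofibre / hair_width_fibre:.0f}x higher")
```

In other words, shrinking the diameter by a factor of 500 raises the surface-to-volume ratio by the same factor, which is why such thin fibres suit sensing and filtration, where reactions happen at surfaces.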
Ko and Bahi developed their process in collaboration with ecologyst, a B.C.-based company that manufactures sustainable outdoor apparel, and with the participation of materials engineering student Kosuke Ayama.

They chopped the waste cotton fabric supplied by ecologyst into tiny strips and soaked it in a chemical bath to remove all additives and artificial dyes. The resulting gossamer-thin material was then fed into an electrospinning machine to produce very fine, smooth nanofibres. These can be further processed into various finished products.

“The process itself is relatively simple, but what we’re thrilled about is that we’ve proved you can extract a high-value product from something that would normally go to landfill, where it will eventually be incinerated. It’s estimated that only a fraction of cotton clothing is recycled. The more product we can re-process, the better it will be for the environment,” said lead researcher Frank Ko, a Canada Research Chair in advanced fibrous materials in UBC’s faculty of applied science.

The process Bahi and Ko developed is lab-scale, supported by a grant from the Natural Sciences and Engineering Research Council of Canada. In the future, the pair hope to refine and scale up their process and eventually share their methods with industry partners.

“We started with cotton because it’s one of the most popular fabrics for clothing,” said Bahi. “Once we’re able to develop the process further, we can look at converting other textiles into value-added materials. Achieving zero waste [emphasis mine] for the fashion and textile industries is extremely challenging – this is simply one of the many first steps towards that goal.”

The researchers have a 30-second video illustrating the need to recycle cotton materials,

You can find the researchers’ industrial partner, ecologyst here.

At the mention of ‘zero waste’, I was reminded of an upcoming conference, Oct. 30 – 31, 2019, in Vancouver (Canada), where UBC is located. It’s called the 2019 Zero Waste Conference and, oddly, there’s no mention of Ko, Bahi, Ayama, or ecologyst on the speakers’ list. Maybe I was looking at the wrong list or the organizers didn’t have enough lead time to add more speakers.

One final comment, I wish there was a little more science (i.e., more technical details) in the news release.

Graphene from gum trees

Caption: Eucalyptus bark extract has never been used to synthesise graphene sheets before. Courtesy: RMIT University

It’s been quite educational reading a June 24, 2019 news item on Nanowerk about deriving graphene from Eucalyptus bark (Note: Links have been removed),

Graphene is the thinnest and strongest material known to humans. It’s also flexible, transparent and conducts heat and electricity 10 times better than copper, making it ideal for anything from flexible nanoelectronics to better fuel cells.

The new approach by researchers from RMIT University (Australia) and the National Institute of Technology, Warangal (India), uses Eucalyptus bark extract and is cheaper and more sustainable than current synthesis methods (ACS Sustainable Chemistry & Engineering, “Novel and Highly Efficient Strategy for the Green Synthesis of Soluble Graphene by Aqueous Polyphenol Extracts of Eucalyptus Bark and Its Applications in High-Performance Supercapacitors”).

A June 24, 2019 RMIT University news release (also on EurekAlert), which originated the news item, provides a little more detail,

RMIT lead researcher, Distinguished Professor Suresh Bhargava, said the new method could reduce the cost of production from $USD100 per gram to a staggering $USD0.5 per gram.

“Eucalyptus bark extract has never been used to synthesise graphene sheets before and we are thrilled to find that it not only works, it’s in fact a superior method, both in terms of safety and overall cost,” said Bhargava.

“Our approach could bring down the cost of making graphene from around $USD100 per gram to just 50 cents, increasing its availability to industries globally and enabling the development of an array of vital new technologies.”

Graphene’s distinctive features make it a transformative material that could be used in the development of flexible electronics, more powerful computer chips and better solar panels, water filters and bio-sensors.

Professor Vishnu Shanker from the National Institute of Technology, Warangal, said the ‘green’ chemistry avoided the use of toxic reagents, potentially opening the door to the application of graphene not only for electronic devices but also biocompatible materials.

“Working collaboratively with RMIT’s Centre for Advanced Materials and Industrial Chemistry we’re harnessing the power of collective intelligence to make these discoveries,” he said.

A novel approach to graphene synthesis:

Chemical reduction is the most common method for synthesising graphene oxide as it allows for the production of graphene at a low cost in bulk quantities.

This method however relies on reducing agents that are dangerous to both people and the environment.

When tested in the application of a supercapacitor, the ‘green’ graphene produced using this method matched the quality and performance characteristics of traditionally-produced graphene without the toxic reagents.

Bhargava said the abundance of eucalyptus trees in Australia made it a cheap and accessible resource for producing graphene locally.

“Graphene is a remarkable material with great potential in many applications due to its chemical and physical properties and there’s a growing demand for economical and environmentally friendly large-scale production,” he said.

Here’s a link to and a citation for the paper,

Novel and Highly Efficient Strategy for the Green Synthesis of Soluble Graphene by Aqueous Polyphenol Extracts of Eucalyptus Bark and Its Applications in High-Performance Supercapacitors by Saikumar Manchala, V. S. R. K. Tandava, Deshetti Jampaiah, Suresh K. Bhargava, Vishnu Shanker. ACS Sustainable Chem. Eng. 2019 DOI: https://doi.org/10.1021/acssuschemeng.9b01506 Publication date: June 13, 2019

Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Low-cost carbon sequestration and eco-friendly manufacturing for chemicals with nanobio hybrid organisms

Years ago I was asked about carbon sequestration and nanotechnology and could not come up with any examples. At last I have something for the next time the question is asked. From a June 11, 2019 news item on ScienceDaily,

University of Colorado Boulder researchers have developed nanobio-hybrid organisms capable of using airborne carbon dioxide and nitrogen to produce a variety of plastics and fuels, a promising first step toward low-cost carbon sequestration and eco-friendly manufacturing for chemicals.

By using light-activated quantum dots to fire particular enzymes within microbial cells, the researchers were able to create “living factories” that eat harmful CO2 and convert it into useful products such as biodegradable plastic, gasoline, ammonia and biodiesel.

A June 11, 2019 University of Colorado at Boulder news release (also on EurekAlert) by Trent Knoss, which originated the news item, provides a deeper dive into the research,

“The innovation is a testament to the power of biochemical processes,” said Prashant Nagpal, lead author of the research and an assistant professor in CU Boulder’s Department of Chemical and Biological Engineering. “We’re looking at a technique that could improve CO2 capture to combat climate change and one day even potentially replace carbon-intensive manufacturing for plastics and fuels.”

The project began in 2013, when Nagpal and his colleagues began exploring the broad potential of nanoscopic quantum dots, which are tiny semiconductors similar to those used in television sets. Quantum dots can be injected into cells passively and are designed to attach and self-assemble to desired enzymes and then activate these enzymes on command using specific wavelengths of light.

Nagpal wanted to see if quantum dots could act as a spark plug to fire particular enzymes within microbial cells that have the means to convert airborne CO2 and nitrogen, but do not do so naturally due to a lack of photosynthesis.

By diffusing the specially-tailored dots into the cells of common microbial species found in soil, Nagpal and his colleagues bridged the gap. Now, exposure to even small amounts of indirect sunlight would activate the microbes’ CO2 appetite, without a need for any source of energy or food to carry out the energy-intensive biochemical conversions.

“Each cell is making millions of these chemicals and we showed they could exceed their natural yield by close to 200 percent,” Nagpal said.

The microbes, which lie dormant in water, release their resulting product to the surface, where it can be skimmed off and harvested for manufacturing. Different combinations of dots and light produce different products: Green wavelengths cause the bacteria to consume nitrogen and produce ammonia while redder wavelengths make the microbes feast on CO2 to produce plastic instead.

The process also shows promising signs of being able to operate at scale. The study found that even when the microbial factories were activated consistently for hours at a time, they showed few signs of exhaustion or depletion, indicating that the cells can regenerate and thus limit the need for rotation.

“We were very surprised that it worked as elegantly as it did,” Nagpal said. “We’re just getting started with the synthetic applications.”

The ideal futuristic scenario, Nagpal said, would be to have single-family homes and businesses pipe their CO2 emissions directly to a nearby holding pond, where microbes would convert them to a bioplastic. The owners would be able to sell the resulting product for a small profit while essentially offsetting their own carbon footprint.

“Even if the margins are low and it can’t compete with petrochemicals on a pure cost basis, there is still societal benefit to doing this,” Nagpal said. “If we could convert even a small fraction of local ditch ponds, it would have a sizeable impact on the carbon output of towns. It wouldn’t be asking much for people to implement. Many already make beer at home, for example, and this is no more complicated.”

The focus now, he said, will shift to optimizing the conversion process and bringing on new undergraduate students. Nagpal is looking to convert the project into an undergraduate lab experiment in the fall semester, funded by a CU Boulder Engineering Excellence Fund grant. Nagpal credits his current students with sticking with the project over the course of many years.

“It has been a long journey and their work has been invaluable,” he said. “I think these results show that it was worth it.”

Here’s a link to and a citation for the paper,

Nanorg Microbial Factories: Light-Driven Renewable Biochemical Synthesis Using Quantum Dot-Bacteria Nanobiohybrids by Yuchen Ding, John R. Bertram, Carrie Eckert, Rajesh Reddy Bommareddy, Rajan Patel, Alex Conradie, Samantha Bryan, Prashant Nagpal. J. Am. Chem. Soc. 2019 DOI: https://doi.org/10.1021/jacs.9b02549 Publication date: June 7, 2019
Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Nanocellulose sensors: 3D printed and biocompatible

I do like to keep up with nanocellulose doings, especially when there’s some Canadian involvement, and an October 8, 2019 news item on Nanowerk alerted me to a newish application for the product,

Physiological parameters in our blood can be determined without painful punctures. Empa researchers are currently working with a Canadian team to develop flexible, biocompatible nanocellulose sensors that can be attached to the skin. The 3D-printed analytic chips made of renewable raw materials will even be biodegradable in future.

The idea of measuring parameters that are relevant for our health via the skin has already taken hold in medical diagnostics. Diabetics, for example, can painlessly determine their blood sugar level with a sensor instead of having to prick their fingers.

An October 8, 2019 Empa (Swiss Federal Laboratories for Materials Science and Technology) press release, which originated the news item, provides more detail,

A transparent foil made of wood

Nanocellulose is an inexpensive, renewable raw material, which can be obtained in the form of crystals and fibers, for example from wood. However, the original appearance of a tree no longer has anything to do with the gelatinous substance, which can consist of cellulose nanocrystals and cellulose nanofibers. Other sources of the material are bacteria, algae or residues from agricultural production. Thus, nanocellulose is not only relatively easy and sustainable to obtain. Its mechanical properties also make the “super pudding” an interesting product. For instance, new composite materials based on nanocellulose can be developed that could be used as surface coatings, transparent packaging films or even to produce everyday objects like beverage bottles.

Researchers at Empa’s Cellulose & Wood Materials lab and Woo Soo Kim from Simon Fraser University [SFU] in Burnaby, Canada, are also focusing on another feature of nanocellulose: biocompatibility. Since the material is obtained from natural resources, it is particularly suitable for biomedical research.

With the aim of producing biocompatible sensors that can measure important metabolic values, the researchers used nanocellulose as an “ink” in 3D printing processes. To make the sensors electrically conductive, the ink was mixed with silver nanowires. The researchers determined the exact ratio of nanocellulose and silver threads so that a three-dimensional network could form.

Just like spaghetti – only a wee bit smaller

It turned out that cellulose nanofibers are better suited than cellulose nanocrystals to produce a cross-linked matrix with the tiny silver wires. “Cellulose nanofibers are flexible similar to cooked spaghetti, but with a diameter of only about 20 nanometers and a length of just a few micrometers,” explains Empa researcher Gilberto Siqueira.

The team finally succeeded in developing sensors that measure medically relevant metabolic parameters such as the concentration of calcium, potassium and ammonium ions. The electrochemical skin sensor sends its results wirelessly to a computer for further data processing.

While the tiny biochemistry lab on the skin – which is only half a millimeter thin – is capable of determining ion concentrations specifically and reliably, the researchers are already working on an updated version. “In the future, we want to replace the silver [nano] particles with another conductive material, for example on the basis of carbon compounds,” Siqueira explains. This would make the medical nanocellulose sensor not only biocompatible, but also completely biodegradable.

I like the images from Empa better than the ones from SFU,

Using a 3D printer, the nanocellulose “ink” is applied to a carrier plate. Silver particles provide the electrical conductivity of the material. Image: Empa
Empa researcher Gilberto Siqueira demonstrates the newly printed nanocellulose circuit. After a subsequent drying, the material can be further processed. Image: Empa

SFU produced a news release about this work back in February 2019. Again, I prefer what the Swiss have done because they’re explaining/communicating the science, as well as communicating the benefits. From a February 13, 2019 SFU news release (Note: Links have been removed),

Simon Fraser University and Swiss researchers are developing an eco-friendly, 3D printable solution for producing wireless Internet-of-Things (IoT) sensors that can be used and disposed of without contaminating the environment. Their research has been published as the cover story in the February issue of the journal Advanced Electronic Materials.

SFU professor Woo Soo Kim is leading the research team’s discovery, which uses a wood-derived cellulose material to replace the plastics and polymeric materials currently used in electronics.

Additionally, 3D printing can give flexibility to add or embed functions onto 3D shapes or textiles, creating greater functionality.

“Our eco-friendly, 3D-printed cellulose sensors can wirelessly transmit data during their life, and then can be disposed without concern of environmental contamination,” says Kim, a professor in the School of Mechatronic Systems Engineering. The SFU research is being carried out at PowerTech Labs in Surrey, which houses several state-of-the-art 3D printers used to advance the research.

“This development will help to advance green electronics. For example, the waste from printed circuit boards is a hazardous source of contamination to the environment. If we are able to change the plastics in PCB to cellulose composite materials, recycling of metal components on the board could be collected in a much easier way.”

Kim’s research program spans two international collaborative projects, including the latest focusing on the eco-friendly cellulose material-based chemical sensors with collaborators from the Swiss Federal Laboratories for Materials Science.

He is also collaborating with a team of South Korean researchers from the Daegu Gyeongbuk Institute of Science and Technology’s (DGIST)’s department of Robotics Engineering, and PROTEM Co Inc, a technology-based company, for the development of printable conductive ink materials.

In this second project, researchers have developed a new breakthrough in the embossing process technology, one that can freely imprint fine circuit patterns on flexible polymer substrate, a necessary component of electronic products.

Embossing technology is applied for the mass imprinting of precise patterns at a low unit cost. However, Kim says it can only imprint circuit patterns that are imprinted beforehand on the pattern stamp, and the entire, costly stamp must be changed to put in different patterns.

The team succeeded in developing a precise location control system that can imprint patterns directly resulting in a new process technology. The result will have widespread implications for use in semiconductor processes, wearable devices and the display industry.

This paper was made available online back in December 2018 and then published in print in February 2019. As to why there’d be such large gaps between the paper’s publication dates and the two institutions’ news/press releases, it’s a mystery to me. In any event, here’s a link to and a citation for the paper,

3D Printed Disposable Wireless Ion Sensors with Biocompatible Cellulose Composites by Taeil Kim, Chao Bao, Michael Hausmann, Gilberto Siqueira, Tanja Zimmermann, Woo Soo Kim. Advanced Electronic Materials Volume 5, Issue 2, February 2019, 1970007 DOI: https://doi.org/10.1002/aelm.201970007 First published online: December 19, 2018; in print: February 8, 2019 (Adv. Electron. Mater. 2/2019)

This paper is behind a paywall.

Gold nanoparticle loaded with CRISPR used to edit genes

CRISPR (clustered regularly interspaced short palindromic repeats) gene editing, whichever nuclease is used (Cas9, Cas12a, etc.), is usually paired with a virus for delivery, but this time scientists are using a gold nanoparticle. From a May 27, 2019 news item on Nanowerk (Note: Links have been removed),

Scientists at Fred Hutchinson Cancer Research Center took a step toward making gene therapy more practical by simplifying the way gene-editing instructions are delivered to cells. Using a gold nanoparticle instead of an inactivated virus, they safely delivered gene-editing tools in lab models of HIV and inherited blood disorders, as reported in Nature Materials (“Targeted homology-directed repair in blood stem and progenitor cells with CRISPR nanoformulations”).

A May 27, 2019 Fred Hutchinson Cancer Research Center news release (also on EurekAlert) by Jake Siegel, which originated the news item, provides more detail,

It’s the first time that a gold nanoparticle loaded with CRISPR has been used to edit genes in a rare but powerful subset of blood stem cells, the source of all blood cells. The CRISPR-carrying gold nanoparticle led to successful gene editing in blood stem cells with no toxic effects.

“As gene therapies make their way through clinical trials and become available to patients, we need a more practical approach,” said senior author Dr. Jennifer Adair, an assistant member of the Clinical Research Division at Fred Hutch, adding that current methods of performing gene therapy are inaccessible to millions of people around the world. “I wanted to find something simpler, something that would passively deliver gene editing to blood stem cells.”

While CRISPR has made it faster and easier to precisely deliver genetic modifications to the genome, it still has challenges. Getting cells to accept CRISPR gene-editing tools involves a small electric shock that can damage and even kill the cells. And if precise gene edits are required, then additional molecules must be engineered to deliver them – adding cost and time.

Gold nanoparticles are a promising alternative because the surface of these tiny spheres (around 1 billionth the size of a grain of table salt) allows other molecules to easily stick to them and stay adhered.

“We engineered the gold nanoparticles to quickly cross the cell membrane, dodge cell organelles that seek to destroy them and go right to the cell nucleus to edit genes,” said Dr. Reza Shahbazi, a Fred Hutch postdoctoral researcher who has worked with gold nanoparticles for drug and gene delivery for seven years.

Shahbazi made the gold particles from laboratory-grade gold that is purified and comes as a liquid in a small lab bottle. He mixed the purified gold into a solution that causes the individual gold ions to form tiny particles, which the researchers then measured for size.

They found that a particular size – 19 nanometers wide – was the best for being big and sticky enough to add gene-editing materials to the surface of the particles, while still being small enough for cells to absorb them.

Packed onto the gold particles, the Fred Hutch team added these gene-editing components (diagram available [see below]):

A type of molecular guide called crRNA acts as a genetic GPS to show the CRISPR complex where in the genome to make the cut.

CRISPR nuclease protein, often called “genetic scissors,” makes the cut in the DNA. The CRISPR nuclease protein most often used is Cas9. But the Fred Hutch researchers also studied Cas12a (formerly called Cpf1) because Cas12a makes a staggered cut in DNA. The researchers hoped this would allow the cells to more efficiently repair the cut and, in so doing, embed the new genetic instructions into the cell. Another advantage of Cas12a over Cas9 is that it only requires one molecular guide, which is important because of space constraints on the nanoparticles. Cas9 requires two molecular guides.

Instructions for what genetic changes to make (“ssDNA”). The Fred Hutch team chose two inherited genetic changes that bestow protection from disease: CCR5, which protects against HIV, and gamma hemoglobin, which protects against blood disorders such as sickle cell disease and thalassemia.

A coating of polyethylenimine covers the surface of the particles to give them a more positive charge, which enables them to be absorbed into cells more readily. This is an improvement over electroporation, another method of getting cells to take up gene-editing tools, which involves lightly shocking the cells to get them to open and allow the genetic instructions to enter.

Then the researchers isolated blood stem cells with a protein marker on their surface called CD34. These CD34-positive cells contain the blood-making progenitor cells that give rise to the entire blood and immune system.

“These cells replenish blood in the body every day, making them a good candidate for one-time gene therapy because it will last a lifetime as the cells replace themselves,” Adair said.

Observing human blood stem cells in a lab dish, the researchers found that their fully loaded gold nanoparticles were taken up naturally by cells within six hours of being added, and within 24 to 48 hours they could see gene editing happening. They observed that the Cas12a CRISPR protein was better at delivering very precise genetic edits to the cells than the more commonly used Cas9 protein.

The gene-editing effect reached a peak eight weeks after the researchers injected the cells into mouse models; 22 weeks after injection the edited cells were still there. The Fred Hutch researchers also found edited cells in the bone marrow, spleen and thymus of the mouse models, a sign that the dividing blood cells in those organs could carry on the treatment without the mice having to be treated again.

“We believe we have a good candidate for two diseases — HIV and hemoglobinopathies — though we are also evaluating other disease targets where small genetic changes can have a big impact, as well as ways to make bigger genetic changes,” Adair said. “The next step is to increase how much gene editing happens in each cell, which is definitely doable. That will make it closer to being an effective therapy.”

In the study, the researchers report 10 to 20 percent of cells took on the gene edits, which is a promising start, but the researchers would like to aim for 50% or more of the cells being edited, which they believe will have a good chance of combatting these diseases.


Adair and Shahbazi are looking for commercial partners to develop the technology into therapies for people. They hope to begin clinical trials within a few years.

Here’s the diagram of a gold nanoparticle loaded with CRISPR,

Caption: Graphic of a fully loaded gold nanoparticle with CRISPR and other gene editing tools. Credit: Image courtesy of the Adair lab at Fred Hutch.

Here’s a link to and a citation for the paper,

Targeted homology-directed repair in blood stem and progenitor cells with CRISPR nanoformulations by Reza Shahbazi, Gabriella Sghia-Hughes, Jack L. Reid, Sara Kubek, Kevin G. Haworth, Olivier Humbert, Hans-Peter Kiem & Jennifer E. Adair. Nature Materials (2019) DOI: https://doi.org/10.1038/s41563-019-0385-5 Published online: May 27, 2019

This paper is behind a paywall.

Safe nanomaterial handling on a tiny budget

A June 3, 2019 news item on Nanowerk describes an inexpensive way to safely handle carbon nanotubes (CNTs) (Note: A link has been removed),

With a little practice, it doesn’t take much more than 10 minutes, a couple of bags and a big bucket to keep nanomaterials in their place.

The Rice University lab of chemist Andrew Barron works with bulk carbon nanotubes on a variety of projects. Years ago, members of the lab became concerned that nanotubes could escape into the air, and developed a cheap and clean method to keep them contained as they were transferred from large containers into jars for experimental use.

More recently Barron himself became concerned that too few labs around the world were employing best practices to handle nanomaterials. He decided to share what his Rice team had learned.

“There was a series of studies that said if you’re going to handle nanotubes, you really need to use safety protocols,” Barron said. “Then I saw a study that said many labs didn’t use any form of hood or containment system. In the U.S., it was really bad, and in Asia it was even worse. But there are a significant number of labs scaling up to use these materials at the kilogram scale without taking the proper precautions.”

The lab’s inexpensive method is detailed in an open-access paper in the Springer Nature journal SN Applied Sciences (“The safe handling of bulk low-density nanomaterials”).

Here’s a bag and a bucket,

Caption: A plastic bucket and a plastic bag contain a 5-gallon supply of carbon nanotubes in a lab at Rice University, the beginning of the process to safely transfer the nanotubes for experimental use. The Rice lab published its technique in SN Applied Sciences. Credit: Barron Research Group/Rice University

A June 3, 2019 Rice University news release (also on EurekAlert and received separately by email), which originated the news item, provides more detail,

In bulk form, carbon nanotubes are fluffy and disperse easily if disturbed. The Rice lab typically stores the tubes in 5-gallon plastic buckets, and simply opening the lid is enough to send them flying because of their low density.

Varun Shenoy Gangoli, a research scientist in Barron’s lab, and Pavan Raja, a scientist with Rice’s Nanotechnology-Enabled Water Treatment center, developed for their own use a method that involves protecting the worker and sequestering loose tubes when removing smaller amounts of the material for use in experiments.

Full details are available in the paper, but the precautions include making sure workers are properly attired with long pants, long sleeves, lab coats, full goggles and face masks, along with two pairs of gloves duct-taped to the lab coat sleeves. The improvised glove bag involves a 25-gallon trash bin with a plastic bag taped to the rim. The unopened storage container is placed inside, and then the bin is covered with another transparent trash bag, with small holes cut in the top for access.

After transferring the nanotubes, acetone wipes are used to clean the gloves and more acetone is sprayed inside the barrel so settling nanotubes would stick to the surfaces. These can be recovered and returned to the storage container.

Barron said it took lab members time to learn to use the protocol efficiently, “but now they can get their samples in 5 to 10 minutes.” He’s sure other labs can and will enhance the technique for their own circumstances. He noted that a poster on the proper handling of carbon nanotubes, presented at the Ninth Guadalupe Workshop, earned recognition and discussion among the world’s premier researchers in the field, underscoring the importance of the work for agencies in general.

“When we decided to write about this, we were originally just going to put it on the web and hope somebody would read it occasionally,” Barron said. “We couldn’t imagine who would publish it, but we heard that an editor at Springer Nature was really keen to have published articles like this.

“I think this is something people will use,” he said. “There’s nothing outrageous but it helps everybody, from high schools and colleges that are starting to use nanoparticles for experiments to small companies. That was the goal: Let’s provide a process that doesn’t cost thousands of dollars to install and allows you to transfer nanomaterials safely and on a large scale. Finally, publish said work in an open-access journal to maximize the reach across the globe.”

Here’s a link to and a citation for the paper,

The safe handling of bulk low-density nanomaterials by Varun Shenoy Gangoli, Pavan M. V. Raja, Gibran Liezer Esquenazi, Andrew R. Barron. SN Applied Sciences, June 2019, 1:644 DOI: https://doi.org/10.1007/s42452-019-0647-5 First online: May 25, 2019

This paper is open access.