Category Archives: energy

View Dynamic Glass—intelligent windows sold commercially

At last, commercially available ‘smart’ (that is, electrochromic) windows.

An April 17, 2018 article by Conor Shine for Dallas News describes a change at the Dallas Fort Worth (DFW) International Airport that has cooled things down,

At DFW International Airport, the coolest seats in the house can be found near Gate A28.

That’s where the airport, working with California-based technology company View, has replaced a bank of tarmac-facing windows with panes coated in microscopic layers of electrochromic ceramic that significantly reduce the amount of heat and glare coming into the terminal.

The technology, referred to as dynamic glass, uses an electrical current to change how much light is let in and has been shown to reduce surface temperatures on gate area seats and carpets by as much as 15 degrees compared to standard windows. All those heat savings add up, with View estimating its product can cut energy costs by as much as 20 percent when the technology is deployed widely in a building.

At DFW Airport, the energy bill runs about $18 million per year, putting the potential savings from dynamic glass into the hundreds of thousands, or even millions of dollars, annually.
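
For a rough sense of how those figures hang together, here is a back-of-the-envelope sketch. This is my own arithmetic; the cooling-share values are assumptions for illustration only, not numbers from the airport or from View,

# Rough check of the savings range quoted above; cooling_share values are illustrative guesses.
annual_energy_bill = 18_000_000        # DFW's reported annual energy bill, in US dollars
max_savings_fraction = 0.20            # View's "as much as 20 percent" building-wide estimate

print(f"Upper bound if applied to the whole bill: ${annual_energy_bill * max_savings_fraction:,.0f}")

for cooling_share in (0.10, 0.30, 0.50):   # assumed portion of the bill the glass could affect
    savings = annual_energy_bill * cooling_share * max_savings_fraction
    print(f"If {cooling_share:.0%} of the bill is affected: ~${savings:,.0f} per year")

That lands in the hundreds of thousands to a few million dollars per year, which is the range quoted in the article.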

Besides the money, it’s an appealing set of characteristics for DFW Airport, which is North America’s only carbon-neutral airport and regularly ranks among the top large airports for customer experience in the world.

After installing the dynamic glass near Gate A28 and a nearby Twisted Root restaurant in September at a cost of $49,000, the airport is now looking at ordering more for use throughout its terminals, although how many and at what cost hasn’t been finalized yet.

On a recent weekday morning, the impact of the dynamic glass was on full display. As sunlight beamed into Gate A25, passengers largely avoided the seats near the standard windows, favoring shadier spots a bit further into the terminal.

A few feet away, the bright natural light takes on a subtle blue hue and the temperature near the windows is noticeably cooler. There, passengers seemed to pay no mind to sitting in the sun, with window-adjacent seats filling up quickly.

As Jeff Platón, View’s vice president of marketing, notes in the video, there are considerable savings to be had when you cut down on air conditioning,

View’s April 17, 2018 news release (PDF) about a study of their technology in use at the airport provides more detail,

View®, the leader in dynamic glass, today announced the results of a study on the impact of in-terminal passenger experience and its correlation to higher revenues and reduced operational expenses. The study, conducted at Dallas Fort Worth International Airport (DFW), found that terminal windows fitted with View Dynamic Glass overwhelmingly improved passenger comfort over conventional glass, resulting in an 83 percent increase in passenger dwell time at a preferred gate seat and a 102 percent increase in concession spending. The research study was conducted by DFW Airport, View, Inc., and an independent aviation market research group.

It’s been a long time (I’ve been waiting about 10 years) but it seems that commercially available ‘smart’ glass is here—at the airport, anyway.

h/t April 20, 2018 news item on phys.org

When nanoparticles collide

The science of collisions at the nanoscale (although it looks more like kissing to me) could lead to some helpful discoveries, according to an April 5, 2018 news item on Nanowerk,

Helmets that do a better job of preventing concussions and other brain injuries. Earphones that protect people from damaging noises. Devices that convert “junk” energy from airport runway vibrations into usable power.

New research on the events that occur when tiny specks of matter called nanoparticles smash into each other could one day inform the development of such technologies.

Before getting to the news release proper, here’s a gif released by the university,

A digital reconstruction shows how individual atoms in two largely spherical nanoparticles react when the nanoparticles collide in a vacuum. In the reconstruction, the atoms turn blue when they are in contact with the opposing nanoparticle. Credit: Yoichi Takato

An April 4, 2018 University at Buffalo news release (also on EurekAlert) by Charlotte Hsu, which originated the news item, fills in some details,

Using supercomputers, scientists led by the University at Buffalo modeled what happens when two nanoparticles collide in a vacuum. The team ran simulations for nanoparticles with three different surface geometries: those that are largely circular (with smooth exteriors); those with crystal facets; and those that possess sharp edges.

“Our goal was to lay out the forces that control energy transport at the nanoscale,” says study co-author Surajit Sen, PhD, professor of physics in UB’s College of Arts and Sciences. “When you have a tiny particle that’s 10, 20 or 50 atoms across, does it still behave the same way as larger particles, or grains? That’s the guts of the question we asked.”

“The guts of the answer,” Sen adds, “is yes and no.”

“Our research is useful because it builds the foundation for designing materials that either transmit or absorb energy in desired ways,” says first author Yoichi Takato, PhD. Takato, a physicist at AGC Asahi Glass and former postdoctoral scholar at the Okinawa Institute of Science and Technology in Japan, completed much of the study as a doctoral candidate in physics at UB. “For example, you could potentially make an ultrathin material that is energy absorbent. You could imagine that this would be practical for use in helmets and head gear that can help to prevent head and combat injuries.”

The study was published on March 21 in Proceedings of the Royal Society A by Takato, Sen and Michael E. Benson, who completed his portion of the work as an undergraduate physics student at UB. The scientists ran their simulations at the Center for Computational Research, UB’s academic supercomputing facility.

What happens when nanoparticles crash

The new research focused on small nanoparticles — those with diameters of 5 to 15 nanometers. The scientists found that in collisions, particles of this size behave differently depending on their shape.

For example, nanoparticles with crystal facets transfer energy well when they crash into each other, making them an ideal component of materials designed to harvest energy. When it comes to energy transport, these particles adhere to scientific norms that govern macroscopic linear systems — including chains of equal-sized masses with springs in between them — that are visible to the naked eye.

In contrast, nanoparticles that are rounder in shape, with amorphous surfaces, adhere to nonlinear force laws. This, in turn, means they may be especially useful for shock mitigation. When two spherical nanoparticles collide, energy dissipates around the initial point of contact on each one instead of propagating all the way through both. The scientists report that at crash velocities of about 30 meters per second, atoms within each particle shift only near the initial point of contact.
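
To make the linear-versus-nonlinear distinction concrete, here is a minimal toy comparison; it is not the team's atomistic supercomputer simulation. Two rigid spheres collide head-on in one dimension, once with a linear spring-like contact force and once with the classic Hertzian force that grows as overlap to the 3/2 power. Every parameter value below is made up for illustration,

import numpy as np

# Toy 1D collision of two equal spheres: linear spring contact vs. Hertzian (nonlinear) contact.
# All parameter values are illustrative; this is not the atomistic model used in the study.
m = 1e-21          # particle mass in kg (order of magnitude for a ~10 nm particle)
R = 5e-9           # particle radius in m
v0 = 15.0          # each particle moves at 15 m/s, giving the ~30 m/s closing speed cited above
k_linear = 5.0     # made-up linear stiffness (N/m)
k_hertz = 5e5      # made-up Hertzian prefactor (N/m^1.5)

def deepest_overlap(contact_force, dt=1e-14, steps=20000):
    """Integrate the head-on collision and return the maximum overlap reached."""
    x = np.array([-1.2 * R, 1.2 * R])    # centres start just out of contact
    v = np.array([v0, -v0])
    deepest = 0.0
    for _ in range(steps):
        overlap = max(0.0, 2 * R - (x[1] - x[0]))
        f = contact_force(overlap)                # repulsive force magnitude while overlapping
        v += np.array([-f, f]) / m * dt           # pushes the particles apart
        x += v * dt
        deepest = max(deepest, overlap)
    return deepest

print("linear spring, F = k*d    :", deepest_overlap(lambda d: k_linear * d))
print("Hertzian,      F = k*d^1.5:", deepest_overlap(lambda d: k_hertz * d ** 1.5))

The point of the toy is only that the force law changes how the impact energy is stored and released at the contact; the real study tracks how individual atoms rearrange, which no continuum force law captures.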

Nanoparticles with sharp edges are less predictable: According to the new study, their behavior when it comes to transporting energy varies depending on the sharpness of their edges.

Designing a new generation of materials

“From a very broad perspective, the kind of work we’re doing has very exciting prospects,” Sen says. “It gives engineers fundamental information about nanoparticles that they didn’t have before. If you’re designing a new type of nanoparticle, you can now think about doing it in a way that takes into account what happens when you have very small nanoparticles interacting with each other.”

Though many scientists are working with nanotechnology, the way the tiniest of nanoparticles behave when they crash into each other is largely an open question, Takato says.

“When you’re designing a material, what size do you want the nanoparticle to be? How will you lay out the particles within the material? How compact do you want it to be? Our study can inform these decisions,” Takato says.

Here’s a link to and a citation for the paper,

Small nanoparticles, surface geometry and contact forces by Yoichi Takato, Michael E. Benson, Surajit Sen. Proceedings of the Royal Society A (Mathematical, Physical, and Engineering Sciences) Published 21 March 2018. DOI: 10.1098/rspa.2017.0723

This paper is behind a paywall.

Mixing the unmixable for all new nanoparticles

This news comes out of the University of Maryland and the discovery could lead to nanoparticles that have never before been imagined. From a March 29, 2018 news item on ScienceDaily,

Making a giant leap in the ‘tiny’ field of nanoscience, a multi-institutional team of researchers is the first to create nanoscale particles composed of up to eight distinct elements generally known to be immiscible, or incapable of being mixed or blended together. The blending of multiple, unmixable elements into a unified, homogenous nanostructure, called a high entropy alloy nanoparticle, greatly expands the landscape of nanomaterials — and what we can do with them.

This research makes a significant advance on previous efforts that have typically produced nanoparticles limited to only three different elements and to structures that do not mix evenly. Essentially, it is extremely difficult to squeeze and blend different elements into individual particles at the nanoscale. The team, which includes lead researchers at University of Maryland, College Park (UMD)’s A. James Clark School of Engineering, published a peer-reviewed paper based on the research featured on the March 30 [2018] cover of Science.

A March 29, 2018 University of Maryland press release (also on EurekAlert), which originated the news item, delves further (Note: Links have been removed),

“Imagine the elements that combine to make nanoparticles as Lego building blocks. If you have only one to three colors and sizes, then you are limited by what combinations you can use and what structures you can assemble,” explains Liangbing Hu, associate professor of materials science and engineering at UMD and one of the corresponding authors of the paper. “What our team has done is essentially enlarged the toy chest in nanoparticle synthesis; now, we are able to build nanomaterials with nearly all metallic and semiconductor elements.”

The researchers say this advance in nanoscience opens vast opportunities for a wide range of applications that includes catalysis (the acceleration of a chemical reaction by a catalyst), energy storage (batteries or supercapacitors), and bio/plasmonic imaging, among others.

To create the high entropy alloy nanoparticles, the researchers employed a two-step method of flash heating followed by flash cooling. Metallic elements such as platinum, nickel, iron, cobalt, gold, copper, and others were exposed to a rapid thermal shock of approximately 3,000 degrees Fahrenheit, or about half the temperature of the sun, for 0.055 seconds. The extremely high temperature resulted in uniform mixtures of the multiple elements. The subsequent rapid cooling (more than 100,000 degrees Fahrenheit per second) stabilized the newly mixed elements into the uniform nanomaterial.
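
For readers who prefer SI units, here is my conversion of the quoted figures; the quench-time estimate simply divides the temperature drop by the stated cooling rate,

# Convert the flash-heating figures quoted above into SI units (my arithmetic, not the researchers').
def fahrenheit_to_kelvin(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

peak_temperature_k = fahrenheit_to_kelvin(3000)     # ~1,920 K
pulse_duration_s = 0.055
cooling_rate_k_per_s = 100_000 * 5.0 / 9.0          # a rate converts with the 5/9 factor only (~55,600 K/s)

print(f"Thermal shock: ~{peak_temperature_k:,.0f} K for {pulse_duration_s} s")
print(f"Cooling rate : ~{cooling_rate_k_per_s:,.0f} K/s")
print(f"Quench back to room temperature in ~{(peak_temperature_k - 298) / cooling_rate_k_per_s:.3f} s")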

“Our method is simple, but one that nobody else has applied to the creation of nanoparticles. By using a physical science approach, rather than a traditional chemistry approach, we have achieved something unprecedented,” says Yonggang Yao, a Ph.D. student at UMD and one of the lead authors of the paper.

To demonstrate one potential use of the nanoparticles, the research team used them as advanced catalysts for ammonia oxidation, which is a key step in the production of nitric acid (a liquid acid that is used in the production of ammonium nitrate for fertilizers, making plastics, and in the manufacturing of dyes). They were able to achieve 100 percent oxidation of ammonia and 99 percent selectivity toward desired products with the high entropy alloy nanoparticles, proving their ability as highly efficient catalysts.
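
For context, catalytic ammonia oxidation is the first step of the familiar Ostwald process for making nitric acid; ‘selectivity’ here is about favouring nitric oxide over by-products such as N2 and N2O (my gloss, not the press release’s),

% First step of the Ostwald process: ammonia is oxidized over a catalyst to nitric oxide
4\,\mathrm{NH_3} + 5\,\mathrm{O_2} \longrightarrow 4\,\mathrm{NO} + 6\,\mathrm{H_2O}
% The NO is subsequently oxidized to NO2 and absorbed in water to give HNO3 (nitric acid).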

Yao says another potential use of the nanoparticles as catalysts could be the generation of chemicals or fuels from carbon dioxide.

“The potential applications for high entropy alloy nanoparticles are not limited to the field of catalysis. With cross-discipline curiosity, the demonstrated applications of these particles will become even more widespread,” says Steven D. Lacey, a Ph.D. student at UMD and also one of the lead authors of the paper.

This research was performed through a multi-institutional collaboration of Prof. Liangbing Hu’s group at the University of Maryland, College Park; Prof. Reza Shahbazian-Yassar’s group at University of Illinois at Chicago; Prof. Ju Li’s group at the Massachusetts Institute of Technology; Prof. Chao Wang’s group at Johns Hopkins University; and Prof. Michael Zachariah’s group at the University of Maryland, College Park.

What outside experts are saying about this research:

“This is quite amazing; Dr. Hu creatively came up with this powerful technique, carbo-thermal shock synthesis, to produce high entropy alloys of up to eight different elements in a single nanoparticle. This is indeed unthinkable for bulk materials synthesis. This is yet another beautiful example of nanoscience!,” says Peidong Yang, the S.K. and Angela Chan Distinguished Professor of Energy and professor of chemistry at the University of California, Berkeley and member of the American Academy of Arts and Sciences.

“This discovery opens many new directions. There are simulation opportunities to understand the electronic structure of the various compositions and phases that are important for the next generation of catalyst design. Also, finding correlations among synthesis routes, composition, and phase structure and performance enables a paradigm shift toward guided synthesis,” says George Crabtree, Argonne Distinguished Fellow and director of the Joint Center for Energy Storage Research at Argonne National Laboratory.

More from the research coauthors:

“Understanding the atomic order and crystalline structure in these multi-element nanoparticles reveals how the synthesis can be tuned to optimize their performance. It would be quite interesting to further explore the underlying atomistic mechanisms of the nucleation and growth of high entropy alloy nanoparticle,” says Reza Shahbazian-Yassar, associate professor at the University of Illinois at Chicago and a corresponding author of the paper.

“Carbon metabolism drives ‘living’ metal catalysts that frequently move around, split, or merge, resulting in a nanoparticle size distribution that’s far from the ordinary, and highly tunable,” says Ju Li, professor at the Massachusetts Institute of Technology and a corresponding author of the paper.

“This method enables new combinations of metals that do not exist in nature and do not otherwise go together. It enables robust tuning of the composition of catalytic materials to optimize the activity, selectivity, and stability, and the application will be very broad in energy conversions and chemical transformations,” says Chao Wang, assistant professor of chemical and biomolecular engineering at Johns Hopkins University and one of the study’s authors.

Here’s a link to and a citation for the paper,

Carbothermal shock synthesis of high-entropy-alloy nanoparticles by Yonggang Yao, Zhennan Huang, Pengfei Xie, Steven D. Lacey, Rohit Jiji Jacob, Hua Xie, Fengjuan Chen, Anmin Nie, Tiancheng Pu, Miles Rehwoldt, Daiwei Yu, Michael R. Zachariah, Chao Wang, Reza Shahbazian-Yassar, Ju Li, Liangbing Hu. Science 30 Mar 2018: Vol. 359, Issue 6383, pp. 1489-1494 DOI: 10.1126/science.aan5412

This paper is behind a paywall.

Removing more than 99% of crude oil from ‘produced’ water (well water)

Should you have an oil well nearby (see The Urban Oil Fields of Los Angeles, an August 28, 2014 photo essay by Alan Taylor for The Atlantic, for examples of oil wells in various municipalities and cities associated with LA), this news from Texas may interest you.

From an August 15, 2018 news item on Nanowerk,

Oil and water tend to separate, but they mix well enough to form stable oil-in-water emulsions in produced water from oil reservoirs to become a problem. Rice University scientists have developed a nanoparticle-based solution that reliably removes more than 99 percent of the emulsified oil that remains after other processing is done.
The Rice lab of chemical engineer Sibani Lisa Biswal made a magnetic nanoparticle compound that efficiently separates crude oil droplets from produced water that have proven difficult to remove with current methods.

An August 15, 2018 Rice University news release (also on EurekAlert), which originated the news item, describes the work in more detail,

Produced water [emphasis mine] comes from production wells along with oil. It often includes chemicals and surfactants pumped into a reservoir to push oil to the surface from tiny pores or cracks, either natural or fractured, deep underground. Under pressure and the presence of soapy surfactants, some of the oil and water form stable emulsions that cling together all the way back to the surface.

While methods exist to separate most of the oil from the production flow, engineers at Shell Global Solutions, which sponsored the project, told Biswal and her team that the last 5 percent of oil tends to remain stubbornly emulsified with little chance to be recovered.

“Injected chemicals and natural surfactants in crude oil can oftentimes chemically stabilize the oil-water interface, leading to small droplets of oil in water which are challenging to break up,” said Biswal, an associate professor of chemical and biomolecular engineering and of materials science and nanoengineering.

The Rice lab’s experience with magnetic particles and expertise in amines, courtesy of former postdoctoral researcher and lead author Qing Wang, led it to combine techniques. The researchers added amines to magnetic iron nanoparticles. Amines carry a positive charge that helps the nanoparticles find negatively charged oil droplets. Once they do, the nanoparticles bind the oil. Magnets are then able to pull the droplets and nanoparticles out of the solution.

“It’s often hard to design nanoparticles that don’t simply aggregate in the high salinities that are typically found in reservoir fluids, but these are quite stable in the produced water,” Biswal said.

The enhanced nanoparticles were tested on emulsions made in the lab with model oil as well as crude oil.

In both cases, researchers inserted nanoparticles into the emulsions, which they simply shook by hand and machine to break the oil-water bonds and create oil-nanoparticle bonds within minutes. Some of the oil floated to the top, while placing the test tube on a magnet pulled the infused nanoparticles to the bottom, leaving clear water in between.

Best of all, Biswal said, the nanoparticles can be washed with a solvent and reused while the oil can be recovered. The researchers detailed six successful charge-discharge cycles of their compound and suspect it will remain effective for many more.

She said her lab is designing a flow-through reactor to process produced water in bulk and automatically recycle the nanoparticles. That would be valuable for industry and for sites like offshore oil rigs, where treated water could be returned to the ocean.

It seems to me that ‘produced water’ is another term for polluted water. I guess it’s the reverse of Shakespeare’s “a rose by any other name would smell as sweet,” with polluted water by any other name seeming more palatable.

Here’s a link to and a citation for the paper,

Recyclable amine-functionalized magnetic nanoparticles for efficient demulsification of crude oil-in-water emulsions by Qing Wang, Maura C. Puerto, Sumedh Warudkar, Jack Buehler, and Sibani L. Biswal. Environ. Sci.: Water Res. Technol., 2018, Advance Article DOI: 10.1039/C8EW00188J First published on 15 Aug 2018

This paper is behind a paywall.

Rice has included this image amongst others in their news release,

Rice University engineers have developed magnetic nanoparticles that separate the last droplets of oil from produced water at wells. The particles draw in the bulk of the oil and are then attracted to the magnet, as demonstrated here. Photo by Jeff Fitlow

There’s also this video, which, in my book, borders on magical,

Better hair dyes with graphene and a cautionary note

Beauty products aren’t usually the first applications that come to mind when discussing graphene or any other area of research and development (R&D), as I learned when teaching a course a few years ago. But R&D in that field is imperative, as every company is scrambling for either a short-lived competitive advantage from a truly new product or a perceived competitive advantage in a field where a lot of products are pretty much the same.

This March 15, 2018 news item on ScienceDaily describes graphene as a potential hair dye,

Graphene, a naturally black material, could provide a new strategy for dyeing hair in difficult-to-create dark shades. And because it’s a conductive material, hair dyed with graphene might also be less prone to staticky flyaways. Now, researchers have put it to the test. In an article published March 15 [2018] in the journal Chem, they used sheets of graphene to make a dye that adheres to the surface of hair, forming a coating that is resistant to at least 30 washes without the need for chemicals that open up and damage the hair cuticle.

Courtesy: Northwestern University

A March 15, 2018 Cell Press news release on EurekAlert, which originated the news item, fills in more of the story,

Most permanent hair dyes used today are harmful to hair. “Your hair is covered in these cuticle scales like the scales of a fish, and people have to use ammonia or organic amines to lift the scales and allow dye molecules to get inside a lot quicker,” says senior author Jiaxing Huang, a materials scientist at Northwestern University. But lifting the cuticle makes the strands of the hair more brittle, and the damage is only exacerbated by the hydrogen peroxide that is used to trigger the reaction that synthesizes the dye once the pigment molecules are inside the hair.

These problems could theoretically be solved by a dye that coats rather than penetrates the hair. “However, the obvious problem of coating-based dyes is that they tend to wash out very easily,” says Huang. But when he and his team coated samples of human hair with a solution of graphene sheets, they were able to turn platinum blond hair black and keep it that way for at least 30 washes–the number necessary for a hair dye to be considered “permanent.”

This effectiveness has to do with the structure of graphene: it’s made up of thin, flexible sheets that can adapt to uneven surfaces. “Imagine a piece of paper. A business card is very rigid and doesn’t flex by itself. But if you take a much bigger sheet of newspaper–if you still can find one nowadays–it can bend easily. This makes graphene sheets a good coating material,” he says. And once the coating is formed, the graphene sheets are particularly good at keeping out water during washes, which keeps the water from eroding both the graphene and the polymer binder that the team also added to the dye solution to help with adhesion.

The graphene dye has additional advantages. Each coated hair is like a little wire in that it is able to conduct heat and electricity. This means that it’s easy for graphene-dyed hair to dissipate static electricity, eliminating the problem of flyaways on dry winter days. The graphene flakes are large enough that they won’t absorb through the skin like other dye molecules. And although graphene is typically black, its precursor, graphene oxide, is light brown. But the color of graphene oxide can be gradually darkened with heat or chemical reactions, meaning that this dye could be used for a variety of shades or even for an ombre effect.

What Huang thinks is particularly striking about this application of graphene is that it takes advantage of graphene’s most obvious property. “In many potential graphene applications, the black color of graphene is somewhat undesirable and something of a sore point,” he says. Here, though, it’s applied to a field where creating dark colors has historically been a problem.

The graphene used for hair dye also doesn’t need to be of the same high quality as it does for other applications. “For hair dye, the most important property is graphene being black. You can have graphene that is too lousy for higher-end electronic applications, but it’s perfectly okay for this. So I think this application can leverage the current graphene product as is, and that’s why I think that this could happen a lot sooner than many of the other proposed applications,” he says.

Making it happen is his next goal. He hopes to get funding to continue the research and make these dyes a reality for the people whose lives they would improve. “This is an idea that was inspired by curiosity. It was very fun to do, but it didn’t sound very big and noble when we started working on it,” he says. “But after we deep-dived into studying hair dyes, we realized that, wow, this is actually not at all a small problem. And it’s one that graphene could really help to solve.”

Northwestern University’s Amanda Morris also wrote a March 15, 2018 news release (it’s repetitive but there are some interesting new details; Note: Links have been removed),

It’s an issue that has plagued the beauty industry for more than a century: Dyeing hair too often can irreparably damage your silky strands.

Now a Northwestern University team has used materials science to solve this age-old problem. The team has leveraged super material graphene to develop a new hair dye that is less harmful [emphasis mine], non-damaging and lasts through many washes without fading. Graphene’s conductive nature also opens up new opportunities for hair, such as turning it into in situ electrodes or integrating it with wearable electronic devices.

Dyeing hair might seem simple and ordinary, but it’s actually a sophisticated chemical process. Called the cuticle, the outermost layer of a hair is made of cells that overlap in a scale-like pattern. Commercial dyes work by using harsh chemicals, such as ammonia and bleach, to first pry open the cuticle scales to allow colorant molecules inside and then trigger a reaction inside the hair to produce more color. Not only does this process cause hair to become more fragile, some of the small molecules are also quite toxic.

Huang and his team bypassed harmful chemicals altogether by leveraging the natural geometry of graphene sheets. While current hair dyes use a cocktail of small molecules that work by chemically altering the hair, graphene sheets are soft and flexible, so they wrap around each hair for an even coat. Huang’s ink formula also incorporates edible, non-toxic polymer binders to ensure that the graphene sticks — and lasts through at least 30 washes, which is the commercial requirement for permanent hair dye. An added bonus: graphene is anti-static, so it keeps winter-weather flyaways to a minimum.

“It’s similar to the difference between a wet paper towel and a tennis ball,” Huang explained, comparing the geometry of graphene to that of other black pigment particles, such as carbon black or iron oxide, which can only be used in temporary hair dyes. “The paper towel is going to wrap and stick much better. The ball-like particles are much more easily removed with shampoo.”

This geometry also contributes to why graphene is a safer alternative. Whereas small molecules can easily be inhaled or pass through the skin barrier, graphene is too big to enter the body. “Compared to those small molecules used in current hair dyes, graphene flakes are humongous,” said Huang, who is a member of Northwestern’s International Institute of Nanotechnology.

Ever since graphene — the two-dimensional network of carbon atoms — burst onto the science scene in 2004, the possibilities for the promising material have seemed nearly endless. With its ultra-strong and lightweight structure, graphene has potential for many applications in high-performance electronics, high-strength materials and energy devices. But development of those applications often requires graphene materials to be as structurally perfect as possible in order to achieve extraordinary electrical, mechanical or thermal properties.

The most important graphene property for Huang’s hair dye, however, is simply its color: black. So Huang’s team used graphene oxide, an imperfect version of graphene that is a cheaper, more available oxidized derivative.

“Our hair dye solves a real-world problem without relying on very high-quality graphene, which is not easy to make,” Huang said. “Obviously more work needs to be done, but I feel optimistic about this application.”

Still, future versions of the dye could someday potentially leverage graphene’s notable properties, including its highly conductive nature.

“People could apply this dye to make hair conductive on the surface,” Huang said. “It could then be integrated with wearable electronics or become a conductive probe. We are only limited by our imagination.”

So far, Huang has developed graphene-based hair dyes in multiple shades of brown and black. Next, he plans to experiment with more colors.

Interestingly, the tiny note of caution, “less harmful,” doesn’t appear in the Cell Press news release. Never fear, Dr. Andrew Maynard (Director of the Risk Innovation Lab at Arizona State University) has written a March 20, 2018 essay on The Conversation suggesting a little further investigation (Note: Links have been removed),

Northwestern University’s press release proudly announced, “Graphene finds new application as nontoxic, anti-static hair dye.” The announcement spawned headlines like “Enough with the toxic hair dyes. We could use graphene instead,” and “’Miracle material’ graphene used to create the ultimate hair dye.”

From these headlines, you might be forgiven for getting the idea that the safety of graphene-based hair dyes is a done deal. Yet having studied the potential health and environmental impacts of engineered nanomaterials for more years than I care to remember, I find such overly optimistic pronouncements worrying – especially when they’re not backed up by clear evidence.

Tiny materials, potentially bigger problems

Engineered nanomaterials like graphene and graphene oxide (the particular form used in the dye experiments) aren’t necessarily harmful. But nanomaterials can behave in unusual ways that depend on particle size, shape, chemistry and application. Because of this, researchers have long been cautious about giving them a clean bill of health without first testing them extensively. And while a large body of research to date doesn’t indicate graphene is particularly dangerous, neither does it suggest it’s completely safe.

A quick search of scientific papers over the past few years shows that, since 2004, over 2,000 studies have been published that mention graphene toxicity; nearly 500 were published in 2017 alone.

This growing body of research suggests that if graphene gets into your body or the environment in sufficient quantities, it could cause harm. A 2016 review, for instance, indicated that graphene oxide particles could result in lung damage at high doses (equivalent to around 0.7 grams of inhaled material). Another review published in 2017 suggested that these materials could affect the biology of some plants and algae, as well as invertebrates and vertebrates toward the lower end of the ecological pyramid. The authors of the 2017 study concluded that research “unequivocally confirms that graphene in any of its numerous forms and derivatives must be approached as a potentially hazardous material.”

These studies need to be approached with care, as the precise risks of graphene exposure will depend on how the material is used, how exposure occurs and how much of it is encountered. Yet there’s sufficient evidence to suggest that this substance should be used with caution – especially where there’s a high chance of exposure or that it could be released into the environment.

Unfortunately, graphene-based hair dyes tick both of these boxes. Used in this way, the substance is potentially inhalable (especially with spray-on products) and ingestible through careless use. It’s also almost guaranteed that excess graphene-containing dye will wash down the drain and into the environment.

Undermining other efforts?

I was alerted to just how counterproductive such headlines can be by my colleague Tim Harper, founder of G2O Water Technologies – a company that uses graphene oxide-coated membranes to treat wastewater. Like many companies in this area, G2O has been working to use graphene responsibly by minimizing the amount of graphene that ends up released to the environment.

Yet as Tim pointed out to me, if people are led to believe “that bunging a few grams of graphene down the drain every time you dye your hair is OK, this invalidates all the work we are doing making sure the few nanograms of graphene on our membranes stay put.” Many companies that use nanomaterials are trying to do the right thing, but it’s hard to justify the time and expense of being responsible when someone else’s more cavalier actions undercut your efforts.

Overpromising results and overlooking risk

This is where researchers and their institutions need to move beyond an “economy of promises” that spurs on hyperbole and discourages caution, and think more critically about how their statements may ultimately undermine responsible and beneficial development of a technology. They may even want to consider using guidelines, such as the Principles for Responsible Innovation developed by the organization Society Inside, for instance, to guide what they do and say.

If you have time, I encourage you to read Andrew’s piece in its entirety.

Here’s a link to and a citation for the paper,

Multifunctional Graphene Hair Dye by Chong Luo, Lingye Zhou, Kevin Chiou, and Jiaxing Huang. Chem DOI: https://doi.org/10.1016/j.chempr.2018.02.02 Publication stage: In Press Corrected Proof

This paper appears to be open access.

*Two paragraphs (repetitions) were deleted from the excerpt of Dr. Andrew Maynard’s essay on August 14, 2018

More memory, less space and a walk down the cryptocurrency road

Libraries, archives, records management, oral history: there are many institutions and names for how we manage collective and personal memory. You might call it a peculiarly human obsession stretching back into antiquity. For example, there’s the Library of Alexandria (Wikipedia entry), founded in the third, or possibly second, century BCE (before the common era) and reputed to store all the knowledge in the world. It was destroyed, although accounts differ as to when and how, but its loss remains a potent reminder of memory’s fragility.

These days, the technology community is terribly concerned with storing ever more bits of data on materials that are reaching their limits for storage. I have news of a possible solution, an interview of sorts with the researchers working on this new technology, and some very recent research into policies for cryptocurrency mining and development. That bit about cryptocurrency makes more sense when you read the response to one of the interview questions.

Memory

It seems University of Alberta researchers may have found a way to increase memory exponentially, from a July 23, 2018 news item on ScienceDaily,

The most dense solid-state memory ever created could soon exceed the capabilities of current computer storage devices by 1,000 times, thanks to a new technique scientists at the University of Alberta have perfected.

“Essentially, you can take all 45 million songs on iTunes and store them on the surface of one quarter,” said Roshan Achal, PhD student in Department of Physics and lead author on the new research. “Five years ago, this wasn’t even something we thought possible.”

A July 23, 2018 University of Alberta news release (also on EurekAlert) by Jennifer-Anne Pascoe, which originated the news item, provides more information,

Previous discoveries were stable only at cryogenic conditions, meaning this new finding puts society light years closer to meeting the need for more storage for the current and continued deluge of data. One of the most exciting features of this memory is that it’s road-ready for real-world temperatures, as it can withstand normal use and transportation beyond the lab.

“What is often overlooked in the nanofabrication business is actual transportation to an end user, that simply was not possible until now given temperature restrictions,” continued Achal. “Our memory is stable well above room temperature and precise down to the atom.”

Achal explained that immediate applications will be data archival. Next steps will be increasing readout and writing speeds, meaning even more flexible applications.

More memory, less space

Achal works with University of Alberta physics professor Robert Wolkow, a pioneer in the field of atomic-scale physics. Wolkow perfected the art of the science behind nanotip technology, which, thanks to Wolkow and his team’s continued work, has now reached a tipping point, meaning scaling up atomic-scale manufacturing for commercialization.

“With this last piece of the puzzle now in-hand, atom-scale fabrication will become a commercial reality in the very near future,” said Wolkow. Wolkow’s Spin-off [sic] company, Quantum Silicon Inc., is hard at work on commercializing atom-scale fabrication for use in all areas of the technology sector.

To demonstrate the new discovery, Achal, Wolkow, and their fellow scientists not only fabricated the world’s smallest maple leaf, they also encoded the entire alphabet at a density of 138 terabytes per square inch, roughly equivalent to writing 350,000 letters across a grain of rice. For a playful twist, Achal also encoded music as an atom-sized song, the first 24 notes of which will make any video-game player of the 80s and 90s nostalgic for yesteryear but excited for the future of technology and society.
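
As a quick sanity check on that number (my arithmetic, not the university’s), 138 terabytes per square inch works out to a bit or two per square nanometre, which is about what you would expect if each bit occupies a single atomic site on a silicon surface,

# Back-of-the-envelope check of the quoted storage density.
terabytes_per_sq_inch = 138
bits_per_sq_inch = terabytes_per_sq_inch * 1e12 * 8   # terabytes -> bytes -> bits
nm_per_inch = 2.54e7
sq_nm_per_sq_inch = nm_per_inch ** 2

bits_per_sq_nm = bits_per_sq_inch / sq_nm_per_sq_inch
print(f"{bits_per_sq_nm:.2f} bits per square nanometre")      # ~1.7
print(f"~{1 / bits_per_sq_nm:.2f} square nanometres per bit")  # ~0.6
# Neighbouring sites on a hydrogen-terminated silicon surface sit a few tenths of a nanometre
# apart, so roughly one bit per atomic site is consistent with the quoted figure.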

As noted in the news release, there is an atom-sized song, which is available in this video,

As for the nano-sized maple leaf, I highlighted that bit of whimsy in a June 30, 2017 posting.

Here’s a link to and a citation for the paper,

Lithography for robust and editable atomic-scale silicon devices and memories by Roshan Achal, Mohammad Rashidi, Jeremiah Croshaw, David Churchill, Marco Taucer, Taleana Huff, Martin Cloutier, Jason Pitters, & Robert A. Wolkow. Nature Communications volume 9, Article number: 2778 (2018) DOI: https://doi.org/10.1038/s41467-018-05171-y Published 23 July 2018

This paper is open access.

For interested parties, you can find Quantum Silicon (QSI) here. My Edmonton geography is all but nonexistent; still, it seems to me the company address on Saskatchewan Drive is a University of Alberta address. It’s also the address for the National Research Council of Canada. Perhaps this is a university/government spin-off company?

The ‘interview’

I sent some questions to the researchers at the University of Alberta who very kindly provided me with the following answers. Roshan Achal passed on one of the questions to his colleague Taleana Huff for her response. Both Achal and Huff are associated with QSI.

Unfortunately I could not find any pictures of all three researchers (Achal, Huff, and Wolkow) together.

Roshan Achal (left) used nanotechnology perfected by his PhD supervisor, Robert Wolkow (right) to create atomic-scale computer memory that could exceed the capacity of today’s solid-state storage drives by 1,000 times. (Photo: Faculty of Science)

(1) SHRINKING THE MANUFACTURING PROCESS TO THE ATOMIC SCALE HAS
ATTRACTED A LOT OF ATTENTION OVER THE YEARS, STARTING WITH SCIENCE
FICTION OR RICHARD FEYNMAN OR K. ERIC DREXLER, ETC. IN ANY EVENT, THE
ORIGINS ARE CONTESTED, SO I WON’T PUT YOU ON THE SPOT BY ASKING WHO
STARTED IT ALL; INSTEAD, HOW DID YOU GET STARTED?

I got started in this field about 6 years ago, when I undertook a MSc
with Dr. Wolkow here at the University of Alberta. Before that point, I
had only ever heard of a scanning tunneling microscope from what was
taught in my classes. I was aware of the famous IBM logo made up from
just a handful of atoms using this machine, but I didn’t know what
else could be done. Here, Dr. Wolkow introduced me to his line of
research, and I saw the immense potential for growth in this area and
decided to pursue it further. I had the chance to interact with and
learn from nanofabrication experts and gain the skills necessary to
begin playing around with my own techniques and ideas during my PhD.

(2) AS I UNDERSTAND IT, THESE ARE THE PIECES YOU’VE BEEN
WORKING ON: (1) THE TUNGSTEN MICROSCOPE TIP, WHICH MAKE[s] (2) THE SMALLEST
QUANTUM DOTS (SINGLE ATOMS OF SILICON), (3) THE AUTOMATION OF THE
QUANTUM DOT PRODUCTION PROCESS, AND (4) THE “MOST DENSE SOLID-STATE
MEMORY EVER CREATED.” WHAT’S MISSING FROM THE LIST AND IS THAT WHAT
YOU’RE WORKING ON NOW?

One of the things missing from the list, that we are currently working
on, is the ability to easily communicate (electrically) from the
macroscale (our world) to the nanoscale, without the use of a scanning
tunneling microscope. With this, we would be able to then construct
devices using the other pieces we’ve developed up to this point, and
then integrate them with more conventional electronics. This would bring
us yet another step closer to the realization of atomic-scale
electronics.

(3) PERHAPS YOU COULD CLARIFY SOMETHING FOR ME. USUALLY WHEN SOLID STATE
MEMORY IS MENTIONED, THERE’S GREAT CONCERN ABOUT MOORE’S LAW. IS
THIS WORK GOING TO CREATE A NEW LAW? AND WHAT, IF ANYTHING, DOES
YOUR MEMORY DEVICE HAVE TO DO WITH QUANTUM COMPUTING?

That is an interesting question. With the density we’ve achieved,
there are not too many surfaces where atomic sites are more closely
spaced to allow for another factor of two improvement. In that sense, it
would be difficult to improve memory densities further using these
techniques alone. In order to continue Moore’s law, new techniques, or
storage methods would have to be developed to move beyond atomic-scale
storage.

The memory design itself does not have anything to do with quantum
computing, however, the lithographic techniques developed through our
work, may enable the development of certain quantum-dot-based quantum
computing schemes.

(4) THIS MAY BE A LITTLE OUT OF LEFT FIELD (OR FURTHER OUT THAN THE
OTHERS), COULD YOUR MEMORY DEVICE HAVE AN IMPACT ON THE
DEVELOPMENT OF CRYPTOCURRENCY AND BLOCKCHAIN? IF SO, WHAT MIGHT THAT
IMPACT BE?

I am not very familiar with these topics, however, co-author Taleana
Huff has provided some thoughts:

Taleana Huff (downloaded from https://ca.linkedin.com/in/taleana-huff)

“The memory, as we’ve designed it, might not have too much of an
impact in and of itself. Cryptocurrencies fall into two categories.
Proof of Work and Proof of Stake. Proof of Work relies on raw
computational power to solve a difficult math problem. If you solve it,
you get rewarded with a small amount of that coin. The problem is that
it can take a lot of power and energy for your computer to crunch
through that problem. Faster access to memory alone could perhaps
streamline small parts of this slightly, but it would be very slight.
Proof of Stake is already quite power efficient and wouldn’t really
have a drastic advantage from better faster computers.

Now, atomic-scale circuitry built using these new lithographic
techniques that we’ve developed, which could perform computations at
significantly lower energy costs, would be huge for Proof of Work coins.
One of the things holding bitcoin back, for example, is that mining it
is now consuming power on the order of the annual energy consumption
required by small countries. A more efficient way to mine while still
taking the same amount of time to solve the problem would make bitcoin
much more attractive as a currency.”
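
Since Huff’s Proof of Work explanation is doing a lot of work here, a toy example may help readers see what “raw computational power to solve a difficult math problem” means in practice. The sketch below is not Bitcoin’s actual implementation (which hashes a block header with double SHA-256 against a moving target); it just hunts for a nonce whose hash starts with a set number of zero bits,

import hashlib
from itertools import count

def mine(block_data: str, difficulty_bits: int) -> int:
    """Toy proof of work: find a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce

# Each added bit of difficulty roughly doubles the expected number of hashes, and the
# electricity spent scales with the hashing; that exponential treadmill is the energy problem.
print("found nonce:", mine("example block", difficulty_bits=20))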

Thank you to Roshan Achal and Taleana Huff for helping me to further explore the implications of their work with Dr. Wolkow.

Comments

As usual, after receiving the replies I have more questions but these people have other things to do, so I’ll content myself with noting that there is something extraordinary in the fact that we can imagine a near future where atomic-scale manufacturing is possible and where, as Achal says, “… storage methods would have to be developed to move beyond atomic-scale [emphasis mine] storage.” In decades past it was the stuff of science fiction or of theorists who didn’t have the tools to turn the idea into a reality. With Wolkow’s, Achal’s, Huff’s, and their colleagues’ work, atomic-scale manufacturing is attainable in the foreseeable future.

Hopefully we’ll be wiser than we have been in the past in how we deploy these new manufacturing techniques. Of course, before we need the wisdom, scientists, as Achal notes, need to find a new way to communicate between the macroscale and the nanoscale.

As for Huff’s comments about cryptocurrency and blockchain technology, I stumbled across this very recent research, from a July 31, 2018 Elsevier press release (also on EurekAlert),

A study [behind a paywall] published in Energy Research & Social Science warns that failure to lower the energy use by Bitcoin and similar Blockchain designs may prevent nations from reaching their climate change mitigation obligations under the Paris Agreement.

The study, authored by Jon Truby, PhD, Assistant Professor, Director of the Centre for Law & Development, College of Law, Qatar University, Doha, Qatar, evaluates the financial and legal options available to lawmakers to moderate blockchain-related energy consumption and foster a sustainable and innovative technology sector. Based on this rigorous review and analysis of the technologies, ownership models, and jurisdictional case law and practices, the article recommends an approach that imposes new taxes, charges, or restrictions to reduce demand by users, miners, and miner manufacturers who employ polluting technologies, and offers incentives that encourage developers to create less energy-intensive/carbon-neutral Blockchain.

“Digital currency mining is the first major industry developed from Blockchain, because its transactions alone consume more electricity than entire nations,” said Dr. Truby. “It needs to be directed towards sustainability if it is to realize its potential advantages.

“Many developers have taken no account of the environmental impact of their designs, so we must encourage them to adopt consensus protocols that do not result in high emissions. Taking no action means we are subsidizing high energy-consuming technology and causing future Blockchain developers to follow the same harmful path. We need to de-socialize the environmental costs involved while continuing to encourage progress of this important technology to unlock its potential economic, environmental, and social benefits,” explained Dr. Truby.

As a digital ledger that is accessible to, and trusted by all participants, Blockchain technology decentralizes and transforms the exchange of assets through peer-to-peer verification and payments. Blockchain technology has been advocated as being capable of delivering environmental and social benefits under the UN’s Sustainable Development Goals. However, Bitcoin’s system has been built in a way that is reminiscent of physical mining of natural resources – costs and efforts rise as the system reaches the ultimate resource limit and the mining of new resources requires increasing hardware resources, which consume huge amounts of electricity.

Putting this into perspective, Dr. Truby said, “the processes involved in a single Bitcoin transaction could provide electricity to a British home for a month – with the environmental costs socialized for private benefit.

“Bitcoin is here to stay, and so, future models must be designed without reliance on energy consumption so disproportionate on their economic or social benefits.”

The study evaluates various Blockchain technologies by their carbon footprints and recommends how to tax or restrict Blockchain types at different phases of production and use to discourage polluting versions and encourage cleaner alternatives. It also analyzes the legal measures that can be introduced to encourage technology innovators to develop low-emissions Blockchain designs. The specific recommendations include imposing levies to prevent path-dependent inertia from constraining innovation:

  • Registration fees collected by brokers from digital coin buyers.
  • “Bitcoin Sin Tax” surcharge on digital currency ownership.
  • Green taxes and restrictions on machinery purchases/imports (e.g. Bitcoin mining machines).
  • Smart contract transaction charges.

According to Dr. Truby, these findings may lead to new taxes, charges or restrictions, but could also lead to financial rewards for innovators developing carbon-neutral Blockchain.

The press release doesn’t fully reflect Dr. Truby’s thoughtfulness or the incentives he has suggested; it’s not all surcharges, taxes, and fees, since some of his recommendations constitute encouragement. Here’s a sample from the conclusion,

The possibilities of Blockchain are endless and incentivisation can help solve various climate change issues, such as through the development of digital currencies to fund climate finance programmes. This type of public-private finance initiative is envisioned in the Paris Agreement, and fiscal tools can incentivize innovators to design financially rewarding Blockchain technology that also achieves environmental goals. Bitcoin, for example, has various utilitarian intentions in its White Paper, which may or may not turn out to be as envisioned, but it would not have been such a success without investors seeking remarkable returns. Embracing such technology, and promoting a shift in behaviour with such fiscal tools, can turn the industry itself towards achieving innovative solutions for environmental goals.

I realize Wolkow et al. are not focused on cryptocurrency and blockchain technology per se but, as Huff notes in her reply, “… new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins.”

Whether or not there are implications for cryptocurrencies, energy needs, climate change, etc., this is the kind of innovative work being done by scientists at the University of Alberta that may have implications in fields far beyond the researchers’ original goals of more efficient computation and data storage.

ETA Aug. 6, 2018: Dexter Johnson weighed in with an August 3, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

Researchers at the University of Alberta in Canada have developed a new approach to rewritable data storage technology by using a scanning tunneling microscope (STM) to remove and replace hydrogen atoms from the surface of a silicon wafer. If this approach realizes its potential, it could lead to a data storage technology capable of storing 1,000 times more data than today’s hard drives, up to 138 terabytes per square inch.

As a bit of background, Gerd Binnig and Heinrich Rohrer developed the first STM in 1981, for which they later received the Nobel Prize in physics. In the over 30 years since an STM first imaged an atom by exploiting a phenomenon known as tunneling—which causes electrons to jump from the surface atoms of a material to the tip of an ultrasharp electrode suspended a few angstroms above—the technology has become the backbone of so-called nanotechnology.

In addition to imaging the world on the atomic scale for the last thirty years, STMs have been experimented with as a potential data storage device. Last year, we reported on how IBM (where Binnig and Rohrer first developed the STM) used an STM in combination with an iron atom to serve as an electron-spin resonance sensor to read the magnetic pole of holmium atoms. The north and south poles of the holmium atoms served as the 0 and 1 of digital logic.

The Canadian researchers have taken a somewhat different approach to making an STM into a data storage device by automating a known technique that uses the ultrasharp tip of the STM to apply a voltage pulse above an atom to remove individual hydrogen atoms from the surface of a silicon wafer. Once the atom has been removed, there is a vacancy on the surface. These vacancies can be patterned on the surface to create devices and memories.
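
To picture what “patterning vacancies” as memory might look like, here is a purely illustrative toy encoding: each hydrogen site in a row stands for one bit, and a 1 means the hydrogen atom is removed to leave a vacancy. The site spacing and the one-bit-per-site scheme are my assumptions for illustration, not the layout used in the paper,

# Toy illustration: map a short message onto a row of hydrogen sites on a silicon surface.
# 'X' marks a site where the hydrogen would be removed (a vacancy, bit = 1); '.' leaves it in place.
# The spacing and encoding are illustrative assumptions, not the paper's actual scheme.
SITE_SPACING_NM = 0.4   # rough scale of neighbouring sites on hydrogen-terminated silicon

def to_bits(message: str):
    for byte in message.encode("ascii"):
        for shift in reversed(range(8)):
            yield (byte >> shift) & 1

bits = list(to_bits("AB"))
print("".join("X" if bit else "." for bit in bits))
print(f"{len(bits)} bits in a row roughly {len(bits) * SITE_SPACING_NM:.1f} nm long")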

If you have the time, I recommend reading Dexter’s posting as he provides clear explanations, additional insight into the work, and more historical detail.

The mystifying physics of paint-on semiconductors

I was not expecting a Canadian connection, but it seems we are heavily invested in this research at the Georgia Institute of Technology (Georgia Tech). From a March 19, 2018 news item on ScienceDaily,

Some novel materials that sound too good to be true turn out to be true and good. An emergent class of semiconductors, which could affordably light up our future with nuanced colors emanating from lasers, lamps, and even window glass, could be the latest example.

These materials are very radiant, easy to process from solution, and energy-efficient. The nagging question of whether hybrid organic-inorganic perovskites (HOIPs) could really work just received a very affirmative answer in a new international study led by physical chemists at the Georgia Institute of Technology.

A March 19, 2018 Georgia Tech news release (also on EurekAlert), which originated the news item, provides more detail,

The researchers observed in an HOIP a “richness” of semiconducting physics created by what could be described as electrons dancing on chemical underpinnings that wobble like a funhouse floor in an earthquake. That bucks conventional wisdom because established semiconductors rely upon rigidly stable chemical foundations, that is to say, quieter molecular frameworks, to produce the desired quantum properties.

“We don’t know yet how it works to have these stable quantum properties in this intense molecular motion,” said first author Felix Thouin, a graduate research assistant at Georgia Tech. “It defies physics models we have to try to explain it. It’s like we need some new physics.”

Quantum properties surprise

Their gyrating jumbles have made HOIPs challenging to examine, but the team of researchers from a total of five research institutes in four countries succeeded in measuring a prototypical HOIP and found its quantum properties on par with those of established, molecularly rigid semiconductors, many of which are graphene-based.

“The properties were at least as good as in those materials and may be even better,” said Carlos Silva, a professor in Georgia Tech’s School of Chemistry and Biochemistry. Not all semiconductors also absorb and emit light well, but HOIPs do, making them optoelectronic and thus potentially useful in lasers, LEDs, other lighting applications, and also in photovoltaics.

The lack of molecular-level rigidity in HOIPs also plays into them being more flexibly produced and applied.

Silva co-led the study with physicist Ajay Ram Srimath Kandada. Their team published the results of their study on two-dimensional HOIPs on March 8, 2018, in the journal Physical Review Materials. Their research was funded by EU Horizon 2020, the Natural Sciences and Engineering Research Council of Canada, the Fond Québécois pour la Recherche, the [National] Research Council of Canada, and the National Research Foundation of Singapore. [emphases mine]

The ‘solution solution’

Commonly, semiconducting properties arise from static crystalline lattices of neatly interconnected atoms. In silicon, for example, which is used in most commercial solar cells, they are interconnected silicon atoms. The same principle applies to graphene-like semiconductors.

“These lattices are structurally not very complex,” Silva said. “They’re only one atom thin, and they have strict two-dimensional properties, so they’re much more rigid.”

“You forcefully limit these systems to two dimensions,” said Srimath Kandada, who is a Marie Curie International Fellow at Georgia Tech and the Italian Institute of Technology. “The atoms are arranged in infinitely expansive, flat sheets, and then these very interesting and desirable optoelectronic properties emerge.”

These proven materials impress. So, why pursue HOIPs, except to explore their baffling physics? Because they may be more practical in important ways.

“One of the compelling advantages is that they’re all made using low-temperature processing from solutions,” Silva said. “It takes much less energy to make them.”

By contrast, graphene-based materials are produced at high temperatures in small amounts that can be tedious to work with. “With this stuff (HOIPs), you can make big batches in solution and coat a whole window with it if you want to,” Silva said.

Funhouse in an earthquake

For all an HOIP’s wobbling, it’s also a very ordered lattice with its own kind of rigidity, though less limiting than in the customary two-dimensional materials.

“It’s not just a single layer,” Srimath Kandada said. “There is a very specific perovskite-like geometry.” Perovskite refers to the shape of an HOIP’s crystal lattice, which is a layered scaffolding.

“The lattice self-assembles,” Srimath Kandada said, “and it does so in a three-dimensional stack made of layers of two-dimensional sheets. But HOIPs still preserve those desirable 2D quantum properties.”

Those sheets are held together by interspersed layers of another molecular structure that is a bit like a sheet of rubber bands. That makes the scaffolding wiggle like a funhouse floor.

“At room temperature, the molecules wiggle all over the place. That disrupts the lattice, which is where the electrons live. It’s really intense,” Silva said. “But surprisingly, the quantum properties are still really stable.”

Having quantum properties work at room temperature without requiring ultra-cooling is important for practical use as a semiconductor.

Going back to what HOIP stands for (hybrid organic-inorganic perovskites), this is how the experimental material fit into the HOIP chemical class: it was a hybrid of inorganic layers of lead iodide (the rigid part) separated by organic layers (the rubber band-like parts) of phenylethylammonium; the chemical formula is (PEA)2PbI4.

The lead in this prototypical material could be swapped out for a metal safer for humans to handle before the development of an applicable material.

Electron choreography

HOIPs are great semiconductors because their electrons do an acrobatic square dance.

Usually, electrons live in an orbit around the nucleus of an atom or are shared by atoms in a chemical bond. But HOIP chemical lattices, like all semiconductors, are configured to share electrons more broadly.

Energy levels in a system can free the electrons to run around and participate in things like the flow of electricity and heat. The orbits, which are then empty, are called electron holes, and they want the electrons back.

“The hole is thought of as a positive charge, and of course, the electron has a negative charge,” Silva said. “So, hole and electron attract each other.”

The electrons and holes race around each other like dance partners, pairing up to form what physicists call an “exciton.” Excitons act and look a lot like particles themselves, though they’re not really particles.
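
For readers who want a rough sense of the energy scale holding an exciton together, the textbook Wannier-Mott picture treats the pair like a hydrogen atom with a reduced mass and dielectric screening. The sketch below uses that standard formula with illustrative numbers of my own choosing; the effective masses and dielectric constant are assumptions, not values from this study.

```python
# Rough Wannier-Mott (hydrogen-like) estimate of an exciton binding energy:
#   E_b = 13.6 eV * (mu / m_e) / eps_r**2
# The effective masses and dielectric constant below are illustrative
# assumptions for the sake of the arithmetic, NOT parameters from the
# Georgia Tech study.

RYDBERG_EV = 13.6  # binding energy of the hydrogen atom, in electron volts

def exciton_binding_energy_ev(m_e_eff, m_h_eff, eps_r):
    """Binding energy (eV) for an electron and hole with effective masses
    given in units of the free-electron mass, screened by a relative
    dielectric constant eps_r."""
    mu = (m_e_eff * m_h_eff) / (m_e_eff + m_h_eff)  # reduced mass
    return RYDBERG_EV * mu / eps_r**2

# Example: effective masses of 0.2 and modest dielectric screening (eps_r = 3)
# give roughly 150 meV, several times the ~25 meV of thermal energy at room
# temperature, which is why such excitons can survive without cryogenic cooling.
print(f"{exciton_binding_energy_ev(0.2, 0.2, 3.0) * 1000:.0f} meV")
```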

Hopping biexciton light

In semiconductors, millions of excitons are correlated, or choreographed, with each other, which makes for desirable properties, when an energy source like electricity or laser light is applied. Additionally, excitons can pair up to form biexcitons, boosting the semiconductor’s energetic properties.

“In this material, we found that the biexciton binding energies were high,” Silva said. “That’s why we want to put this into lasers because the energy you input ends up to 80 or 90 percent as biexcitons.”

Biexcitons bump up energetically to absorb input energy. Then they contract energetically and pump out light. That would work not only in lasers but also in LEDs or other surfaces using the optoelectronic material.

“You can adjust the chemistry (of HOIPs) to control the width between biexciton states, and that controls the wavelength of the light given off,” Silva said. “And the adjustment can be very fine to give you any wavelength of light.”

That translates into any color of light the heart desires.
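As a quick illustration of that last point, a photon's wavelength follows directly from its energy via lambda = hc/E, so shifting the emission energy shifts the colour. The energies in the sketch below are arbitrary examples, not biexciton energies from the paper.

```python
# Convert photon energy (eV) to wavelength (nm) using lambda = h*c / E.
# The sample energies are arbitrary illustrations of energy-to-colour
# tuning; they are not values reported in the Physical Review Materials paper.

HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV*nm

def wavelength_nm(photon_energy_ev):
    return HC_EV_NM / photon_energy_ev

for energy_ev in (1.9, 2.3, 2.7):  # roughly red, green, and blue photons
    print(f"{energy_ev:.1f} eV  ->  {wavelength_nm(energy_ev):.0f} nm")
```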

###

Coauthors of this paper were Stefanie Neutzner and Annamaria Petrozza from the Italian Institute of Technology (IIT); Daniele Cortecchia from IIT and Nanyang Technological University (NTU), Singapore; Cesare Soci from the Centre for Disruptive Photonic Technologies, Singapore; Teddy Salim and Yeng Ming Lam from NTU; and Vlad Dragomir and Richard Leonelli from the University of Montreal. …

Three Canadian science funding agencies plus European and Singaporean science funding agencies, but not one from the US? That’s a bit unusual for research undertaken at a US educational institution.

In any event, here’s a link to and a citation for the paper,

Stable biexcitons in two-dimensional metal-halide perovskites with strong dynamic lattice disorder by Félix Thouin, Stefanie Neutzner, Daniele Cortecchia, Vlad Alexandru Dragomir, Cesare Soci, Teddy Salim, Yeng Ming Lam, Richard Leonelli, Annamaria Petrozza, Ajay Ram Srimath Kandada, and Carlos Silva. Phys. Rev. Materials 2, 034001 – Published 8 March 2018

This paper is behind a paywall.

‘Lilliputian’ skyscraper: white graphene for hydrogen storage

This story comes from Rice University (Texas, US). From a March 12, 2018 news item on Nanowerk,

Rice University engineers have zeroed in on the optimal architecture for storing hydrogen in “white graphene” nanomaterials — a design like a Lilliputian skyscraper with “floors” of boron nitride sitting one atop another and held precisely 5.2 angstroms apart by boron nitride pillars.

Caption: Thousands of hours of calculations on Rice University’s two fastest supercomputers found that the optimal architecture for packing hydrogen into “white graphene” involves making skyscraper-like frameworks of vertical columns and one-dimensional floors that are about 5.2 angstroms apart. In this illustration, hydrogen molecules (white) sit between sheet-like floors of graphene (gray) that are supported by boron-nitride pillars (pink and blue). Researchers found that identical structures made wholly of boron-nitride had unprecedented capacity for storing readily available hydrogen. Credit: Lei Tao/Rice University

A March 12, 2018 Rice University news release (also on EurekAlert), which originated the news item, goes into extensive detail about the work,

“The motivation is to create an efficient material that can take up and hold a lot of hydrogen — both by volume and weight — and that can quickly and easily release that hydrogen when it’s needed,”  [emphasis mine] said the study’s lead author, Rouzbeh Shahsavari, assistant professor of civil and environmental engineering at Rice.

Hydrogen is the lightest and most abundant element in the universe, and its energy-to-mass ratio — the amount of available energy per pound of raw material, for example — far exceeds that of fossil fuels. It’s also the cleanest way to generate electricity: The only byproduct is water. A 2017 report by market analysts at BCC Research found that global demand for hydrogen storage materials and technologies will likely reach $5.4 billion annually by 2021.
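
To put the energy-to-mass claim in rough numbers, hydrogen's lower heating value is about 120 MJ/kg against roughly 44 MJ/kg for gasoline. The figures below are approximate textbook values, not numbers from the Rice study.

```python
# Back-of-the-envelope comparison of energy content per unit mass.
# Lower heating values are approximate literature figures (MJ/kg),
# not data from the Rice study.
LHV_MJ_PER_KG = {
    "hydrogen": 120.0,
    "gasoline": 44.0,
    "natural gas (methane)": 50.0,
}

ratio = LHV_MJ_PER_KG["hydrogen"] / LHV_MJ_PER_KG["gasoline"]
print(f"Per kilogram, hydrogen carries about {ratio:.1f}x the energy of gasoline.")
```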

Hydrogen’s primary drawbacks relate to portability, storage and safety. While large volumes can be stored under high pressure in underground salt domes and specially designed tanks, small-scale portable tanks — the equivalent of an automobile gas tank — have so far eluded engineers.

Following months of calculations on two of Rice’s fastest supercomputers, Shahsavari and Rice graduate student Shuo Zhao found the optimal architecture for storing hydrogen in boron nitride. One form of the material, hexagonal boron nitride (hBN), consists of atom-thick sheets of boron and nitrogen and is sometimes called white graphene because the atoms are spaced exactly like carbon atoms in flat sheets of graphene.

Previous work in Shahsavari’s Multiscale Materials Lab found that hybrid materials of graphene and boron nitride could hold enough hydrogen to meet the Department of Energy’s storage targets for light-duty fuel cell vehicles.

“The choice of material is important,” he said. “Boron nitride has been shown to be better in terms of hydrogen absorption than pure graphene, carbon nanotubes or hybrids of graphene and boron nitride.

“But the spacing and arrangement of hBN sheets and pillars is also critical,” he said. “So we decided to perform an exhaustive search of all the possible geometries of hBN to see which worked best. We also expanded the calculations to include various temperatures, pressures and dopants, trace elements that can be added to the boron nitride to enhance its hydrogen storage capacity.”

Zhao and Shahsavari set up numerous “ab initio” tests, computer simulations that used first principles of physics. Shahsavari said the approach was computationally intense but worth the extra effort because it offered the most precision.

“We conducted nearly 4,000 ab initio calculations to try and find that sweet spot where the material and geometry go hand in hand and really work together to optimize hydrogen storage,” he said.

Unlike materials that store hydrogen through chemical bonding, Shahsavari said boron nitride is a sorbent that holds hydrogen through physical bonds, which are weaker than chemical bonds. That’s an advantage when it comes to getting hydrogen out of storage because sorbent materials tend to discharge more easily than their chemical cousins, Shahsavari said.

He said the choice of boron nitride sheets or tubes and the corresponding spacing between them in the superstructure were the key to maximizing capacity.

“Without pillars, the sheets sit naturally one atop the other about 3 angstroms apart, and very few hydrogen atoms can penetrate that space,” he said. “When the distance grew to 6 angstroms or more, the capacity also fell off. At 5.2 angstroms, there is a cooperative attraction from both the ceiling and floor, and the hydrogen tends to clump in the middle. Conversely, models made of purely BN tubes — not sheets — had less storage capacity.”

Shahsavari said models showed that the pure hBN tube-sheet structures could hold 8 weight percent of hydrogen. (Weight percent is a measure of concentration, similar to parts per million.) Physical experiments are needed to verify that capacity, but the DOE’s ultimate target is 7.5 weight percent, and Shahsavari’s models suggest even more hydrogen can be stored in his structure if trace amounts of lithium are added to the hBN.
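
For anyone unfamiliar with the gravimetric metric, one common convention defines it as the hydrogen mass divided by the combined mass of hydrogen and host material. The sketch below shows what the 7.5 and 8 weight percent figures quoted above mean per kilogram of sorbent; the kilogram is just an example quantity.

```python
# One common convention for hydrogen storage capacity:
#   wt% = m_H2 / (m_H2 + m_sorbent) * 100
# Rearranged to give the hydrogen mass a fixed mass of sorbent can hold:
#   m_H2 = m_sorbent * wt / (100 - wt)
# The 1 kg of sorbent is an example quantity, not a figure from the study.

def hydrogen_mass_kg(sorbent_mass_kg, weight_percent):
    return sorbent_mass_kg * weight_percent / (100.0 - weight_percent)

for wt in (7.5, 8.0):  # DOE ultimate target vs. the modelled hBN structure
    grams = hydrogen_mass_kg(1.0, wt) * 1000
    print(f"{wt} wt%  ->  about {grams:.0f} g of H2 per kg of sorbent")
```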

Finally, Shahsavari said, irregularities in the flat, floor-like sheets of the structure could also prove useful for engineers.

“Wrinkles form naturally in the sheets of pillared boron nitride because of the nature of the junctions between the columns and floors,” he said. “In fact, this could also be advantageous because the wrinkles can provide toughness. If the material is placed under load or impact, that buckled shape can unbuckle easily without breaking. This could add to the material’s safety, which is a big concern in hydrogen storage devices.

“Furthermore, the high thermal conductivity and flexibility of BN may provide additional opportunities to control the adsorption and release kinetics on-demand,” Shahsavari said. “For example, it may be possible to control release kinetics by applying an external voltage, heat or an electric field.”

I may be wrong, but this statement, “The motivation is to create an efficient material that can take up and hold a lot of hydrogen — both by volume and weight — and that can quickly and easily release that hydrogen when it’s needed, …”, sounds like a description of a supercapacitor. One other comment: this research appears to be ‘in silico’, i.e., all the testing has been done as computer simulations and the proposed materials themselves have yet to be tested.

Here’s a link to and a citation for the paper,

Merger of Energetic Affinity and Optimal Geometry Provides New Class of Boron Nitride Based Sorbents with Unprecedented Hydrogen Storage Capacity by Rouzbeh Shahsavari and Shuo Zhao. Small Vol. 14 Issue 10 DOI: 10.1002/smll.201702863 Version of Record online: 8 MAR 2018

© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Do you want that coffee with some graphene on toast?

These scientists are excited:

For those who prefer text, here’s the Rice University Feb. 13, 2018 news release (received via email and available online here and on EurekAlert here). Note: Links have been removed,

Rice University scientists who introduced laser-induced graphene (LIG) have enhanced their technique to produce what may become a new class of edible electronics.

The Rice lab of chemist James Tour, which once turned Girl Scout cookies into graphene, is investigating ways to write graphene patterns onto food and other materials to quickly embed conductive identification tags and sensors into the products themselves.

“This is not ink,” Tour said. “This is taking the material itself and converting it into graphene.”

The process is an extension of the Tour lab’s contention that anything with the proper carbon content can be turned into graphene. In recent years, the lab has developed and expanded upon its method to make graphene foam by using a commercial laser to transform the top layer of an inexpensive polymer film.

The foam consists of microscopic, cross-linked flakes of graphene, the two-dimensional form of carbon. LIG can be written into target materials in patterns and used as a supercapacitor, an electrocatalyst for fuel cells, radio-frequency identification (RFID) antennas and biological sensors, among other potential applications.

The new work reported in the American Chemical Society journal ACS Nano demonstrated that laser-induced graphene can be burned into paper, cardboard, cloth, coal and certain foods, even toast.

“Very often, we don’t see the advantage of something until we make it available,” Tour said. “Perhaps all food will have a tiny RFID tag that gives you information about where it’s been, how long it’s been stored, its country and city of origin and the path it took to get to your table.”

He said LIG tags could also be sensors that detect E. coli or other microorganisms on food. “They could light up and give you a signal that you don’t want to eat this,” Tour said. “All that could be placed not on a separate tag on the food, but on the food itself.”

Multiple laser passes with a defocused beam allowed the researchers to write LIG patterns into cloth, paper, potatoes, coconut shells and cork, as well as toast. (The bread is toasted first to “carbonize” the surface.) The process happens in air at ambient temperatures.

“In some cases, multiple lasing creates a two-step reaction,” Tour said. “First, the laser photothermally converts the target surface into amorphous carbon. Then on subsequent passes of the laser, the selective absorption of infrared light turns the amorphous carbon into LIG. We discovered that the wavelength clearly matters.”

The researchers turned to multiple lasing and defocusing when they discovered that simply turning up the laser’s power didn’t make better graphene on a coconut or other organic materials. But adjusting the process allowed them to make a micro supercapacitor in the shape of a Rice “R” on their twice-lased coconut skin.

Defocusing the laser sped the process for many materials as the wider beam allowed each spot on a target to be lased many times in a single raster scan. That also allowed for fine control over the product, Tour said. Defocusing allowed them to turn previously unsuitable polyetherimide into LIG.
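
A rough way to see why defocusing helps: once the beam spot is wider than the spacing between raster lines, each point on the surface falls under the beam on several neighbouring passes. The spot diameters and pitch below are hypothetical numbers chosen only to illustrate the geometry; they are not the settings used in the Tour lab's process.

```python
# Illustrative geometry only: how many raster lines sweep over a given point
# when the laser spot diameter exceeds the line-to-line pitch. All numbers
# are hypothetical, NOT parameters from the Rice laser-induced-graphene work.
import math

def exposures_per_point(spot_diameter_um, raster_pitch_um):
    """Approximate number of adjacent scan lines whose spot covers a point."""
    return max(1, math.floor(spot_diameter_um / raster_pitch_um))

RASTER_PITCH_UM = 50  # assumed spacing between scan lines (micrometres)

for spot_um in (50, 150, 300):  # focused vs. increasingly defocused beam
    n = exposures_per_point(spot_um, RASTER_PITCH_UM)
    print(f"spot {spot_um:>3} um, pitch {RASTER_PITCH_UM} um -> each point lased ~{n}x per scan")
```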

“We also found we could take bread or paper or cloth and add fire retardant to them to promote the formation of amorphous carbon,” said Rice graduate student Yieu Chyan, co-lead author of the paper. “Now we’re able to take all these materials and convert them directly in air without requiring a controlled atmosphere box or more complicated methods.”

The common element of all the targeted materials appears to be lignin, Tour said. An earlier study relied on lignin, a complex organic polymer that forms rigid cell walls, as a carbon precursor to burn LIG in oven-dried wood. Cork, coconut shells and potato skins have even higher lignin content, which made it easier to convert them to graphene.

Tour said flexible, wearable electronics may be an early market for the technique. “This has applications to put conductive traces on clothing, whether you want to heat the clothing or add a sensor or conductive pattern,” he said.

Rice alumnus Ruquan Ye is co-lead author of the study. Co-authors are Rice graduate student Yilun Li and postdoctoral fellow Swatantra Pratap Singh and Professor Christopher Arnusch of Ben-Gurion University of the Negev, Israel. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering at Rice.

The Air Force Office of Scientific Research supported the research.

Here’s a link to and a citation for the paper,

Laser-Induced Graphene by Multiple Lasing: Toward Electronics on Cloth, Paper, and Food by Yieu Chyan, Ruquan Ye, Yilun Li, Swatantra Pratap Singh, Christopher J. Arnusch, and James M. Tour. ACS Nano DOI: 10.1021/acsnano.7b08539 Publication Date (Web): February 13, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

h/t Feb. 13, 2018 news item on Nanowerk

The devil’s (i.e., luciferase) in the bioluminescent plant

The American Chemical Society (ACS) and the Massachusetts Institute of Technology (MIT) have both issued news releases about the latest in bioluminescence. The researchers tested their work on watercress, a vegetable that was viewed in almost sacred terms in my family; it was not easily available in Vancouver (Canada) when I was a child.

My father would hunt down fresh watercress by checking out the Chinese grocery stores. He could spot the fresh stuff from across the street while driving at 30 miles or more per hour. Spotting it entailed an immediate hunt for parking (my father hated to pay so we might have to go around the block a few times or more) and a dash out of the car to ensure that he got his watercress before anyone else spotted it. These days it’s much more easily available and, thankfully, my father has passed on so he won’t have to think about glowing watercress.

Getting back to bioluminescent vegetable research, the American Chemical Society’s Dec. 13, 2017 news release on EurekAlert (and as a Dec. 13, 2017 news item on ScienceDaily) makes the announcement,

The 2009 film “Avatar” created a lush imaginary world, illuminated by magical, glowing plants. Now researchers are starting to bring this spellbinding vision to life to help reduce our dependence on artificial lighting. They report in ACS’ journal Nano Letters a way to infuse plants with the luminescence of fireflies.

Nature has produced many bioluminescent organisms; however, plants are not among them. Most attempts so far to create glowing greenery — decorative tobacco plants in particular — have relied on introducing the genes of luminescent bacteria or fireflies through genetic engineering. But getting all the right components to the right locations within the plants has been a challenge. To gain better control over where light-generating ingredients end up, Michael S. Strano and colleagues recently created nanoparticles that travel to specific destinations within plants. Building on this work, the researchers wanted to take the next step and develop a “nanobionic,” glowing plant.

The team infused watercress and other plants with three different nanoparticles in a pressurized bath. The nanoparticles were loaded with light-emitting luciferin; luciferase, which modifies luciferin and makes it glow; and coenzyme A, which boosts luciferase activity. Using size and surface charge to control where the sets of nanoparticles could go within the plant tissues, the researchers could optimize how much light was emitted. Their watercress was half as bright as a commercial 1 microwatt LED and 100,000 times brighter than genetically engineered tobacco plants. Also, the plant could be turned off by adding a compound that blocks luciferase from activating luciferin’s glow.

Here’s a video from MIT detailing their research,

A December 13, 2017 MIT news release (also on EurekAlert) casts more light on the topic (I couldn’t resist the word play),

Imagine that instead of switching on a lamp when it gets dark, you could read by the light of a glowing plant on your desk.

MIT engineers have taken a critical first step toward making that vision a reality. By embedding specialized nanoparticles into the leaves of a watercress plant, they induced the plants to give off dim light for nearly four hours. They believe that, with further optimization, such plants will one day be bright enough to illuminate a workspace.

“The vision is to make a plant that will function as a desk lamp — a lamp that you don’t have to plug in. The light is ultimately powered by the energy metabolism of the plant itself,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study.

This technology could also be used to provide low-intensity indoor lighting, or to transform trees into self-powered streetlights, the researchers say.

MIT postdoc Seon-Yeong Kwak is the lead author of the study, which appears in the journal Nano Letters.

Nanobionic plants

Plant nanobionics, a new research area pioneered by Strano’s lab, aims to give plants novel features by embedding them with different types of nanoparticles. The group’s goal is to engineer plants to take over many of the functions now performed by electrical devices. The researchers have previously designed plants that can detect explosives and communicate that information to a smartphone, as well as plants that can monitor drought conditions.

Lighting, which accounts for about 20 percent of worldwide energy consumption, seemed like a logical next target. “Plants can self-repair, they have their own energy, and they are already adapted to the outdoor environment,” Strano says. “We think this is an idea whose time has come. It’s a perfect problem for plant nanobionics.”

To create their glowing plants, the MIT team turned to luciferase, the enzyme that gives fireflies their glow. Luciferase acts on a molecule called luciferin, causing it to emit light. Another molecule called co-enzyme A helps the process along by removing a reaction byproduct that can inhibit luciferase activity.

The MIT team packaged each of these three components into a different type of nanoparticle carrier. The nanoparticles, which are all made of materials that the U.S. Food and Drug Administration classifies as “generally recognized as safe,” help each component get to the right part of the plant. They also prevent the components from reaching concentrations that could be toxic to the plants.

The researchers used silica nanoparticles about 10 nanometers in diameter to carry luciferase, and they used slightly larger particles of the polymers PLGA and chitosan to carry luciferin and coenzyme A, respectively. To get the particles into plant leaves, the researchers first suspended the particles in a solution. Plants were immersed in the solution and then exposed to high pressure, allowing the particles to enter the leaves through tiny pores called stomata.

Particles releasing luciferin and coenzyme A were designed to accumulate in the extracellular space of the mesophyll, an inner layer of the leaf, while the smaller particles carrying luciferase enter the cells that make up the mesophyll. The PLGA particles gradually release luciferin, which then enters the plant cells, where luciferase performs the chemical reaction that makes luciferin glow.
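
To keep the three-part delivery scheme straight, here it is restated as a simple lookup table; the particle descriptions paraphrase the MIT release, while the field names are my own labels.

```python
# Summary of the nanoparticle delivery scheme described in the MIT release.
# The carrier and destination descriptions paraphrase the release; the
# dictionary keys and field names are editorial labels, not terms from the paper.
delivery_scheme = {
    "luciferase": {
        "carrier": "silica nanoparticle, about 10 nm",
        "destination": "inside the cells of the mesophyll",
        "role": "enzyme that acts on luciferin to produce light",
    },
    "luciferin": {
        "carrier": "PLGA nanoparticle, slightly larger",
        "destination": "extracellular space of the mesophyll",
        "role": "light-emitting substrate, released gradually",
    },
    "coenzyme A": {
        "carrier": "chitosan nanoparticle, slightly larger",
        "destination": "extracellular space of the mesophyll",
        "role": "removes a byproduct that would inhibit luciferase",
    },
}

for component, info in delivery_scheme.items():
    print(f"{component}: {info['carrier']} -> {info['destination']}")
```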

The researchers’ early efforts at the start of the project yielded plants that could glow for about 45 minutes, which they have since improved to 3.5 hours. The light generated by one 10-centimeter watercress seedling is currently about one-thousandth of the amount needed to read by, but the researchers believe they can boost the light emitted, as well as the duration of light, by further optimizing the concentration and release rates of the components.
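
Those two figures give a feel for how far the optimization has come and how far it still has to go; the quick arithmetic below simply restates the numbers quoted in the release.

```python
# Progress so far and the remaining gap, using the figures in the MIT release:
# glow duration improved from 45 minutes to 3.5 hours, and current output is
# roughly 1/1000 of the light needed to read by.
initial_minutes = 45
current_minutes = 3.5 * 60

duration_gain = current_minutes / initial_minutes   # about 4.7x longer glow
brightness_gap = 1000                                # factor still needed for reading light

print(f"Glow duration improved about {duration_gain:.1f}x (45 min -> 3.5 h)")
print(f"Brightness still needs roughly a {brightness_gap}x boost to read by")
```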

Plant transformation

Previous efforts to create light-emitting plants have relied on genetically engineering plants to express the gene for luciferase, but this is a laborious process that yields extremely dim light. Those studies were performed on tobacco plants and Arabidopsis thaliana, which are commonly used for plant genetic studies. However, the method developed by Strano’s lab could be used on any type of plant. So far, they have demonstrated it with arugula, kale, and spinach, in addition to watercress.

For future versions of this technology, the researchers hope to develop a way to paint or spray the nanoparticles onto plant leaves, which could make it possible to transform trees and other large plants into light sources.

“Our target is to perform one treatment when the plant is a seedling or a mature plant, and have it last for the lifetime of the plant,” Strano says. “Our work very seriously opens up the doorway to streetlamps that are nothing but treated trees, and to indirect lighting around homes.”

The researchers have also demonstrated that they can turn the light off by adding nanoparticles carrying a luciferase inhibitor. This could enable them to eventually create plants that shut off their light emission in response to environmental conditions such as sunlight, the researchers say.

Here’s a link to and a citation for the paper,

A Nanobionic Light-Emitting Plant by Seon-Yeong Kwak, Juan Pablo Giraldo, Min Hao Wong, Volodymyr B. Koman, Tedrick Thomas Salim Lew, Jon Ell, Mark C. Weidman, Rosalie M. Sinclair, Markita P. Landry, William A. Tisdale, and Michael S. Strano. Nano Lett., 2017, 17 (12), pp 7951–7961 DOI: 10.1021/acs.nanolett.7b04369 Publication Date (Web): November 17, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.