Firefighters everywhere are likely to appreciate the efforts of researchers at Texas A&M University (US) to develop a nontoxic fire-retardant coating. From a February 12, 2019 news item on Nanowerk (Note: A link has been removed),
Texas A&M University researchers are developing a new kind of flame-retardant coating using renewable, nontoxic materials readily found in nature, which could provide even more effective fire protection for several widely used materials.
Dr. Jaime Grunlan, the Linda & Ralph Schmidt ’68 Professor in the J. Mike Walker ’66 Department of Mechanical Engineering at Texas A&M, led the recently published research that is featured on the cover of a recent issue of the journal Advanced Materials Interfaces (“Super Gas Barrier and Fire Resistance of Nanoplatelet/Nanofibril Multilayer Thin Films”).
Successful development and implementation of the coating could provide better fire protection to materials including upholstered furniture, textiles and insulation.
“These coatings offer the opportunity to reduce the flammability of the polyurethane foam used in a variety of furniture throughout most people’s homes,” Grunlan noted.
The project is a result of an ongoing collaboration between Grunlan and a group of researchers at KTH Royal Institute of Technology in Stockholm, Sweden, led by Lars Wagberg. The group, which specializes in utilizing nanocellulose, provided Grunlan with the ingredients he needed to complement his water-based coating procedure.
In nature, both the cellulose – a component of wood and various sea creatures – and clay – a component in soil and rock formations – act as mechanical reinforcements for the structures in which they are found.
“The uniqueness in this current study lies in the use of two naturally occurring nanomaterials, clay nanoplatelets and cellulose nanofibrils,” Grunlan said. “To the best of our knowledge, these ingredients have never been used to make a heat shielding or flame-retardant coating as a multilayer thin film deposited from water.”
Benefits of this method include the coating's ability to form an excellent oxygen barrier on plastic films, commonly used for food packaging, and to provide better fire protection at a lower cost than the more toxic ingredients traditionally used in flame-retardant treatments.
To test the coatings, Grunlan and his colleagues applied them to flexible polyurethane foam, often used in furniture cushions, and exposed it to fire from a butane torch to determine the level of protection the compounds provided.
While uncoated polyurethane foam melts immediately when exposed to flame, the foam treated with the researchers' coating confined the damage to the surface, leaving the foam underneath undamaged.
“The nanobrick wall structure of the coating reduces the temperature experienced by the underlying foam, which delays combustion,” Grunlan said. “This coating also serves to promote insulating char formation and reduces the release of fumes that feed a fire.”
With the research completed, Grunlan said the next step for the overall flame-retardant project is to transition the methods into industry for implementation and further development.
I did not want to cash in (so to speak) on someone else's fun headline, so I played with it. Here is the original headline, which was likely written by either David Ruth or Mike Williams at Rice University (Texas, US): "Lettuce show you how to restore oil-soaked soil."
Rice University engineers have figured out how soil contaminated by heavy oil can not only be cleaned but made fertile again.
How do they know it works? They grew lettuce.
Rice engineers Kyriacos Zygourakis and Pedro Alvarez and their colleagues have fine-tuned their method to remove petroleum contaminants from soil through the age-old process of pyrolysis. The technique gently heats soil while keeping oxygen out, which avoids the damage usually done to fertile soil when burning hydrocarbons causes temperature spikes.
While large-volume marine spills get most of the attention, 98 percent of oil spills occur on land, Alvarez points out, with more than 25,000 spills a year reported to the Environmental Protection Agency. That makes the need for cost-effective remediation clear, he said.
“We saw an opportunity to convert a liability, contaminated soil, into a commodity, fertile soil,” Alvarez said.
The key to retaining fertility is to preserve the soil's essential clays, Zygourakis said. "Clays retain water, and if you raise the temperature too high, you basically destroy them," he said. "If you exceed 500 degrees Celsius (932 degrees Fahrenheit), dehydration is irreversible."
The researchers put soil samples from Hearne, Texas, contaminated in the lab with heavy crude, into a kiln to see what temperature best eliminated the most oil, and how long it took.
Their results showed heating samples in the rotating drum at 420 C (788 F) for 15 minutes eliminated 99.9 percent of total petroleum hydrocarbons (TPH) and 94.5 percent of polycyclic aromatic hydrocarbons (PAH), leaving the treated soils with roughly the same pollutant levels found in natural, uncontaminated soil.
The paper appears in the American Chemical Society journal Environmental Science and Technology. It follows several papers by the same group that detailed the mechanism by which pyrolysis removes contaminants and turns some of the unwanted hydrocarbons into char, while leaving behind soil almost as fertile as the original. “While heating soil to clean it isn’t a new process,” Zygourakis said, “we’ve proved we can do it quickly in a continuous reactor to remove TPH, and we’ve learned how to optimize the pyrolysis conditions to maximize contaminant removal while minimizing soil damage and loss of fertility.
“We also learned we can do it with less energy than other methods, and we have detoxified the soil so that we can safely put it back,” he said.
Heating the soil to about 420 C represents the sweet spot for treatment, Zygourakis said. Heating it to 470 C (878 F) did a marginally better job in removing contaminants, but used more energy and, more importantly, decreased the soil’s fertility to the degree that it could not be reused.
“Between 200 and 300 C (392-572 F), the light volatile compounds evaporate,” he said. “When you get to 350 to 400 C (662-752 F), you start breaking first the heteroatom bonds, and then carbon-carbon and carbon-hydrogen bonds triggering a sequence of radical reactions that convert heavier hydrocarbons to stable, low-reactivity char.”
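The temperature regimes Zygourakis describes can be summarized as a simple lookup; the sketch below is purely illustrative (the boundaries are taken from the quotes above, and the 500 C clay-dehydration threshold from earlier in the story, not from the paper's own methods):

```python
# Illustrative map of the pyrolysis temperature regimes described above.
# Boundaries come from the quoted explanation; real soil chemistry is
# continuous, not a clean step function.

def pyrolysis_regime(temp_c: float) -> str:
    """Return the dominant process described for a given temperature (deg C)."""
    if temp_c >= 500:
        return "irreversible clay dehydration (soil fertility destroyed)"
    if temp_c >= 350:
        return "bond breaking and radical reactions forming stable char"
    if temp_c >= 200:
        return "evaporation of light volatile compounds"
    return "below the treatment range"

# The reported 420 C "sweet spot" falls in the char-forming regime,
# safely below the 500 C threshold that ruins the clays.
assert pyrolysis_regime(420).startswith("bond breaking")
```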
The true test of the pilot program came when the researchers grew Simpson black-seeded lettuce, a variety for which petroleum is highly toxic, on the original clean soil, some contaminated soil and several pyrolyzed soils. While plants in the treated soils were a bit slower to start, they found that after 21 days, plants grown in pyrolyzed soil with fertilizer or simply water showed the same germination rates and had the same weight as those grown in clean soil.
“We knew we had a process that effectively cleans up oil-contaminated soil and restores its fertility,” Zygourakis said. “But, had we truly detoxified the soil?”
To answer this final question, the Rice team turned to Bhagavatula Moorthy, a professor of neonatology at Baylor College of Medicine, who studies the effects of airborne contaminants on neonatal development. Moorthy and his lab found that extracts taken from oil-contaminated soils were toxic to human lung cells, while exposing the same cell lines to extracts from treated soils had no adverse effects. The study eased concerns that pyrolyzed soil could release airborne dust particles laced with highly toxic pollutants like PAHs.
"One important lesson we learned is that different treatment objectives for regulatory compliance, detoxification and soil-fertility restoration need not be mutually exclusive and can be simultaneously achieved," Alvarez said.
Computation is a ubiquitous concept in physical sciences, biology, and engineering, where it provides many critical capabilities. Historically, there have been ongoing efforts to merge computation with “unusual” matters across many length scales, from microscopic droplets (Science 315, 832, 2007) to DNA nanostructures (Science 335, 831, 2012; Nat. Chem. 9, 1056, 2017) and molecules (Science 266, 1021, 1994; Science 314, 1585, 2006; Nat. Nanotech. 2, 399, 2007; Nature 375, 368, 2011).
However, the implementation of complex computation in particle systems, especially in nanoparticles, remains challenging, despite a wide range of potential applications that would benefit from algorithmically controlling their unique and potentially useful intrinsic features (such as photonic, plasmonic, catalytic, photothermal, optoelectronic, electrical, magnetic and material properties) without human intervention.
This challenge is not due to a lack of sophistication in the current state of the art of stimuli-responsive nanoparticles, many of which can conceptually function as elementary logic gates. Rather, it is mostly due to the lack of scalable architectures that would enable systematic integration and wiring of the gates into a large integrated circuit.
Previous approaches are limited to (i) demonstrating one simple logic operation per test tube or (ii) relying on complicated enzyme-based molecular circuits in solution. It should also be noted that modularity and scalability are key challenges for practical and widespread use of DNA computing.
In nature, the cell membrane is analogous to a circuit board: it organizes a wide range of biological nanostructures (e.g. proteins) as computational units and allows them to interact dynamically with each other on its fluidic 2D surface to carry out complex functions as a network, often inducing intracellular signaling cascades. For example, membrane proteins take chemical and physical cues as inputs (e.g. binding of chemical agents, mechanical stimuli) and change their conformations and/or dimerize as outputs. Most importantly, such biological "computing" processes occur in a massively parallel fashion. Information processing on living cell membranes is key to how biological systems adapt to changes in their external environments.
This manuscript reports the development of a nanoparticle-lipid bilayer hybrid computing platform termed the lipid nanotablet (LNT), in which nanoparticles, each programmed with surface chemical ligands (DNA in this case), are tethered to a supported lipid bilayer to carry out computation. Taking inspiration from the parallel computing processes on cellular membranes, we exploited supported lipid bilayers (SLBs), synthetic mimics of cell surfaces, as chemical circuit boards on which to construct nanoparticle circuits. This "nano-bio" computing, which occurs at the interface of nanostructures and biomolecules, translates molecular information in solution (input) into dynamic assembly/disassembly of nanoparticles on a lipid bilayer (output).
We introduced two types of nanoparticles, differing in mobility, to a lipid bilayer: mobile Nano-Floaters and immobile Nano-Receptors. Due to their high mobility, floaters actively interact with receptors across space and time, functioning as active units of computation. The nanoparticles are functionalized with specially designed DNA [deoxyribonucleic acid] ligands, and these surface ligands render receptor-floater interactions programmable, thereby transforming a receptor-floater pair into a logic gate. A nanoparticle logic gate takes DNA strands in solution as inputs and generates nanoparticle assembly or disassembly events as outputs. The nanoparticles and their interactions can be imaged and tracked by dark-field microscopy with single-nanoparticle resolution because of the strong and stable scattering signals from plasmonic nanoparticles. Using this approach (termed "interface programming"), we first demonstrated that a pair of nanoparticles on a lipid bilayer can carry out AND, OR, and INHIBIT logic operations, take multiple inputs (fan-in), and generate multiple outputs (fan-out). Multiple logic gates can also be modularly wired with AND or OR logic via floaters, as the mobility of floaters enables information cascades among several nanoparticle logic gates; we termed this strategy "network programming." By combining these two strategies (interface and network programming), we were able to implement complex logic circuits such as a multiplexer.
The most important contributions of our paper are conceptual, along with major advances in modular and scalable molecular computing (DNA computing in this case). The LNT platform introduces, for the first time, the idea of using lipid bilayer membranes as key components for information processing. As the two-dimensional (2D) fluidic lipid membrane is biocompatible and chemically modifiable, virtually any nanostructure can be introduced and used as a computing unit. When tethered to the lipid bilayer "chip," these nanostructures can be visualized and controlled at the single-particle level; this dimensionality reduction, bringing the nanostructures from the freely diffusing solution phase (3D) to the fluidic membrane (2D), transforms a collection of nanostructures into a programmable, analyzable reaction network. We also developed a digitized imaging method and software for quantitative and massively parallel analysis of interacting nanoparticles. In addition, the LNT platform offers many practical merits over the current state of the art in molecular computing and nanotechnology. On LNT platforms, a network of nanoparticles (each with unique and beneficial properties) can be designed to respond autonomously to molecular information; such capability to algorithmically control nanoparticle networks will be very useful for addressing many challenges in molecular computing and for developing new computing platforms. As the title of our manuscript suggests, this nano-bio computing will lead to exciting opportunities in biocomputation, nanorobotics, DNA nanotechnology, artificial bio-interfaces, smart biosensors, molecular diagnostics, and intelligent nanomaterials. In summary, the operating and design principles of the lipid nanotablet platform are as follows:
(1) LNT uses single nanoparticles as units of computation. By tracking numerous nanoparticles and their actions with dark-field microscopy at the single-particle level, we could treat a single nanoparticle as a two-state device representing a bit. A nanoparticle provides a discrete, in situ optical readout of its interaction (e.g. association or dissociation) with another particle as an output of logic computation.
(2) Nanoparticles on LNT function as Boolean logic gates. We exploited the programmable bonding interaction within particle-particle interfaces to transform two interacting nanoparticles into a Boolean logic gate. The gate senses single-stranded DNA as inputs and triggers an assembly or disassembly reaction of the pair as an output. We demonstrated two-input AND, two-input OR and INHIBIT logic operations, and fan-in/fan-out of logic gates.
(3) LNT enables modular wiring of multiple nanoparticle logic gates into a combinational circuit. We exploited parallel, single-particle imaging to program nanoparticle networks and thereby wire multiple logic gates into a combinational circuit. We demonstrate a multiplexer MUX2to1 circuit built from the network wiring rules.
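The gate behavior in points (2) and (3) can be sketched as ordinary Boolean logic. The code below is a conceptual sketch only: in the actual platform the "gates" are DNA-programmed particle interactions, and the paper's multiplexer wiring may differ from this textbook MUX construction.

```python
# Conceptual sketch of the nanoparticle logic gates described above.
# Inputs model the presence (True) or absence (False) of DNA strands in
# solution; the output models an assembly (True) or disassembly (False) event.

def and_gate(a: bool, b: bool) -> bool:
    # Assembly occurs only when both input strands are present.
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    # Either input strand alone triggers assembly.
    return a or b

def inhibit_gate(a: bool, b: bool) -> bool:
    # Input a triggers assembly unless the inhibitor strand b is present.
    return a and not b

def mux_2to1(d0: bool, d1: bool, select: bool) -> bool:
    # A textbook 2-to-1 multiplexer wired from the primitive gates:
    # the select strand routes either data input d0 or d1 to the output.
    return or_gate(inhibit_gate(d0, select), and_gate(d1, select))

# Truth-table check: select=False passes d0, select=True passes d1.
for d0 in (False, True):
    for d1 in (False, True):
        assert mux_2to1(d0, d1, False) == d0
        assert mux_2to1(d0, d1, True) == d1
```

The point of the MUX example is the wiring: three primitive gates, cascaded via a shared intermediate (the floater, in the LNT's terms), yield a circuit none of the gates could compute alone.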
Here’s a link to and a citation for the team’s latest paper,
Nano-bio-computing lipid nanotablet by Jinyoung Seo, Sungi Kim, Ha H. Park, Da Yeon Choi, and Jwa-Min Nam. Science Advances 22 Feb 2019: Vol. 5, no. 2, eaau2124 DOI: 10.1126/sciadv.aau2124
I have two stories about lungs and they are entirely different, with the older one being a bioengineering story from the US and the more recent one being an artificial tissue story from the University of Toronto and the University of Ottawa (both in Canada).
Lab grown lungs
The Canadian Broadcasting Corporation's Quirks and Quarks radio programme posted a December 29, 2018 news item (with embedded radio files) about bioengineered lungs,
There are two major components to building an organ: the structure and the right cells on that structure. A team led by Dr. Joan Nichols, a Professor of Internal Medicine, Microbiology and Immunology at the University of Texas Medical Branch in Galveston, was able to tackle both parts of the problem.
In their experiment they used a donor organ for the structure. They took a lung from an unrelated pig, and stripped it of its cells, leaving a scaffold of collagen, a tough, flexible protein. This provided a pre-made appropriate structure, though in future they think it may be possible to use 3-D printing technology to get the same result.
They then added cultured cells from the animal who would be receiving the transplant – so the lung was made of the animal’s own cells. Cultured lung and blood vessel cells were placed on the scaffold and it was placed in a tank for 30 days with a cocktail of nutrients to help the cells stick to the scaffold and proliferate. The result was a kind of baby lung.
They then transplanted the bio-engineered, though immature, lung into the recipient animal where they hoped it would continue to develop and mature – growing to become a healthy, functioning organ.
The recipients of the bio-engineered lungs were four adult pigs, which appeared to tolerate the transplants well. In order to study the development of the bio-engineered lungs, they euthanized the animals at different times: 10 hours, two weeks, one month and two months after transplantation.
They found that as early as two weeks, the bio-engineered lung had integrated into the recipient animals' bodies, building a strong network of blood vessels essential for the lung to survive. There was no evidence of pulmonary edema, the buildup of fluid in the lungs, which is usually a sign of the blood vessels not working efficiently. There was no sign of rejection of the transplanted organs, and the pigs were healthy up to the point where they were euthanized.
One lingering concern is how well the bio-engineered lungs delivered oxygen. The four pigs that received the transplant had one original functioning lung, so they didn't depend on their new bio-engineered lung for breathing. The scientists were not sure that the bio-engineered lung was mature enough to handle the full load of oxygen on its own.
You can hear Bob McDonald (host of Quirks & Quarks, a Canadian Broadcasting Corporation science radio programme) interview lead scientist Dr. Joan Nichols if you go here. (Note: I find he overmodulates his voice but some may find he has a 'friendly' voice.)
This is an image of the lung scaffold produced by the team,
In 2014, Joan Nichols and Joaquin Cortiella from The University of Texas Medical Branch at Galveston were the first research team to successfully bioengineer human lungs in a lab. In a paper now available in Science Translational Medicine, they provide details of how their work has progressed since 2014, to the point where no complications have occurred in the pigs as part of standard preclinical testing.
"The number of people who have developed severe lung injuries has increased worldwide, while the number of available transplantable organs has decreased," said Cortiella, professor of pediatric anesthesia. "Our ultimate goal is to eventually provide new options for the many people awaiting a transplant," said Nichols, professor of internal medicine and associate director of the Galveston National Laboratory at UTMB.
To produce a bioengineered lung, a support scaffold is needed that meets the structural needs of a lung. A support scaffold was created using a lung from an unrelated animal that was treated using a special mixture of sugar and detergent to eliminate all cells and blood in the lung, leaving only the scaffolding proteins or skeleton of the lung behind. This is a lung-shaped scaffold made totally from lung proteins.
The cells used to produce each bioengineered lung came from a single lung removed from each of the study animals. This was the source of the cells used to produce a tissue-matched bioengineered lung for each animal in the study. The lung scaffold was placed into a tank filled with a carefully blended cocktail of nutrients, and the animals' own cells were added to the scaffold following a carefully designed protocol, or recipe. The bioengineered lungs were grown in a bioreactor for 30 days prior to transplantation. Recipient animals were kept alive for 10 hours, two weeks, one month or two months after transplantation, allowing the research team to examine development of the lung tissue following transplantation and how the bioengineered lung would integrate with the body.
All of the pigs that received a bioengineered lung stayed healthy. As early as two weeks post-transplant, the bioengineered lung had established the strong network of blood vessels needed for the lung to survive.
"We saw no signs of pulmonary edema, which is usually a sign of the vasculature not being mature enough," said Nichols and Cortiella. "The bioengineered lungs continued to develop post-transplant without any infusions of growth factors; the body provided all of the building blocks that the new lungs needed."
Nichols said that the focus of the study was to learn how well the bioengineered lung adapted and continued to mature within a large, living body. They didn’t evaluate how much the bioengineered lung provided oxygenation to the animal.
“We do know that the animals had 100 percent oxygen saturation, as they had one normal functioning lung,” said Cortiella. “Even after two months, the bioengineered lung was not yet mature enough for us to stop the animal from breathing on the normal lung and switch to just the bioengineered lung.”
For this reason, future studies will look at long-term survival and maturation of the tissues as well as gas exchange capability.
The researchers said that with enough funding, they could grow lungs to transplant into people in compassionate use circumstances within five to 10 years.
“It has taken a lot of heart and 15 years of research to get us this far, our team has done something incredible with a ridiculously small budget and an amazingly dedicated group of people,” Nichols and Cortiella said.
Here’s a citation and another link for the paper,
Production and transplantation of bioengineered lung into a large-animal model by Joan E. Nichols, Saverio La Francesca, Jean A. Niles, Stephanie P. Vega, Lissenya B. Argueta, Luba Frank, David C. Christiani, Richard B. Pyles, Blanca E. Himes, Ruyang Zhang, Su Li, Jason Sakamoto, Jessica Rhudy, Greg Hendricks, Filippo Begarani, Xuewu Liu, Igor Patrikeev, Rahul Pal, Emiliya Usheva, Grace Vargas, Aaron Miller, Lee Woodson, Adam Wacher, Maria Grimaldo, Daniil Weaver, Ron Mlcak, and Joaquin Cortiella. Science Translational Medicine 01 Aug 2018: Vol. 10, Issue 452, eaao3926 DOI: 10.1126/scitranslmed.aao3926
This paper is behind a paywall.
Artificial lung cancer tissue
The research teams at the University of Toronto and the University of Ottawa worked on creating artificial lung tissue but other applications are possible too. First, there’s the announcement in a February 25, 2019 news item on phys.org,
A 3-D hydrogel created by researchers in U of T Engineering Professor Molly Shoichet’s lab is helping University of Ottawa researchers to quickly screen hundreds of potential drugs for their ability to fight highly invasive cancers.
Cell invasion is a critical hallmark of metastatic cancers, such as certain types of lung and brain cancer. Fighting these cancers requires therapies that can both kill cancer cells as well as prevent cell invasion of healthy tissue. Today, most cancer drugs are only screened for their ability to kill cancer cells.
“In highly invasive diseases, there is a crucial need to screen for both of these functions,” says Shoichet. “We now have a way to do this.”
In their latest research, the team used hydrogels to mimic the environment of lung cancer, selectively allowing cancer cells, and not healthy cells, to invade. This emulated environment enabled their collaborators in Professor Bill Stanford's lab at the University of Ottawa to screen for both cancer-cell growth and invasion. The study, led by Roger Y. Tam, a research associate in Shoichet's lab, was recently published in Advanced Materials.
“We can conduct this in a 384-well plate, which is no bigger than your hand. And with image-analysis software, we can automate this method to enable quick, targeted screenings for hundreds of potential cancer treatments,” says Shoichet.
One example is the researchers’ drug screening for lymphangioleiomyomatosis (LAM), a rare lung disease affecting women. Shoichet and her team were inspired by the work of Green Eggs and LAM, a Toronto-based organization raising awareness of the disease.
Using their hydrogels, they were able to automate and screen more than 800 drugs, thereby uncovering treatments that could target disease growth and invasion.
In the ongoing collaboration, the researchers plan to next screen multiple drugs at different doses to gain greater insight into new treatment methods for LAM. The strategies and insights they gain could also help identify new drugs for other invasive cancers.
Shoichet, who was recently named a Distinguished Women in Chemistry or Chemical Engineering awardee, also plans to patent the hydrogel technology.
“This has, and continues to be, a great collaboration that is advancing knowledge at the intersection of engineering and biology,” says Shoichet.
I note that Shoichet (pronounced ShoyKet) is getting ready to patent this work. I do have a question about this and it’s not up to Shoichet to answer as she didn’t create the system. Will the taxpayers who funded her work receive any financial benefits should the hydrogel prove to be successful or will we be paying double, both supporting her research and paying for the hydrogel through our healthcare costs?
Getting back to the research, here’s a link to and a citation for the paper,
This may look like just another gauzy fabric but it has some special properties according to a February 7, 2019 news item on ScienceDaily,
Despite decades of innovation in fabrics with high-tech thermal properties that keep marathon runners cool or alpine hikers warm, there has never been a material that changes its insulating properties in response to the environment. Until now.
University of Maryland researchers have created a fabric that can automatically regulate the amount of heat that passes through it. When conditions are warm and moist, such as those near a sweating body, the fabric allows infrared radiation (heat) to pass through. When conditions become cooler and drier, the fabric reduces the heat that escapes. The development was reported in the February 8, 2019 issue of the journal Science.
The researchers created the fabric from specially engineered yarn coated with a conductive metal. Under hot, humid conditions, the strands of yarn compact and activate the coating, while cool, dry conditions reverse the action. The researchers refer to this as “gating”—essentially a tunable blind that transmits or blocks heat.
“This is the first technology that allows us to dynamically gate infrared radiation,” said YuHuang Wang, a professor of chemistry and biochemistry and one of the paper’s corresponding authors who directed the studies.
The base yarn for this new textile is created with fibers made of two different synthetic materials—one absorbs water and the other repels it. The strands are coated with carbon nanotubes, a special class of lightweight, carbon-based conductive material.
Because materials in the fibers both resist and absorb water, the fibers warp when exposed to humidity such as that surrounding a sweating body. That distortion brings the strands of yarn closer together, opening the pores in the fabric and creating a minor cooling effect by allowing heat to escape. More importantly, it modifies the electromagnetic coupling between the carbon nanotubes in the coating.
“You can think of this coupling effect like the bending of a radio antenna to change the wavelength or frequency it resonates with,” Wang said. “Imagine bringing two antennae close together to regulate the kind of electromagnetic wave they pick up. When the fabric fibers are brought closer together, the radiation they interact with changes. In clothing, that means the fabric interacts with the heat radiating from the human body.
"Depending on the tuning, the fabric either blocks infrared radiation or allows it to pass through. The reaction is almost instant, so before people realize it, the dynamic gating mechanism is either cooling them down or working in reverse to trap heat.
"The human body is a perfect radiator. It gives off heat quickly," said Min Ouyang, a professor of physics at UMD and the paper's other corresponding author. "For all of history, the only way to regulate the radiator has been to take clothes off or put clothes on. But this fabric is a true bidirectional regulator."
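The gating behavior described above amounts to a two-state switch driven by temperature and humidity. The sketch below illustrates that logic only; the threshold values are invented for the example, not measured parameters from the paper.

```python
# Minimal sketch of the textile's "gating" response. Thresholds are
# hypothetical placeholders: warm, moist conditions collapse the yarn,
# shifting the nanotube coupling so body heat (infrared) passes through.

def ir_gate_open(temp_c: float, humidity: float,
                 warm_c: float = 30.0, humid_frac: float = 0.6) -> bool:
    """Return True when the fabric transmits infrared radiation."""
    return temp_c >= warm_c and humidity >= humid_frac

assert ir_gate_open(35, 0.8) is True    # sweating body: heat escapes
assert ir_gate_open(15, 0.2) is False   # cool and dry: heat retained
```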
More work is needed before the fabric can be commercialized, but according to the researchers, materials used for the base fiber are readily available and the carbon coating can be easily added during the standard dyeing process.
Here’s a link to and a citation for the paper,
Dynamic gating of infrared radiation in a textile by Xu A. Zhang, Shangjie Yu, Beibei Xu, Min Li, Zhiwei Peng, Yongxin Wang, Shunliu Deng, Xiaojian Wu, Zupeng Wu, Min Ouyang, YuHuang Wang. Science 08 Feb 2019: Vol. 363, Issue 6427, pp. 619-623 DOI: 10.1126/science.aau1217
Xuan paper is special being both rare and used for calligraphy and art works. Before getting to the ‘fire-resistant’ news, it might be helpful to get some details about Xuan paper as it is typically prepared and used (from a Dec. 29, 2018 news item on xinhuanet.com),
Today’s Chinese artists now have the opportunity to preserve their works much longer than the masters who painted hundreds of years ago.
Chinese researchers have developed a non-flammable version of Xuan paper that has high thermal stability, according to the Chinese Academy of Sciences (CAS).
Xuan paper, a type of handmade paper, was originally produced in ancient China and used for both Chinese calligraphy and paintings. The procedure of making Xuan paper was listed as a world intangible cultural heritage by UNESCO in 2009.
The raw materials needed to produce Xuan paper are found in Jingxian County, east China's Anhui Province, and as of late are in short supply.
The traditional handmade method of Xuan paper involves more than 100 steps and takes nearly two years [emphasis mine]. It has a low output and high cost. Xuan paper made with organic materials often suffers from degradation, yellowing and deteriorating properties during the long-term natural aging process.
Furthermore, the most lethal problem of traditional Xuan paper is its high flammability.
A January 18, 2019 news item on Nanowerk adds a few more details about the traditional paper while describing the ‘new’ Xuan paper (Note: A link has been removed),
Xuan paper is an excellent example of the traditional handmade paper, and features excellent properties of durability, ink wetting, and resistance to insects and mildew. Its excellent durability is attributed to its unique raw materials and handmade manufacturing process under mild conditions.
The bark of Pteroceltis tatarinowii, a common species of elm in the area, is used as the main raw material to produce Xuan paper. Limestone particles are deposited on the surface of the pteroceltis bark fibers, where they can neutralize acids produced by the hydrolysis of plant fibers and from the environment.
Since the raw materials are produced only in Jing County, Anhui Province, China, Xuan paper suffers from a severe shortage. It also has shortcomings such as a complicated traditional handmaking process and flammability. In a recent paper published in ACS Sustainable Chemistry & Engineering (“Fire-Resistant Inorganic Analogous Xuan Paper with Thousands of Years’ Super-Durability”), a team led by Prof. ZHU Yingjie of the Shanghai Institute of Ceramics, Chinese Academy of Sciences, developed a new kind of “fire-resistant Xuan paper” based on ultralong hydroxyapatite nanowires.
The unique integral structure of the “fire-resistant Xuan paper” with excellent mechanical properties and high flexibility was designed to be similar to the reinforced concrete structure in tall buildings. Ultralong hydroxyapatite nanowires are used as the main building material and are similar to the concrete. Silica glass fibers with micrometer-sized diameters are used as the reinforcing framework material and are similar to supporting steel bars. In addition, a new kind of inorganic adhesive composed of amorphous nanoparticles was designed, prepared and used as the binder in the “fire-resistant Xuan paper”.
The as-prepared “fire-resistant Xuan paper” retains its properties well even after simulated aging of up to 3,000 years.
The original whiteness of the “fire-resistant Xuan paper” is 92%, and it decreases only slightly to 91.6% after simulated aging of 2,000 years, a whiteness retention as high as 99.6%. Even after simulated aging of 3,000 years, its whiteness decreases only to 86.7%, a retention of 94.2%. This is much higher than that of traditional Xuan paper: the whiteness of traditional unprocessed Xuan paper decreases from an initial 70.5% to 47.3% (67.1% retention) after simulated aging of 2,000 years, and to 42.2% (59.9% retention) after 3,000 years.
The “fire-resistant Xuan paper” exhibits superior mechanical properties during the simulated aging process.
The tensile strength retention of the “fire-resistant Xuan paper” is as high as 95.2% after simulated aging of 2,000 years, and 81.3% after 3,000 years. In contrast, the average tensile strength retention of the unprocessed Xuan paper is only 54.9% after 2,000 years, and 40.4% after 3,000 years. Furthermore, the “fire-resistant Xuan paper” has excellent ink wetting performance, which is mainly attributed to the nanoscale porous structure and hydroxyl groups of the ultralong hydroxyapatite nanowires.
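The retention figures quoted above are straightforward ratios of the aged value to the initial value. As a quick sanity check, a short Python snippet (using the numbers from the text; the `retention` helper is mine, not the paper’s) reproduces each quoted whiteness percentage:

```python
# Whiteness retention figures quoted in the text,
# recomputed as (aged value / initial value) * 100.

def retention(initial, aged):
    """Percentage of the initial value retained after simulated aging."""
    return round(aged / initial * 100, 1)

# Fire-resistant Xuan paper: whiteness 92% initially,
# 91.6% after 2,000 simulated years, 86.7% after 3,000.
print(retention(92.0, 91.6))  # 99.6
print(retention(92.0, 86.7))  # 94.2

# Traditional unprocessed Xuan paper: whiteness 70.5% initially,
# 47.3% after 2,000 simulated years, 42.2% after 3,000.
print(retention(70.5, 47.3))  # 67.1
print(retention(70.5, 42.2))  # 59.9
```

Each computed value matches the percentage reported in the press release to one decimal place; the tensile-strength retentions work the same way.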
Preventing mould growth on the paper is a great challenge, because mould causes the Xuan paper to deteriorate. In this study, experiments showed that different kinds of mould spores do not breed and spread on the “fire-resistant Xuan paper”; it maintains a clean surface without any mould growth even when exposed to external nutrients, indicating excellent anti-mildew performance. In contrast, the growth and spread of mould are clearly observed on traditional Xuan paper in the presence of external nutrients, indicating that its anti-mildew performance is not satisfactory.
The most important property of the “fire-resistant Xuan paper” is that it is fire resistant and highly thermally stable. Thus it can protect precious calligraphy and painting works, as well as books, documents, and archives, from damage by fire. In addition, the production process of the “fire-resistant Xuan paper” is simple and highly efficient, requiring only three to four days.
Xuan paper is the best material carrier for the calligraphy and painting arts; many such works have been well preserved for hundreds of years.
The real map, not the image of the map you see above, offers a disconcerting (for me, anyway) experience. Especially since I’ve just finished reading Lisa Feldman Barrett’s 2017 book, How Emotions are Made, where she presents her theory of ‘constructed emotion’. (There’s more about ‘constructed emotion’ later in this post.)
Ooh, surprise! Those spontaneous sounds we make to express everything from elation (woohoo) to embarrassment (oops) say a lot more about what we’re feeling than previously understood, according to new research from the University of California, Berkeley.
Proving that a sigh is not just a sigh [a reference to the song, As Time Goes By? The lyric is “a kiss is still a kiss, a sigh is just a sigh …”], UC Berkeley scientists conducted a statistical analysis of listener responses to more than 2,000 nonverbal exclamations known as “vocal bursts” and found they convey at least 24 kinds of emotion. Previous studies of vocal bursts set the number of recognizable emotions closer to 13.
The results, recently published online in the American Psychologist journal, are demonstrated in vivid sound and color on the first-ever interactive audio map of nonverbal vocal communication.
“This study is the most extensive demonstration of our rich emotional vocal repertoire, involving brief signals of upwards of two dozen emotions as intriguing as awe, adoration, interest, sympathy and embarrassment,” said study senior author Dacher Keltner, a psychology professor at UC Berkeley and faculty director of the Greater Good Science Center, which helped support the research.
For millions of years, humans have used wordless vocalizations to communicate feelings that can be decoded in a matter of seconds, as this latest study demonstrates.
“Our findings show that the voice is a much more powerful tool for expressing emotion than previously assumed,” said study lead author Alan Cowen, a Ph.D. student in psychology at UC Berkeley.
On Cowen’s audio map, one can slide one’s cursor across the emotional topography and hover over fear (scream), then surprise (gasp), then awe (woah), realization (ohhh), interest (ah?) and finally confusion (huh?).
Among other applications, the map can be used to help teach voice-controlled digital assistants and other robotic devices to better recognize human emotions based on the sounds we make, he said.
As for clinical uses, the map could theoretically guide medical professionals and researchers working with people with dementia, autism and other emotional processing disorders to zero in on specific emotion-related deficits.
“It lays out the different vocal emotions that someone with a disorder might have difficulty understanding,” Cowen said. “For example, you might want to sample the sounds to see if the patient is recognizing nuanced differences between, say, awe and confusion.”
Though limited to U.S. responses, the study suggests humans are so keenly attuned to nonverbal signals – such as the bonding “coos” between parents and infants – that we can pick up on the subtle differences between surprise and alarm, or an amused laugh versus an embarrassed laugh.
For example, by placing the cursor in the embarrassment region of the map, you might find a vocalization that is recognized as a mix of amusement, embarrassment and positive surprise.
“A tour through amusement reveals the rich vocabulary of laughter, and a spin through the sounds of adoration, sympathy, ecstasy and desire may tell you more about romantic life than you might expect,” said Keltner.
Researchers recorded more than 2,000 vocal bursts from 56 male and female professional actors and non-actors from the United States, India, Kenya and Singapore by asking them to respond to emotionally evocative scenarios.
Next, more than 1,000 adults recruited via Amazon’s Mechanical Turk online marketplace listened to the vocal bursts and evaluated them based on the emotions and meaning they conveyed and whether the tone was positive or negative, among several other characteristics.
A statistical analysis of their responses found that the vocal bursts fit into at least two dozen distinct categories including amusement, anger, awe, confusion, contempt, contentment, desire, disappointment, disgust, distress, ecstasy, elation, embarrassment, fear, interest, pain, realization, relief, sadness, surprise (positive), surprise (negative), sympathy and triumph.
For the second part of the study, researchers sought to present real-world contexts for the vocal bursts. They did this by sampling YouTube video clips that would evoke the 24 emotions established in the first part of the study, such as babies falling, puppies being hugged and spellbinding magic tricks.
This time, 88 adults of all ages judged the vocal bursts extracted from the YouTube videos. Again, the researchers were able to categorize their responses into 24 shades of emotion. The full set of data was then organized into a semantic space and rendered as an interactive map.
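The “semantic space” mentioned here is, in essence, a low-dimensional embedding of listeners’ category judgments: each vocal burst becomes a point whose position reflects the mix of emotions it was heard as. The paper’s own statistical modelling is more sophisticated, but a minimal sketch of the idea, using fabricated rating data projected onto two principal components with NumPy, might look like this:

```python
import numpy as np

# Toy example: each row is one vocal burst, each column the fraction of
# listeners who assigned it to a given emotion category. The real study
# used ~2,000 bursts rated by over 1,000 listeners; here we fabricate
# 50 bursts across the 24 categories.
rng = np.random.default_rng(0)
ratings = rng.random((50, 24))                  # 50 bursts x 24 categories
ratings /= ratings.sum(axis=1, keepdims=True)   # normalize rows to proportions

# Centre the data and project onto the top two principal components,
# giving each burst a 2-D position -- its coordinates on the "map".
centered = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T                    # shape (50, 2)

print(coords.shape)  # (50, 2)
```

In the actual interactive map each point also carries an audio clip and a blended emotion label; the projection above only illustrates how 24-dimensional judgments can be laid out on a two-dimensional plane.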
“These results show that emotional expressions color our social interactions with spirited declarations of our inner feelings that are difficult to fake, and that our friends, co-workers, and loved ones rely on to decipher our true commitments,” Cowen said.
The writer assumes that emotions are pre-existing. Somewhere, there’s happiness, sadness, anger, etc. It’s this pre-existence that Lisa Feldman Barrett challenges with her theory that we construct our emotions (from her Wikipedia entry),
She highlights differences in emotions between different cultures, and says that emotions “are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment.”
You can find Barrett’s December 6, 2017 TED talk here, where she explains her theory in greater detail. One final note about Barrett: she was born and educated in Canada and now works as a Professor of Psychology at Northeastern University in Boston, Massachusetts, US, with appointments at Harvard Medical School and Massachusetts General Hospital.
A February 7, 2019 article by Mark Wilson for Fast Company delves further into the 24-emotion audio map mentioned at the outset of this posting (Note: Links have been removed),
Fear, surprise, awe. Desire, ecstasy, relief.
These emotions are not distinct, but interconnected, across the gradient of human experience. At least that’s what a new paper from researchers at the University of California, Berkeley, Washington University, and Stockholm University proposes. The accompanying interactive map, which charts the sounds we make and how we feel about them, will likely persuade you to agree.
At the end of his article, Wilson also mentions the Dalai Lama and his Atlas of Emotions, a data visualization project, (featured in Mark Wilson’s May 13, 2016 article for Fast Company). It seems humans of all stripes are interested in emotions.
Here’s a link to and a citation for the paper about the audio map,
I’m always a sucker for bioenergy harvesting stories, but this is the first time I’ve seen research on the topic that combines weight control with wound healing. From a January 17, 2019 news item on Nanowerk,
Although electrical stimulation has therapeutic potential for various disorders and conditions, ungainly power sources have hampered practical applications. Now bioengineers have developed implantable and wearable nanogenerators from special materials that create electrical pulses when compressed by body motions. The pulses controlled weight gain and enhanced healing of skin wounds in rat models.
The work was performed by a research team led by Xudong Wang, Ph.D., Professor of Material Sciences and Engineering, College of Engineering, University of Wisconsin-Madison, and supported by the [US Dept. of Health, National Institutes of Health] National Institute of Biomedical Imaging and Bioengineering (NIBIB).
The researchers used what are known as piezoelectric and dielectric materials, including ceramics and crystals, which have a special property of creating an electrical charge in response to mechanical stress.
“Wang and colleagues have engineered solutions to a number of technical hurdles to create piezoelectric and dielectric materials that are compatible with body tissues and can generate a reliable, self-sufficient power supply. Their meticulous work has enabled a simple and elegant technology that offers the possibility of developing electrical stimulation therapies for a number of major diseases that currently lack adequate treatments,” explained David Rampulla, Ph.D., director of the Program in Biomaterials and Biomolecular Constructs at NIBIB.
Shedding weight by curbing appetite
Worldwide, more than 700 million people — over 100 million of them children — are obese, causing health problems such as cardiovascular disease, diabetes, kidney disease, and certain cancers. In 2015, approximately four million people died of obesity-related causes [1].
To address this crisis, Wang and his colleagues developed a vagal nerve stimulator (VNS) that dramatically improves appetite suppression through electrical stimulation of the vagus nerve. The approach is a promising one that had previously not proven practical because patients must carry bulky battery packs that require proper programming and frequent recharging.
The VNS consists of a small patch, about the size of a fingernail, which carries tiny devices called nanogenerators. Minimally invasive surgery was used to attach the VNS to the stomachs of rats. The rats’ stomach movements resulted in the delivery of gentle electrical pulses to the vagus nerve, which links the brain to the stomach. With the VNS, when the stomach moved in response to eating, the electric signal told the brain that the stomach was full, even if only a small amount of food had been consumed.
The device curbed the rats’ appetite and reduced body weight by a remarkable 40 percent. “The stimulation is a natural response to regulate food intake, so there are no unwanted side effects,” explained Wang. When the device was removed, the rats resumed their normal eating patterns and their weight returned to pre-treatment levels.
“Given the simplicity and effectiveness of the system, coupled with the fact that the effect is reversible and carries no side-effects, we are now planning testing in larger animals with the hope of eventually moving into human trials,” said Wang.
Accelerating wound healing
In another NIBIB-funded study in a rat experimental model, the researchers used their nanogenerator technology to determine whether electrical stimulation would accelerate healing of wounds on the skin surface.
For this experiment, a band of nanogenerators was placed around the rat’s chest, where the expansion from breathing created a mild electric field. Small electrodes in a bandage-like device were placed over skin wounds on the rat’s back, where they directed the electric field to cover the wound area.
The technique reduced healing times to just three days compared with nearly two weeks for the normal healing process.
Similar to the case with appetite suppression, it was known that electricity could enhance wound healing, but the devices that had been developed were large and impractical. The nanogenerator-powered bandage is completely non-invasive and produced a mild electric field that is similar to electrical activity detected in the normal wound-healing process.
The researchers observed electrical activation of normal cellular healing processes that included the movement of healthy skin fibroblasts into the wound, accompanied by the release of biochemical factors that promote the growth of the fibroblasts and other cell types that expand to repair the wound space.
“The dramatic decrease in healing time was surprising,” said Wang. “We now plan to test the device on pigs because their skin is very similar to human skin.”
The team believes the simplicity of the electric bandage will help move the technology to human trials quickly. In addition, Wang explained that the fabrication of the device is very inexpensive and a product for human use would cost about the same as a normal bandage.
The experiments on appetite suppression were reported in the December issue of Nature Communications [2]. The wound-healing studies were reported in the December issue of ACS Nano [3]. Both studies were supported by grant EB021336 from the National Institute of Biomedical Imaging and Bioengineering, and grant CA014520 from the National Cancer Institute.
For anyone who’s not familiar with the problem, digital art is disappearing or very difficult and/or expensive to access after the technology on which or with which it was created becomes obsolete. Fear not! Mathematicians are coming to the rescue in a joint programme between New York University (NYU) and the Solomon R. Guggenheim Museum.
Just as conservators have developed methods to protect traditional artworks, computer scientists have now created means to safeguard computer- or time-based art by following the same preservation principles.
Software- and computer-based works of art are fragile — not unlike their canvas counterparts — as their underlying technologies such as operating systems and programming languages change rapidly, placing these works at risk.
These include Shu Lea Cheang’s Brandon (1998-99), Mark Napier’s net.flag (2002), and John F. Simon Jr.’s Unfolding Object (2002), three online works recently conserved at the Solomon R. Guggenheim Museum, through a collaboration with New York University’s Courant Institute of Mathematical Sciences.
“The principles of art conservation for traditional works of art can be applied to decision-making in conservation of software- and computer-based works of art with respect to programming language selection, programming techniques, documentation, and other aspects of software remediation during restoration,” explains Deena Engel, a professor of computer science at New York University’s Courant Institute of Mathematical Sciences.
Since 2014, she has been working with the Guggenheim Museum’s Conservation Department to analyze, document, and preserve computer-based artworks from the museum’s permanent collection. In 2016, the Guggenheim took more formal steps to ensure the stature of these works by establishing Conserving Computer-Based Art (CCBA), a research and treatment initiative aimed at preserving software and computer-based artworks held by the museum.
“As part of conserving contemporary art, conservators are faced with new challenges as artists use current technology as media for their artworks,” says Engel. “If you think of a word processing document that you wrote 10 years ago, can you still open it and read or print it? Software-based art can be very complex. Museums are tasked with conserving and exhibiting works of art in perpetuity. It is important that museums and collectors learn to care for these vulnerable and important works in contemporary art so that future generations can enjoy them.”
Under this initiative, a team led by Engel and Joanna Phillips, former senior conservator of time-based media at the Guggenheim Museum, and including conservation fellow Jonathan Farbowitz and Lena Stringari, deputy director and chief conservator at the Guggenheim Museum, explore and implement both technical and theoretical approaches to the treatment and restoration of software-based art.
In doing so, they not only strive to maintain the functionality and appeal of the original works, but also follow the ethical principles that guide conservation of traditional artwork, such as sculptures and paintings. Specifically, Engel and Phillips adhere to the American Institute for Conservation of Historic and Artistic Works’ Code of Ethics, Guidelines for Practice, and Commentaries, applying these standards to artistic creations that rely on software as a medium.
“For example, if we migrate a work of software-based art from an obsolete programming environment to a current one, our selection and programming decisions in the new programming language and environment are informed in part by evaluating the artistic goals of the medium first used,” explains Engel. “We strive to maintain respect for the artist’s coding style and approach in our restoration.”
So far, Phillips and Engel have completed two restorations of on-line artworks at the museum: Cheang’s Brandon (restored in 2016-2017) and Simon’s Unfolding Object (restored in 2018).
Commissioned by the Guggenheim in 1998, Brandon was the first of three web artworks acquired by the museum. Many features of the work had begun to fail within the fast-evolving technological landscape of the Internet: specific pages were no longer accessible, text and image animations no longer displayed properly, and internal and external links were broken. Through changes implemented by CCBA, Brandon fully resumes its programmed, functional, and aesthetic behaviors. The newly restored artwork can again be accessed at http://brandon.guggenheim.org.
Unfolding Object enables visitors from across the globe to create their own individual artwork online by unfolding the pages of a virtual “object”—a two-dimensional rectangular form—click by click, creating a new, multifaceted shape. Users may also see traces left by others who have previously unfolded the same facets, represented by lines or hash marks. The colors of the object and the background change depending on the time of day, so that two simultaneous users in different time zones are looking at different colors. But because the Java technology used to develop this early Internet artwork is now obsolete, the work was no longer supported by contemporary web browsers and was not easily accessible online.
About the CCBA
A longtime pioneer in the field of contemporary art conservation, and one of the few institutions in the United States with dedicated staff and lab facilities for the conservation of time-based media art, the Guggenheim established the Conserving Computer-Based Art initiative in 2016. The first program dedicated to this subject at the museum, this multiyear project was created to research and develop better practices for the acquisition, preservation, maintenance, and display of computer-based art. By addressing the challenges of preserving digital artworks, including hardware failure, rapid obsolescence of operating systems, and artists’ custom software, CCBA is tasked with the conservation of 22 computer-based artworks in the Guggenheim collection to ensure long-term storage and access to the public. The CCBA initiative is an opportunity for the Guggenheim to facilitate cross-institutional collaboration towards best-practice development, and CCBA integrates the museum’s ongoing work with the faculty and students of the Department of Computer Science at NYU’s Courant Institute of Mathematical Sciences.
Conserving Computer-Based Art is supported by the Carl & Marilynn Thoma Art Foundation, the New York State Council on the Arts with the support of Governor Andrew Cuomo and the New York State Legislature, Christie’s, and Josh Elkes.
About the Solomon R. Guggenheim Foundation
The Solomon R. Guggenheim Foundation was established in 1937 and is dedicated to promoting the understanding and appreciation of modern and contemporary art through exhibitions, education programs, research initiatives, and publications. The Guggenheim international constellation of museums includes the Solomon R. Guggenheim Museum, New York; the Peggy Guggenheim Collection, Venice; the Guggenheim Museum Bilbao; and the future Guggenheim Abu Dhabi. In 2019, the Frank Lloyd Wright-designed Solomon R. Guggenheim Museum celebrates 60 years as an architectural icon and “temple of spirit” where radical art and architecture meet. To learn more about the museum and the Guggenheim’s activities around the world, visit guggenheim.org.
About the Courant Institute of Mathematical Sciences
New York University’s Courant Institute of Mathematical Sciences is a leading center for research and education in mathematics and computer science. The Institute has contributed to domestic and international science and engineering by promoting an integrated view of mathematics and computation. Faculty and students are engaged in a broad range of research activities, which include many areas of mathematics and computer science as well as the application of these disciplines to problems in the biological, physical, and economic sciences. The Courant Institute has played a central role in the development of applied mathematics, analysis, and computer science, and its faculty has received numerous national and international awards in recognition of their extraordinary research accomplishments. For more information, visit http://www.cims.nyu.edu/.
Have fun exploring these relatively newly available art works.
There seems to be much interest in bacteria as collaborators, as opposed to the old ‘enemy that must be destroyed’ concept. The latest collaborative effort was announced in a January 19, 2019 news item on Nanowerk,
More than one in 10 people in the world lack basic drinking water access, and by 2025, half of the world’s population will be living in water-stressed areas, which is why access to clean water is one of the National Academy of Engineering’s Grand Challenges. Engineers at Washington University in St. Louis [WUSTL] have designed a novel membrane technology that purifies water while preventing biofouling, or buildup of bacteria and other harmful microorganisms that reduce the flow of water.
And they used bacteria to build such filtering membranes.
Srikanth Singamaneni, professor of mechanical engineering & materials science, and Young-Shin Jun, professor of energy, environmental & chemical engineering, and their teams blended their expertise to develop an ultrafiltration membrane using graphene oxide and bacterial nanocellulose that they found to be highly efficient, long-lasting and environmentally friendly. If their technique were to be scaled up to a large size, it could benefit many developing countries where clean water is scarce.
Biofouling accounts for nearly half of all membrane fouling and is highly challenging to eradicate completely. Singamaneni and Jun have been tackling this challenge together for nearly five years. They previously developed other membranes using gold nanostars, but wanted to design one that used less expensive materials.
Their new membrane begins with feeding Gluconacetobacter hansenii bacteria a sugary substance so that they form cellulose nanofibers when in water. The team then incorporated graphene oxide (GO) flakes into the bacterial nanocellulose while it was growing, essentially trapping GO in the membrane to make it stable and durable.
After the GO is incorporated, the membrane is treated with a base solution to kill the Gluconacetobacter. During this process, the oxygen groups of the GO are eliminated, making it reduced GO. When the team shone sunlight onto the membrane, the reduced GO flakes immediately generated heat, which dissipated into the surrounding water and bacterial nanocellulose.
Ironically, the membrane created from bacteria also can kill bacteria. “If you want to purify water with microorganisms in it, the reduced graphene oxide in the membrane can absorb the sunlight, heat the membrane and kill the bacteria,” Singamaneni said.
Singamaneni and Jun and their team exposed the membrane to E. coli bacteria, then shone light on the membrane’s surface. After being irradiated with light for just 3 minutes, the E. coli bacteria died. The team determined that the membrane quickly heated to above the 70 degrees Celsius required to deteriorate the cell walls of E. coli bacteria.
Once the bacteria were killed, the researchers had a pristine membrane with high-quality nanocellulose fibers that was able to filter water twice as fast as commercially available ultrafiltration membranes under high operating pressure.
When they did the same experiment on a membrane made from bacterial nanocellulose without the reduced GO, the E. coli bacteria stayed alive.
“This is like 3-D printing with microorganisms,” Jun said. “We can add whatever we like to the bacterial nanocellulose during its growth. We looked at it under different pH conditions similar to what we encounter in the environment, and these membranes are much more stable compared to membranes prepared by vacuum filtration or spin-coating of graphene oxide.”
While Singamaneni and Jun acknowledge that implementing this process in conventional reverse osmosis systems is taxing, they propose a spiral-wound module system, similar to a roll of towels. It could be equipped with LEDs or a type of nanogenerator that harnesses mechanical energy from the fluid flow to produce light and heat, which would reduce the overall cost.