
The cost of building ChatGPT

After seeing the description for Laura U. Marks’s recent work ‘Streaming Carbon Footprint’ (in my October 13, 2023 posting about upcoming ArtSci Salon events in Toronto), where she focuses on the environmental impact of streaming media and digital art, I was reminded of some September 2023 news.

A September 9, 2023 news item (an Associated Press article by Matt O’Brien and Hannah Fingerhut) on phys.org, also published September 12, 2023 on the Iowa Public Radio website, describes an unexpected cost of building ChatGPT and other AI agents. Note: Links have been removed,

The cost of building an artificial intelligence product like ChatGPT can be hard to measure.

But one thing Microsoft-backed OpenAI needed for its technology was plenty of water [emphases mine], pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.

As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.

But they’re often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI’s most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it “was literally made next to cornfields west of Des Moines.”

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

If you have the time, do read the O’Brien and Fingerhut article in its entirety. (Later in this post, I have a citation for and a link to a paper by Ren.)
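Before moving on, the Microsoft figures quoted above are easy to sanity-check. The following sketch converts the reported 1.7 billion US gallons into cubic metres and Olympic pools; the pool volume is the standard 2,500 m³ minimum (50 m × 25 m × 2 m), and the exact gallon total is an assumption based on the article’s rounded numbers.

```python
# Back-of-envelope check: does ~1.7 billion US gallons really equal
# "more than 2,500 Olympic-sized swimming pools"?
GALLONS_PER_M3 = 264.172        # US gallons in one cubic metre
OLYMPIC_POOL_M3 = 2_500         # minimum Olympic pool volume (50 x 25 x 2 m)

total_gallons = 1.7e9           # Microsoft's reported 2022 consumption
total_m3 = total_gallons / GALLONS_PER_M3
pools = total_m3 / OLYMPIC_POOL_M3

print(f"{total_m3:,.0f} cubic metres, about {pools:,.0f} Olympic pools")
```

The result lands at roughly 2,570 pools, so the article’s “more than 2,500” description holds up.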

Jason Clayworth’s September 18, 2023 article for Axios describes the issue from the Iowan perspective. Note: Links have been removed,

Future data center projects in West Des Moines will only be considered if Microsoft can implement technology that can “significantly reduce peak water usage,” the Associated Press reports.

Why it matters: Microsoft’s five WDM data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total water usage during peak summer months over the last two years, according to information from West Des Moines Water Works.

This information becomes more intriguing (and disturbing) after reading a February 10, 2023 article for the World Economic Forum titled ‘This is why we can’t dismiss water scarcity in the US‘ by James Rees and/or an August 11, 2020 article ‘Why is America running out of water?‘ by Jon Heggie, published by National Geographic as paid content. Note: Despite the fact that it’s sponsored by Finish dish detergent, the research in Heggie’s article looks solid.

From Heggie’s article, Note: Links have been removed,

In March 2019, storm clouds rolled across Oklahoma; rain swept down the gutters of New York; hail pummeled northern Florida; floodwaters forced evacuations in Missouri; and a blizzard brought travel to a stop in South Dakota. Across much of America, it can be easy to assume that we have more than enough water. But that same month, as storms battered the country, a government-backed report issued a stark warning: America is running out of water.

As the U.S. water supply decreases, demand is set to increase. On average, each American uses 80 to 100 gallons of water every day, with the nation’s estimated total daily usage topping 345 billion gallons—enough to sink the state of Rhode Island under a foot of water. By 2100 the U.S. population will have increased by nearly 200 million, with a total population of some 514 million people. Given that we use water for everything, the simple math is that more people mean more water stress across the country.

And we are already tapping into our reserves. Aquifers, porous rocks and sediment that store vast volumes of water underground, are being drained. Nearly 165 million Americans rely on groundwater for drinking water, farmers use it for irrigation―37 percent of our total water usage is for agriculture—and industry needs it for manufacturing. Groundwater is being pumped faster than it can be naturally replenished. The Central Valley Aquifer in California underlies one of the nation’s most agriculturally productive regions, but it is in drastic decline and has lost about ten cubic miles of water in just four years.

Decreasing supply and increasing demand are creating a perfect water storm, the effects of which are already being felt. The Colorado River carved its way 1,450 miles from the Rockies to the Gulf of California over millions of years, but now no longer reaches the sea. In 2018, parts of the Rio Grande recorded their lowest water levels ever; Arizona essentially lives under permanent drought conditions; and in South Florida, freshwater aquifers are increasingly susceptible to saltwater intrusion due to over-extraction.

The focus is on individual use of water and Heggie ends his article by suggesting we use less,

… And every American can save more water at home in multiple ways, from taking shorter showers to not rinsing dishes under a running faucet before loading them into a dishwasher, a practice that wastes around 20 gallons of water for each load. …

As an advertising pitch goes, this is fairly subtle as there’s no branding in the article itself and it is almost wholly informational.

Attempts to stave off water shortages as noted in Heggie’s and other articles include groundwater pumping both for individual use and industrial use. This practice has had an unexpected impact according to a June 16, 2023 article by Warren Cornwall for Science (magazine),

While spinning on its axis, Earth wobbles like an off-kilter top. Sloshing molten iron in Earth’s core, melting ice, ocean currents, and even hurricanes can all cause the poles to wander. Now, scientists have found that a significant amount of the polar drift results from human activity: pumping groundwater for drinking and irrigation.

“The very way the planet wobbles is impacted by our activities,” says Surendra Adhikari, a geophysicist at NASA’s Jet Propulsion Laboratory and an expert on Earth’s rotation who was not involved in the study. “It is, in a way, mind boggling.”

Clark R. Wilson, a geophysicist at the University of Texas at Austin, and his colleagues thought the removal of tens of gigatons of groundwater each year might affect the drift. But they knew it could not be the only factor. “There’s a lot of pieces that go into the final budget for causing polar drift,” Wilson says.

The scientists built a model of the polar wander, accounting for factors such as reservoirs filling because of new dams and ice sheets melting, to see how well they explained the polar movements observed between 1993 and 2010. During that time, satellite measurements were precise enough to detect a shift in the poles as small as a few millimeters.

Dams and ice changes were not enough to match the observed polar motion. But when the researchers also put in 2150 gigatons of groundwater that hydrologic models estimate were pumped between 1993 and 2010, the predicted polar motion aligned much more closely with observations. Wilson and his colleagues conclude that the redistribution of that water weight to the world’s oceans has caused Earth’s poles to shift nearly 80 centimeters during that time. In fact, groundwater removal appears to have played a bigger role in that period than the release of meltwater from ice in either Greenland or Antarctica, the scientists reported Thursday [June 15, 2023] in Geophysical Research Letters.

The new paper helps confirm that groundwater depletion added approximately 6 millimeters to global sea level rise between 1993 and 2010. “I was very happy” that this new method matched other estimates, Seo [Ki-Weon Seo geophysicist at Seoul National University and the study’s lead author] says. Because detailed astronomical measurements of the polar axis location go back to the end of the 19th century, polar drift could enable Seo to trace the human impact on the planet’s water over the past century.
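The paper’s headline numbers hang together with one line of physics: spreading 2,150 gigatonnes of pumped groundwater over the ocean surface should yield roughly the quoted ~6 millimetres of sea level rise. The sketch below checks this; the ocean area and water density are standard reference values, not figures from the article.

```python
# Cross-check: 2,150 Gt of groundwater redistributed to the oceans
# should imply ~6 mm of global sea level rise (1993-2010).
GT_TO_KG = 1e12                 # 1 gigatonne = 10^12 kg
WATER_DENSITY_KG_M3 = 1_000     # fresh water, kg per cubic metre
OCEAN_AREA_M2 = 3.61e14         # global ocean surface area

groundwater_kg = 2_150 * GT_TO_KG
volume_m3 = groundwater_kg / WATER_DENSITY_KG_M3
rise_mm = volume_m3 / OCEAN_AREA_M2 * 1_000   # metres -> millimetres

print(f"Implied sea level rise: {rise_mm:.1f} mm")
```

The computation gives just under 6 mm, consistent with the “approximately 6 millimetres” stated in the Science article.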

Two papers: environmental impact from AI and groundwater pumping wobbles poles

I have two links and citations for Ren’s paper on AI and its environmental impact,

Towards Environmentally Equitable AI via Geographical Load Balancing by Pengfei Li, Jianyi Yang, Adam Wierman, Shaolei Ren. Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY) Cite as: arXiv:2307.05494 [cs.AI] (or arXiv:2307.05494v1 [cs.AI] for this version) DOI: https://doi.org/10.48550/arXiv.2307.05494 Submitted June 20, 2023

Towards Environmentally Equitable AI via Geographical Load Balancing by Li, Pengfei; Yang, Jianyi; Wierman, Adam; Ren, Shaolei. UC Riverside. Retrieved from https://escholarship.org/uc/item/79c880vf Publication date: 2023-06-27

Both links offer open access to the paper. Should you be interested in more, you can find Shaolei Ren’s website here.

Now for the wobbling poles,

Drift of Earth’s Pole Confirms Groundwater Depletion as a Significant Contributor to Global Sea Level Rise 1993–2010 by Ki-Weon Seo, Dongryeol Ryu, Jooyoung Eom, Taewhan Jeon, Jae-Seung Kim, Kookhyoun Youm, Jianli Chen, Clark R. Wilson. Geophysical Research Letters Volume 50, Issue 12, 28 June 2023 e2023GL103509 DOI: https://doi.org/10.1029/2023GL103509 First published online: 15 June 2023

This paper too is open access.

Climate change and black gold

A July 3, 2019 news item on Nanowerk describes research coming from India and South Korea where nano gold is turned into black nanogold (Note: A link has been removed),

One of the main causes of global warming is the increase in the atmospheric CO2 level. The main source of this CO2 is the burning of fossil fuels (electricity, vehicles, industry and more).

Researchers at TIFR [Tata Institute of Fundamental Research] have developed the solution phase synthesis of Dendritic Plasmonic Colloidosomes (DPCs) with varying interparticle distances between the gold nanoparticles (Au NPs) using a cycle-by-cycle growth approach by optimizing the nucleation-growth step. These DPCs absorb the entire visible and near-infrared region of solar light, due to interparticle plasmonic coupling as well as the heterogeneity in the Au NP sizes, which transformed golden gold material to black gold (Chemical Science, “Plasmonic colloidosomes of black gold for solar energy harvesting and hotspots directed catalysis for CO2 to fuel conversion”).

A July 3, 2019 Tata Institute of Fundamental Research (TIFR) press release on EurekAlert, which originated the news item, provides more technical detail,

Black (nano)gold was able to catalyze CO2 to methane (fuel) conversion at atmospheric pressure and temperature, using solar energy. The researchers also observed the significant effect of the plasmonic hotspots on the performance of these DPCs for the purification of seawater to drinkable water via steam generation, temperature jump assisted protein unfolding, oxidation of cinnamyl alcohol using pure oxygen as the oxidant, and hydrosilylation of aldehydes.

This was attributed to varying interparticle distances and particle sizes in these DPCs. The results indicate the synergistic effects of EM and thermal hotspots as well as hot electrons on DPCs performance. Thus, DPCs catalysts can effectively be utilized as Vis-NIR light photo-catalysts, and the design of new plasmonic nanocatalysts for a wide range of other chemical reactions may be possible using the concept of plasmonic coupling.

Raman thermometry and SERS (Surface-enhanced Raman Spectroscopy) provided information about the thermal and electromagnetic hotspots and local temperatures which was found to be dependent on the interparticle plasmonic coupling. The spatial distribution of the localized surface plasmon modes by STEM-EELS plasmon mapping confirmed the role of the interparticle distances in the SPR (Surface Plasmon Resonance) of the material.

Thus, in this work, by using the techniques of nanotechnology, the researchers transformed golden gold to black gold, by changing the size and gaps between gold nanoparticles. Similar to real trees, which use CO2, sunlight and water to produce food, the developed black gold acts like an artificial tree that uses CO2, sunlight and water to produce fuel, which can be used to run our cars. Notably, black gold can also be used to convert sea water into drinkable water using the heat that black gold generates after it captures sunlight.

This work is a step toward developing “artificial trees” that capture and convert CO2 to fuel and useful chemicals. Although the production rate of fuel is low at this stage, these challenges may be resolved in coming years. We may then be able to convert CO2 to fuel using sunlight at atmospheric conditions, at a commercially viable scale, and CO2 may become our main source of clean energy.

Here’s an image illustrating the work,

Caption: Use of black gold can get us one step closer to combat climate change. Credit: Royal Society of Chemistry, Chemical Science

A July 3, 2019 Royal Society of Chemistry Highlight features more information about the research,

A “black” gold material has been developed to harvest sunlight, and then use the energy to turn carbon dioxide (CO2) into useful chemicals and fuel.

In addition to this, the material can also be used for applications including water purification, heating – and could help further research into new, efficient catalysts.

“In this work, by using the techniques of nanotechnology, we transformed golden gold to black gold, by simply changing the size and gaps between gold nanoparticles,” said Professor Vivek Polshettiwar from Tata Institute of Fundamental Research (TIFR) in India.

Tuning the size and gaps between gold nanoparticles created thermal and electromagnetic hotspots, which allowed the material to absorb the entire visible and near-infrared region of sunlight’s wavelength – making the gold “black”.

The team of researchers, from TIFR and Seoul National University in South Korea, then demonstrated that this captured energy could be used to combat climate change.

Professor Polshettiwar said: “It not only harvests solar energy but also captures and converts CO2 to methane (fuel). Synthesis and use of black gold for CO2-to-fuel conversion, which is reported for the first time, has the potential to resolve the global CO2 challenge.

“Now, like real trees which use CO2, sunlight and water to produce food, our developed black gold acts like an artificial tree to produce fuel – which we can use to run our cars,” he added.

Although production is low at this stage, Professor Polshettiwar (who was included in the RSC’s 175 Faces of Chemistry) believes that the commercially-viable conversion of CO2 to fuel at atmospheric conditions is possible in the coming years.

He said: “It’s the only goal of my life – to develop technology to capture and convert CO2 and combat climate change, by using the concepts of nanotechnology.”

Other experiments described in the Chemical Science paper demonstrate using black gold to efficiently convert sea water into drinkable water via steam generation.

It was also used for protein unfolding, alcohol oxidation, and aldehyde hydrosilylation, and the team believes their methodology could lead to novel and efficient catalysts for a range of chemical transformations.

Here’s a link to and a citation for the paper,

Plasmonic colloidosomes of black gold for solar energy harvesting and hotspots directed catalysis for CO2 to fuel conversion by Mahak Dhiman, Ayan Maity, Anirban Das, Rajesh Belgamwar, Bhagyashree Chalke, Yeonhee Lee, Kyunjong Sim, Jwa-Min Nam and Vivek Polshettiwar. Chem. Sci., 2019, Advance Article. DOI: 10.1039/C9SC02369K First published on July 3, 2019

This paper is freely available in the open access journal Chemical Science.

Nanoparticle computing

I’m fascinated with this news and I’m pretty sure it’s my first exposure to nanoparticle computing, so I am quite excited about this ‘discovery of mine’.

A February 25, 2019 news item on Nanowerk announces the research from Korean scientists,

Computation is a ubiquitous concept in physical sciences, biology, and engineering, where it provides many critical capabilities. Historically, there have been ongoing efforts to merge computation with “unusual” matters across many length scales, from microscopic droplets (Science 315, 832, 2007) to DNA nanostructures (Science 335, 831, 2012; Nat. Chem. 9, 1056, 2017) and molecules (Science 266, 1021, 1994; Science 314, 1585, 2006; Nat. Nanotech. 2, 399, 2007; Nature 375, 368, 2011).

However, the implementation of complex computation in particle systems, especially in nanoparticles, remains challenging, despite a wide range of potential applications that would benefit from algorithmically controlling their unique and potentially useful intrinsic features (such as photonic, plasmonic, catalytic, photothermal, optoelectronic, electrical, magnetic and material properties) without human interventions.

This challenge is not due to a lack of sophistication in the current state-of-the-art of stimuli-responsive nanoparticles, many of which can conceptually function as elementary logic gates. Rather, it is mostly due to the lack of scalable architectures that would enable systematic integration and wiring of the gates into a large integrated circuit.

Previous approaches are limited to (i) demonstrating one simple logic operation per test tube or (ii) relying on complicated enzyme-based molecular circuits in solution. It should also be noted that modular and scalable aspects are key challenges in DNA computing for practical and widespread use.

A February 23, 2019 Seoul National University press release on EurekAlert, which originated the news item, dives into more detail,

In nature, the cell membrane is analogous to a circuit board, as it organizes a wide range of biological nanostructures (e.g. proteins) as (computational) units and allows them to dynamically interact with each other on the fluidic 2D surface to carry out complex functions as a network and often induce intracellular signaling cascades. For example, membrane proteins take chemical/physical cues as inputs (e.g. binding with chemical agents, mechanical stimuli) and change their conformations and/or dimerize as outputs. Most importantly, such biological “computing” processes occur in a massively parallel fashion. Information processing on living cell membranes is a key to how biological systems adapt to changes in external environments.

This manuscript reports the development of a nanoparticle-lipid bilayer hybrid-based computing platform termed lipid nanotablet (LNT), in which nanoparticles, each programmed with surface chemical ligands (DNA in this case), are tethered to a supported lipid bilayer to carry out computation. Taking inspiration from parallel computing processes on cellular membranes, we exploited supported lipid bilayers (SLBs)–synthetic mimics for cell surfaces–as chemical circuit boards to construct nanoparticle circuits. This “nano-bio” computing, which occurs at the interface of nanostructures and biomolecules, translates molecular information in solution (input) into dynamic assembly/disassembly of nanoparticles on a lipid bilayer (output).

We introduced two types of nanoparticles to a lipid bilayer that differ in mobility: mobile Nano-Floaters and immobile Nano-Receptors. Due to their high mobility, floaters actively interact with receptors across space and time, functioning as active units of computation. The nanoparticles are functionalized with specially designed DNA [deoxyribonucleic acid] ligands, and the surface ligands render receptor-floater interactions programmable, thereby transforming a pair of receptor and floater into a logic gate. A nanoparticle logic gate takes DNA strands in solution as inputs and generates nanoparticle assembly or disassembly events as outputs. The nanoparticles and their interactions can be imaged and tracked by dark-field microscopy with single-nanoparticle resolution because of strong and stable scattering signals from plasmonic nanoparticles. Using this approach (termed “interface programming”), we first demonstrated that a pair of nanoparticles (that is, two nanoparticles on a lipid bilayer) can carry out AND, OR, INHIBIT logic operations and take multiple inputs (fan-in) and generate multiple outputs (fan-out). Also, multiple logic gates can be modularly wired with AND or OR logic via floaters, as the mobility of floaters enables the information cascade among several nanoparticle logic gates. We termed this strategy “network programming.” By combining these two strategies (interface and network programming), we were able to implement complex logic circuits such as a multiplexer.

The most important contributions of our paper are conceptual and represent major advances in modular and scalable molecular computing (DNA computing in this case). The LNT platform, for the first time, introduces the idea of using lipid bilayer membranes as key components for information processing. As the two-dimensional (2D) fluidic lipid membrane is bio-compatible and chemically modifiable, any nanostructures can be potentially introduced and used as computing units. When tethered to the lipid bilayer “chip”, these nanostructures can be visualized and become controllable at the single-particle level; this dimensionality reduction, bringing the nanostructures from freely diffusible solution phase (3D) to fluidic membrane (2D), transforms a collection of nanostructures into a programmable, analyzable reaction network. Moreover, we also developed a digitized imaging method and software for quantitative and massively parallel analysis of interacting nanoparticles. In addition, the LNT platform provides many practical merits over the current state of the art in molecular computing and nanotechnology. On LNT platforms, a network of nanoparticles (each with unique and beneficial properties) can be designed to autonomously respond to molecular information; such capability to algorithmically control nanoparticle networks will be very useful for addressing many challenges with molecular computing and developing new computing platforms. As the title of our manuscript suggests, this nano-bio computing will lead to exciting opportunities in biocomputation, nanorobotics, DNA nanotechnology, artificial bio-interfaces, smart biosensors, molecular diagnostics, and intelligent nanomaterials. In summary, the operating and design principles of the lipid nanotablet platform are as follows:

(1) LNT uses single nanoparticles as units of computation. By tracking numerous nanoparticles and their actions with dark-field microscopy at the single-particle level, we could treat a single nanoparticle as a two-state device representing a bit. A nanoparticle provides a discrete, in situ optical readout of its interaction (e.g. association or dissociation) with another particle as an output of logic computation.

(2) Nanoparticles on LNT function as Boolean logic gates. We exploited the programmable bonding interaction within particle-particle interfaces to transform two interacting nanoparticles into a Boolean logic gate. The gate senses single-stranded DNA as inputs and triggers an assembly or disassembly reaction of the pair as an output. We demonstrated two-input AND, two-input OR and INHIBIT logic operations, and fan-in/fan-out of logic gates.

(3) LNT enables modular wiring of multiple nanoparticle logic gates into a combinational circuit. We exploited parallel, single-particle imaging to program nanoparticle networks and thereby wire multiple logic gates into a combinational circuit. We demonstrate a multiplexer MUX2to1 circuit built from the network wiring rules.
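The three principles above map neatly onto ordinary Boolean logic, and a toy model makes the wiring concrete. In the sketch below, each receptor-floater pair is a gate whose output is a particle assembly (True) or disassembly (False) event, and gates cascade by feeding one gate’s output into another. The gate set mirrors the paper’s demonstrations (AND, OR, INHIBIT, and a 2-to-1 multiplexer), but the specific wiring is illustrative, not the actual DNA design.

```python
# Toy Boolean model of lipid-nanotablet logic: True = assembly event,
# False = disassembly event, inputs = presence/absence of DNA strands.
def and_gate(a: bool, b: bool) -> bool:
    return a and b            # assembly only when both DNA inputs are present

def or_gate(a: bool, b: bool) -> bool:
    return a or b             # either DNA input triggers assembly

def inhibit_gate(a: bool, b: bool) -> bool:
    return a and not b        # input b blocks the assembly reaction

def mux_2to1(d0: bool, d1: bool, select: bool) -> bool:
    # 2-to-1 multiplexer cascaded from the gates above, analogous to
    # the combinational circuit demonstrated in the paper.
    return or_gate(inhibit_gate(d0, select), and_gate(d1, select))

assert mux_2to1(True, False, select=False) is True    # select=0 passes d0
assert mux_2to1(True, False, select=True) is False    # select=1 passes d1
```

The point of the analogy is principle (3): because a floater’s assembly state can itself gate a later interaction, circuits compose the same way function calls do here.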

Here’s a link to and a citation for the team’s latest paper,

Nano-bio-computing lipid nanotablet by Jinyoung Seo, Sungi Kim, Ha H. Park, Da Yeon Choi, and Jwa-Min Nam. Science Advances 22 Feb 2019: Vol. 5, no. 2, eaau2124 DOI: 10.1126/sciadv.aau2124

This paper appears to be open access.

Artificial synapse based on tantalum oxide from Korean researchers

This memristor story comes from South Korea as we progress on the way to neuromorphic computing (brainlike computing). A Sept. 7, 2018 news item on ScienceDaily makes the announcement,

A research team led by Director Myoung-Jae Lee from the Intelligent Devices and Systems Research Group at DGIST (Daegu Gyeongbuk Institute of Science and Technology) has succeeded in developing an artificial synaptic device that mimics the function of the nerve cells (neurons) and synapses that are response for memory in human brains. [sic]

Synapses are where axons and dendrites meet so that neurons in the human brain can send and receive nerve signals; there are known to be hundreds of trillions of synapses in the human brain.

This chemical synapse information transfer system, which transfers information from the brain, can handle high-level parallel arithmetic with very little energy, so research on artificial synaptic devices, which mimic the biological function of a synapse, is under way worldwide.

Dr. Lee’s research team, through joint research with teams led by Professor Gyeong-Su Park from Seoul National University; Professor Sung Kyu Park from Chung-ang University; and Professor Hyunsang Hwang from Pohang University of Science and Technology (POSTECH), developed a high-reliability artificial synaptic device with multiple values by structuring tantalum oxide — a trans-metallic material — into two layers of Ta2O5-x and TaO2-x and by controlling its surface.

A September 7, 2018 DGIST press release (also on EurekAlert), which originated the news item, delves further into the work,

The artificial synaptic device developed by the research team is an electrical synaptic device that simulates the function of synapses in the brain as the resistance of the tantalum oxide layer gradually increases or decreases depending on the strength of the electric signals. It has succeeded in overcoming durability limitations of current devices by allowing current control only on one layer of Ta2O5-x.

In addition, the research team successfully implemented an experiment that realized synapse plasticity [or synaptic plasticity], which is the process of creating, storing, and deleting memories, such as long-term strengthening of memory and long-term suppression of memory deleting by adjusting the strength of the synapse connection between neurons.

The non-volatile multiple-value data storage method applied by the research team has the technological advantage of having a small area of an artificial synaptic device system, reducing circuit connection complexity, and reducing power consumption by more than one-thousandth compared to data storage methods based on digital signals using 0 and 1 such as volatile CMOS (Complementary Metal Oxide Semiconductor).

The high-reliability artificial synaptic device developed by the research team can be used in ultra-low-power devices or circuits for processing massive amounts of big data due to its capability of low-power parallel arithmetic. It is expected to be applied to next-generation intelligent semiconductor device technologies such as development of artificial intelligence (AI) including machine learning and deep learning and brain-mimicking semiconductors.

Dr. Lee said, “This research secured the reliability of existing artificial synaptic devices and improved the areas pointed out as disadvantages. We expect to contribute to the development of AI based on the neuromorphic system that mimics the human brain by creating a circuit that imitates the function of neurons.”
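The potentiation/depression behaviour described above (conductance stepping up with strengthening pulses and down with weakening ones, across multiple non-volatile states) can be sketched numerically. The class below is a minimal illustration of that idea only; the number of states and step sizes are made-up values, not the measured Ta2O5-x/TaO2-x device parameters.

```python
# Minimal sketch of a multivalued memristive synapse: conductance moves
# up with potentiating pulses and down with depressing pulses, bounded
# by the device's lowest and highest states.
class ToySynapse:
    def __init__(self, n_states: int = 32):
        self.n_states = n_states
        self.state = n_states // 2        # start at a mid conductance level

    def potentiate(self) -> None:         # long-term potentiation step
        self.state = min(self.state + 1, self.n_states - 1)

    def depress(self) -> None:            # long-term depression step
        self.state = max(self.state - 1, 0)

    @property
    def conductance(self) -> float:       # normalised to 0..1
        return self.state / (self.n_states - 1)

syn = ToySynapse()
for _ in range(10):
    syn.potentiate()                      # "store" a memory
print(f"after potentiation: {syn.conductance:.2f}")
for _ in range(20):
    syn.depress()                         # "erase" it again
print(f"after depression:  {syn.conductance:.2f}")
```

The analog, bounded, multi-level state is what distinguishes this kind of device from a binary CMOS memory cell, and it is why a crossbar of such synapses can perform low-power parallel arithmetic.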

Here’s a link to and a citation for the paper,

Reliable Multivalued Conductance States in TaOx Memristors through Oxygen Plasma-Assisted Electrode Deposition with in Situ-Biased Conductance State Transmission Electron Microscopy Analysis by Myoung-Jae Lee, Gyeong-Su Park, David H. Seo, Sung Min Kwon, Hyeon-Jun Lee, June-Seo Kim, MinKyung Jung, Chun-Yeol You, Hyangsook Lee, Hee-Goo Kim, Su-Been Pang, Sunae Seo, Hyunsang Hwang, and Sung Kyu Park. ACS Appl. Mater. Interfaces, 2018, 10 (35), pp 29757–29765 DOI: 10.1021/acsami.8b09046 Publication Date (Web): July 23, 2018

Copyright © 2018 American Chemical Society

This paper is open access.

You can find other memristor and neuromorphic computing stories here by using the search terms I’ve highlighted. My latest (more or less) is an April 19, 2018 posting titled, New path to viable memristor/neuristor?

Finally, here’s an image from the Korean researchers that accompanied their work,

Caption: Representation of neurons and synapses in the human brain. The magnified synapse represents the portion mimicked using solid-state devices. Credit: Daegu Gyeongbuk Institute of Science and Technology (DGIST)

Ceria-zirconia nanoparticles for sepsis treatment

South Korean researchers are looking at a new way of dealing with infections (sepsis) according to a July 6, 2017 news item on phys.org,

During sepsis, cells are swamped with reactive oxygen species generated in an aberrant response of the immune system to a local infection. If this fatal inflammatory pathway could be interrupted, new treatment schemes could be developed. Now, Korean scientists report in the journal Angewandte Chemie that zirconia-doped ceria nanoparticles act as effective scavengers of these oxygen radicals, promoting a greatly enhanced survival rate in sepsis model organisms.

A July 6, 2017 Wiley (Publishers) press release, which originated the news item, provides more detail,

Sepsis proceeds as a vicious cycle of inflammatory reactions of the immune system to a local infection. Fatal consequences can be falling blood pressure and the collapse of organ function. As resistance against antibiotics is growing, scientists turn to the inflammatory pathway as an alternative target for new treatment strategies. Taeghwan Hyeon from Seoul National University, Seung-Hoon Lee at Seoul National University Hospital, South Korea, and collaborators explore ceria nanoparticles for their ability to scavenge reactive oxygen species, which play a key role in the inflammatory process. By quickly converting between two oxidation states, the cerium ion can quench typical oxygen radical species like the superoxide anion, the hydroxyl radical anion, or even hydrogen peroxide. But in the living cell, this can only happen if two conditions are met.

The first condition is the size and nature of the particles. Small, two-nanometer-sized particles were coated by a hydrophilic shell of poly(ethylene glycol)-connected phospholipids to make them soluble so that they can enter the cell and remain there. Second, the cerium ion responsible for the quenching (Ce3+) should be accessible on the surface of the nanoparticles, and it must be regenerated after the reactions. Here, the scientists found out that a certain amount of zirconium ions in the structure helped, because “the Zr4+ ions control the Ce3+-to-Ce4+ ratio as well as the rate of conversion between the two oxidation states,” they argued.

The prepared nanoparticles were then tested for their ability to detoxify reactive oxygen species, not only in the test tube, but also in live animal models. The results were clear, as the authors stated: “A single dose of ceria-zirconia nanoparticles successfully attenuated the vicious cycle of inflammatory responses in two sepsis models.” The nanoparticles accumulated in organs where severe immune responses occurred, and they were successful in the eradication of reactive oxygen species, as evidenced with fluorescence microscopy and several other techniques. And importantly, the treated mice and rats had a far higher survival rate.

This work demonstrates that approaches to sepsis treatment other than killing bacteria with antibiotics are possible. Targeting the inflammatory signaling pathways in macrophages is a very promising option, and the authors have shown that effective scavenging of reactive oxygen species and stopping inflammation is possible with a suitably designed chemical system like this cerium ion redox system provided by nanoparticles.

Here’s a link to and a citation for the paper,

Ceria–Zirconia Nanoparticles as an Enhanced Multi-Antioxidant for Sepsis Treatment by Min Soh, Dr. Dong-Wan Kang, Dr. Han-Gil Jeong, Dr. Dokyoon Kim, Dr. Do Yeon Kim, Dr. Wookjin Yang, Changyeong Song, Seungmin Baik, In-Young Choi, Seul-Ki Ki, Hyek Jin Kwon, Dr. Taeho Kim, Prof. Dr. Chi Kyung Kim, Prof. Dr. Seung-Hoon Lee, and Prof. Dr. Taeghwan Hyeon. Angewandte Chemie DOI: 10.1002/anie.201704904 Version of Record online: 5 JUL 2017

© 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Placenta-on-a-chip for research into causes for preterm birth

Preterm birth (premature baby) research has received a boost with this latest work from the University of Pennsylvania. A July 21, 2016 news item on phys.org tells us more,

Researchers at the University of Pennsylvania have developed the first placenta-on-a-chip that can fully model the transport of nutrients across the placental barrier.

A July 21, 2016 University of Pennsylvania news release, which originated the news item, provides more detail about the chip and the research (Note: Links have been removed),

The flash-drive-sized device contains two layers of human cells that model the interface between mother and fetus. Microfluidic channels on either side of those layers allow researchers to study how molecules are transported through, or are blocked by, that interface.

Like other organs-on-chips, such as ones developed to simulate lungs, intestines and eyes, the placenta-on-a-chip provides a unique capability to mimic and study the function of that human organ in ways that have not been possible using traditional tools.

Research on the team’s placenta-on-a-chip is part of a nationwide effort sponsored by the March of Dimes to identify causes of preterm birth and ways to prevent it. Prematurely born babies may experience lifelong, debilitating consequences, but the underlying mechanisms of this condition are not well understood due in part to the difficulties of experimenting with intact, living human placentae.

The research was led by Dan Huh, the Wilf Family Term Assistant Professor of Bioengineering in Penn’s School of Engineering and Applied Science, and Cassidy Blundell, a graduate student in the Huh lab. They collaborated with Samuel Parry, the Franklin Payne Professor of Obstetrics and Gynecology; Christos Coutifaris, the Nancy and Richard Wolfson Professor of Obstetrics and Gynecology in Penn’s Perelman School of Medicine; and Emily Su, assistant professor of obstetrics and gynecology in the Anschutz Medical School of the University of Colorado Denver.

The researchers’ placenta-on-a-chip is a clear silicone device with two parallel microfluidic channels separated by a porous membrane. On one side of those pores, trophoblast cells, which are found at the placental interface with maternal blood, are grown. On the other side are endothelial cells, found on the interior of fetal blood vessels. The layers of those two cell types mimic the placental barrier, the gatekeeper between the maternal and fetal circulatory systems.

“That barrier,” Blundell said, “mediates all transport between mother and fetus during pregnancy. Nutrients, but also foreign agents like viruses, need to be either transported by that barrier or stopped.”

“One of the most important functions of the placental barrier is transport,” Huh said, “so it’s essential for us to mimic that functionality.”

In 2013, Huh and his collaborators at Seoul National University conducted a preliminary study to create a microfluidic device for culturing trophoblast cells and fetal endothelial cells. This model, however, lacked the ability to form physiological placental tissue and accurately simulate transport function of the placental barrier.

In their new study, the Penn researchers have demonstrated that the two layers of cells continue to grow and develop while inside the chip, undergoing a process known as “syncytialization.”

“The placental cells change over the course of pregnancy,” Huh said. “During pregnancy, the placental trophoblast cells actually fuse with one another to form an interesting tissue called syncytium. The barrier also becomes thinner as the pregnancy progresses, and with our new model we’re able to reproduce this change.

“This process is very important because it affects placental transport and was a critical aspect not represented in our previous model.”

The Penn team validated the new model by showing glucose transfer rates across this syncytialized barrier matched those measured in perfusion studies of donated human placentae.
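The paper reports measured transfer rates rather than a formula, but an order-of-magnitude feel for barrier transport can be had from Fick's first law, J = D·ΔC/L. The diffusivity, barrier thickness, and concentration difference below are illustrative assumptions (a living cell layer passes much less than free solution), not the study's values:

```python
# Order-of-magnitude estimate of glucose flux across a thin barrier using
# Fick's first law, J = D * dC / L. All values are illustrative assumptions
# (free-solution diffusivity; a living cell layer would pass much less).
def glucose_flux(diffusivity=6.7e-10,   # m^2/s, glucose in water (approx.)
                 thickness=20e-6,       # m, assumed barrier thickness
                 delta_c=5.0):          # mol/m^3, maternal-fetal difference
    return diffusivity * delta_c / thickness   # mol m^-2 s^-1

print(f"flux ~ {glucose_flux():.1e} mol m^-2 s^-1")
```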

While useful in providing this type of baseline, donated placental tissue can be problematic for doing many of the types of studies necessary for fully understanding the structure and function of the placenta, especially as it pertains to diseases and disorders.

“The placenta is arguably the least understood organ in the human body,” Huh said, “and much remains to be learned about how transport between mother and fetus works at the tissue, cellular and molecular levels. An isolated whole organ is not an ideal platform for these types of mechanistic studies.”

“Beyond the scarcity of samples,” Blundell said, “there’s a limited lifespan of how long the tissue remains viable, for only a few hours after delivery, and the system that is used to perfuse the tissue and perform transport studies is complex.”

While the placenta-on-a-chip is still in the early stages of testing, researchers at Penn and beyond are already planning to use it in studies on preterm birth.

“This effort,” Parry said, “was part of the much larger Prematurity Research Center here at Penn, one of five centers around the country funded by the March of Dimes to study the causes of preterm birth. The rate of preterm birth is about 10 to 11 percent of all pregnancies. That rate has not been decreasing, and interventions to prevent preterm birth have been largely unsuccessful.”

As part of a $10 million grant from the March of Dimes that established the Center, Parry and his colleagues research metabolic changes that may be associated with preterm birth using in vitro placental cell lines and ex vivo placental tissue. The grant also supported their work with the Huh lab to develop new tools that could model preterm birth-associated placental dysfunction and inform such research efforts.

“Since publishing this paper,” Samuel Parry said, “we’ve reached out to the principal investigators at the other four March of Dimes sites and offered to provide them this model to use in their experiments.”

“Eventually,” Huh said, “we hope to leverage the unique capabilities of our model to demonstrate the potential of organ-on-a-chip technology as a new strategy to innovate basic and translational research in reproductive biology and medicine.”

Here’s a link to and a citation for the paper,

A microphysiological model of the human placental barrier by Cassidy Blundell, Emily R. Tess, Ariana S. R. Schanzer, Christos Coutifaris, Emily J. Su, Samuel Parry, and Dongeun Huh. Lab Chip, 2016, Advance Article DOI: 10.1039/C6LC00259E First published online 20 May 2016

I believe this paper is behind a paywall.

One final note, I thought this was a really well written news release.

Extreme water repellency achieved by combining nanostructured surfaces with Leidenfrost effect

Apparently a new twist has been added to the water repellency story. From a May 17, 2016 news item on ScienceDaily,

What do you get if you combine nanotextured ‘Cassie’ surfaces with the Leidenfrost effect? Highly water-repellent surfaces that show potential for developing future self-cleaning windows, windshields, exterior paints and more [sic]

Combining superhydrophobic surfaces with Leidenfrost levitation–picture a water droplet hovering over a hot surface rather than making physical contact with it–has been explored extensively for the past decade by researchers hoping to uncover the holy grail of water-repellent surfaces.

A May 17, 2016 American Institute of Physics news release on EurekAlert, which originated the news item, provides more detail about the work,

In a new twist, a group of South Korean researchers from Seoul National University and Dankook University report an anomalous water droplet-bouncing phenomenon generated by Leidenfrost levitation on nanotextured surfaces in Applied Physics Letters, from AIP Publishing.

“Wettability plays a key role in determining the equilibrium contact angles, contact angle hysteresis, and adhesion between a solid surface and liquid, as well as the retraction process of a liquid droplet impinged on the surface,” explained Doo Jin Lee, lead author, and a postdoctoral researcher in the Department of Materials and Engineering at Seoul National University.

Nonwetting surfaces tend to be created by one of two methods. “First, textured surfaces enable nonwettability because a liquid can’t penetrate into the micro- or nano-features, thanks to air entrapment between asperities on the textured materials,” Lee said.

Or, second, the Leidenfrost effect “can help produce a liquid droplet dancing on a hot surface by floating it on a cushion of its own vapor,” he added. “The vapor film between the droplet and heated surface allows the droplet to bounce off the surface–also known as the ‘dynamic Leidenfrost phenomenon.'”

Lee and colleagues developed a special “nonwetting, nanotextured surface” so they could delve into the dynamic Leidenfrost effect’s impact on the material.

“Our nanotextured surface was verified to be ‘nonwetting’ via thermodynamic analysis,” Lee elaborated. “This analytical approach shows that the water droplet isn’t likely to penetrate into the surface’s nanoholes, which is advantageous for designing nonwetting, water-repellant systems. And the water droplet bouncing was powered by the synergetic combination of the nonwetting surface–often called a ‘Cassie surface’–and the Leidenfrost effect.”

By comparing the hydrophobic surface and nanotextured surface, the group discovered that enhanced water droplet bouncing was created by the combined impact of the Leidenfrost levitation and the nonwetting Cassie state.

“A thermodynamic approach predicts the nonwettability on the nanotextured surface, and a scaling law between the capillary and vapor pressure of the droplet explains the mechanism of the dynamic Leidenfrost phenomenon,” said Lee.
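The news release doesn't spell out either relation, but two standard expressions underpin the description: the Cassie-Baxter equation for the apparent contact angle on a composite (partly air-cushioned) surface, and the capillary pressure scale 2γ/R that the vapor film must withstand. The numbers below are illustrative, not taken from the paper:

```python
import math

# Cassie-Baxter relation for the apparent contact angle on a textured
# surface: cos(theta_CB) = f*cos(theta_Y) - (1 - f), where f is the
# solid fraction touching the liquid and theta_Y is the Young angle on
# the flat material. All numbers are illustrative, not from the paper.
def cassie_angle(theta_young_deg, solid_fraction):
    cos_cb = (solid_fraction * math.cos(math.radians(theta_young_deg))
              - (1.0 - solid_fraction))
    return math.degrees(math.acos(cos_cb))

# Capillary pressure scale the vapor cushion must support for a drop
# of radius R: P ~ 2*gamma/R (water, gamma ~ 0.072 N/m).
def capillary_pressure(gamma=0.072, radius=1e-3):
    return 2.0 * gamma / radius   # Pa

for f in (0.5, 0.2, 0.05):
    print(f"solid fraction {f:.2f} -> "
          f"apparent angle {cassie_angle(110.0, f):.0f} deg")
print(f"capillary pressure for a 1 mm drop ~ {capillary_pressure():.0f} Pa")
```

Shrinking the solid fraction toward a few percent pushes a moderately hydrophobic flat surface (Young angle ~110°) well past 160°, which is why nanotexturing is so effective at producing the nonwetting Cassie state.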

These findings should “be of value for a wide range of research areas, such as the study of nonwetting surfaces by the Leidenfrost effect and nanotextured features, enhanced liquid droplet bouncing, and film boiling of liquid droplets on heated Cassie surfaces,” he added.

Significantly, the group’s work furthers the fundamental understanding of the dynamic Leidenfrost droplet levitation and droplet-bouncing phenomena on hydrophobic and nanoengineered surfaces. This means that it will be useful for developing highly water-repellant surfaces for industrial applications such as self-cleaning windows, windshields, exterior paints, anti-fouling coatings, roof tiles, and textiles in the future.

“Our future work will focus on developing multiscale structures with microscale and nanoscale regularities, and explore the nonwetting characteristics of their surfaces with the dynamic Leidenfrost effect,” Lee noted.

Here’s a link to and a citation for the paper,

Anomalous water drop bouncing on a nanotextured surface by the Leidenfrost levitation by Doo Jin Lee and Young Seok Song.  Appl. Phys. Lett. 108, 201604 (2016); http://dx.doi.org/10.1063/1.4948769

This paper appears to be open access.

Graphene-based sensor mimics pain (mu-opioid) receptor

I once had a job where I had to perform literature searches and read papers on pain research as it related to morphine tolerance. Not a pleasant task, it has left me eager to encourage and write about alternatives to animal testing, a key component of pain research. So, with a ‘song in my heart’, I feature this research from the University of Pennsylvania written up in a May 12, 2014 news item on ScienceDaily,

Almost every biological process involves sensing the presence of a certain chemical. Finely tuned over millions of years of evolution, the body’s different receptors are shaped to accept certain target chemicals. When they bind, the receptors tell their host cells to produce nerve impulses, regulate metabolism, defend the body against invaders or myriad other actions depending on the cell, receptor and chemical type.

Now, researchers from the University of Pennsylvania have led an effort to create an artificial chemical sensor based on one of the human body’s most important receptors, one that is critical in the action of painkillers and anesthetics. In these devices, the receptors’ activation produces an electrical response rather than a biochemical one, allowing that response to be read out by a computer.

By attaching a modified version of this mu-opioid receptor to strips of graphene, they have shown a way to mass produce devices that could be useful in drug development and a variety of diagnostic tests. And because the mu-opioid receptor belongs to the most common class of such chemical sensors, the findings suggest that the same technique could be applied to detect a wide range of biologically relevant chemicals.

A May 6, 2014 University of Pennsylvania news release, which originated the news item, describes the main teams involved in this research along with why and how they worked together (Note: Links have been removed),

The study, published in the journal Nano Letters, was led by A.T. Charlie Johnson, director of Penn’s Nano/Bio Interface Center and professor of physics in Penn’s School of Arts & Sciences; Renyu Liu, assistant professor of anesthesiology in Penn’s Perelman School of Medicine; and Mitchell Lerner, then a graduate student in Johnson’s lab. It was made possible through a collaboration with Jeffery Saven, professor of chemistry in Penn Arts & Sciences. The Penn team also worked with researchers from Seoul National University in South Korea.

Their study combines recent advances from several disciplines.

Johnson’s group has extensive experience attaching biological components to nanomaterials for use in chemical detectors. Previous studies have involved wrapping carbon nanotubes with single-stranded DNA to detect odors related to cancer and attaching antibodies to nanotubes to detect the presence of the bacteria associated with Lyme disease.

After Saven and Liu addressed these problems with the redesigned receptor, they saw that it might be useful to Johnson, who had previously published a study on attaching a similar receptor protein to carbon nanotubes. In that case, the protein was difficult to grow genetically, and Johnson and his colleagues also needed to include additional biological structures from the receptors’ natural membranes in order to keep them stable.

In contrast, the computationally redesigned protein could be readily grown and attached directly to graphene, opening up the possibility of mass producing biosensor devices that utilize these receptors.

“Due to the challenges associated with isolating these receptors from their membrane environment without losing functionality,” Liu said, “the traditional methods of studying them involved indirectly investigating the interactions between opioid and the receptor via radioactive or fluorescent labeled ligands, for example. This multi-disciplinary effort overcame those difficulties, enabling us to investigate these interactions directly in a cell free system without the need to label any ligands.”

With Saven and Liu providing a version of the receptor that could stably bind to sheets of graphene, Johnson’s team refined their process of manufacturing those sheets and connecting them to the circuitry necessary to make functional devices.

The news release provides more technical details about the graphene sensor,

“We start by growing a piece of graphene that is about six inches wide by 12 inches long,” Johnson said. “That’s a pretty big piece of graphene, but we don’t work with the whole thing at once. Mitchell Lerner, the lead author of the study, came up with a very clever idea to cut down on chemical contamination. We start with a piece that is about an inch square, then separate them into ribbons that are about 50 microns across.

“The nice thing about these ribbons is that we can put them right on top of the rest of the circuitry, and then go on to attach the receptors. This really reduces the potential for contamination, which is important because contamination greatly degrades the electrical properties we measure.”

Because the mechanism by which the device reports on the presence of the target molecule relies only on the receptor’s proximity to the nanostructure when it binds to the target, Johnson’s team could employ the same chemical technique for attaching the antibodies and other receptors used in earlier studies.

Once attached to the ribbons, the opioid receptors would produce changes in the surrounding graphene’s electrical properties whenever they bound to their target. Those changes would then produce electrical signals that would be transmitted to a computer via neighboring electrodes.

The high reliability of the manufacturing process — only one of the 193 devices on the chip failed — enables applications in both clinical diagnostics and further research. [emphasis mine]

“We can measure each device individually and average the results, which greatly reduces the noise,” said Johnson. “Or you could imagine attaching 10 different kinds of receptors to 20 devices each, all on the same chip, if you wanted to test for multiple chemicals at once.”
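Johnson's point about averaging can be illustrated with synthetic data: averaging N independent noisy readouts shrinks the noise roughly as 1/√N, so a ~192-device chip should cut noise by more than an order of magnitude. The signal and noise levels below are made up for the demonstration:

```python
import random
import statistics

# Synthetic illustration: averaging independent device readouts reduces
# noise roughly as 1/sqrt(N). Signal and noise levels here are made up.
random.seed(0)

def readout(true_signal=1.0, noise_sd=0.2):
    return true_signal + random.gauss(0.0, noise_sd)

def averaged_noise(n_devices, trials=2000):
    # Standard deviation of the chip-wide average, over many trials
    means = [statistics.fmean(readout() for _ in range(n_devices))
             for _ in range(trials)]
    return statistics.stdev(means)

single = averaged_noise(1)
chip = averaged_noise(192)   # the per-chip device count from the article
print(f"noise, 1 device: {single:.3f}; 192-device average: {chip:.4f}")
```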

In the researchers’ experiment, they tested their devices’ ability to detect the concentration of a single type of molecule. They used naltrexone, a drug used in alcohol and opioid addiction treatment, because it binds to and blocks the natural opioid receptors that produce the narcotic effects patients seek.

“It’s not clear whether the receptors on the devices are as selective as they are in the biological context,” Saven said, “as the ones on your cells can tell the difference between an agonist, like morphine, and an antagonist, like naltrexone, which binds to the receptor but does nothing. By working with the receptor-functionalized graphene devices, however, not only can we make better diagnostic tools, but we can also potentially get a better understanding of how the biomolecular system actually works in the body.”

“Many novel opioids have been developed over the centuries,” Liu said. “However, none of them has achieved potent analgesic effects without notorious side effects, including devastating addiction and respiratory depression. This novel tool could potentially aid the development of new opioids that minimize these side effects.”

Wherever these devices find applications, they are a testament to the potential usefulness of the Nobel-prize winning material they are based on.

“Graphene gives us an advantage,” Johnson said, “in that its uniformity allows us to make 192 devices on a one-inch chip, all at the same time. There are still a number of things we need to work out, but this is definitely a pathway to making these devices in large quantities.”

There is no mention of animal research but it seems likely to me that this work could lead to a decreased use of animals in pain research.

This project must have been quite something as it involved collaboration across many institutions (from the news release),

Also contributing to the study were Gang Hee Han, Sung Ju Hong and Alexander Crook of Penn Arts & Sciences’ Department of Physics and Astronomy; Felipe Matsunaga and Jin Xi of the Department of Anesthesiology at the Perelman School of Medicine, José Manuel Pérez-Aguilar of Penn Arts & Sciences’ Department of Chemistry; and Yung Woo Park of Seoul National University. Mitchell Lerner is now at SPAWAR Systems Center Pacific, Felipe Matsunaga at Albert Einstein College of Medicine, José Manuel Pérez-Aguilar at Cornell University and Sung Ju Hong at Seoul National University.

Here’s a link to and a citation for the paper,

Scalable Production of Highly Sensitive Nanosensors Based on Graphene Functionalized with a Designed G Protein-Coupled Receptor by Mitchell B. Lerner, Felipe Matsunaga, Gang Hee Han, Sung Ju Hong, Jin Xi, Alexander Crook, Jose Manuel Perez-Aguilar, Yung Woo Park, Jeffery G. Saven, Renyu Liu, and A. T. Charlie Johnson. Nano Lett., Article ASAP
DOI: 10.1021/nl5006349 Publication Date (Web): April 17, 2014
Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Should October 2013 be called ‘the month of graphene’?

Since the Oct. 10-11, 2013 Graphene Flagship (1B Euros investment) launch, mentioned in my preview Oct. 7, 2013 posting, there’s been a flurry of graphene-themed news items both on this blog and elsewhere, and I’ve decided to offer a brief roundup of what I’ve found elsewhere.

Dexter Johnson offers a commentary in the pithily titled, Europe Invests €1 Billion to Become “Graphene Valley,” an Oct. 15, 2013 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: Links have been removed,

The initiative has been dubbed “The Graphene Flagship,” and apparently it is the first in a number of €1 billion, 10-year plans the EC is planning to launch. The graphene version will bring together 76 academic institutions and industrial groups from 17 European countries, with an initial 30-month-budget of €54M ($73 million).

Graphene research is still struggling to find any kind of applications that will really take hold, and many don’t expect it will have a commercial impact until 2020. What’s more, manufacturing methods are still undeveloped. So it would appear that a 10-year plan is aimed at the academic institutions that form the backbone of this initiative rather than commercial enterprises.

Just from a political standpoint the choice of Chalmers University in Sweden as the base of operations for the Graphene Flagship is an intriguing choice. …

I have to agree with Dexter that choosing Chalmers University over the University of Manchester where graphene was first isolated is unexpected. As a companion piece to reading Dexter’s posting in its entirety and which features a video from the flagship launch, you might want to try this Oct. 15, 2013 article by Koen Mortelmans for Youris (h/t Oct. 15, 2013 news item on Nanowerk),

Andre Konstantin Geim is the only person who ever received both a Nobel and an Ig Nobel. He was born in 1958 in Russia, and is a Dutch-British physicist with German, Polish, Jewish and Ukrainian roots. “Having lived and worked in several European countries, I consider myself European. I don’t believe that any further taxonomy is necessary,” he says. He is now a physics professor at the University of Manchester. …

He shared the Nobel Prize in 2010 with Konstantin Novoselov for their work on graphene. It was following their isolation of microscope-visible graphene flakes that worldwide research into practical applications of graphene took off. “We did not invent graphene,” Geim says, “we only saw what was laid up for five hundred years under our noses.”

Geim and Novoselov are often thought to have succeeded in separating graphene from graphite by peeling it off with ordinary duct tape until only a single layer remained. Graphene could then be observed with a microscope, because of the partial transparency of the material. That is, after dissolving the duct tape material in acetone, of course. That is also the story Geim himself likes to tell.

However, he did not use – as the urban myth goes – graphite from a common pencil. Instead, he used a carbon sample of extreme purity, specially imported. He also used ultrasound techniques. But the urban legend will probably survive, as did Archimedes’ bath and Newton’s apple. “It is nice to keep some of the magic,” is the expression Geim often uses when he does not want a nice story to be drowned in hard facts or when he wants to remain discreet about still-incomplete but promising research results.

Mortelmans’ article fills in some gaps for those not familiar with the graphene ‘origins’ story while Tim Harper’s July 22, 2012 posting on Cientifica’s (an emerging technologies consultancy where Harper is the CEO and founder) TNT blog offers an insight into Geim’s perspective on the race to commercialize graphene with a paraphrased quote for the title of Harper’s posting, “It’s a bit silly for society to throw a little bit of money at (graphene) and expect it to change the world.” (Note: Within this context, mention is made of the company’s graphene opportunities report.)

With all this excitement about graphene (and carbon generally), the journal Carbon has just published a suggested nomenclature for 2D carbon forms such as graphene, graphane, etc., according to an Oct. 16, 2013 news item on Nanowerk (Note: A link has been removed),

There has been an intense research interest in all two-dimensional (2D) forms of carbon since Geim and Novoselov’s discovery of graphene in 2004. But as the number of such publications rise, so does the level of inconsistency in naming the material of interest. The isolated, single-atom-thick sheet universally referred to as “graphene” may have a clear definition, but when referring to related 2D sheet-like or flake-like carbon forms, many authors have simply defined their own terms to describe their product.

This has led to confusion within the literature, where terms are multiply-defined, or incorrectly used. The Editorial Board of Carbon has therefore published the first recommended nomenclature for 2D carbon forms (“All in the graphene family – A recommended nomenclature for two-dimensional carbon materials”).

This proposed nomenclature comes in the form of an editorial, from Carbon (Volume 65, December 2013, Pages 1–6),

All in the graphene family – A recommended nomenclature for two-dimensional carbon materials

  • Alberto Bianco
    CNRS, Institut de Biologie Moléculaire et Cellulaire, Immunopathologie et Chimie Thérapeutique, Strasbourg, France
  • Hui-Ming Cheng
    Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang 110016, China
  • Toshiaki Enoki
    Department of Chemistry, Graduate School of Science and Engineering, Tokyo Institute of Technology, Tokyo, Japan
  • Yury Gogotsi
    Materials Science and Engineering Department, A.J. Drexel Nanotechnology Institute, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, USA
  • Robert H. Hurt
    Institute for Molecular and Nanoscale Innovation, School of Engineering, Brown University, Providence, RI 02912, USA
  • Nikhil Koratkar
    Department of Mechanical, Aerospace and Nuclear Engineering, The Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA
  • Takashi Kyotani
    Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
  • Marc Monthioux
    Centre d’Elaboration des Matériaux et d’Etudes Structurales (CEMES), UPR-8011 CNRS, Université de Toulouse, 29 Rue Jeanne Marvig, F-31055 Toulouse, France
  • Chong Rae Park
    Carbon Nanomaterials Design Laboratory, Global Research Laboratory, Research Institute of Advanced Materials, Department of Materials Science and Engineering, Seoul National University, Seoul 151-744, Republic of Korea
  • Juan M.D. Tascon
    Instituto Nacional del Carbón, INCAR-CSIC, Apartado 73, 33080 Oviedo, Spain
  • Jin Zhang
    Center for Nanochemistry, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871, China

This editorial is behind a paywall.

Psychedelic illustration for a nanobioelectronic tongue

A human tongue-like nanobioelectronic tongue. Illustration of the hTAS2R38-functionalized carboxylated polypyrrole nanotube. (Image: Dr. Park, Seoul National University)

This illustration accompanies a Dec. 14, 2012 Nanowerk Spotlight article by Michael Berger about the development of a nanobioelectronic tongue by Korean researchers (Note: I have removed links),

The concept of e-noses – electronic devices which mimic the olfactory systems of mammals and insects – is very intriguing to researchers involved in building better, cheaper and smaller sensor devices (read more: “Nanotechnology electronic noses”). Less well known is the fact that equivalent artificial sensors for taste – electronic tongues – are capable of recognizing dissolved substances (see for instance: “Electronic tongue identifies cava wines”).

“Even with current technological advances, e-tongue approaches still cannot mimic the biological features of the human tongue with regard to identifying elusive analytes in complex mixtures, such as food and beverage products,” Tai Hyun Park, a professor in the School of Chemical and Biological Engineering at Seoul National University, tells Nanowerk.

Park, together with Professor Jyongsik Jang and their collaborators, have now developed a human bitter-taste receptor as a nanobioelectronic tongue.

The team worked with a protein to develop the ‘tongue’,

The nanobioelectronic tongue uses a human taste receptor as a recognition element and a conducting polymer nanotube field effect transistor (FET) sensor as a sensor platform. Specifically, the Korean team functionalized carboxylated polypyrrole nanotubes with the human bitter taste receptor protein hTAS2R38. They say that the fabricated device could detect target bitter tastants with a detection limit of 1 femtomolar and high selectivity.

“In the case of bitter taste, our nanobioelectronic tongue can be used for sensing quantitatively the bitter taste, for example, of coffee, chocolate drinks, drugs and oriental medicines,” says Park. “Our nanobioelectronic tongue can be used as an alternative to time-consuming and labor-intensive sensory evaluations and cell-based assays for the assessment of quality, tastant screening and basic research on the human taste system.”

Prachi Patel’s ??? 2012 article about the research for Chemical and Engineering News (C&EN) provides more technical details about the testing,

The researchers tested their device’s response to four bitter compounds: phenylthiocarbamide, propylthiouracil, goitrin, and isothiocyanate. When these compounds bound to the protein-coated nanotubes, the researchers noted, the current through the transistors changed. For solutions of phenylthiocarbamide and propylthiouracil in buffer, the researchers could detect concentrations of 1 and 10 femtomolar, respectively. The device could sense goitrin and isothiocyanate, which are found in cruciferous vegetables, at picomolar concentrations in samples taken from vegetables such as cabbage, broccoli, and kale.

The team also tested the sensor’s response to mixtures of bitter, sweet, and umami (or savory) flavor molecules. The device responded only when the bitter compounds were present in the mixtures, even at femtomolar concentrations. Park says that the researchers are now trying to make sensors for sweet and umami tastes by using human taste receptors that respond to those flavors.
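The dose-response behaviour described here is commonly modelled with a Langmuir binding isotherm, θ = C/(C + Kd), where θ is the fraction of receptors occupied. The Kd below is hypothetical (the articles report detection limits, not binding constants), but the sketch shows how occupancy, and hence the FET signal, grows with tastant concentration:

```python
# Simple Langmuir binding isotherm: fraction of receptors occupied at a
# given tastant concentration. The Kd is hypothetical, for illustration only.
def occupancy(conc_molar, kd_molar=1e-9):
    return conc_molar / (conc_molar + kd_molar)

# Occupancy rises monotonically with concentration; at C = Kd it is 0.5.
for c in (1e-15, 1e-12, 1e-9, 1e-6):
    print(f"{c:.0e} M -> occupancy {occupancy(c):.2e}")
```

At femtomolar concentrations only a tiny fraction of receptors is occupied, which underlines how sensitive the FET transduction has to be to register a signal at the reported detection limits.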

Here’s a citation (not an official one) and a link to the researchers’ paper,

Human Taste Receptor-Functionalized Field Effect Transistor as a Human-Like Nanobioelectronic Tongue by Hyun Seok Song, Oh Seok Kwon, Sang Hun Lee, Seon Joo Park, Un-Kyung Kim, Jyongsik Jang, and Tai Hyun Park in Nano Lett., Article ASAP DOI: 10.1021/nl3038147 Publication Date (Web): November 26, 2012 Copyright © 2012 American Chemical Society

Access to the full article is behind a paywall.