Tag Archives: Princeton University

Shining a light on fluorocarbon bonds and robotic ‘soft’ matter research

Both of these news bits are concerned with light for one reason or another.

Rice University (Texas, US) and breaking fluorocarbon bonds

The secret to breaking fluorocarbon bonds is light according to a June 22, 2020 news item on Nanowerk,

Rice University engineers have created a light-powered catalyst that can break the strong chemical bonds in fluorocarbons, a group of synthetic materials that includes persistent environmental pollutants.

A June 22, 2020 Rice University news release (also on EurekAlert), which originated the news item, describes the work in greater detail,

In a study published this month in Nature Catalysis, Rice nanophotonics pioneer Naomi Halas and collaborators at the University of California, Santa Barbara (UCSB) and Princeton University showed that tiny spheres of aluminum dotted with specks of palladium could break carbon-fluorine (C-F) bonds via a catalytic process known as hydrodefluorination in which a fluorine atom is replaced by an atom of hydrogen.

The strength and stability of C-F bonds are behind some of the 20th century’s most recognizable chemical brands, including Teflon, Freon and Scotchgard. But the strength of those bonds can be problematic when fluorocarbons get into the air, soil and water. Chlorofluorocarbons, or CFCs, for example, were banned by international treaty in the 1980s after they were found to be destroying Earth’s protective ozone layer, and other fluorocarbons were on the list of “forever chemicals” targeted by a 2001 treaty.

“The hardest part about remediating any of the fluorine-containing compounds is breaking the C-F bond; it requires a lot of energy,” said Halas, an engineer and chemist whose Laboratory for Nanophotonics (LANP) specializes in creating and studying nanoparticles that interact with light.

Over the past five years, Halas and colleagues have pioneered methods for making “antenna-reactor” catalysts that spur or speed up chemical reactions. While catalysts are widely used in industry, they are typically used in energy-intensive processes that require high temperature, high pressure or both. For example, a mesh of catalytic material is inserted into a high-pressure vessel at a chemical plant, and natural gas or another fossil fuel is burned to heat the gas or liquid that’s flowed through the mesh. LANP’s antenna-reactors dramatically improve energy efficiency by capturing light energy and inserting it directly at the point of the catalytic reaction.

In the Nature Catalysis study, the energy-capturing antenna is an aluminum particle smaller than a living cell, and the reactors are islands of palladium scattered across the aluminum surface. The energy-saving feature of antenna-reactor catalysts is perhaps best illustrated by another of Halas’ previous successes: solar steam. In 2012, her team showed its energy-harvesting particles could instantly vaporize water molecules near their surface, meaning Halas and colleagues could make steam without boiling water. To drive home the point, they showed they could make steam from ice-cold water.

The antenna-reactor catalyst design allows Halas’ team to mix and match metals that are best suited for capturing light and catalyzing reactions in a particular context. The work is part of the green chemistry movement toward cleaner, more efficient chemical processes, and LANP has previously demonstrated catalysts for producing ethylene and syngas and for splitting ammonia to produce hydrogen fuel.

Study lead author Hossein Robatjazi, a Beckman Postdoctoral Fellow at UCSB who earned his Ph.D. from Rice in 2019, conducted the bulk of the research during his graduate studies in Halas’ lab. He said the project also shows the importance of interdisciplinary collaboration.

“I finished the experiments last year, but our experimental results had some interesting features, changes to the reaction kinetics under illumination, that raised an important but interesting question: What role does light play to promote the C-F breaking chemistry?” he said.

The answers came after Robatjazi arrived for his postdoctoral experience at UCSB. He was tasked with developing a microkinetics model, and a combination of insights from the model and from theoretical calculations performed by collaborators at Princeton helped explain the puzzling results.

“With this model, we used the perspective from surface science in traditional catalysis to uniquely link the experimental results to changes to the reaction pathway and reactivity under the light,” he said.

The demonstration experiments on fluoromethane could be just the beginning for the C-F breaking catalyst.

“This general reaction may be useful for remediating many other types of fluorinated molecules,” Halas said.

Caption: An artist’s illustration of the light-activated antenna-reactor catalyst Rice University engineers designed to break carbon-fluorine bonds in fluorocarbons. The aluminum portion of the particle (white and pink) captures energy from light (green), activating islands of palladium catalysts (red). In the inset, fluoromethane molecules (top) comprised of one carbon atom (black), three hydrogen atoms (grey) and one fluorine atom (light blue) react with deuterium (yellow) molecules near the palladium surface (black), cleaving the carbon-fluorine bond to produce deuterium fluoride (right) and monodeuterated methane (bottom). Credit: H. Robatjazi/Rice University

Here’s a link to and a citation for the paper,

Plasmon-driven carbon–fluorine (C(sp3)–F) bond activation with mechanistic insights into hot-carrier-mediated pathways by Hossein Robatjazi, Junwei Lucas Bao, Ming Zhang, Linan Zhou, Phillip Christopher, Emily A. Carter, Peter Nordlander & Naomi J. Halas. Nature Catalysis (2020) DOI: https://doi.org/10.1038/s41929-020-0466-5 Published: 08 June 2020

This paper is behind a paywall.

Northwestern University (Illinois, US) brings soft robots to ‘life’

This June 22, 2020 news item on ScienceDaily reveals how scientists are getting soft robots to mimic living creatures,

Northwestern University researchers have developed a family of soft materials that imitates living creatures.

When hit with light, the film-thin materials come alive — bending, rotating and even crawling on surfaces.

A June 22, 2020 Northwestern University news release (also on EurekAlert) by Amanda Morris, which originated the news item, delves further into the details,

Called “robotic soft matter” by the Northwestern team, the materials move without complex hardware, hydraulics or electricity. The researchers believe the lifelike materials could carry out many tasks, with potential applications in energy, environmental remediation and advanced medicine.

“We live in an era in which increasingly smarter devices are constantly being developed to help us manage our everyday lives,” said Northwestern’s Samuel I. Stupp, who led the experimental studies. “The next frontier is in the development of new science that will bring inert materials to life for our benefit — by designing them to acquire capabilities of living creatures.”

The research will be published on June 22 [2020] in the journal Nature Materials.

Stupp is the Board of Trustees Professor of Materials Science and Engineering, Chemistry, Medicine and Biomedical Engineering at Northwestern and director of the Simpson Querrey Institute. He has appointments in the McCormick School of Engineering, Weinberg College of Arts and Sciences and Feinberg School of Medicine. George Schatz, the Charles E. and Emma H. Morrison Professor of Chemistry in Weinberg, led computer simulations of the materials’ lifelike behaviors. Postdoctoral fellow Chuang Li and graduate student Aysenur Iscen, from the Stupp and Schatz laboratories, respectively, are co-first authors of the paper.

Although the moving material seems miraculous, sophisticated science is at play. Its structure comprises nanoscale peptide assemblies that drain water molecules out of the material. An expert in materials chemistry, Stupp linked the peptide arrays to polymer networks designed to be chemically responsive to blue light.

When light hits the material, the network chemically shifts from hydrophilic (attracts water) to hydrophobic (resists water). As the material expels the water through its peptide “pipes,” it contracts — and comes to life. When the light is turned off, water re-enters the material, which expands as it reverts to a hydrophilic structure.

This is reminiscent of the reversible contraction of muscles, which inspired Stupp and his team to design the new materials.

“From biological systems, we learned that the magic of muscles is based on the connection between assemblies of small proteins and giant protein polymers that expand and contract,” Stupp said. “Muscles do this using a chemical fuel rather than light to generate mechanical energy.”

For Northwestern’s bio-inspired material, localized light can trigger directional motion. In other words, bending can occur in different directions, depending on where the light is located. And changing the direction of the light also can force the object to turn as it crawls on a surface.

Stupp and his team believe there are endless possible applications for this new family of materials. With the ability to be designed in different shapes, the materials could play a role in a variety of tasks, ranging from environmental clean-up to brain surgery.

“These materials could augment the function of soft robots needed to pick up fragile objects and then release them in a precise location,” he said. “In medicine, for example, soft materials with ‘living’ characteristics could bend or change shape to retrieve blood clots in the brain after a stroke. They also could swim to clean water supplies and sea water or even undertake healing tasks to repair defects in batteries, membranes and chemical reactors.”

Fascinating, eh? No batteries, no power source, just light to power movement. For the curious, here’s a link to and a citation for the paper,

Supramolecular–covalent hybrid polymers for light-activated mechanical actuation by Chuang Li, Aysenur Iscen, Hiroaki Sai, Kohei Sato, Nicholas A. Sather, Stacey M. Chin, Zaida Álvarez, Liam C. Palmer, George C. Schatz & Samuel I. Stupp. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0707-7 Published: 22 June 2020

This paper is behind a paywall.

Desalination with nanowood

A new treatment for wood could make renewable salt-separating membranes. Courtesy: University of Maryland

An August 6, 2019 article by Adele Peters for Fast Company describes a ‘wooden’ approach to water desalinization (also known as desalination),

“We are trying to develop a new type of membrane material that is nature-based,” says Z. Jason Ren, an engineering professor at Princeton University and one of the coauthors of a new paper in Science Advances about that material, which is made from wood. It’s designed for use in a process called membrane distillation, which heats up saltwater and uses pressure to force the water vapor through a membrane, leaving the salt behind and creating pure water. The membranes are usually made from a type of plastic. Using “nanowood” membranes instead can both improve the energy efficiency of the process and avoid the environmental problems of plastic.

An August 2, 2019 University of Maryland (UMD) news release provides more detail about the research,

A membrane made of a sliver of wood could be the answer to renewably sourced water cleaning. Most membranes currently used to distill fresh water from salt water are made of polymers based on fossil fuels.

Inspired by the intricate system of water circulating in a tree, a research team from the University of Maryland, Princeton University, and the University of Colorado Boulder has figured out how to use a thin slice of wood as a membrane through which water vapor can evaporate, leaving behind salt or other contaminants.

“This work demonstrates another exciting energy/water application of nanostructured wood, as a high-performance membrane material,” said Liangbing Hu, a professor of materials science and engineering at UMD’s A. James Clark School of Engineering, who co-led the study.

The team chemically treated the wood to become hydrophobic, so that it more efficiently allows water vapor through, driven by a heat source like solar energy.

“This study discovered a new way of using wood materials’ unique properties as both an excellent insulator and water vapor transporter,” said Z. Jason Ren, a professor in environmental engineering who recently moved from CU Boulder to Princeton, and the other co-leader of the team that performed the study.

The researchers treat the wood so that it loses its lignin, the part of the wood that makes it brown and rigid, and its hemicellulose, which weaves in and out between cellulose to hold it in place. The resulting “nanowood” is treated with silane, a compound used to make silicon for computer chips. The semiconducting nature of the compound maintains the wood’s natural nanostructures of cellulose, and clings less to water vapor molecules as they pass through. Silane is also used in solar cell manufacturing.

The membrane looks like a thin piece of wood, seemingly bleached white, that is suspended above a source of water vapor. As the water heats and passes into the gas phase, the molecules are small enough to fit through the tiny channels lining the walls of the leftover cell structure. Water collected on the other side is now free of large contaminants like salt.

To test it, the researchers distilled water through the membrane and found that it performed 1.2 times better than a conventional membrane.

“The wood membrane has very high porosity, which promotes water vapor transport and prevents heat loss,” said first author Dianxun Hou, who was a student at CU Boulder.

Inventwood, a UMD spinoff company of Hu’s research group, is working on commercializing wood-based nanotechnologies.

Here’s a link to and a citation for the paper,

Hydrophobic nanostructured wood membrane for thermally efficient distillation by Dianxun Hou, Tian Li, Xi Chen, Shuaiming He, Jiaqi Dai, Sohrab A. Mofid, Deyin Hou, Arpita Iddya, David Jassby, Ronggui Yang, Liangbing Hu, and Zhiyong Jason Ren. Science Advances 02 Aug 2019: Vol. 5, no. 8, eaaw3203 DOI: 10.1126/sciadv.aaw3203

This paper appears to be open access.

In my brief survey of the paper, I noticed that the researchers were working with cellulose nanofibrils (CNF), a term which should be familiar to anyone following the nanocellulose story, such as it is.

Monitoring forest soundscapes for conservation and more about whale songs

I don’t understand why anyone would publicize science work featuring soundscapes without including an audio file. However, no one from Princeton University (US) phoned and asked for my advice :).

On the plus side, my whale story does have a sample audio file. However, I’m not sure if I can figure out how to embed it here.

Princeton and monitoring forests

In addition to a professor from Princeton University, there’s the founder of an environmental news organization and someone who’s both a professor at the University of Queensland (Australia) and affiliated with the Nature Conservancy, making this one of the more unusual collaborations I’ve seen.

Moving on to the news, a January 4, 2019 Princeton University news release (also on EurekAlert but published on Jan. 3, 2019) by B. Rose Kelly announces research into monitoring forests,

Recordings of the sounds in tropical forests could unlock secrets about biodiversity and aid conservation efforts around the world, according to a perspective paper published in Science.

Compared to on-the-ground fieldwork, bioacoustics — recording entire soundscapes, including animal and human activity — is relatively inexpensive and produces powerful conservation insights. The result is troves of ecological data in a short amount of time.

Because these enormous datasets require robust computational power, the researchers argue that a global organization should be created to host an acoustic platform that produces on-the-fly analysis. Not only could the data be used for academic research, but it could also monitor conservation policies and strategies employed by companies around the world.

“Nongovernmental organizations and the conservation community need to be able to truly evaluate the effectiveness of conservation interventions. It’s in the interest of certification bodies to harness the developments in bioacoustics for better enforcement and effective measurements,” said Zuzana Burivalova, a postdoctoral research fellow in Professor David Wilcove’s lab at Princeton University’s Woodrow Wilson School of Public and International Affairs.

“Beyond measuring the effectiveness of conservation projects and monitoring compliance with forest protection commitments, networked bioacoustic monitoring systems could also generate a wealth of data for the scientific community,” said co-author Rhett Butler of the environmental news outlet Mongabay.

Burivalova and Butler co-authored the paper with Edward Game, who is based at the Nature Conservancy and the University of Queensland.

The researchers explain that while satellite imagery can be used to measure deforestation, it often fails to detect other subtle ecological degradations like overhunting, fires, or invasion by exotic species. Another common measure of biodiversity is field surveys, but those are often expensive, time consuming and cover limited ground.

Depending on the vegetation of the area and the animals living there, bioacoustics can record animal sounds and songs from several hundred meters away. Devices can be programmed to record at specific times or continuously if there is solar power or a cellular network signal. They can also record a range of taxonomic groups including birds, mammals, insects, and amphibians. To date, several multiyear recordings have already been completed.

Bioacoustics can help effectively enforce policy efforts as well. Many companies are engaged in zero-deforestation efforts, which means they are legally obligated to produce goods without clearing large forests. Bioacoustics can quickly and cheaply determine how much forest has been left standing.

“Companies are adopting zero deforestation commitments, but these policies do not always translate to protecting biodiversity due to hunting, habitat degradation, and sub-canopy fires. Bioacoustic monitoring could be used to augment satellites and other systems to monitor compliance with these commitments, support real-time action against prohibited activities like illegal logging and poaching, and potentially document habitat and species recovery,” Butler said.

Further, these recordings can be used to measure climate change effects. While the sounds might not be able to assess slow, gradual changes, they could help determine the influence of abrupt, quick differences to land caused by manufacturing or hunting, for example.

Burivalova and Game have worked together previously as you can see in a July 24, 2017 article by Justine E. Hausheer for a nature.org blog ‘Cool Green Science’ (Note: Links have been removed),

Morning in Musiamunat village. Across the river and up a steep mountainside, birds-of-paradise call raucously through the rainforest canopy, adding their calls to the nearly deafening insect chorus. Less than a kilometer away, small birds flit through a grove of banana trees, taro and pumpkin vines winding across the rough clearing. Here too, the cicadas howl.

To the ear, both garden and forest are awash with noise. But hidden within this dawn chorus are clues to the forest’s health.

New acoustic research from Nature Conservancy scientists indicates that forest fragmentation drives distinct changes in the dawn and dusk choruses of forests in Papua New Guinea. And this innovative method can help evaluate the conservation benefits of land-use planning efforts with local communities, reducing the cost of biodiversity monitoring in the rugged tropics.

“It’s one thing for a community to say that they cut fewer trees, or restricted hunting, or set aside a protected area, but it’s very difficult for small groups to demonstrate the effectiveness of those efforts,” says Eddie Game, The Nature Conservancy’s lead scientist for the Asia-Pacific region.

Aside from the ever-present logging and oil palm, another threat to PNG’s forests is subsistence agriculture, which feeds a majority of the population. In the late 1990s, The Nature Conservancy worked with 11 communities in the Adelbert Mountains to create land-use plans, dividing each community’s lands into different zones for hunting, gardening, extracting forest products, village development, and conservation. The goal was to limit degradation to specific areas of the forest, while keeping the rest intact.

But both communities and conservationists needed a way to evaluate their efforts, before the national government considered expanding the program beyond Madang province. So in July 2015, Game and two other scientists, Zuzana Burivalova and Timothy Boucher, spent two weeks gathering data in the Adelbert Mountains, a rugged lowland mountain range in Papua New Guinea’s Madang province.

Working with conservation rangers from Musiamunat, Yavera, and Iwarame communities, the research team tested an innovative method — acoustic sampling — to measure biodiversity across the community forests. Game and his team used small acoustic recorders placed throughout the forest to record 24-hours of sound from locations in each of the different land zones.

Soundscapes from healthy, biodiverse forests are more complex, so the scientists hoped that these recordings would show if parts of the community forests, like the conservation zones, were more biodiverse than others. “Acoustic recordings won’t pick up every species, but we don’t need that level of detail to know if a forest is healthy,” explains Boucher, a conservation geographer with the Conservancy.
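As an aside, one common proxy for the “complexity” the researchers mention is the spectral entropy of a recording: how evenly acoustic energy is spread across frequency bands. This sketch is my own illustration of the general idea, not the Conservancy team’s actual analysis pipeline, and the toy spectra in it are made up for the example.

```python
import math

def spectral_entropy(power_spectrum):
    """Shannon entropy (in bits) of a normalized power spectrum.

    Energy spread evenly across many bands (a busy, diverse chorus)
    scores high; energy concentrated in one band scores low.
    """
    total = sum(power_spectrum)
    entropy = 0.0
    for p in power_spectrum:
        if p > 0:
            q = p / total
            entropy -= q * math.log2(q)
    return entropy

# A "busy" soundscape with energy spread across five bands...
busy = [1.0, 0.9, 1.1, 1.0, 0.95]
# ...versus one dominated by a single frequency band.
sparse = [4.0, 0.05, 0.0, 0.0, 0.0]

print(spectral_entropy(busy) > spectral_entropy(sparse))  # True
```

Real soundscape-ecology indices are more elaborate (and computed per time-frequency bin over long recordings), but they rest on the same intuition.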

Here’s a link to and a citation for the latest work from Burivalova and Game,

The sound of a tropical forest by Zuzana Burivalova, Edward T. Game, Rhett A. Butler. Science 04 Jan 2019: Vol. 363, Issue 6422, pp. 28-29 DOI: 10.1126/science.aav1902

This paper is behind a paywall. You can find out more about Mongabay and Rhett Butler in its Wikipedia entry.

***ETA July 18, 2019: Cara Cannon Byington, Associate Director, Science Communications for the Nature Conservancy emailed to say that a January 3, 2019 posting on the conservancy’s Cool Green Science Blog features audio files from the research published in ‘The sound of a tropical forest’. Scroll down about 75% of the way for the audio.***

Whale songs

Whales share songs when they meet and a January 8, 2019 Wildlife Conservation Society news release (also on EurekAlert) describes how that sharing takes place,

Singing humpback whales from different ocean basins seem to be picking up musical ideas from afar, and incorporating these new phrases and themes into the latest song, according to a newly published study in Royal Society Open Science that’s helping scientists better understand how whales learn and change their musical compositions.

The new research shows that two humpback whale populations in different ocean basins (the South Atlantic and Indian Oceans) in the Southern Hemisphere sing similar song types, but the amount of similarity differs across years. This suggests that males from these two populations come into contact at some point in the year to hear and learn songs from each other.

The study titled “Culturally transmitted song exchange between humpback whales (Megaptera novaeangliae) in the southeast Atlantic and southwest Indian Ocean basins” appears in the latest edition of the Royal Society Open Science journal. The authors are: Melinda L. Rekdahl, Carissa D. King, Tim Collins, and Howard Rosenbaum of WCS (Wildlife Conservation Society); Ellen C. Garland of the University of St. Andrews; Gabriella A. Carvajal of WCS and Stony Brook University; and Yvette Razafindrakoto of COSAP (Committee for the Management of the Protected Area of Bezà Mahafaly) and Madagascar National Parks.

“Song sharing between populations tends to happen more in the Northern Hemisphere where there are fewer physical barriers to movement of individuals between populations on the breeding grounds, where they do the majority of their singing. In some populations in the Southern Hemisphere song sharing appears to be more complex, with little song similarity within years but entire songs can spread to neighboring populations leading to song similarity across years,” said Dr. Melinda Rekdahl, marine conservation scientist for WCS’s Ocean Giants Program and lead author of the study. “Our study shows that this is not always the case in Southern Hemisphere populations, with similarities between both ocean basin songs occurring within years to different degrees over a 5-year period.”

The study authors examined humpback whale song recordings from both sides of the African continent — from animals off the coasts of Gabon and Madagascar, respectively — and transcribed more than 1,500 individual sounds that were recorded between 2001 and 2005. Song similarity was quantified using statistical methods.
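For the curious, one standard way to compare sequences like transcribed song units is edit distance: count the fewest insertions, deletions and substitutions needed to turn one sequence into the other, then normalize. This is only my illustrative sketch of that general approach, not the statistical method the paper actually used, and the example "songs" are invented labels.

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    needed to turn sequence a into sequence b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """1.0 for identical sequences, 0.0 for completely different ones."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Hypothetical song-unit sequences, loosely echoing the Theme 1
# difference described below (a fused "cry-woop" vs. a split one).
gabon      = ["cry-woop", "moan", "trumpet"]
madagascar = ["cry", "woop", "moan", "trumpet"]
print(similarity(gabon, madagascar))  # 0.5
```

A higher score across many recordings from two populations would suggest the kind of song sharing the study documents.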

Male humpback whales are one of the animal kingdom’s most noteworthy singers, and individual animals sing complex compositions consisting of moans, cries, and other vocalizations called “song units.” Song units are composed into larger phrases, which are repeated to form “themes.” Different themes are produced in a sequence to form a song cycle that are then repeated for hours, or even days. For the most part, all males within the same population sing the same song type, and this population-wide song similarity is maintained despite continual evolution or change to the song leading to seasonal “hit songs.” Some song learning can occur between populations that are in close proximity and may be able to hear the other population’s song.

Over time, the researchers detected shared phrases and themes in both populations, with some years exhibiting more similarities than others. In the beginning of the study, whale populations in both locations shared five “themes.” One of the shared themes, however, had differences. Gabon’s version of Theme 1, the researchers found, consisted of a descending “cry-woop”, whereas the Madagascar singers split Theme 1 into two parts: a descending cry followed by a separate woop or “trumpet.”

Other differences soon emerged over time. By 2003, the song sung by whales in Gabon became more elaborate than their counterparts in Madagascar. In 2004, both population song types shared the same themes, with the whales in Gabon’s waters singing three additional themes. Interestingly, both whale groups had dropped the same two themes from the previous year’s song types. By 2005, songs being sung on both sides of Africa were largely similar, with individuals in both locations singing songs with the same themes and order. However, there were exceptions, including one whale that revived two discontinued themes from the previous year.

The study’s results stand in contrast to other research in which a song in one part of an ocean basin replaces or “revolutionizes” another population’s song preference. In this instance, the changes and degrees of similarity shared by humpbacks on both sides of Africa were more gradual and subtle.

“Studies such as this one are an important means of understanding connectivity between different whale populations and how they move between different seascapes,” said Dr. Howard Rosenbaum, Director of WCS’s Ocean Giants Program and one of the co-authors of the new paper. “Insights on how different populations interact with one another and the factors that drive the movements of these animals can lead to more effective plans for conservation.”

The humpback whale is one of the world’s best-studied marine mammal species, well known for its boisterous surface behavior and migrations stretching thousands of miles. The animal grows up to 50 feet in length and has been globally protected from commercial whaling since the 1960s. WCS has studied humpback whales since that time and–as the New York Zoological Society–played a key role in the discovery that humpback whales sing songs. The organization continues to study humpback whale populations around the world and right here in the waters of New York; research efforts on humpback and other whales in New York Bight are currently coordinated through the New York Aquarium’s New York Seascape program.

I’m not able to embed the audio file here but, for the curious, there is a portion of a humpback whale song from Gabon here at EurekAlert.

Here’s a link to and a citation for the research paper,

Culturally transmitted song exchange between humpback whales (Megaptera novaeangliae) in the southeast Atlantic and southwest Indian Ocean basins by Melinda L. Rekdahl, Ellen C. Garland, Gabriella A. Carvajal, Carissa D. King, Tim Collins, Yvette Razafindrakoto and Howard Rosenbaum. Royal Society Open Science, Volume 5, Issue 11, November 2018. DOI: https://doi.org/10.1098/rsos.172305 Published: 28 November 2018

This is an open access paper.

Crowdsourcing brain research at Princeton University to discover 6 new neuron types

Spritely music!

There were already a quarter-million registered players as of May 17, 2018, but I’m sure there’s room for more should you be inspired. A May 17, 2018 Princeton University news release (also on EurekAlert) reveals more about the game and about the neurons,

With the help of a quarter-million video game players, Princeton researchers have created and shared detailed maps of more than 1,000 neurons — and they’re just getting started.

“Working with Eyewirers around the world, we’ve made a digital museum that shows off the intricate beauty of the retina’s neural circuits,” said Sebastian Seung, the Evnin Professor in Neuroscience and a professor of computer science and the Princeton Neuroscience Institute (PNI). The related paper is publishing May 17 [2018] in the journal Cell.

Seung is unveiling the Eyewire Museum, an interactive archive of neurons available to the general public and neuroscientists around the world, including the hundreds of researchers involved in the federal Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

“This interactive viewer is a huge asset for these larger collaborations, especially among people who are not physically in the same lab,” said Amy Robinson Sterling, a crowdsourcing specialist with PNI and the executive director of Eyewire, the online gaming platform for the citizen scientists who have created this data set.

“This museum is something like a brain atlas,” said Alexander Bae, a graduate student in electrical engineering and one of four co-first authors on the paper. “Previous brain atlases didn’t have a function where you could visualize by individual cell, or a subset of cells, and interact with them. Another novelty: Not only do we have the morphology of each cell, but we also have the functional data, too.”

The neural maps were developed by Eyewirers, members of an online community of video game players who have devoted hundreds of thousands of hours to painstakingly piecing together these neural cells, using data from a mouse retina gathered in 2009.

Eyewire pairs machine learning with gamers who trace the twisting and branching paths of each neuron. Humans are better at visually identifying the patterns of neurons, so every player’s moves are recorded and checked against each other by advanced players and Eyewire staffers, as well as by software that is improving its own pattern recognition skills.

Since Eyewire’s launch in 2012, more than 265,000 people have signed onto the game, and they’ve collectively colored in more than 10 million 3-D “cubes,” resulting in the mapping of more than 3,000 neural cells, of which about a thousand are displayed in the museum.

Each cube is a tiny subset of a single cell, about 4.5 microns across, so a 10-by-10 block of cubes would be the width of a human hair. Every cell is reviewed by between 5 and 25 gamers before it is accepted into the system as complete.

“Back in the early years it took weeks to finish a single cell,” said Sterling. “Now players complete multiple neurons per day.” The Eyewire user experience stays focused on the larger mission — “For science!” is a common refrain — but it also replicates a typical gaming environment, with achievement badges, a chat feature to connect with other players and technical support, and the ability to unlock privileges with increasing skill. “Our top players are online all the time — easily 30 hours a week,” Sterling said.

Dedicated Eyewirers have also contributed in other ways, including donating the swag that gamers win during competitions and writing program extensions “to make game play more efficient and more fun,” said Sterling, including profile histories, maps of player activity, a top 100 leaderboard and ever-increasing levels of customizability.

“The community has really been the driving force behind why Eyewire has been successful,” Sterling said. “You come in, and you’re not alone. Right now, there are 43 people online. Some of them will be admins from Boston or Princeton, but most are just playing — now it’s 46.”

For science!

With 100 billion neurons linked together via trillions of connections, the brain is immeasurably complex, and neuroscientists are still assembling its “parts list,” said Nicholas Turner, a graduate student in computer science and another of the co-first authors. “If you know what parts make up the machine you’re trying to break apart, you’re set to figure out how it all works,” he said.

The researchers have started by tackling Eyewire-mapped ganglion cells from the retina of a mouse. “The retina doesn’t just sense light,” Seung said. “Neural circuits in the retina perform the first steps of visual perception.”

The retina grows from the same embryonic tissue as the brain, and while much simpler than the brain, it is still surprisingly complex, Turner said. “Hammering out these details is a really valuable effort,” he said, “showing the depth and complexity that exists in circuits that we naively believe are simple.”

The researchers’ fundamental question is identifying exactly how the retina works, said Bae. “In our case, we focus on the structural morphology of the retinal ganglion cells.”

“Why the ganglion cells of the eye?” asked Shang Mu, an associate research scholar in PNI and fellow first author. “Because they’re the connection between the retina and the brain. They’re the only cell class that goes back into the brain.” Different types of ganglion cells are known to compute different types of visual features, which is one reason the museum has linked shape to functional data.

Using Eyewire-produced maps of 396 ganglion cells, the researchers in Seung’s lab successfully classified these cells more thoroughly than has ever been done before.

“The number of different cell types was a surprise,” said Mu. “Just a few years ago, people thought there were only 15 to 20 ganglion cell types, but we found more than 35 — we estimate between 35 and 50 types.”

Of those, six appear to be novel, in that the researchers could not find any matching descriptions in a literature search.

A brief scroll through the digital museum reveals just how remarkably flat the neurons are — nearly all of the branching takes place along a two-dimensional plane. Seung’s team discovered that different cells grow along different planes, with some reaching high above the nucleus before branching out, while others spread out close to the nucleus. Their resulting diagrams resemble a rainforest, with ground cover, an understory, a canopy and an emergent layer overtopping the rest.

All of these are subdivisions of the inner plexiform layer, one of the five previously recognized layers of the retina. The researchers also identified a “density conservation principle” that they used to distinguish types of neurons.

One of the biggest surprises of the research project has been the extraordinary richness of the original sample, said Seung. “There’s a little sliver of a mouse retina, and almost 10 years later, we’re still learning things from it.”

Of course, it’s a mouse’s brain that you’ll be examining, and while there are differences between mouse and human brains, mouse brains still provide valuable data, as they did in some groundbreaking research published in October 2017. James Hamblin wrote about it in an Oct. 7, 2017 article for The Atlantic (Note: Links have been removed),

 

Scientists Somehow Just Discovered a New System of Vessels in Our Brains

It is unclear what they do—but they likely play a central role in aging and disease.

[Image: A transparent model of the brain with a network of vessels filled in. Credit: Daniel Reich / National Institute of Neurological Disorders and Stroke]

You are now among the first people to see the brain’s lymphatic system. The vessels in the photo above transport fluid that is likely crucial to metabolic and inflammatory processes. Until now, no one knew for sure that they existed.

Doctors practicing today have been taught that there are no lymphatic vessels inside the skull. Those deep-purple vessels were seen for the first time in images published this week by researchers at the U.S. National Institute of Neurological Disorders and Stroke.

In the rest of the body, the lymphatic system collects and drains the fluid that bathes our cells, in the process exporting their waste. It also serves as a conduit for immune cells, which go out into the body looking for adversaries and learning how to distinguish self from other, and then travel back to lymph nodes and organs through lymphatic vessels.

So how was it even conceivable that this process wasn’t happening in our brains?

Reich (Daniel Reich, senior investigator) started his search in 2015, after a major study in Nature reported a similar conduit for lymph in mice. The University of Virginia team wrote at the time, “The discovery of the central-nervous-system lymphatic system may call for a reassessment of basic assumptions in neuroimmunology.” The study was regarded as a potential breakthrough in understanding how neurodegenerative disease is associated with the immune system.

Around the same time, researchers discovered fluid in the brains of mice and humans that would become known as the “glymphatic system.” [emphasis mine] It was described by a team at the University of Rochester in 2015 as not just the brain’s “waste-clearance system,” but as potentially helping fuel the brain by transporting glucose, lipids, amino acids, and neurotransmitters. Although since “the central nervous system completely lacks conventional lymphatic vessels,” the researchers wrote at the time, it remained unclear how this fluid communicated with the rest of the body.

There are occasional references to the idea of a lymphatic system in the brain in historic literature. Two centuries ago, the anatomist Paolo Mascagni made full-body models of the lymphatic system that included the brain, though this was dismissed as an error. [emphases mine] A historical account in The Lancet in 2003 read: “Mascagni was probably so impressed with the lymphatic system that he saw lymph vessels even where they did not exist—in the brain.”

I couldn’t resist the reference to someone whose work had been dismissed summarily being proved right, eventually, and with the help of mouse brains. Do read Hamblin’s article in its entirety if you have time as these excerpts don’t do it justice.

Getting back to Princeton’s research, here’s their research paper,

“Digital museum of retinal ganglion cells with dense anatomy and physiology,” by Alexander Bae, Shang Mu, Jinseop Kim, Nicholas Turner, Ignacio Tartavull, Nico Kemnitz, Chris Jordan, Alex Norton, William Silversmith, Rachel Prentki, Marissa Sorek, Celia David, Devon Jones, Doug Bland, Amy Sterling, Jungman Park, Kevin Briggman, Sebastian Seung and the Eyewirers, was published May 17 [2018] in the journal Cell with DOI 10.1016/j.cell.2018.04.040.

The research was supported by the Gatsby Charitable Foundation, National Institutes of Health-National Institute of Neurological Disorders and Stroke (U01NS090562 and 5R01NS076467), Defense Advanced Research Projects Agency (HR0011-14-2-0004), Army Research Office (W911NF-12-1-0594), Intelligence Advanced Research Projects Activity (D16PC00005), KT Corporation, Amazon Web Services Research Grants, Korea Brain Research Institute (2231-415) and Korea National Research Foundation Brain Research Program (2017M3C7A1048086).

This paper is behind a paywall. For the players amongst us, here’s the Eyewire website. Go forth, play, and, maybe, discover new neurons!

Of musical parodies, Despacito, and evolution

What great timing: I just found out about a musical science parody featuring evolution and biology, and learned the latest news about the study of evolution on one of the islands in the Galapagos (where Charles Darwin made some of his observations). Thanks to Stacey Johnson, whose November 24, 2017 posting on the Signals blog featured Evo-Devo (Despacito Biology Parody), an A Capella Science music video from Tim Blais,

Now, for the latest regarding the Galapagos and evolution (from a November 24, 2017 news item on ScienceDaily),

The arrival 36 years ago of a strange bird to a remote island in the Galapagos archipelago has provided direct genetic evidence of a novel way in which new species arise.

In this week’s issue of the journal Science, researchers from Princeton University and Uppsala University in Sweden report that the newcomer belonging to one species mated with a member of another species resident on the island, giving rise to a new species that today consists of roughly 30 individuals.

The study comes from work conducted on Darwin’s finches, which live on the Galapagos Islands in the Pacific Ocean. The remote location has enabled researchers to study the evolution of biodiversity due to natural selection.

The direct observation of the origin of this new species occurred during field work carried out over the last four decades by B. Rosemary and Peter Grant, two scientists from Princeton, on the small island of Daphne Major.

A November 23, 2017 Princeton University news release on EurekAlert, which originated the news item, provides more detail,

“The novelty of this study is that we can follow the emergence of new species in the wild,” said B. Rosemary Grant, a senior research biologist, emeritus, and a senior biologist in the Department of Ecology and Evolutionary Biology. “Through our work on Daphne Major, we were able to observe the pairing up of two birds from different species and then follow what happened to see how speciation occurred.”

In 1981, a graduate student working with the Grants on Daphne Major noticed the newcomer, a male that sang an unusual song and was much larger in body and beak size than the three resident species of birds on the island.

“We didn’t see him fly in from over the sea, but we noticed him shortly after he arrived. He was so different from the other birds that we knew he did not hatch from an egg on Daphne Major,” said Peter Grant, the Class of 1877 Professor of Zoology, Emeritus, and a professor of ecology and evolutionary biology, emeritus.

The researchers took a blood sample and released the bird, which later bred with a resident medium ground finch of the species Geospiza fortis, initiating a new lineage. The Grants and their research team followed the new “Big Bird lineage” for six generations, taking blood samples for use in genetic analysis.

In the current study, researchers from Uppsala University analyzed DNA collected from the parent birds and their offspring over the years. The investigators discovered that the original male parent was a large cactus finch of the species Geospiza conirostris from Española island, which is more than 100 kilometers (about 62 miles) to the southeast in the archipelago.

The remarkable distance meant that the male finch was not able to return home to mate with a member of his own species and so chose a mate from among the three species already on Daphne Major. This reproductive isolation is considered a critical step in the development of a new species when two separate species interbreed.

The offspring were also reproductively isolated because their song, which is used to attract mates, was unusual and failed to attract females from the resident species. The offspring also differed from the resident species in beak size and shape, which is a major cue for mate choice. As a result, the offspring mated with members of their own lineage, strengthening the development of the new species.

Researchers previously assumed that the formation of a new species takes a very long time, but in the Big Bird lineage it happened in just two generations, according to observations made by the Grants in the field in combination with the genetic studies.

All 18 species of Darwin’s finches derived from a single ancestral species that colonized the Galápagos about one to two million years ago. The finches have since diversified into different species, and changes in beak shape and size have allowed different species to utilize different food sources on the Galápagos. A critical requirement for speciation to occur through hybridization of two distinct species is that the new lineage must be ecologically competitive — that is, good at competing for food and other resources with the other species — and this has been the case for the Big Bird lineage.

“It is very striking that when we compare the size and shape of the Big Bird beaks with the beak morphologies of the other three species inhabiting Daphne Major, the Big Birds occupy their own niche in the beak morphology space,” said Sangeet Lamichhaney, a postdoctoral fellow at Harvard University and the first author on the study. “Thus, the combination of gene variants contributed from the two interbreeding species in combination with natural selection led to the evolution of a beak morphology that was competitive and unique.”

The definition of a species has traditionally included the inability to produce fully fertile progeny from interbreeding species, as is the case for the horse and the donkey, for example. However, in recent years it has become clear that some closely related species, which normally avoid breeding with each other, do indeed produce offspring that can pass genes to subsequent generations. The authors of the study have previously reported that there has been a considerable amount of gene flow among species of Darwin’s finches over the last several thousands of years.

One of the most striking aspects of this study is that hybridization between two distinct species led to the development of a new lineage that after only two generations behaved as any other species of Darwin’s finches, explained Leif Andersson, a professor at Uppsala University who is also affiliated with the Swedish University of Agricultural Sciences and Texas A&M University. “A naturalist who came to Daphne Major without knowing that this lineage arose very recently would have recognized this lineage as one of the four species on the island. This clearly demonstrates the value of long-running field studies,” he said.

It is likely that new lineages like the Big Birds have originated many times during the evolution of Darwin’s finches, according to the authors. The majority of these lineages have gone extinct but some may have led to the evolution of contemporary species. “We have no indication about the long-term survival of the Big Bird lineage, but it has the potential to become a success, and it provides a beautiful example of one way in which speciation occurs,” said Andersson. “Charles Darwin would have been excited to read this paper.”

Here’s a link to and a citation for the paper,

Rapid hybrid speciation in Darwin’s finches by Sangeet Lamichhaney, Fan Han, Matthew T. Webster, Leif Andersson, B. Rosemary Grant, Peter R. Grant. Science 23 Nov 2017: eaao4593 DOI: 10.1126/science.aao4593

This paper is behind a paywall.

Happy weekend! And for those who love their Despacito, there’s this parody featuring three Italians in a small car (thanks again to Stacey Johnson’s blog posting),

A different type of ‘smart’ window with a new solar cell technology

I always like a ‘smart’ window story. Given my issues with summer (I don’t like the heat), anything which promises to help reduce the heat in my home at that time of year, has my vote. Unfortunately, solutions don’t seem to have made a serious impact on the marketplace. Nonetheless, there’s always hope and perhaps this development at Princeton University will be the one to break through the impasse. From a June 30, 2017 news item on ScienceDaily,

Smart windows equipped with controllable glazing can augment lighting, cooling and heating systems by varying their tint, saving up to 40 percent in an average building’s energy costs.

These smart windows require power for operation, so they are relatively complicated to install in existing buildings. But by applying a new solar cell technology, researchers at Princeton University have developed a different type of smart window: a self-powered version that promises to be inexpensive and easy to apply to existing windows. This system features solar cells that selectively absorb near-ultraviolet (near-UV) light, so the new windows are completely self-powered.

A June 30, 2017 Princeton University news release, which originated the news item, expands on the theme,

“Sunlight is a mixture of electromagnetic radiation made up of near-UV rays, visible light, and infrared energy, or heat,” said Yueh-Lin (Lynn) Loo, director of the Andlinger Center for Energy and the Environment, and the Theodora D. ’78 and William H. Walton III ’74 Professor in Engineering. “We wanted the smart window to dynamically control the amount of natural light and heat that can come inside, saving on energy cost and making the space more comfortable.”

The smart window controls the transmission of visible light and infrared heat into the building, while the new type of solar cell uses near-UV light to power the system.

“This new technology is actually smart management of the entire spectrum of sunlight,” said Loo, who is a professor of chemical and biological engineering. Loo is one of the authors of a paper, published June 30, that describes this technology, which was developed in her lab.

Because near-UV light is invisible to the human eye, the researchers set out to harness it for the electrical energy needed to activate the tinting technology.

“Using near-UV light to power these windows means that the solar cells can be transparent and occupy the same footprint of the window without competing for the same spectral range or imposing aesthetic and design constraints,” Loo added. “Typical solar cells made of silicon are black because they absorb all visible light and some infrared heat – so those would be unsuitable for this application.”

In the paper published in Nature Energy, the researchers described how they used organic semiconductors – contorted hexabenzocoronene (cHBC) derivatives – for constructing the solar cells. The researchers chose the material because its chemical structure could be modified to absorb a narrow range of wavelengths – in this case, near-UV light. To construct the solar cell, the semiconductor molecules are deposited as thin films on glass with the same production methods used by organic light-emitting diode manufacturers. When the solar cell is operational, sunlight excites the cHBC semiconductors to produce electricity.

At the same time, the researchers constructed a smart window consisting of electrochromic polymers, which control the tint, and can be operated solely using power produced by the solar cell. When near-UV light from the sun generates an electrical charge in the solar cell, the charge triggers a reaction in the electrochromic window, causing it to change from clear to dark blue. When darkened, the window can block more than 80 percent of light.

Nicholas Davy, a doctoral student in the chemical and biological engineering department and the paper’s lead author, said other researchers have already developed transparent solar cells, but those target infrared energy. However, infrared energy carries heat, so using it to generate electricity can conflict with a smart window’s function of controlling the flow of heat in or out of a building. Transparent near-UV solar cells, on the other hand, don’t generate as much power as the infrared version, but don’t impede the transmission of infrared radiation, so they complement the smart window’s task.

Davy said that the Princeton team’s aim is to create a flexible version of the solar-powered smart window system that can be applied to existing windows via lamination.

“Someone in their house or apartment could take these wireless smart window laminates – which could have a sticky backing that is peeled off – and install them on the interior of their windows,” said Davy. “Then you could control the sunlight passing into your home using an app on your phone, thereby instantly improving energy efficiency, comfort, and privacy.”

Joseph Berry, senior research scientist at the National Renewable Energy Laboratory, who studies solar cells but was not involved in the research, said the research project is interesting because the device scales well and targets a specific part of the solar spectrum.

“Integrating the solar cells into the smart windows makes them more attractive for retrofits and you don’t have to deal with wiring power,” said Berry. “And the voltage performance is quite good. The voltage they have been able to produce can drive electronic devices directly, which is technologically quite interesting.”

Davy and Loo have started a new company, called Andluca Technologies, based on the technology described in the paper, and are already exploring other applications for the transparent solar cells. They explained that the near-UV solar cell technology can also power internet-of-things sensors and other low-power consumer products.

“It does not generate enough power for a car, but it can provide auxiliary power for smaller devices, for example, a fan to cool the car while it’s parked in the hot sun,” Loo said.

Here’s a link to and a citation for the paper,

Pairing of near-ultraviolet solar cells with electrochromic windows for smart management of the solar spectrum by Nicholas C. Davy, Melda Sezen-Edmonds, Jia Gao, Xin Lin, Amy Liu, Nan Yao, Antoine Kahn, & Yueh-Lin Loo. Nature Energy 2, Article number: 17104 (2017) doi:10.1038/nenergy.2017.104 Published online: 30 June 2017

This paper is behind a paywall.

Here’s what a sample of the special glass looks like,

Graduate student Nicholas Davy holds a sample of the special window glass. (Photos by David Kelly Crow)

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment in which a program essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
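The windowed co-occurrence counting that GloVe starts from can be sketched in a few lines of Python. This shows only the raw counting step under simplifying assumptions (real GloVe weights pairs by their distance within the window and then fits word vectors to the logarithm of the counts); the function name and toy sentence are mine:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each unordered word pair appears within `window`
    words of each other -- the raw statistic a GloVe-style model builds on."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # look ahead up to `window` positions; each pair is counted once
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            pair = tuple(sorted((word, tokens[j])))
            counts[pair] += 1
    return counts

tokens = "the nurse and the teacher greet the engineer".split()
counts = cooccurrence_counts(tokens, window=3)
print(counts[("nurse", "teacher")])  # 1 -- "teacher" falls inside "nurse"'s 3-word window
```

Counts like these, accumulated over billions of words, are what let frequently co-occurring words end up with similar vector representations.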

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
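The statistic behind this target/attribute comparison is the paper's Word Embedding Association Test (WEAT): for each target word, take its mean cosine similarity to one attribute set minus its mean similarity to the other, then compare the two target sets. Here is a minimal sketch with made-up two-dimensional vectors (real embeddings have hundreds of dimensions, and the published test additionally reports an effect size and a permutation-based p-value):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """How much closer word vector w sits to attribute set A than to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_statistic(X, Y, A, B):
    """Positive when target set X leans toward attributes A (vs. B) more than Y does."""
    return sum(association(x, A, B) for x in X) - sum(association(y, A, B) for y in Y)

# Toy 2-D "embeddings": flower vectors point toward the pleasant direction, insects away.
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
unpleasant = [np.array([-1.0, 0.1]), np.array([-0.9, -0.2])]
flowers    = [np.array([0.8, 0.3]), np.array([0.7, 0.1])]
insects    = [np.array([-0.7, 0.2]), np.array([-0.8, 0.0])]

score = weat_statistic(flowers, insects, pleasant, unpleasant)
print(score > 0)  # True: flowers associate with "pleasant" relative to insects
```

Swapping in real word vectors for names, occupations, or gendered terms is what surfaces the human-like biases the study reports.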

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, they can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral, third-person pronoun, “o.” Plugged into the well-known online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug. 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Making lead look like gold (so to speak)

Apparently you can make lead ‘look’ like gold if you can get it to reflect light in the same way. From a Feb. 28, 2017 news item on Nanowerk (Note: A link has been removed),

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Transmutation has been realized in modern times, but on a minute scale using a massive particle accelerator.

Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another. A computational theory published Feb. 24 [2017] in the journal Physical Review Letters (“How to Make Distinct Dynamical Systems Appear Spectrally Identical”) demonstrates that any two systems can be made to look alike, even if just for the smallest fraction of a second.

In this context, for two objects to “look” like each other, they need to reflect light in the same way. The Princeton researchers’ method involves using light to make non-permanent changes to a substance’s molecules so that they mimic the reflective properties of another substance’s molecules. This ability could have implications for optical computing, a type of computing in which electrons are replaced by photons that could greatly enhance processing power but has proven extremely difficult to engineer. It also could be applied to molecular detection and experiments in which expensive samples could be replaced by cheaper alternatives.

A Feb. 28, 2017 Princeton University news release (also on EurekAlert) by Tien Nguyen, which originated the news item, expands on the theme (Note: Links have been removed),

“It was a big shock for us that such a general statement as ‘any two objects can be made to look alike’ could be made,” said co-author Denys Bondar, an associate research scholar in the laboratory of co-author Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry.

The Princeton researchers posited that they could control the light that bounces off a molecule or any substance by controlling the light shone on it, which would allow them to alter how it looks. This type of manipulation requires a powerful light source such as an ultrafast laser and would last for only a femtosecond, or one quadrillionth of a second. Unlike normal light sources, this ultrafast laser pulse is strong enough to interact with molecules and distort their electron cloud while not actually changing their identity.

“The light emitted by a molecule depends on the shape of its electron cloud, which can be sculptured by modern lasers,” Bondar said. Using advanced computational theory, the research team developed a method called “spectral dynamic mimicry” that allowed them to calculate the laser pulse shape, including timing and wavelength, needed to produce any desired spectral output; in other words, to make any two systems look alike.

Conversely, this spectral control could also be used to make two systems look as different from one another as possible. This differentiation, the researchers suggested, could prove valuable for applications in molecular detection, such as distinguishing toxic from safe chemicals.

Shaul Mukamel, a chemistry professor at the University of California-Irvine, said that the Princeton research is a step forward in an important and active research field called coherent control, in which light can be manipulated to control behavior at the molecular level. Mukamel, who has collaborated with the Rabitz lab but was not involved in the current work, said that the Rabitz group has had a prominent role in this field for decades, advancing technology such as quantum computing and using light to drive artificial chemical reactivity.

“It’s a very general and nice application of coherent control,” Mukamel said. “It demonstrates that you can, by shaping the optical paths, bring the molecules to do things that you want beforehand — it could potentially be very significant.”

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another, even if just for the smallest fraction of a second. The researchers are, left to right, Renan Cabrera, an associate research scholar in chemistry; Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry; associate research scholar in chemistry Denys Bondar; and graduate student Andre Campos. (Photo by C. Todd Reichart, Department of Chemistry)

Here’s a link to and a citation for the paper,

How to Make Distinct Dynamical Systems Appear Spectrally Identical by Andre G. Campos, Denys I. Bondar, Renan Cabrera, and Herschel A. Rabitz. Phys. Rev. Lett. 118, 083201 (Vol. 118, Iss. 8). DOI: https://doi.org/10.1103/PhysRevLett.118.083201 Published 24 February 2017

© 2017 American Physical Society

This paper is behind a paywall.

Brushing your way to nanofibres

The scientists are using what looks like a hairbrush to create nanofibres,

Figure 2: Brush-spinning of nanofibers. (Reprinted with permission by Wiley-VCH Verlag) [downloaded from http://www.nanowerk.com/spotlight/spotid=41398.php]

A Sept. 23, 2015 Nanowerk Spotlight article by Michael Berger provides an in-depth look at this technique (developed by a joint research team from the University of Georgia, Princeton University, and Oxford University), which could make it easier and cheaper to produce nanofibers for use in scaffolds (for tissue engineering and other applications),

Polymer nanofibers are used in a wide range of applications such as the design of new composite materials, the fabrication of nanostructured biomimetic scaffolds for artificial bones and organs, biosensors, fuel cells or water purification systems.

“The simplest method of nanofiber fabrication is direct drawing from a polymer solution using a glass micropipette,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This method however does not scale up and thus did not find practical applications. In our new work, we introduce a scalable method of nanofiber spinning named touch-spinning.”

James Cook in a Sept. 23, 2015 article for Materials Views provides a description of the technology,

A glass rod is glued to a rotating stage, whose diameter can be chosen over a wide range of a few centimeters to more than 1 m. A polymer solution is supplied, for example, from a needle of a syringe pump that faces the glass rod. The distance between the droplet of polymer solution and the tip of the glass rod is adjusted so that the glass rod contacts the polymer droplet as it rotates.

Following the initial “touch”, the polymer droplet forms a liquid bridge. As the stage rotates the bridge stretches and fiber length increases, with the diameter decreasing due to mass conservation. It was shown that the diameter of the fiber can be precisely controlled down to 40 nm by the speed of the stage rotation.
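The mass-conservation step can be made concrete: if the liquid bridge is treated as a cylinder of fixed volume, stretching its length by a factor s thins its diameter by a factor of the square root of s. A minimal sketch, with starting dimensions that are made-up illustrations rather than values from the paper:

```python
import math

def stretched_diameter(d0, l0, l):
    """Diameter of a cylindrical liquid bridge stretched from length l0
    to length l, assuming its volume (pi/4 * d^2 * l) is conserved."""
    return d0 * math.sqrt(l0 / l)

# Illustrative numbers only: a bridge starting at 10 micrometres in
# diameter, stretched 62,500-fold, thins to roughly 4e-8 m (about 40 nm).
d0 = 10e-6   # initial diameter, m (assumed for illustration)
l0 = 1e-3    # initial bridge length, m (assumed for illustration)
print(stretched_diameter(d0, l0, l0 * 62_500))
```

The square-root scaling is why the rotation speed of the stage, which sets how far the bridge is stretched per unit time, gives such fine control over the final fiber diameter.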

The method can be easily scaled-up by using a round hairbrush composed of 600 filaments.

When the rotating brush touches the surface of a polymer solution, the brush filaments draw many fibers simultaneously, producing hundreds of kilometers of fibers in minutes.

The drawn fibers are uniform since the fiber diameter depends on only two parameters: polymer concentration and speed of drawing.

Returning to Berger’s Spotlight article, there is an important benefit with this technique,

As the team points out, one important aspect of the method is the drawing of single filament fibers.

These single filament fibers can be easily wound onto spools of different shapes and dimensions so that well aligned one-directional, orthogonal or randomly oriented fiber meshes with a well-controlled average mesh size can be fabricated using this very simple method.

“Owing to simplicity of the method, our set-up could be used in any biomedical lab and facility,” notes Tokarev. “For example, a customized scaffold by size, dimensions and other morphologic characteristics can be fabricated using donor biomaterials.”

Berger’s and Cook’s articles offer more illustrations and details.

Here’s a link to and a citation for the paper,

Touch- and Brush-Spinning of Nanofibers by Alexander Tokarev, Darya Asheghal, Ian M. Griffiths, Oleksandr Trotsenko, Alexey Gruzd, Xin Lin, Howard A. Stone, and Sergiy Minko. Advanced Materials. DOI: 10.1002/adma.201502768 First published: 23 September 2015

This paper is behind a paywall.

Magnetospinning with an inexpensive magnet

The inexpensive magnet mentioned in the headline for a May 11, 2015 Nanowerk Spotlight article by Michael Berger isn't followed up until the penultimate paragraph, but it is worth the wait,

“Our method for spinning of continuous micro- and nanofibers uses a permanent revolving magnet,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This fabrication technique utilizes magnetic forces and hydrodynamic features of stretched threads to produce fine nanofibers.”

“The new method provides excellent control over the fiber diameter and is compatible with a range of polymeric materials and polymer composite materials including biopolymers,” notes Tokarev. “Our research showcases this new technique and demonstrates its advantages to the scientific community.”

Electrospinning is currently the most popular method for producing nanofibers in labs; the total cost of a laboratory electrospinning system is above $10,000. In contrast, no special equipment is needed for magnetospinning. Owing to its simplicity and low cost, a magnetospinning set-up could be installed in any non-specialized laboratory, allowing broader use of magnetospun nanofibers in different methods and technologies. It is possible to build a magnetospinning set-up, such as the one the University of Georgia team uses, with just a $30 rotating motor and a $5 permanent magnet. [emphasis mine]

Berger’s article references a recent paper published by the team,

Magnetospinning of Nano- and Microfibers by Alexander Tokarev, Oleksandr Trotsenko, Ian M. Griffiths, Howard A. Stone, and Sergiy Minko. Advanced Materials. DOI: 10.1002/adma.201500374 First published: 8 May 2015

This paper is behind a paywall.

* The headline originally stated that a ‘fridge’ magnet was used. Researcher Alexander Tokarev kindly dropped by to correct this misunderstanding on my part, and the headline was changed to read ‘inexpensive magnet’ on May 14, 2015 at approximately 1400 hours PDT.