Tag Archives: University of Rochester

Mini T-shirt demonstrates photosynthetic living materials

Caption: A mini T-shirt demonstrates the photosynthetic living materials created in the lab of University of Rochester biologist Anne S. Meyer and Delft University of Technology bionanoscientist Marie-Eve Aubin-Tam using 3D printers and a new bioink technique. Credit: University of Rochester photo

I’m not sure how I feel about a T-shirt, regardless of size, made of living biological material, but these researchers seem uniformly enthusiastic. From a May 3, 2021 news item on phys.org (Note: A link has been removed),

Living materials, which are made by housing biological cells within a non-living matrix, have gained popularity in recent years as scientists recognize that often the most robust materials are those that mimic nature.

For the first time, an international team of researchers from the University of Rochester [located in New York state, US] and Delft University of Technology in the Netherlands used 3D printers and a novel bioprinting technique to print algae into living, photosynthetic materials that are tough and resilient. The material has a variety of applications in the energy, medical, and fashion sectors. The research is published in the journal Advanced Functional Materials.

An April 30, 2021 University of Rochester news release (also on EurekAlert but published May 3, 2021) by Lindsey Valich, which originated the news item, delves further into the topic of living materials,

“Three-dimensional printing is a powerful technology for fabrication of living functional materials that have a huge potential in a wide range of environmental and human-based applications,” says Srikkanth Balasubramanian, a postdoctoral research associate at Delft and the first author of the paper. “We provide the first example of an engineered photosynthetic material that is physically robust enough to be deployed in real-life applications.”

HOW TO BUILD NEW MATERIALS: LIVING AND NONLIVING COMPONENTS

To create the photosynthetic materials, the researchers began with non-living bacterial cellulose–an organic compound that is produced and excreted by bacteria. Bacterial cellulose has many unique mechanical properties, including its flexibility, toughness, strength, and ability to retain its shape, even when twisted, crushed, or otherwise physically distorted.

The bacterial cellulose is like the paper in a printer, while living microalgae act as the ink. The researchers used a 3D printer to deposit living algae onto the bacterial cellulose.

The combination of living (microalgae) and nonliving (bacterial cellulose) components resulted in a unique material that has the photosynthetic quality of the algae and the robustness of the bacterial cellulose; the material is tough and resilient while also eco-friendly, biodegradable, and simple and scalable to produce. The plant-like nature of the material means it can use photosynthesis to “feed” itself over periods of many weeks, and it is also able to be regenerated–a small sample of the material can be grown on-site to make more materials.

ARTIFICIAL LEAVES, PHOTOSYNTHETIC SKINS, AND BIO-GARMENTS

The unique characteristics of the material make it an ideal candidate for a variety of applications, including new products such as artificial leaves, photosynthetic skins, or photosynthetic bio-garments.

Artificial leaves are materials that mimic actual leaves in that they use sunlight to convert water and carbon dioxide–a major driver of climate change–into oxygen and energy, much like leaves during photosynthesis. The leaves store energy in chemical form as sugars, which can then be converted into fuels. Artificial leaves therefore offer a way to produce sustainable energy in places where plants don’t grow well, including outer space colonies. The artificial leaves produced by the researchers at Delft and Rochester are additionally made from eco-friendly materials, in contrast to most artificial leaf technologies currently in production, which are produced using toxic chemical methods.
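For reference, the chemistry the release is paraphrasing is the textbook summary reaction for oxygenic photosynthesis, with light supplying the energy:

\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ h\nu\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]

The glucose on the right is the stored chemical energy the release mentions, and the oxygen is the byproduct the photosynthetic skins described below would exploit.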

“For artificial leaves, our materials are like taking the ‘best parts’ of plants–the leaves–which can create sustainable energy, without needing to use resources to produce parts of plants–the stems and the roots–that need resources but don’t produce energy,” says Anne S. Meyer, an associate professor of biology at Rochester. “We are making a material that is only focused on the sustainable production of energy.”

Another application of the material would be photosynthetic skins, which could be used for skin grafts, Meyer says. “The oxygen generated would help to kick-start healing of the damaged area, or it might be able to carry out light-activated wound healing.”

Besides offering sustainable energy and medical treatments, the materials could also change the fashion sector. Bio-garments made from algae would address some of the negative environmental effects of the current textile industry in that they would be high-quality fabrics that would be sustainably produced and completely biodegradable. They would also work to purify the air by removing carbon dioxide through photosynthesis and would not need to be washed as often as conventional garments, reducing water usage.

“Our living materials are promising because they can survive for several days with no access to water or nutrients, and the material itself can be used as a seed to grow new living materials,” says Marie-Eve Aubin-Tam, an associate professor of bionanoscience at Delft. “This opens the door to applications in remote areas, even in space, where the material can be seeded on site.”

Here’s a link to and a citation for the paper,

Bioprinting of Regenerative Photosynthetic Living Materials by Srikkanth Balasubramanian, Kui Yu, Anne S. Meyer, Elvin Karana, and Marie-Eve Aubin-Tam. Advanced Functional Materials. DOI: https://doi.org/10.1002/adfm.202011162 First published: 29 April 2021

This paper is open access.

The researchers have provided this artistic impression of 3D printing of living (microalgae) and nonliving materials (bacterial cellulose),

An artist’s illustration demonstrates how 3D printed materials could be applied as durable, living clothing. (Lizah van der Aart illustration)

Bacteria and graphene oxide as a basis for producing computers

A July 10, 2019 news item on ScienceDaily announces a more environmentally friendly way to produce graphene, which could lead to greener devices such as computers,

In order to create new and more efficient computers, medical devices, and other advanced technologies, researchers are turning to nanomaterials: materials manipulated on the scale of atoms or molecules that exhibit unique properties.

Graphene — a flake of carbon as thin as a single layer of atoms — is a revolutionary nanomaterial due to its ability to easily conduct electricity, as well as its extraordinary mechanical strength and flexibility. However, a major hurdle in adopting it for everyday applications is producing graphene at a large scale, while still retaining its amazing properties.

In a paper published in the journal ChemistryOpen, Anne S. Meyer, an associate professor of biology at the University of Rochester [New York state, US], and her colleagues at Delft University of Technology in the Netherlands describe a way to overcome this barrier. The researchers outline their method to produce graphene materials using a novel technique: mixing oxidized graphite with bacteria. Their method is a more cost-efficient, time-saving, and environmentally friendly way of producing graphene materials versus those produced chemically, and could lead to the creation of innovative computer technologies and medical equipment.

A July 10, 2019 University of Rochester news release (also on EurekAlert), which originated the news item, provides details as to how this new technique for extracting graphene differs from the technique currently used,

Graphene is extracted from graphite, the material found in an ordinary pencil. At exactly one atom thick, graphene is the thinnest–yet strongest–two-dimensional material known to researchers. Scientists from the University of Manchester in the United Kingdom were awarded the 2010 Nobel Prize in Physics for their discovery of graphene; however, their method of using sticky tape to make graphene yielded only small amounts of the material.

“For real applications you need large amounts,” Meyer says. “Producing these bulk amounts is challenging and typically results in graphene that is thicker and less pure. This is where our work came in.”

In order to produce larger quantities of graphene materials, Meyer and her colleagues started with a vial of graphite. They exfoliated the graphite–shedding the layers of material–to produce graphene oxide (GO), which they then mixed with the bacteria Shewanella. They let the beaker of bacteria and precursor materials sit overnight, during which time the bacteria reduced the GO to a graphene material.

“Graphene oxide is easy to produce, but it is not very conductive due to all of the oxygen groups in it,” Meyer says. “The bacteria remove most of the oxygen groups, which turns it into a conductive material.”

While the bacterially produced graphene material created in Meyer’s lab is conductive, it is also thinner and more stable than graphene produced chemically. It can additionally be stored for longer periods of time, making it well suited for a variety of applications, including field-effect transistor (FET) biosensors and conducting ink. FET biosensors are devices that detect biological molecules and could be used to perform, for example, real-time glucose monitoring for diabetics.

“When biological molecules bind to the device, they change the conductance of the surface, sending a signal that the molecule is present,” Meyer says. “To make a good FET biosensor you want a material that is highly conductive but can also be modified to bind to specific molecules.” Graphene oxide that has been reduced is an ideal material because it is lightweight and very conductive, but it typically retains a small number of oxygen groups that can be used to bind to the molecules of interest.

The bacterially produced graphene material could also be the basis for conductive inks, which could, in turn, be used to make faster and more efficient computer keyboards, circuit boards, or small wires such as those used to defrost car windshields. Using conductive inks is an “easier, more economical way to produce electrical circuits, compared to traditional techniques,” Meyer says. Conductive inks could also be used to produce electrical circuits on top of nontraditional materials like fabric or paper.

“Our bacterially produced graphene material will lead to far better suitability for product development,” Meyer says. “We were even able to develop a technique of ‘bacterial lithography’ to create graphene materials that were only conductive on one side, which can lead to the development of new, advanced nanocomposite materials.”

Here’s a link to and a citation for the paper,

Creation of Conductive Graphene Materials by Bacterial Reduction Using Shewanella Oneidensis by Benjamin A. E. Lehner, Vera A. E. C. Janssen, Ewa M. Spiesz, Dominik Benz, Stan J. J. Brouns, Anne S. Meyer, and Herre S. J. van der Zant. ChemistryOpen Volume 8, Issue 7, July 2019, Pages 888-895. DOI: https://doi.org/10.1002/open.201900186 First published: 04 July 2019

As you would expect given the journal’s title, this paper is open access.

Crowdsourcing brain research at Princeton University to discover 6 new neuron types


There were already a quarter-million registered players as of May 17, 2018, but I’m sure there’s room for more should you be inspired. A May 17, 2018 Princeton University news release (also on EurekAlert) reveals more about the game and about the neurons,

With the help of a quarter-million video game players, Princeton researchers have created and shared detailed maps of more than 1,000 neurons — and they’re just getting started.

“Working with Eyewirers around the world, we’ve made a digital museum that shows off the intricate beauty of the retina’s neural circuits,” said Sebastian Seung, the Evnin Professor in Neuroscience and a professor of computer science and the Princeton Neuroscience Institute (PNI). The related paper is publishing May 17 [2018] in the journal Cell.

Seung is unveiling the Eyewire Museum, an interactive archive of neurons available to the general public and neuroscientists around the world, including the hundreds of researchers involved in the federal Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

“This interactive viewer is a huge asset for these larger collaborations, especially among people who are not physically in the same lab,” said Amy Robinson Sterling, a crowdsourcing specialist with PNI and the executive director of Eyewire, the online gaming platform for the citizen scientists who have created this data set.

“This museum is something like a brain atlas,” said Alexander Bae, a graduate student in electrical engineering and one of four co-first authors on the paper. “Previous brain atlases didn’t have a function where you could visualize by individual cell, or a subset of cells, and interact with them. Another novelty: Not only do we have the morphology of each cell, but we also have the functional data, too.”

The neural maps were developed by Eyewirers, members of an online community of video game players who have devoted hundreds of thousands of hours to painstakingly piecing together these neural cells, using data from a mouse retina gathered in 2009.

Eyewire pairs machine learning with gamers who trace the twisting and branching paths of each neuron. Humans are better at visually identifying the patterns of neurons, so every player’s moves are recorded and checked against each other by advanced players and Eyewire staffers, as well as by software that is improving its own pattern recognition skills.
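The release doesn’t spell out the cross-checking mechanics, but the underlying consensus idea is simple to sketch. Here’s a minimal, hypothetical Python example (the function and threshold are my own illustration, not Eyewire’s actual pipeline) that keeps only the voxels a majority of players agreed on:

```python
# Hypothetical consensus check, not Eyewire's actual code: keep only the
# voxels that enough independent players selected while tracing a cube.
from collections import Counter

def consensus_voxels(player_traces, min_fraction=0.5):
    """Return voxel IDs selected by at least min_fraction of players."""
    counts = Counter()
    for trace in player_traces:       # one set of voxel IDs per player
        counts.update(set(trace))     # count each voxel once per player
    threshold = min_fraction * len(player_traces)
    return {voxel for voxel, n in counts.items() if n >= threshold}

# Three players trace overlapping regions of the same cube.
traces = [{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 6}]
print(consensus_voxels(traces))  # {2, 3, 4}: at least 2 of 3 players agree
```

In the real system, the release notes, agreement is weighted by experience (advanced players and staffers check others’ work), which a production version would fold into the threshold.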

Since Eyewire’s launch in 2012, more than 265,000 people have signed onto the game, and they’ve collectively colored in more than 10 million 3-D “cubes,” resulting in the mapping of more than 3,000 neural cells, of which about a thousand are displayed in the museum.

Each cube is a tiny subset of a single cell, about 4.5 microns across, so a 10-by-10 block of cubes would be the width of a human hair. Every cell is reviewed by between 5 and 25 gamers before it is accepted into the system as complete.
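The arithmetic in that hair comparison is easy to verify:

\[ 10 \times 4.5\,\mu\mathrm{m} = 45\,\mu\mathrm{m}, \]

which sits comfortably within the roughly 20-180 micron diameter range usually quoted for human hair.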

“Back in the early years it took weeks to finish a single cell,” said Sterling. “Now players complete multiple neurons per day.” The Eyewire user experience stays focused on the larger mission — “For science!” is a common refrain — but it also replicates a typical gaming environment, with achievement badges, a chat feature to connect with other players and technical support, and the ability to unlock privileges with increasing skill. “Our top players are online all the time — easily 30 hours a week,” Sterling said.

Dedicated Eyewirers have also contributed in other ways, including donating the swag that gamers win during competitions and writing program extensions “to make game play more efficient and more fun,” said Sterling, including profile histories, maps of player activity, a top 100 leaderboard and ever-increasing levels of customizability.

“The community has really been the driving force behind why Eyewire has been successful,” Sterling said. “You come in, and you’re not alone. Right now, there are 43 people online. Some of them will be admins from Boston or Princeton, but most are just playing — now it’s 46.”

For science!

With 100 billion neurons linked together via trillions of connections, the brain is immeasurably complex, and neuroscientists are still assembling its “parts list,” said Nicholas Turner, a graduate student in computer science and another of the co-first authors. “If you know what parts make up the machine you’re trying to break apart, you’re set to figure out how it all works,” he said.

The researchers have started by tackling Eyewire-mapped ganglion cells from the retina of a mouse. “The retina doesn’t just sense light,” Seung said. “Neural circuits in the retina perform the first steps of visual perception.”

The retina grows from the same embryonic tissue as the brain, and while much simpler than the brain, it is still surprisingly complex, Turner said. “Hammering out these details is a really valuable effort,” he said, “showing the depth and complexity that exists in circuits that we naively believe are simple.”

The researchers’ fundamental question is identifying exactly how the retina works, said Bae. “In our case, we focus on the structural morphology of the retinal ganglion cells.”

“Why the ganglion cells of the eye?” asked Shang Mu, an associate research scholar in PNI and fellow first author. “Because they’re the connection between the retina and the brain. They’re the only cell class that go back into the brain.” Different types of ganglion cells are known to compute different types of visual features, which is one reason the museum has linked shape to functional data.

Using Eyewire-produced maps of 396 ganglion cells, the researchers in Seung’s lab successfully classified these cells more thoroughly than has ever been done before.

“The number of different cell types was a surprise,” said Mu. “Just a few years ago, people thought there were only 15 to 20 ganglion cell types, but we found more than 35 — we estimate between 35 and 50 types.”

Of those, six appear to be novel, in that the researchers could not find any matching descriptions in a literature search.

A brief scroll through the digital museum reveals just how remarkably flat the neurons are — nearly all of the branching takes place along a two-dimensional plane. Seung’s team discovered that different cells grow along different planes, with some reaching high above the nucleus before branching out, while others spread out close to the nucleus. Their resulting diagrams resemble a rainforest, with ground cover, an understory, a canopy and an emergent layer overtopping the rest.

All of these are subdivisions of the inner plexiform layer, one of the five previously recognized layers of the retina. The researchers also identified a “density conservation principle” that they used to distinguish types of neurons.

One of the biggest surprises of the research project has been the extraordinary richness of the original sample, said Seung. “There’s a little sliver of a mouse retina, and almost 10 years later, we’re still learning things from it.”

Of course, it’s a mouse’s brain that you’ll be examining, and while there are differences between a mouse brain and a human brain, mouse brains still provide valuable data, as they did in the case of some groundbreaking research published in October 2017. James Hamblin wrote about it in an Oct. 7, 2017 article for The Atlantic (Note: Links have been removed),


Scientists Somehow Just Discovered a New System of Vessels in Our Brains

It is unclear what they do—but they likely play a central role in aging and disease.

Caption: A transparent model of the brain with a network of vessels filled in. Credit: Daniel Reich / National Institute of Neurological Disorders and Stroke

You are now among the first people to see the brain’s lymphatic system. The vessels in the photo above transport fluid that is likely crucial to metabolic and inflammatory processes. Until now, no one knew for sure that they existed.

Doctors practicing today have been taught that there are no lymphatic vessels inside the skull. Those deep-purple vessels were seen for the first time in images published this week by researchers at the U.S. National Institute of Neurological Disorders and Stroke.

In the rest of the body, the lymphatic system collects and drains the fluid that bathes our cells, in the process exporting their waste. It also serves as a conduit for immune cells, which go out into the body looking for adversaries and learning how to distinguish self from other, and then travel back to lymph nodes and organs through lymphatic vessels.

So how was it even conceivable that this process wasn’t happening in our brains?

Reich (Daniel Reich, senior investigator) started his search in 2015, after a major study in Nature reported a similar conduit for lymph in mice. The University of Virginia team wrote at the time, “The discovery of the central-nervous-system lymphatic system may call for a reassessment of basic assumptions in neuroimmunology.” The study was regarded as a potential breakthrough in understanding how neurodegenerative disease is associated with the immune system.

Around the same time, researchers discovered fluid in the brains of mice and humans that would become known as the “glymphatic system.” [emphasis mine] It was described by a team at the University of Rochester in 2015 as not just the brain’s “waste-clearance system,” but as potentially helping fuel the brain by transporting glucose, lipids, amino acids, and neurotransmitters. Although since “the central nervous system completely lacks conventional lymphatic vessels,” the researchers wrote at the time, it remained unclear how this fluid communicated with the rest of the body.

There are occasional references to the idea of a lymphatic system in the brain in historic literature. Two centuries ago, the anatomist Paolo Mascagni made full-body models of the lymphatic system that included the brain, though this was dismissed as an error. [emphases mine]  A historical account in The Lancet in 2003 read: “Mascagni was probably so impressed with the lymphatic system that he saw lymph vessels even where they did not exist—in the brain.”

I couldn’t resist the reference to someone whose work had been dismissed summarily being proved right, eventually, and with the help of mouse brains. Do read Hamblin’s article in its entirety if you have time as these excerpts don’t do it justice.

Getting back to Princeton’s research, here’s their research paper,

“Digital museum of retinal ganglion cells with dense anatomy and physiology,” by Alexander Bae, Shang Mu, Jinseop Kim, Nicholas Turner, Ignacio Tartavull, Nico Kemnitz, Chris Jordan, Alex Norton, William Silversmith, Rachel Prentki, Marissa Sorek, Celia David, Devon Jones, Doug Bland, Amy Sterling, Jungman Park, Kevin Briggman, Sebastian Seung and the Eyewirers, was published May 17 [2018] in the journal Cell with DOI 10.1016/j.cell.2018.04.040.

The research was supported by the Gatsby Charitable Foundation, National Institutes of Health-National Institute of Neurological Disorders and Stroke (U01NS090562 and 5R01NS076467), Defense Advanced Research Projects Agency (HR0011-14-2-0004), Army Research Office (W911NF-12-1-0594), Intelligence Advanced Research Projects Activity (D16PC00005), KT Corporation, Amazon Web Services Research Grants, Korea Brain Research Institute (2231-415) and Korea National Research Foundation Brain Research Program (2017M3C7A1048086).

This paper is behind a paywall. For the players amongst us, here’s the Eyewire website. Go forth, play, and, maybe, discover new neurons!

Unbreakable encrypted message with key that’s shorter than the message

A Sept. 5, 2016 University of Rochester (NY state, US) news release (also on EurekAlert) makes an intriguing announcement,

Researchers at the University of Rochester have moved beyond the theoretical in demonstrating that an unbreakable encrypted message can be sent with a key that’s far shorter than the message—the first time that has ever been done.

Until now, unbreakable encrypted messages were transmitted via a system envisioned by American mathematician Claude Shannon, considered the “father of information theory.” Shannon combined his knowledge of algebra and electrical circuitry to come up with a binary system of transmitting messages that are secure under three conditions: the key is random, is used only once, and is at least as long as the message itself.
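Shannon’s three conditions can be compressed into one inequality. For a cipher with perfect secrecy, meaning the ciphertext C reveals nothing about the message M, the key K must carry at least as much entropy as the message:

\[ I(M;C) = 0 \quad\Longrightarrow\quad H(K) \ge H(M). \]

This is precisely the “at least as long as the message” requirement described above, and it is the classical bound the Rochester experiment gets around.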

The findings by Daniel Lum, a graduate student in physics, and John Howell, a professor of physics, have been published in the journal Physical Review A.

“Daniel’s research amounts to an important step forward, not just for encryption, but for the field of quantum data locking,” said Howell.

Quantum data locking is a method of encryption advanced by Seth Lloyd, a professor of quantum information at Massachusetts Institute of Technology, that uses photons—the smallest particles associated with light—to carry a message. Quantum data locking was thought to have limitations for securely encrypting messages, but Lloyd figured out how to make additional assumptions—namely those involving the boundary between light and matter—to make it a more secure method of sending data. While a binary system allows for only an on or off position with each bit of information, photon waves can be altered in many more ways: the angle of tilt can be changed, the wavelength can be made longer or shorter, and the size of the amplitude can be modified. Since a photon has more variables—and there are fundamental uncertainties when it comes to quantum measurements—the quantum key for encrypting and deciphering a message can be shorter than the message itself.

Lloyd’s system remained theoretical until this year, when Lum and his team developed a device—a quantum enigma machine—that would put the theory into practice. The device takes its name from the encryption machine used by Germany during World War II, which employed a coding method that the British and Polish intelligence agencies were secretly able to crack.

Let’s assume that Alice wants to send an encrypted message to Bob. She uses the machine to generate photons that travel through free space and into a spatial light modulator (SLM) that alters the properties of the individual photons (e.g., amplitude, tilt) to properly encode the message into flat but tilted wavefronts that can be focused to unique points dictated by the tilt. But the SLM does one more thing: it distorts the shapes of the photons into random patterns, such that the wavefront is no longer flat, which means it no longer has a well-defined focus. Alice and Bob both know the keys which identify the implemented scrambling operations, so Bob is able to use his own SLM to flatten the wavefront, re-focus the photons, and translate the altered properties into the distinct elements of the message.
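To make the shared-key scramble/unscramble structure of that protocol concrete, here is a deliberately classical toy in Python (my own illustration; it captures only the bookkeeping, since the actual security of quantum data locking comes from quantum measurement, which no classical program can reproduce):

```python
# Classical toy of the scramble/unscramble bookkeeping (my illustration).
# Real quantum data locking gets its security from quantum measurement;
# this sketch only shows how a shared key drives matching operations.
import random

def scramble(bits, key_seed):
    rng = random.Random(key_seed)      # the shared key seeds the operations
    perm = list(range(len(bits)))
    rng.shuffle(perm)                  # Alice's SLM "distorts the wavefront"
    return [bits[i] for i in perm]

def unscramble(scrambled, key_seed):
    rng = random.Random(key_seed)      # Bob re-derives the same operations
    perm = list(range(len(scrambled)))
    rng.shuffle(perm)
    out = [0] * len(scrambled)
    for new_pos, old_pos in enumerate(perm):
        out[old_pos] = scrambled[new_pos]   # invert the permutation
    return out

message = [1, 0, 1, 1, 0, 0]
locked = scramble(message, key_seed=42)
print(unscramble(locked, key_seed=42) == message)  # True
```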

Along with modifying the shape of the photons, Lum and the team made use of the uncertainty principle, which states that the more we know about one property of a particle, the less we know about another of its properties. Because of that, the researchers were able to securely lock in six bits of classical information using only one bit of an encryption key—an operation called data locking.
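Put against Shannon’s bound, the improvement is easy to quantify: a classical one-time pad needs at least one key bit per message bit, whereas the data-locking demonstration used

\[ \frac{1\ \text{key bit}}{6\ \text{message bits}} \approx 0.17\ \text{key bits per message bit}, \]

a six-fold reduction that no classical cipher meeting Shannon’s conditions can match.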

“While our device is not 100 percent secure, due to photon loss,” said Lum, “it does show that data locking in message encryption is far more than a theory.”

The ultimate goal of the quantum enigma machine is to prevent a third party—for example, someone named Eve—from intercepting and deciphering the message. A crucial principle of quantum theory is that the mere act of measuring a quantum system changes the system. As a result, Eve has only one shot at obtaining and translating the encrypted message—something that is virtually impossible, given the nearly limitless number of patterns that exist for each photon.

The paper by Lum and Howell was one of two papers published simultaneously on the same topic. The other paper, “Quantum data locking,” was from a team led by Chinese physicist Jian-Wei Pan.

“It’s highly unlikely that our free-space implementation will be useful through atmospheric conditions,” said Lum. “Instead, we have identified the use of optic fiber as a more practical route for data locking, a path Pan’s group actually started with. Regardless, the field is still in its infancy with a great deal more research needed.”

Here’s a link to and a citation for the paper,

Quantum enigma machine: Experimentally demonstrating quantum data locking by Daniel J. Lum, John C. Howell, M. S. Allman, Thomas Gerrits, Varun B. Verma, Sae Woo Nam, Cosmo Lupo, and Seth Lloyd. Phys. Rev. A, Vol. 94, Iss. 2 — August 2016 DOI: http://dx.doi.org/10.1103/PhysRevA.94.022315

©2016 American Physical Society

This paper is behind a paywall.

There is an earlier open access version of the paper by the Chinese researchers on arXiv.org,

Experimental quantum data locking by Yang Liu, Zhu Cao, Cheng Wu, Daiji Fukuda, Lixing You, Jiaqiang Zhong, Takayuki Numata, Sijing Chen, Weijun Zhang, Sheng-Cai Shi, Chao-Yang Lu, Zhen Wang, Xiongfeng Ma, Jingyun Fan, Qiang Zhang, Jian-Wei Pan. arXiv:1605.04030 [quant-ph]

The Chinese team’s later version of the paper is available here,

Experimental quantum data locking by Yang Liu, Zhu Cao, Cheng Wu, Daiji Fukuda, Lixing You, Jiaqiang Zhong, Takayuki Numata, Sijing Chen, Weijun Zhang, Sheng-Cai Shi, Chao-Yang Lu, Zhen Wang, Xiongfeng Ma, Jingyun Fan, Qiang Zhang, and Jian-Wei Pan. Phys. Rev. A, Vol. 94, Iss. 2 — August 2016 DOI: http://dx.doi.org/10.1103/PhysRevA.94.020301

©2016 American Physical Society

This version is behind a paywall.

Getting back to the folks at the University of Rochester, they have provided this image to illustrate their work,

The quantum enigma machine developed by researchers at the University of Rochester, MIT, and the National Institute of Standards and Technology. (Image by Daniel Lum/University of Rochester)

2015 daguerreotype exhibit follows problematic 2005 show

In 2005, curators had a horrifying experience when historical images (daguerreotypes) began deteriorating as the 150-year-old images were being displayed in an exhibit titled “Young America.” Some 25 of the photographs were affected, five of them sustaining critical damage. The debacle occasioned a research project involving conservators, physicists, and nanotechnology (see my Jan. 10, 2013 posting for more about the 2005 exhibit and resulting research project).

A new daguerreotype exhibit currently taking place showcases the results of that research according to a Nov. 13, 2015 University of Rochester news release,

In 1839, Louis-Jacques-Mandé Daguerre unveiled one of the world’s first successful photographic mediums: the daguerreotype. The process transformed the human experience by providing a means to capture light and record people, places, and events. The University of Rochester is leading groundbreaking nanotechnology research that explores the extraordinary qualities of this photographic process. A new exhibition in Rush Rhees Library showcases the results of this research, while bridging the gap between the sciences and the humanities. …

… From 2010 to 2014, a National Science Foundation grant supported nanotechnology research conducted by two University of Rochester scientists—Nicholas Bigelow, Lee A. DuBridge Professor of Physics, and Ralph Wiegandt, visiting research scientist and conservator—who explored how the environment impacts the survival of these unique, non-reproducible images. In addition to conservation science and cultural research, Bigelow and Wiegandt are also investigating ways in which the chemical and physical processes used to create daguerreotypes can influence modern nanofabrication and nanotechnology.

“The daguerreotype should be considered one of humankind’s most disruptive technological advances,” Bigelow and Wiegandt said. “Not only was it the first successful imaging medium, it was also the first truly engineered nanotechnology. The daguerreotype was a prescient catalyst to the ensuing cascade of discoveries in physics and chemistry over the latter half of the 19th century and into the 20th.”

Blending the past with the future, the exhibition displays the first known daguerreotype of a Rochester graduating class (1853) alongside a 2015 daguerreotype of current University President Joel Seligman, created by Rochester daguerreotypist Irving Pobboravsky.

Both Bigelow and Wiegandt are mentioned in the 2013 posting describing the research project’s inception.

For anyone who’s in the area of New York state where the University of Rochester is located, the exhibit will run until February 29, 2016 in the Friedlander Lobby of Rush Rhees Library.  Plus, there’s this from the news release,

A special presentation about the scientific advances surrounding the daguerreotype and their relationship to cultural preservation will be led by Bigelow, Wiegandt, and Jim Kuhn, assistant dean for Special Collections and Preservation, on December 14 from 7-9 p.m. in the Hawkins-Carlson Room of Rush Rhees Library. For more information visit: http://www.library.rochester.edu/event/daguerreotype-exhibition or call (585).

There’s no indication that the special presentation will be livestreamed or recorded and made available at a later date.

Quantum and classical physics may be closer than we thought

It seems that a key theory about the boundary between the quantum world and our own macro world has been disproved, and I think the July 21, 2015 news item on Nanotechnology Now says it better,

Quantum theory is one of the great achievements of 20th century science, yet physicists have struggled to find a clear boundary between our everyday world and what Albert Einstein called the “spooky” features of the quantum world, including cats that could be both alive and dead, and photons that can communicate with each other across space instantaneously.

For the past 60 years, the best guide to that boundary has been a theorem called Bell’s Inequality, but now a new paper shows that Bell’s Inequality is not the guidepost it was believed to be, which means that as the world of quantum computing brings quantum strangeness closer to our daily lives, we understand the frontiers of that world less well than scientists have thought.
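For readers who want the formula, the laboratory workhorse version of Bell’s Inequality is the CHSH form. Writing E(a,b) for the measured correlation between two detectors at settings a and b, any local “classical” theory must obey

\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, \]

while quantum mechanics permits values of |S| up to 2\sqrt{2}. The surprise reported below is that a suitably prepared classical light beam can also break the |S| \le 2 bound.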

In the new paper, published in the July 20 [2015] edition of Optica, University of Rochester [New York state, US] researchers show that a classical beam of light that would be expected to obey Bell’s Inequality can fail this test in the lab, if the beam is properly prepared to have a particular feature: entanglement.

A July 21, 2015 University of Rochester news release, which originated the news item, reveals more about the boundary and the research,

Not only does Bell’s test not serve to define the boundary, the new findings don’t push the boundary deeper into the quantum realm but do just the opposite. They show that some features of the real world must share a key ingredient of the quantum domain. This key ingredient is called entanglement, exactly the feature of quantum physics that Einstein labeled as spooky. According to Joseph Eberly, professor of physics and one of the paper’s authors, it now appears that Bell’s test only distinguishes those systems that are entangled from those that are not. It does not distinguish whether they are “classical” or quantum. In the forthcoming paper the Rochester researchers explain how entanglement can be found in something as ordinary as a beam of light.

Eberly explained that “it takes two to tangle.” For example, think about two hands clapping regularly. What you can be sure of is that when the right hand is moving to the right, the left hand is moving to the left, and vice versa. But if you were asked to guess without listening or looking whether at some moment the right hand was moving to the right, or maybe to the left, you wouldn’t know. But you would still know that whatever the right hand was doing at that time, the left hand would be doing the opposite. The ability to know for sure about a common property without knowing anything for sure about an individual property is the essence of perfect entanglement.
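In quantum notation, the hands-clapping picture corresponds to a maximally entangled two-particle state such as the singlet,

\[ |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\bigr), \]

in which neither particle on its own has a definite orientation, yet a measurement on one fixes the joint property: the two outcomes are always opposite, like the two hands.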

Eberly added that many think of entanglement as a quantum feature because “Schrödinger coined the term ‘entanglement’ to refer to his famous cat scenario.” But their experiment shows that some features of the “real” world must share a key ingredient of Schrödinger’s Cat domain: entanglement.

The existence of classical entanglement was pointed out in 1980, but Eberly explained that it didn’t seem a very interesting concept, so it wasn’t fully explored. As opposed to quantum entanglement, classical entanglement happens within one system. The effect is all local: there is no action at a distance, none of the “spookiness.”

With this result, Eberly and his colleagues have shown experimentally “that the border is not where it’s usually thought to be, and moreover that Bell’s Inequalities should no longer be used to define the boundary.”

Here’s a link to and a citation for the paper,

Shifting the quantum-classical boundary: theory and experiment for statistically classical optical fields by Xiao-Feng Qian, Bethany Little, John C. Howell, and J. H. Eberly. Optica Vol. 2, Issue 7, pp. 611-615 (2015). DOI: 10.1364/OPTICA.2.000611

This paper is open access.

Paths of desire: quantum style

Shortcuts are also called paths of desire (and other terms too) by those who loathe them. It turns out that humans and other animals are not the only ones who use shortcuts. From a July 30, 2014 news item on ScienceDaily,

Groundskeepers and landscapers hate them, but there is no fighting them. Called desire paths, social trails or goat tracks, they are the unofficial shortcuts people create between two locations when the purpose-built path doesn’t take them where they want to go.

There’s a similar concept in classical physics called the “path of least action.” If you throw a softball to a friend, the ball traces a parabola through space. It doesn’t follow a serpentine path or loop the loop because those paths have higher “actions” than the true path.
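Formally, the principle of least action says the true trajectory makes the action stationary, where the action is the time integral of the Lagrangian L = T − V (kinetic minus potential energy):

\[ S[x(t)] = \int_{t_1}^{t_2} L(x,\dot{x},t)\,dt, \qquad \delta S = 0. \]

The softball’s parabola is the stationary path; the serpentine and loop-the-loop alternatives all accumulate more action.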

A July 30, 2014 Washington University in St. Louis (Missouri, US) news release (also on EurekAlert) by Diana Lutz, which originated the news item, describes the issues associated with undertaking this research,

Quantum particles can exist in a superposition of states, yet as soon as quantum particles are “touched” by the outside world, they lose this quantum strangeness and collapse to a classically permitted state. Because of this evasiveness, it wasn’t possible until recently to observe them in their quantum state.

But in the past 20 years, physicists have devised devices that isolate quantum systems from the environment and allow them to be probed so gently that they don’t immediately collapse. With these devices, scientists can at long last follow quantum systems into quantum territory, or state space.

Kater Murch, PhD, an assistant professor of physics at Washington University in St. Louis, and collaborators Steven Weber and Irfan Siddiqi of the Quantum Nanoelectronics Laboratory at the University of California, Berkeley, have used a superconducting quantum device to continuously record the tremulous paths a quantum system took between a superposition of states to one of two classically permitted states.

Because even gentle probing makes each quantum trajectory noisy, Murch’s team repeated the experiment a million times and examined which paths were most common. The quantum equivalent of the classical “least action” path — or the quantum device’s path of desire — emerged from the resulting cobweb of many paths, just as pedestrian desire paths gradually emerge after new sod is laid.
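The release doesn’t include the analysis code, but the statistical idea, repeating a noisy process many times and reading off where the trajectories bunch up, can be mimicked with a toy Monte Carlo in Python (entirely my own construction, not the experiment’s data pipeline):

```python
# Toy illustration (not the actual experiment): average many noisy
# trajectories pinned at both endpoints to reveal the most likely path,
# the way a desire path emerges from many individual walkers.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_runs = 100, 100_000

# Brownian bridges: random walks conditioned to start at 0 and end at 1.
t = np.linspace(0.0, 1.0, n_steps)
walks = np.cumsum(rng.normal(0.0, 0.1, size=(n_runs, n_steps)), axis=1)
bridges = walks - np.outer(walks[:, -1], t) + np.outer(np.ones(n_runs), t)

# Individual runs are noisy; the ensemble average traces the likely path.
mean_path = bridges.mean(axis=0)
print(round(mean_path[0], 3), round(mean_path[-1], 3))  # ~0.0 ... 1.0
```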

The experiments, the first continuous measurements of the trajectories of a quantum system between two points, are described in the cover article of the July 31 [2014] issue of Nature.

“We are working with the simplest possible quantum system,” Murch said. “But the understanding of quantum interactions we are gaining might eventually be useful for the quantum control of biological and chemical systems.

“Chemistry at its most basic level is described by quantum mechanics,” he said. “In the past 20 years, chemists have developed a technique called quantum control, where shaped laser pulses are used to drive chemical reactions — that is, to drive them between two quantum states. The chemists control the quantum field from the laser, and that field controls the dynamics of a reaction,” he said.

“Eventually, we’ll be able to control the dynamics of chemical reactions with lasers instead of just mixing reactant 1 with reactant 2 and letting the reaction evolve on its own,” he said.

AN ARTIFICIAL ATOM

The device Murch uses to explore quantum space is a simple superconducting circuit. Because it has quantized energy levels, or states, like an atom, it is sometimes called an artificial atom. Murch’s team uses the bottom two energy levels, the ground state and an excited state, as their model quantum system.

Between these two states, there are an infinite number of quantum states that are superpositions, or combinations, of the ground and excited states. In the past, these states would have been invisible to physicists because attempts to measure them would have caused the system to immediately collapse.

But Murch’s device allows the system’s state to be probed many times before it becomes an effectively classical system. The quantum state of the circuit is detected by putting it inside a microwave box. A very small number of microwave photons are sent into the box where their quantum fields interact with the superconducting circuit.

The microwaves are so far off resonance with the circuit that they cannot drive it between its ground and its excited state. So instead of being absorbed, they leave the box bearing information about the quantum system in the form of a phase shift (the position of the troughs and peaks of the photons’ wavefunctions).
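The release doesn’t name the model, but the standard circuit-QED description of this kind of far-off-resonance probing is the dispersive interaction, in which the qubit shifts the cavity’s resonance without any energy exchange:

\[ H_{\mathrm{disp}} = \hbar\chi\,\hat{\sigma}_z\,\hat{a}^\dagger\hat{a}, \]

so the microwaves leaving the box pick up a phase shift whose sign depends on whether the circuit is in its ground or excited state. Reading out that small phase, photon by photon, is the weak measurement described next.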

Although there is information about the quantum system in the exiting microwaves, it is only a small amount of information.

“Every time we nudge the system, something different happens,” Murch said. “That’s because the photons we use to measure the quantum system are quantum mechanical as well and exhibit quantum fluctuations. So it takes many of these measurements to distinguish the system’s signal from the quantum fluctuations of the photons probing it.” Or, as physicists put it, these are weak measurements.

Murch compares these experiments to soccer matches, which are ultimately experiments to determine which team is better. But because so few goals are scored in soccer, and these are often lucky shots, the less skilled team has a good chance of winning. Or as Murch might put it, one soccer match is such a weak measurement of a team’s skill that it can’t be used to draw a statistically reliable conclusion about which team is more skilled.

Each time a team scores a goal, it becomes somewhat more likely that that team is the better team, but the teams would have to play many games or play for a very long time to know for sure. These fluctuations are what make soccer matches so exciting.

Murch is in essence able to observe millions of these matches, and from all the matches where team B wins, he can determine the most likely way a game that ends with a victory for team B will develop.

Despite the difficulties, the team did establish a path of desire,

“Before we started this experiment,” Murch said, ” I asked everybody in the lab what they thought the most likely path between quantum states would be. I drew a couple of options on the board: a straight line, a convex curve, a concave curve, a squiggly line . . . I took a poll, and we all guessed different options. Here we were, a bunch of quantum experts, and we had absolutely no intuition about the most likely path.”

Andrew N. Jordan of the University of Rochester and his students Areeya Chantasri and Justin Dressel inspired the study by devising a theory to predict the likely path. Their theory predicted that a convex curve Murch had drawn on the white board would be the correct path.

“When we looked at the data, we saw that the theorists were right. Our very clever collaborators had devised a ‘principle of least action’ that works in the quantum case,” Murch said.

They had found the quantum system’s line of desire mathematically and by calculation before many microwave photons trampled out the path in Murch’s lab.

Here’s an illustrated quantum path of desire’s experimental data,

Caption: A path of desire emerging from many trajectories between two points in quantum state space. Credit: Murch Lab/WUSTL

The University of Rochester, a collaborating institution on this research, issued a July 30, 2014 news release (also on EurekAlert) featuring this poetic allusion from one of the theorists,

Jordan [Andrew N. Jordan, professor of physics at the University of Rochester] compares the experiment to watching butterflies make their way one by one from a cage to nearby trees. “Each butterfly’s path is like a single run of the experiment,” said Jordan. “They are all starting from the same cage, the initial state, and ending in one of the trees, each being a different end state.” By watching the quantum equivalent of a million butterflies make the journey from cage to tree, the researchers were in effect able to predict the most likely path a butterfly took by observing which tree it landed on (known as post-selection in quantum physics measurements), despite the presence of a wind, or any disturbance that affects how it flies (which is similar to the effect measuring has on the system).

The theorists provided this illustration of the theory,

Caption: Measurement data showing the comparison with the ‘most likely’ path (in red) between initial and final quantum states (black dots). The measurements are shown on a representation referred to as a Bloch sphere. Credit: Areeya Chantasri. Courtesy: University of Rochester

The research study can be found here,

Mapping the optimal route between two quantum states by S. J. Weber, A. Chantasri, J. Dressel, A. N. Jordan, K. W. Murch & I. Siddiqi. Nature 511, 570–573 (31 July 2014) doi:10.1038/nature13559 Published online 30 July 2014

This paper is behind a paywall but there is a free preview via ReadCube Access.

More on US National Nanotechnology Initiative (NNI) and EHS research strategy

In my Oct. 18, 2011 posting I noted that the US National Nanotechnology Initiative (NNI) would be holding a webinar on Oct. 20, 2011 to announce an environmental, health, and safety (EHS) research strategy for federal agencies participating in the NNI. I also noted that I was unable to register for the event. Thankfully all is not lost. There are a couple of news items on Nanowerk which give some information about the research strategy. The first news item, U.S. government releases environmental, health, and safety research strategy for nanotechnology, from the NNI offers this,

The strategy identifies six core categories of research that together can contribute to the responsible development of nanotechnology: (1) Nanomaterial Measurement Infrastructure, (2) Human Exposure Assessment, (3) Human Health, (4) Environment, (5) Risk Assessment and Risk Management, and (6) Informatics and Modeling. The strategy also aims to address the various ethical, legal, and societal implications of this emerging technology. Notable elements of the 2011 NNI EHS Research Strategy include:

  • The critical role of informatics and predictive modeling in organizing the expanding nanotechnology EHS knowledge base;
  • Targeting and accelerating research through the prioritization of nanomaterials for research; the establishment of standardized measurements, terminology, and nomenclature; and the stratification of knowledge for different applications of risk assessment; and
  • Identification of best practices for the coordination and implementation of NNI interagency collaborations and industrial and international partnerships.

“The EHS Research Strategy provides guidance to all the Federal agencies that have been producing gold-standard scientific data for risk assessment and management, regulatory decision making, product use, research planning, and public outreach,” said Dr. Sally Tinkle, NNI EHS Coordinator and Deputy Director of the National Nanotechnology Coordination Office (NNCO), which coordinates activities of the 25 agencies that participate in the NNI. “This continues a trend in this Administration of increasing support for nanotechnology-related EHS research, as exemplified by new funding in 2011 from the Food and Drug Administration and the Consumer Product Safety Commission and increased funding from both the Environmental Protection Agency and the National Institute for Occupational Safety and Health within the Centers for Disease Control and Prevention.”

The other news item, Responsible development of nanotechnology: Maximizing results while minimizing risk, from Sally Tinkle, Deputy Director of the National Nanotechnology Coordination Office and Tof Carim, Assistant Director for Nanotechnology at OSTP (White House Office of Science and Technology Policy) adds this,

Core research areas addressed in the 2011 strategy include: nanomaterial measurement, human exposure assessment, human health, environment, risk assessment and management, and the new core area of predictive modeling and informatics. Also emphasized in this strategy is a more robust risk assessment component that incorporates product life cycle analysis and ethical, legal, and societal implications of nanotechnology. Most importantly, the strategy introduces principles for targeting and accelerating nanotechnology EHS research so that risk assessment and risk management decisions are based on sound science.

Progress in EHS research is occurring on many fronts as the NNI EHS research agencies have joined together to plan and fund research programs in core areas. For example, the Food and Drug Administration and National Institutes of Health have researched the safety of nanomaterials used in skin products like sunscreen; the Environmental Protection Agency and Consumer Product Safety Commission are monitoring the health and environmental impacts of products containing silver nanoparticles, and the National Institute for Occupational Safety and Health has recommended safe handling guidelines for workers in industries and laboratories.

Erwin Gianchandani of the Computing Community Consortium blog focuses, not unnaturally, on the data aspect of the research strategy in his Oct. 20, 2011 posting titled, New Nanotechnology Strategy Touts Big Data, Modeling,

From the EHS Research Strategy:

Expanding informatics capabilities will aid development, analysis, organization, archiving, sharing, and use of data that is acquired in nanoEHS research projects… Effective management of reliable, high-quality data will also help support advanced modeling and simulation capabilities in support of future nanoEHS R&D and nanotechnology-related risk management.

Research needs highlighted span “Big Data”…

Data acquisition: Improvements in data reliability and reproducibility can be effected quickly by leveraging the widespread use of wireless and video-enabled devices by the public and by standards development organizations to capture protocol detail through videos…

Data analysis: The need for sensitivity analysis in conjunction with error and uncertainty analysis is urgent for hazard and exposure estimation and the rational design of nanomaterials… Collaborative efforts in nanomaterial design [will include] curation of datasets with known uncertainties and errors, the use of sensitivity analysis to predict changes in nanomaterial properties, and the development of computational models to augment and elucidate experimental data.

Data sharing: Improved data sharing is a crucial need to accelerate progress in nanoscience by removing the barriers presented by the current “siloed” data environment. Because data must be curated by those who have the most intimate knowledge of how it was obtained and analyzed and how it will be used, a central repository to facilitate sharing is not an optimal solution. However, federating database systems through common data elements would permit rapid semantic search and transparent sharing over all associated databases, while leaving control and curation of the data in the hands of the experts. The use of nanomaterial ontologies to define those data elements together with their computer-readable logical relationships can provide a semantic search capability.

…and predictive modeling:

Predictive models and simulations: The turnaround times for the development and validation of predictive models is measured in years. Pilot websites, applications, and tools should be added to the NCN [Network for Computational Nanotechnology] to speed collaborative code development among relevant modeling and simulation disciplines, including the risk modeling community. The infrastructure should provide for collaborative code development by public and private scientists, code validation exercises, feedback through interested user communities, and the transfer of validated versions to centers such as NanoHUB… Collaborative efforts could supplement nanomaterial characterization measurements to provide more complete sensitivity information and structure-property relationships.
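The federated approach in the data-sharing passage above is easy to sketch in code. Here is a hypothetical Python illustration (class and field names are mine, not from the strategy document) of querying several independently curated databases through common data elements and merging the results, while each database keeps control of its own records:

```python
# Hypothetical sketch of federated search over independently curated
# databases that share common data elements (all names are illustrative).
from dataclasses import dataclass

@dataclass
class NanoRecord:
    material: str        # common data element: nanomaterial name
    property_name: str   # common data element: measured property
    value: float
    source_db: str       # curation stays with the originating lab

def federated_search(databases, material):
    """Send one query to every member database and merge the hits."""
    hits = []
    for records in databases.values():
        hits.extend(r for r in records if r.material == material)
    return hits

labs = {
    "lab_a": [NanoRecord("TiO2", "particle_size_nm", 21.0, "lab_a")],
    "lab_b": [NanoRecord("TiO2", "zeta_potential_mV", -30.5, "lab_b"),
              NanoRecord("Ag", "particle_size_nm", 15.0, "lab_b")],
}
for rec in federated_search(labs, "TiO2"):
    print(rec.source_db, rec.property_name, rec.value)
```

A shared ontology would standardize the common data elements (here, the field names), which is exactly the semantic-search role the strategy assigns to nanomaterial ontologies.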

Gianchandani’s post provides an unusual insight into the importance of data where research is concerned. I do recommend reading more of his post.

On his 2020 Science blog, Dr. Andrew Maynard has posted an Oct. 20, 2011 comparison of the original draft to the final report,

Given the comments received, I was interested to see how much they had influenced the final strategy.  If you take the time to comment on a federal document, it’s always nice to know that someone has paid attention.  Unfortunately, it isn’t usual practice for the federal government to respond directly to public comments, so I had the arduous task of carrying out a side by side comparison of the draft, and today’s document.

As it turns out, there are extremely few differences between the draft and the final strategy, and even fewer of these alter the substance of the document. Which means that, by and large, my assessment of the document at the beginning of the year still stands.

Perhaps the most significant changes were in chapter 6 – Risk Assessment and Risk Management Methods. The final strategy presents a substantially revised set of current research needs that more accurately and appropriately (in my opinion) reflect the current state of knowledge and uncertainty (page 66). This is accompanied by an updated analysis of current projects (page 73), and additional text on page 77 stating

“Risk communication should also be appropriately tailored to the targeted audience. As a result, different approaches may be used to communicate risk(s) by Federal and state agencies, academia, and industry stakeholders with the goal of fostering the development of an effective risk management framework.”

Andrew examines the document further,

I also compared the final strategy to public comments from Günter Oberdörster [professor of Environmental Medicine at the University of Rochester in NY state] on the draft document. I decided to do this as Günter provided some of the most specific public comments, and because he is one of the most respected experts in the field. The specificity of his comments also provided an indication of the extent to which they had been directly addressed in the final strategy.

Andrew’s post is well worth reading especially if you’ve ever made a submission to a public consultation held by your government.

The research strategy and other associated documents are now available for access and the webinar will be available for viewing at a later date. Go here.

Aside, I was a little surprised that I was unable to register to view the webinar live (I wonder if I’ll encounter the same difficulties later). It’s the first time I’ve had a problem viewing any such event hosted by a US government agency.

Supraparticles, self-assembly, uniformity, and Futurity

I’m not sure what I find more interesting: the research or the website. First, the research, from the August 25, 2011 news item on Futurity,

In another instance of forces behaving in unexpected ways at the nanoscale, scientists [at the University of Michigan] discovered that if you start with small nanoscale building blocks that are varied enough in size, the electrostatic repulsion force and van der Waals attraction force will balance each other and limit the growth of the clusters, enabling formations that are uniform in size. The findings are published in Nature Nanotechnology.

Researchers created the inorganic superclusters—technically called “supraparticles”—out of red, powdery cadmium selenide. In many ways the structures are similar to viruses. They share many attributes with the simplest forms of life, including size, shape, core-shell structure, and the abilities to both assemble and disassemble, says co-author Nicholas Kotov.

Here’s a graphic that accompanies the news item,

Under the right circumstances, basic atomic forces can be exploited to enable nanoparticles to assemble into superclusters that are uniform in size and share attributes with viruses. (Credit: T.D.Nguyen)

I’m particularly interested in that comment about the resemblance to viruses. Now on to Futurity, a science news aggregator (from the About Futurity page),

Futurity features the latest discoveries in all fields from scientists at the top universities in the US, UK, and Canada. The site, which is hosted at the University of Rochester, launched in 2009 as a way to share research news with the public.

Who is Futurity?
A consortium of participating universities manages and funds the project. The university partners are members of the Association of American Universities (AAU) and of the Russell Group. Futurity aggregates the very best research news from these top universities.

There are two universities from Canada involved, University of Toronto and McGill University.