3D picture language for mathematics

There’s a new, 3D picture language for mathematics called ‘quon’, according to a March 3, 2017 news item on phys.org,

Galileo called mathematics the “language with which God wrote the universe.” He described a picture-language, and now that language has a new dimension.

The Harvard trio of Arthur Jaffe, the Landon T. Clay Professor of Mathematics and Theoretical Science, postdoctoral fellow Zhengwei Liu, and researcher Alex Wozniakowski has developed a 3-D picture-language for mathematics with potential as a tool across a range of topics, from pure math to physics.

Though not the first pictorial language of mathematics, the new one, called quon, holds promise for being able to transmit not only complex concepts, but also vast amounts of detail in relatively simple images. …

A March 2, 2017 Harvard University news release by Peter Reuell, which originated the news item, provides more context for the research,

“It’s a big deal,” said Jacob Biamonte of the Quantum Complexity Science Initiative after reading the research. “The paper will set a new foundation for a vast topic.”

“This paper is the result of work we’ve been doing for the past year and a half, and we regard this as the start of something new and exciting,” Jaffe said. “It seems to be the tip of an iceberg. We invented our language to solve a problem in quantum information, but we have already found that this language led us to the discovery of new mathematical results in other areas of mathematics. We expect that it will also have interesting applications in physics.”

When it comes to the “language” of mathematics, humans start with the basics — by learning their numbers. As we get older, however, things become more complex.

“We learn to use algebra, and we use letters to represent variables or other values that might be altered,” Liu said. “Now, when we look at research work, we see fewer numbers and more letters and formulas. One of our aims is to replace ‘symbol proof’ by ‘picture proof.’”

The new language relies on images to convey the same information that is found in traditional algebraic equations — and in some cases, even more.

“An image can contain information that is very hard to describe algebraically,” Liu said. “It is very easy to transmit meaning through an image, and easy for people to understand what they see in an image, so we visualize these concepts and instead of words or letters can communicate via pictures.”

“So this pictorial language for mathematics can give you insights and a way of thinking that you don’t see in the usual, algebraic way of approaching mathematics,” Jaffe said. “For centuries there has been a great deal of interaction between mathematics and physics because people were thinking about the same things, but from different points of view. When we put the two subjects together, we found many new insights, and this new language can take that into another dimension.”

In their most recent work, the researchers moved their language into a more literal realm, creating 3-D images that, when manipulated, can trigger mathematical insights.

“Where before we had been working in two dimensions, we now see that it’s valuable to have a language that’s Lego-like, and in three dimensions,” Jaffe said. “By pushing these pictures around, or working with them like an object you can deform, the images can have different mathematical meanings, and in that way we can create equations.”

Among their pictorial feats, Jaffe said, are the complex equations used to describe quantum teleportation. The researchers have pictures for the Pauli matrices, which are fundamental components of quantum information protocols. This shows that the standard protocols are topological, and also leads to the discovery of new protocols.

“It turns out one picture is worth 1,000 symbols,” Jaffe said.

“We could describe this algebraically, and it might require an entire page of equations,” Liu added. “But we can do that in one picture, so it can capture a lot of information.”
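For readers curious about the algebra the pictures are compressing, here is a minimal Python/NumPy sketch (my own illustration, not from the paper) of the Pauli matrices and the standard teleportation protocol they enter into. Alice’s Bell measurement leaves Bob holding the input state scrambled by a Pauli operator, which he undoes with a matching Pauli correction,

```python
import numpy as np

# Pauli matrices -- the algebraic objects the quon pictures encode.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The Pauli algebra: each squares to the identity, and X @ Y = iZ.
assert np.allclose(X @ X, I2) and np.allclose(X @ Y, 1j * Z)

# Standard quantum teleportation, written out as matrix algebra.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                 # arbitrary one-qubit state to teleport

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
state = np.kron(psi, bell)                 # qubit 0: message; qubits 1,2: Bell pair

state = np.kron(CNOT, I2) @ state          # CNOT: qubit 0 controls qubit 1
state = np.kron(H, np.eye(4)) @ state      # Hadamard on qubit 0

# For each of Alice's four measurement outcomes (a, b), Bob holds X^b Z^a |psi>,
# and the Pauli correction Z^a X^b recovers |psi> exactly.
for a in (0, 1):
    for b in (0, 1):
        bob = state[4 * a + 2 * b : 4 * a + 2 * b + 2]  # qubit 2's amplitudes
        bob = bob / np.linalg.norm(bob)
        fixed = (np.linalg.matrix_power(Z, a)
                 @ np.linalg.matrix_power(X, b) @ bob)
        assert np.allclose(fixed, psi)     # state recovered in every branch
print("teleportation recovered |psi> in all four branches")
```

It takes roughly a page of this kind of linear algebra to verify teleportation symbolically, which is exactly the bookkeeping the quon pictures are claimed to replace with a single deformable image.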

Having found a fit with quantum information, the researchers are now exploring how their language might also be useful in a number of other subjects in mathematics and physics.

“We don’t want to make claims at this point,” Jaffe said, “but we believe and are thinking about quite a few other areas where this picture-language could be important.”

Sadly, there are no artistic images illustrating quon but this is from the paper,

An n-quon is represented by n hemispheres. We call the flat disc on the boundary of each hemisphere a boundary disc. Each hemisphere contains a neutral diagram with four boundary points on its boundary disc. The dotted box designates the internal structure that specifies the quon vector. For example, the 3-quon is represented as

Courtesy: PNAS and Harvard University

I gather the term ‘quon’ is meant to suggest quantum particles.

Here’s a link and a citation for the paper,

Quon 3D language for quantum information by Zhengwei Liu, Alex Wozniakowski, and Arthur M. Jaffe. Proceedings of the National Academy of Sciences (PNAS) March 7, 2017, vol. 114, no. 10. Published online before print February 6, 2017. DOI: 10.1073/pnas.1621345114

This paper appears to be open access.

Cyborgs (a presentation) at the American Chemical Society’s 248th meeting

There will be a plethora of chemistry news online over the next few days as the American Chemical Society’s (ACS) 248th meeting takes place in San Francisco, CA from Aug. 10 – 14, 2014. Unexpectedly, an Aug. 11, 2014 news item on Azonano highlights a meeting presentation focused on cyborgs,

No longer just fantastical fodder for sci-fi buffs, cyborg technology is bringing us tangible progress toward real-life electronic skin, prosthetics and ultraflexible circuits. Now taking this human-machine concept to an unprecedented level, pioneering scientists are working on the seamless marriage between electronics and brain signaling with the potential to transform our understanding of how the brain works — and how to treat its most devastating diseases.

An Aug. 10, 2014 ACS news release on EurekAlert provides more detail about the presentation (Note: Links have been removed),

“By focusing on the nanoelectronic connections between cells, we can do things no one has done before,” says Charles M. Lieber, Ph.D. “We’re really going into a new size regime for not only the device that records or stimulates cellular activity, but also for the whole circuit. We can make it really look and behave like smart, soft biological material, and integrate it with cells and cellular networks at the whole-tissue level. This could get around a lot of serious health problems in neurodegenerative diseases in the future.”

These disorders, such as Parkinson’s, that involve malfunctioning nerve cells can lead to difficulty with the most mundane and essential movements that most of us take for granted: walking, talking, eating and swallowing.

Scientists are working furiously to get to the bottom of neurological disorders. But they involve the body’s most complex organ — the brain — which is largely inaccessible to detailed, real-time scrutiny. This inability to see what’s happening in the body’s command center hinders the development of effective treatments for diseases that stem from it.

By using nanoelectronics, it could become possible for scientists to peer for the first time inside cells, see what’s going wrong in real time and ideally set them on a functional path again.

For the past several years, Lieber has been working to dramatically shrink cyborg science to a level that’s thousands of times smaller and more flexible than other bioelectronic research efforts. His team has made ultrathin nanowires that can monitor and influence what goes on inside cells. Using these wires, they have built ultraflexible, 3-D mesh scaffolding with hundreds of addressable electronic units, and they have grown living tissue on it. They have also developed the tiniest electronic probe ever that can record even the fastest signaling between cells.

Rapid-fire cell signaling controls all of the body’s movements, including breathing and swallowing, which are affected in some neurodegenerative diseases. And it’s at this level where the promise of Lieber’s most recent work enters the picture.

In one of the lab’s latest directions, Lieber’s team is figuring out how to inject their tiny, ultraflexible electronics into the brain and allow them to become fully integrated with the existing biological web of neurons. They’re currently in the early stages of the project and are working with rat models.

“It’s hard to say where this work will take us,” he says. “But in the end, I believe our unique approach will take us on a path to do something really revolutionary.”

Lieber acknowledges funding from the U.S. Department of Defense, the National Institutes of Health and the U.S. Air Force.

I first covered Lieber’s work in an Aug. 27, 2012 posting highlighting some good descriptions from Lieber and his colleagues of their work. There’s also this Aug. 26, 2012 article by Peter Reuell in the Harvard Gazette (featuring a very good technical description for someone not terribly familiar with the field but able to grasp some technical information while managing their own [mine] ignorance). The posting and the article provide details about the foundational work for Lieber’s 2014 presentation at the ACS meeting.

Lieber will be speaking next at the IEEE (Institute of Electrical and Electronics Engineers) 14th International Conference on Nanotechnology, being held August 18 – 21, 2014 in Toronto, Ontario, Canada.

As for some of Lieber’s latest published work, there’s more information in my Feb. 20, 2014 posting which features a link to a citation for the paper (behind a paywall) in question.

Watch out Roomba! Camouflaging soft robots are on the move

Roomba, one of the better-known consumer-class robots, is a hard-bodied robot used for vacuum-cleaning (or hoovering, as the Brits say). These days scientists are working on soft-bodied robots modeled on an octopus or a starfish or a squid. A team at Harvard University has added a camouflaging feature to its soft robot.

The Aug. 16, 2012 news release on EurekAlert provides some detail about the inspiration (in a field generally known as biomimicry or biomimetics),

A team of researchers led by George Whitesides, the Woodford L. and Ann A. Flowers University Professor [and well known within the field of nanotechnology], has already broken new engineering ground with the development of soft, silicone-based robots inspired by creatures like starfish and squid.

Now, they’re working to give those robots the ability to disguise themselves.

“When we began working on soft robots, we were inspired by soft organisms, including octopi and squid,” Morin said [Stephen Morin, a Post-Doctoral Fellow and first author for the paper]. “One of the fascinating characteristics of these animals is their ability to control their appearance, and that inspired us to take this idea further and explore dynamic coloration. I think the important thing we’ve shown in this paper is that even when using simple systems – in this case we have simple, open-ended micro-channels – you can achieve a great deal in terms of your ability to camouflage an object, or to display where an object is.”

“One of the most interesting questions in science is ‘Why do animals have the shape, and color, and capabilities that they do?'” said Whitesides. “Evolution might lead to a particular form, but why? One function of our work on robotics is to give us, and others interested in this kind of question, systems that we can use to test ideas. Here the question might be: ‘How does a small crawling organism most efficiently disguise (or advertise) itself in leaves?’ These robots are test-beds for ideas about form and color and movement.”

Peter Reuell’s Aug. 16, 2012 article for Harvard Science, which originated the news release, describes some of the technology and capabilities,

Just as with the soft robots, the “color layers” used in the camouflage start as molds created using 3-D printers. Silicone is then poured into the molds to create micro-channels, which are topped with another layer of silicone. The layers can be created as a separate sheet that sits atop the soft robots, or incorporated directly into their structure. Once created, researchers can pump colored liquids into the channels, causing the robot to mimic the colors and patterns of its environment.

The system’s camouflage capabilities aren’t limited to visible colors though.

By pumping heated or cooled liquids into the channels, researchers can camouflage the robots thermally (infrared color). Other tests described in the Science [journal] paper used fluorescent liquids that allowed the color layers to literally glow in the dark.

“There is an enormous amount of spectral control we can exert with this system,” Morin said. “We can design color layers with multiple channels, which can be activated independently. We’ve only begun to scratch the surface, I think, of what’s possible.”

The uses for the color-layer technology, however, don’t end at camouflage.

Just as animals use color change to communicate, Morin envisions robots using the system as a way to signal their position, both to other robots, and to the public. As an example, he cited the possible use of the soft machines during search and rescue operations following a disaster. In dimly lit conditions, he said, a robot that stands out from its surroundings (or even glows in the dark) could be useful in leading rescue crews trying to locate survivors.

So, if the scientists are pumping the colour into the soft robot, it’s still a long way from nature’s design, where the creature produces its own colourants and controls its own camouflage in response to environmental factors.

Interestingly, there’s no mention of military applications for this camouflaging robot.