Tag Archives: University of Texas at Austin

Formation of a time (temporal) crystal

It’s a crystal arranged in time, according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert; Note: Links have been removed),

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.
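That half-speed response can be seen in a deliberately simplified toy model (a sketch for intuition only, not the team’s actual spin dynamics): if each laser pulse flips every spin, the system needs two pulses to return to its starting state, so its pattern repeats at half the drive frequency.

```python
# Toy illustration of the subharmonic (period-doubled) response of a
# driven spin chain: each drive pulse flips every spin, so the state
# only repeats every TWO pulses -- half the drive frequency.

def drive(spins):
    """One idealized laser pulse: flip every spin."""
    return [-s for s in spins]

spins = [1] * 10  # ten ions, all spin-up to start
history = [spins]
for pulse in range(4):
    spins = drive(spins)
    history.append(spins)

# The drive has period 1 (every pulse is identical), but the spin
# pattern has period 2: up, down, up, down, ...
assert history[0] == history[2] == history[4]
assert history[1] == history[3]
assert history[0] != history[1]
print("state repeats every 2 pulses -> response at half the drive frequency")
```

A real time crystal is subtler (the subharmonic response survives imperfect pulses and interactions), but the period doubling is the signature this sketch captures.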

The team also consists of researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team, in that case from a diamond.

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.

Nanoelectronic thread (NET) brain probes for long-term neural recording

A rendering of the ultra-flexible probe in neural tissue gives viewers a sense of the device’s tiny size and footprint in the brain. Image credit: Science Advances.

As long-time readers have likely noted, I’m not a big fan of this rush to ‘colonize’ the brain but it continues apace as a Feb. 15, 2017 news item on Nanowerk announces a new type of brain probe,

Engineering researchers at The University of Texas at Austin have designed ultra-flexible, nanoelectronic thread (NET) brain probes that can achieve more reliable long-term neural recording than existing probes and don’t elicit scar formation when implanted.

A Feb. 15, 2017 University of Texas at Austin news release, which originated the news item, provides more information about the new probes (Note: A link has been removed),

A team led by Chong Xie, an assistant professor in the Department of Biomedical Engineering in the Cockrell School of Engineering, and Lan Luan, a research scientist in the Cockrell School and the College of Natural Sciences, has developed new probes that have a mechanical compliance approaching that of brain tissue and are more than 1,000 times more flexible than other neural probes. This ultra-flexibility leads to an improved ability to reliably record and track the electrical activity of individual neurons for long periods of time. There is a growing interest in developing long-term tracking of individual neurons for neural interface applications, such as extracting neural-control signals for amputees to control high-performance prostheses. It also opens up new possibilities to follow the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s diseases.

One of the problems with conventional probes is their size and mechanical stiffness; their larger dimensions and stiffer structures often cause damage around the tissue they encompass. Additionally, while it is possible for the conventional electrodes to record brain activity for months, they often provide unreliable and degrading recordings. It is also challenging for conventional electrodes to electrophysiologically track individual neurons for more than a few days.

In contrast, the UT Austin team’s electrodes are flexible enough that they comply with the microscale movements of tissue and still stay in place. The probe’s size also drastically reduces the tissue displacement, so the brain interface is more stable, and the readings are more reliable for longer periods of time. To the researchers’ knowledge, the UT Austin probe — which is as small as 10 microns at a thickness below 1 micron, and has a cross-section that is only a fraction of that of a neuron or blood capillary — is the smallest among all neural probes.
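A rough beam-mechanics sketch suggests why thinness buys so much flexibility: a thin rectangular probe’s bending stiffness scales with width × thickness³. The NET dimensions below come from the release; the "conventional" probe dimensions are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope comparison of bending stiffness, which for a thin
# rectangular beam scales as width * thickness**3 (Euler-Bernoulli).
# NET probe dimensions are from the news release; the "conventional"
# silicon probe dimensions below are illustrative assumptions.

def bending_stiffness(width_um, thickness_um):
    # Proportionality only -- Young's modulus and numeric constants are
    # omitted, since we only want a ratio between the two probes.
    return width_um * thickness_um**3

net = bending_stiffness(width_um=10, thickness_um=1)            # <= 1 um thick
conventional = bending_stiffness(width_um=50, thickness_um=15)  # assumed

ratio = conventional / net
print(f"conventional probe is ~{ratio:,.0f}x stiffer than the NET probe")
```

Even before accounting for material differences (stiff silicon versus a softer polymer stack, which widens the gap further), the cubic dependence on thickness alone easily produces the quoted thousand-fold difference.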

“What we did in our research is prove that we can suppress tissue reaction while maintaining a stable recording,” Xie said. “In our case, because the electrodes are very, very flexible, we don’t see any sign of brain damage — neurons stayed alive even in contact with the NET probes, glial cells remained inactive and the vasculature didn’t become leaky.”

In experiments in mouse models, the researchers found that the probe’s flexibility and size prevented the agitation of glial cells, which is the normal biological reaction to a foreign body and leads to scarring and neuronal loss.

“The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said.

The researchers also used advanced imaging techniques in collaboration with biomedical engineering professor Andrew Dunn and neuroscientists Raymond Chitwood and Jenni Siegel from the Institute for Neuroscience at UT Austin to confirm that the NET enabled neural interface did not degrade in the mouse model for over four months of experiments. The researchers plan to continue testing their probes in animal models and hope to eventually engage in clinical testing. The research received funding from the UT BRAIN seed grant program, the Department of Defense and National Institutes of Health.

Here’s a link to and citation for the paper,

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration by Lan Luan, Xiaoling Wei, Zhengtuo Zhao, Jennifer J. Siegel, Ojas Potnis, Catherine A. Tuppen, Shengqing Lin, Shams Kazmi, Robert A. Fowler, Stewart Holloway, Andrew K. Dunn, Raymond A. Chitwood, and Chong Xie. Science Advances 15 Feb 2017: Vol. 3, no. 2, e1601966 DOI: 10.1126/sciadv.1601966

This paper is open access.

You can get more detail about the research in a Feb. 17, 2017 posting by Dexter Johnson on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Nanotechnology cracks Wall Street (Daily)

David Dittman’s Jan. 11, 2017 article for wallstreetdaily.com portrays a great deal of excitement about nanotechnology and the possibilities (I’m highlighting the article because it showcases Dexter Johnson’s Nanoclast blog),

When we talk about next-generation aircraft, next-generation wearable biomedical devices, and next-generation fiber-optic communication, the consistent theme is nano: nanotechnology, nanomaterials, nanophotonics.

For decades, manufacturers have used carbon fiber to make lighter sports equipment, stronger aircraft, and better textiles.

Now, as Dexter Johnson of IEEE [Institute of Electrical and Electronics Engineers] Spectrum reports [on his Nanoclast blog], carbon nanotubes will help make aerospace composites more efficient:

Now researchers at the University of Surrey’s Advanced Technology Institute (ATI), the University of Bristol’s Advanced Composite Centre for Innovation and Science (ACCIS), and aerospace company Bombardier [headquartered in Montréal, Canada] have collaborated on the development of a carbon nanotube-enabled material set to replace the polymer sizing. The reinforced polymers produced with this new material have enhanced electrical and thermal conductivity, opening up new functional possibilities. It will be possible, say the British researchers, to embed gadgets such as sensors and energy harvesters directly into the material.

When it comes to flight, lighter is better, so building sensors and energy harvesters into the body of aircraft marks a significant leap forward.

Johnson also reports for IEEE Spectrum on a “novel hybrid nanomaterial” based on oscillations of electrons — a major advance in nanophotonics:

Researchers at the University of Texas at Austin have developed a hybrid nanomaterial that enables the writing, erasing and rewriting of optical components. The researchers believe that this nanomaterial and the techniques used in exploiting it could create a new generation of optical chips and circuits.

Of course, the concept of rewritable optics is not altogether new; it forms the basis of optical storage mediums like CDs and DVDs. However, CDs and DVDs require bulky light sources, optical media and light detectors. The advantage of the rewritable integrated photonic circuits developed here is that it all happens on a 2-D material.

“To develop rewritable integrated nanophotonic circuits, one has to be able to confine light within a 2-D plane, where the light can travel in the plane over a long distance and be arbitrarily controlled in terms of its propagation direction, amplitude, frequency and phase,” explained Yuebing Zheng, a professor at the University of Texas who led the research… “Our material, which is a hybrid, makes it possible to develop rewritable integrated nanophotonic circuits.”

Who knew that mixing graphene with homemade Silly Putty would create a potentially groundbreaking new material that could make “wearables” actually useful?

Next-generation biomedical devices will undoubtedly include some of this stuff:

A dash of graphene can transform the stretchy goo known as Silly Putty into a pressure sensor able to monitor a human pulse or even track the dainty steps of a small spider.

The material, dubbed G-putty, could be developed into a device that continuously monitors blood pressure, its inventors hope.
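As a sketch of how such a sensor works: a piezoresistive material converts strain into a resistance change, characterized by a gauge factor. The gauge factor and resistance values below are assumed for illustration, not measured G-putty figures.

```python
# Sketch of how a piezoresistive sensor like G-putty turns a tiny
# deformation into an electrical signal. The gauge factor here is an
# assumed illustrative value, not a measured figure for G-putty.

GAUGE_FACTOR = 500  # assumed: (dR/R) / strain; soft composites can be large

def strain_from_resistance(r_rest, r_measured, gauge_factor=GAUGE_FACTOR):
    """Infer mechanical strain from a relative resistance change."""
    delta = (r_measured - r_rest) / r_rest
    return delta / gauge_factor

# A pulse pressing on the sensor nudges resistance from 1000 to 1005 ohms:
strain = strain_from_resistance(1000.0, 1005.0)
print(f"relative resistance change 0.5% -> strain ~ {strain:.1e}")
```

The larger the gauge factor, the smaller the deformation a simple resistance readout can resolve, which is what makes a highly sensitive composite attractive for monitoring something as faint as a pulse.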

The guys who made G-putty often rely on “household stuff” in their research.

It’s nice to see a blogger’s work be highlighted. Congratulations Dexter.

G-putty was mentioned here in a Dec. 30, 2016 posting which also includes a link to Dexter’s piece on the topic.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who’s seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring can attest.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

New electrochromic material for ‘smart’ windows

Given that it’s summer, I seem to be increasingly obsessed with windows that help control the heat from the sun. So, this Aug. 22, 2016 news item on ScienceDaily hit my sweet spot,

Researchers in the Cockrell School of Engineering at The University of Texas at Austin have invented a new flexible smart window material that, when incorporated into windows, sunroofs, or even curved glass surfaces, will have the ability to control both heat and light from the sun. …

Delia Milliron, an associate professor in the McKetta Department of Chemical Engineering, and her team’s advancement is a new low-temperature process for coating the new smart material on plastic, which makes it easier and cheaper to apply than conventional coatings made directly on the glass itself. The team demonstrated a flexible electrochromic device, which means a small electric charge (about 4 volts) can lighten or darken the material and control the transmission of heat-producing, near-infrared radiation. Such smart windows are aimed at saving on cooling and heating bills for homes and businesses.

An Aug. 22, 2016 University of Texas at Austin news release (also on EurekAlert), which originated the news item, describes the international team behind this research and offers more details about the research itself,

The research team is an international collaboration, including scientists at the European Synchrotron Radiation Facility and CNRS in France, and Ikerbasque in Spain. Researchers at UT Austin’s College of Natural Sciences provided key theoretical work.

Milliron and her team’s low-temperature process generates a material with a unique nanostructure, which doubles the efficiency of the coloration process compared with a coating produced by a conventional high-temperature process. It can switch between clear and tinted more quickly, using less power.

The new electrochromic material, like its high-temperature processed counterpart, has an amorphous structure, meaning the atoms lack any long-range organization as would be found in a crystal. However, the new process yields a unique local arrangement of the atoms in a linear, chain-like structure. Whereas conventional amorphous materials produced at high temperature have a denser three-dimensionally bonded structure, the researchers’ new linearly structured material, made of chemically condensed niobium oxide, allows ions to flow in and out more freely. As a result, it is twice as energy efficient as the conventionally processed smart window material.
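The “twice as energy efficient” claim can be framed with the standard electrochromic figure of merit, coloration efficiency: the change in optical density per unit of injected charge. The transmittance and charge values below are illustrative assumptions, not numbers from the paper.

```python
import math

# Coloration efficiency (CE) is the standard figure of merit for
# electrochromic materials: optical density change per unit of injected
# charge, CE = delta_OD / Q, with delta_OD = log10(T_bleached / T_colored).
# All numbers below are illustrative assumptions, not values from the paper.

def coloration_efficiency(t_bleached, t_colored, charge_mC_per_cm2):
    delta_od = math.log10(t_bleached / t_colored)
    return delta_od / (charge_mC_per_cm2 / 1000.0)  # units: cm^2/C

# The same optical switch (80% -> 20% transmittance) achieved with half
# the injected charge doubles the coloration efficiency:
ce_conventional = coloration_efficiency(0.80, 0.20, charge_mC_per_cm2=20.0)
ce_new = coloration_efficiency(0.80, 0.20, charge_mC_per_cm2=10.0)
print(f"CE ratio (new/conventional): {ce_new / ce_conventional:.1f}")
```

Needing less charge for the same tint is also why the material can switch faster at the same drive voltage: there are simply fewer ions to move.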

At the heart of the team’s study is their rare insight into the atomic-scale structure of the amorphous materials, whose disordered structures are difficult to characterize. Because there are few techniques for characterizing the atomic-scale structure sufficiently enough to understand properties, it has been difficult to engineer amorphous materials to enhance their performance.

“There’s relatively little insight into amorphous materials and how their properties are impacted by local structure,” Milliron said. “But, we were able to characterize with enough specificity what the local arrangement of the atoms is, so that it sheds light on the differences in properties in a rational way.”

Graeme Henkelman, a co-author on the paper and chemistry professor in UT Austin’s College of Natural Sciences, explains that determining the atomic structure for amorphous materials is far more difficult than for crystalline materials, which have an ordered structure. In this case, the researchers were able to use a combination of techniques and measurements to determine an atomic structure that is consistent in both experiment and theory.

“Such collaborative efforts that combine complementary techniques are, in my view, the key to the rational design of new materials,” Henkelman said.

Milliron believes the knowledge gained here could inspire deliberate engineering of amorphous materials for other applications such as supercapacitors that store and release electrical energy rapidly and efficiently.

The Milliron lab’s next challenge is to develop a flexible material using their low-temperature process that meets or exceeds the best performance of electrochromic materials made by conventional high-temperature processing.

“We want to see if we can marry the best performance with this new low-temperature processing strategy,” she said.

Here’s a link to and a citation for the paper,

Linear topology in amorphous metal oxide electrochromic networks obtained via low-temperature solution processing by Anna Llordés, Yang Wang, Alejandro Fernandez-Martinez, Penghao Xiao, Tom Lee, Agnieszka Poulain, Omid Zandi, Camila A. Saez Cabezas, Graeme Henkelman, & Delia J. Milliron. Nature Materials (2016)  doi:10.1038/nmat4734 Published online 22 August 2016

This paper is behind a paywall.

Exploring the fundamental limits of invisibility cloaks

There’s some interesting work on invisibility cloaks coming from the University of Texas at Austin according to a July 6, 2016 news item on Nanowerk,

Researchers in the Cockrell School of Engineering at The University of Texas at Austin have been able to quantify fundamental physical limitations on the performance of cloaking devices, a technology that allows objects to become invisible or undetectable to electromagnetic waves including radio waves, microwaves, infrared and visible light.

A July 5, 2016 University of Texas at Austin news release (also on EurekAlert), which originated the news item, expands on the theme,

The researchers’ theory confirms that it is possible to use cloaks to perfectly hide an object for a specific wavelength, but hiding an object from an illumination containing different wavelengths becomes more challenging as the size of the object increases.

Andrea Alù, an electrical and computer engineering professor and a leading researcher in the area of cloaking technology, along with graduate student Francesco Monticone, created a quantitative framework that now establishes boundaries on the bandwidth capabilities of electromagnetic cloaks for objects of different sizes and composition. As a result, researchers can calculate the expected optimal performance of invisibility devices before designing and developing a specific cloak for an object of interest. …

Cloaks are made from artificial materials, called metamaterials, that have special properties enabling better control of the incoming wave and can make an object invisible or transparent. The newly established boundaries apply to cloaks made of passive metamaterials — those that do not draw energy from an external power source.

Understanding the bandwidth and size limitations of cloaking is important to assess the potential of cloaking devices for real-world applications such as communication antennas, biomedical devices and military radars, Alù said. The researchers’ framework shows that the performance of a passive cloak is largely determined by the size of the object to be hidden compared with the wavelength of the incoming wave, and it quantifies how, for shorter wavelengths, cloaking gets drastically more difficult.

For example, it is possible to cloak a medium-size antenna from radio waves over relatively broad bandwidths for clearer communications, but it is essentially impossible to cloak large objects, such as a human body or a military tank, from visible light waves, which are much shorter than radio waves.
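
The scale of that gap is easy to check with back-of-the-envelope arithmetic. The sketch below (illustrative sizes and wavelengths of my own choosing, not figures from the paper) compares the size-to-wavelength ratio in the two cases; passive cloaking becomes drastically harder as that ratio grows.

```python
# Back-of-the-envelope comparison of object size vs. illuminating
# wavelength. All numbers are illustrative assumptions, not values
# taken from the Monticone/Alù paper.

def size_to_wavelength(object_size_m: float, wavelength_m: float) -> float:
    """Ratio of object size to wavelength. Passive cloaking is feasible
    when this is of order 1 and rapidly becomes impractical as it grows."""
    return object_size_m / wavelength_m

# A ~1 m antenna illuminated by ~3 m radio waves (100 MHz): comparable sizes.
antenna = size_to_wavelength(1.0, 3.0)

# A ~4 m tank illuminated by ~500 nm visible light: millions of wavelengths across.
tank = size_to_wavelength(4.0, 500e-9)

print(f"antenna: {antenna:.2f}  tank: {tank:.1e}")
```

The antenna comes out at roughly a third of a wavelength, the tank at millions of wavelengths, which is why the first is a realistic cloaking target and the second is not.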

“We have shown that it will not be possible to drastically suppress the light scattering of a tank or an airplane for visible frequencies with currently available techniques based on passive materials,” Monticone said. “But for objects comparable in size to the wavelength that excites them (a typical radio-wave antenna, for example, or the tip of some optical microscopy tools), the derived bounds show that you can do something useful, the restrictions become looser, and we can quantify them.”

In addition to providing a practical guide for research on cloaking devices, the researchers believe that the proposed framework can help dispel some of the myths that have developed around cloaking and its potential to make large objects invisible.

“The question is, ‘Can we make a passive cloak that makes human-scale objects invisible?’” Alù said. “It turns out that there are stringent constraints in coating an object with a passive material and making it look as if the object were not there, for an arbitrary incoming wave and observation point.”

Now that bandwidth limits on cloaking are available, researchers can focus on developing practical applications with this technology that get close to these limits.

“If we want to go beyond the performance of passive cloaks, there are other options,” Monticone said. “Our group and others have been exploring active and nonlinear cloaking techniques, for which these limits do not apply. Alternatively, we can aim for looser forms of invisibility, as in cloaking devices that introduce phase delays as light is transmitted through, camouflaging techniques, or other optical tricks that give the impression of transparency, without actually reducing the overall scattering of light.”

Alù’s lab is working on the design of active cloaks that use metamaterials plugged into an external energy source to achieve broader transparency bandwidths.

“Even with active cloaks, Einstein’s theory of relativity fundamentally limits the ultimate performance for invisibility,” Alù said. “Yet, with new concepts and designs, such as active and nonlinear metamaterials, it is possible to move forward in the quest for transparency and invisibility.”

The researchers have prepared a diagram illustrating their work,

The graph shows the trade-off between how much an object can be made transparent (scattering reduction; vertical axis) and the color span (bandwidth; horizontal axis) over which this phenomenon can be achieved. Courtesy: University of Texas at Austin

Here’s a link to and a citation for the paper,

Invisibility exposed: physical bounds on passive cloaking by Francesco Monticone and Andrea Alù. Optica Vol. 3, Issue 7, pp. 718-724 (2016) doi: 10.1364/OPTICA.3.000718

This paper is open access.

Origami and our pop-up future

They should have declared Jan. 25, 2016 ‘L. Mahadevan Day’ at Harvard University. The researcher was listed as an author on two major papers. I covered the first piece of research, 4D printed hydrogels, in this Jan. 26, 2016 posting. Now for Mahadevan’s other work, from a Jan. 27, 2016 news item on Nanotechnology Now,

What if you could make any object out of a flat sheet of paper?

That future is on the horizon thanks to new research by L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Organismic and Evolutionary Biology, and Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). He is also a core faculty member of the Wyss Institute for Biologically Inspired Engineering and a member of the Kavli Institute for Bionano Science and Technology at Harvard University.

Mahadevan and his team have characterized a fundamental origami fold, or tessellation, that could be used as a building block to create almost any three-dimensional shape, from nanostructures to buildings. …

A Jan. 26, 2016 Harvard University news release by Leah Burrows, which originated the news item, provides more detail about the specific fold the team has been investigating,

The folding pattern, known as the Miura-ori, is a periodic way to tile the plane using the simplest mountain-valley fold in origami. It was used as a decorative item in clothing at least as long ago as the 15th century. A folded Miura can be packed into a flat, compact shape and unfolded in one continuous motion, making it ideal for packing rigid structures like solar panels. It also occurs in nature in a variety of situations, such as in insect wings and certain leaves.

“Could this simple folding pattern serve as a template for more complicated shapes, such as saddles, spheres, cylinders, and helices?” asked Mahadevan.

“We found an incredible amount of flexibility hidden inside the geometry of the Miura-ori,” said Levi Dudte, graduate student in the Mahadevan lab and first author of the paper. “As it turns out, this fold is capable of creating many more shapes than we imagined.”

Think surgical stents that can be packed flat and pop-up into three-dimensional structures once inside the body or dining room tables that can lean flat against the wall until they are ready to be used.

“The collapsibility, transportability and deployability of Miura-ori folded objects makes it a potentially attractive design for everything from space-bound payloads to small-space living to laparoscopic surgery and soft robotics,” said Dudte.

Here’s a .gif demonstrating the fold,

This spiral folds rigidly from flat pattern through the target surface and onto the flat-folded plane (Image courtesy of Mahadevan Lab) Harvard University

The news release offers some details about the research,

To explore the potential of the tessellation, the team developed an algorithm that can create certain shapes using the Miura-ori fold, repeated with small variations. Given the specifications of the target shape, the program lays out the folds needed to create the design, which can then be laser printed for folding.

The program takes into account several factors, including the stiffness of the folded material and the trade-off between the accuracy of the pattern and the effort associated with creating finer folds – an important characterization because, as of now, these shapes are all folded by hand.
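
For readers wondering what “repeated with small variations” means in practice: a uniform Miura-ori crease pattern is simply a grid of parallelograms in which alternate columns of vertices are shifted, and the team’s algorithm perturbs those vertex positions from one location to the next to curve the surface. The sketch below is my own minimal illustration of the uniform starting pattern, not the team’s inverse-design code; all parameter names and values are assumptions.

```python
def miura_crease_pattern(rows: int, cols: int, a: float = 1.0,
                         b: float = 1.0, offset: float = 0.4):
    """Return the flat vertex grid of a uniform Miura-ori crease pattern:
    columns of vertices spaced `a` apart, rows spaced `b` apart, with every
    other column shifted vertically by `offset` to form the zigzag creases."""
    vertices = []
    for i in range(cols):
        for j in range(rows):
            x = i * a
            y = j * b + (i % 2) * offset  # alternate columns zigzag
            vertices.append((x, y))
    return vertices

# A small 4x6 patch of the pattern: 24 vertices, ready to be connected
# into parallelogram facets and mountain/valley creases.
verts = miura_crease_pattern(rows=4, cols=6)
print(len(verts))
```

The inverse-design step would then adjust `a`, `b`, and `offset` locally, cell by cell, so that the folded state approximates the target surface.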

“Essentially, we would like to be able to tailor any shape by using an appropriate folding pattern,” said Mahadevan. “Starting with the basic mountain-valley fold, our algorithm determines how to vary it by gently tweaking it from one location to the other to make a vase, a hat, a saddle, or to stitch them together to make more and more complex structures.”

“This is a step in the direction of being able to solve the inverse problem – given a functional shape, how can we design the folds on a sheet to achieve it,” Dudte said.

“The really exciting thing about this fold is it is completely scalable,” said Mahadevan. “You can do this with graphene, which is one atom thick, or you can do it on the architectural scale.”

Co-authors on the study include Etienne Vouga, currently at the University of Texas at Austin, and Tomohiro Tachi from the University of Tokyo. …

Here’s a link to and a citation for the paper,

Programming curvature using origami tessellations by Levi H. Dudte, Etienne Vouga, Tomohiro Tachi, & L. Mahadevan. Nature Materials (2016) doi:10.1038/nmat4540. Published online 25 January 2016

This paper is behind a paywall.

Nanotechnology-enabled flame retardant coating

This is a pretty remarkable demonstration made more so when you find out the flame retardant is naturally derived and nontoxic. From an Oct. 5, 2015 news item on Nanowerk,

Inspired by a naturally occurring material found in marine mussels, researchers at The University of Texas at Austin have created a new flame retardant to replace commercial additives that are often toxic and can accumulate over time in the environment and living animals, including humans.

An Oct. 5, 2015 University of Texas news release, which originated the news item, describes the situation with regard to standard flame retardants and what makes this new flame retardant technology so compelling,

Flame retardants are added to foams found in mattresses, sofas, car upholstery and many other consumer products. Once incorporated into foam, these chemicals can migrate out of the products over time, releasing toxic substances into the air and environment. Throughout the United States, there is pressure on state legislatures to ban flame retardants, especially brominated flame retardants (BFRs), a class of human-made chemicals thought to pose a risk to public health.

A team led by Cockrell School of Engineering associate professor Christopher Ellison found that a synthetic coating of polydopamine — derived from the natural compound dopamine — can be used as a highly effective, water-applied flame retardant for polyurethane foam. Dopamine is a chemical compound found in humans and animals that helps in the transmission of signals in the brain and other vital areas. The researchers believe their dopamine-based nanocoating could be used in lieu of conventional flame retardants.

“Since polydopamine is natural and already present in animals, this question of toxicity immediately goes away,” Ellison said. “We believe polydopamine could cheaply and easily replace the flame retardants found in many of the products that we use every day, making these products safer for both children and adults.”

Using far less polydopamine by weight than is typical of conventional flame retardant additives, the UT Austin team found that the polydopamine coating on foams leads to a 67 percent reduction in peak heat release rate, a measure of fire intensity and imminent danger to building occupants or firefighters. The polydopamine flame retardant’s ability to reduce the fire’s intensity is about 20 percent better than that of existing flame retardants commonly used today.

Researchers have studied the use of synthetic polydopamine for a number of health-related applications, including cancer drug delivery and implantable biomedical devices. However, the UT Austin team is thought to be one of the first to pursue the use of polydopamine as a flame retardant. To the research team’s surprise, they did not have to change the structure of the polydopamine from its natural form to use it as a flame retardant. The polydopamine was coated onto the interior and exterior surfaces of the polyurethane foam by simply dipping the foam into a water solution of dopamine for several days.

Ellison said he and his team were drawn to polydopamine because of its ability to adhere to surfaces, as demonstrated by marine mussels, which use the compound to stick to virtually any surface, including Teflon, the material used in nonstick cookware. Polydopamine also contains a dihydroxy-ring structure linked with an amine group that can be used to scavenge or remove free radicals. Free radicals are produced during the fire cycle as a polymer degrades, and their removal is critical to stopping the fire from continuing to spread. Polydopamine also produces a protective coating called char, which blocks a fire’s access to its fuel source — the polymer. The synergistic combination of these two processes makes polydopamine an attractive and powerful flame retardant.

Ellison and his team are now testing to see whether they can shorten the nanocoating treatment process or develop a more convenient application process.

“We believe this alternative to flame retardants can prove very useful to removing potential hazards from products that children and adults use every day,” Ellison said. “We weren’t expecting to find a flame retardant in nature, but it was a serendipitous discovery.”

Here’s a link to and a citation for the paper,

Bioinspired Catecholic Flame Retardant Nanocoating for Flexible Polyurethane Foams by Joon Hee Cho, Vivek Vasagar, Kadhiravan Shanmuganathan, Amanda R. Jones, Sergei Nazarenko, and Christopher J. Ellison. Chem. Mater., 2015, 27 (19), pp 6784–6790 DOI: 10.1021/acs.chemmater.5b03013
Publication Date (Web): September 9, 2015
Copyright © 2015 American Chemical Society

This paper is behind a paywall. It should be noted that researchers from the University of Southern Mississippi and the Council of Scientific & Industrial Research (CSIR)-National Chemical Laboratory in Pune, India were also involved in this work.

An easier and cheaper way to make wearable and disposable medical tattoo-like patches

A Sept. 29, 2015 news item on ScienceDaily features an electronic health patch that’s cheaper and easier to make,

A team of researchers has invented a method for producing inexpensive and high-performing wearable patches that can continuously monitor the body’s vital signs for human health and performance tracking. The researchers believe their new method is compatible with roll-to-roll manufacturing.

The researchers have provided a photograph of a prototype patch,

Assistant professor Nanshu Lu and her team have developed a faster, inexpensive method for making epidermal electronics. Cockrell School of Engineering

A University of Texas at Austin Sept. 29, 2015 news release (also on EurekAlert), which originated the news item, provides more details,

Led by Assistant Professor Nanshu Lu, the team’s manufacturing method aims to construct disposable tattoo-like health monitoring patches for the mass production of epidermal electronics, a popular technology that Lu helped develop in 2011.

The team’s breakthrough is a repeatable “cut-and-paste” method that cuts manufacturing time from several days to only 20 minutes. The researchers believe their new method is compatible with roll-to-roll manufacturing — an existing method for creating devices in bulk using a roll of flexible plastic and a processing machine.

Reliable, ultrathin wearable electronic devices that stick to the skin like a temporary tattoo are a relatively new innovation. These devices have the ability to pick up and transmit the human body’s vital signals, tracking heart rate, hydration level, muscle movement, temperature and brain activity.

Although it is a promising invention, a lengthy, tedious and costly production process has until now hampered these wearables’ potential.

“One of the most attractive aspects of epidermal electronics is their ability to be disposable,” Lu said. “If you can make them inexpensively, say for $1, then more people will be able to use them more frequently. This will open the door for a number of mobile medical applications and beyond.”

The UT Austin method is the first dry and portable process for producing these electronics, which, unlike the current method, does not require a clean room, wafers and other expensive resources and equipment. Instead, the technique relies on freeform manufacturing, which is similar in scope to 3-D printing but different in that material is removed instead of added.

The two-step process starts with inexpensive, pre-fabricated, industrial-quality metal deposited on polymer sheets. First, an electronic mechanical cutter is used to form patterns on the metal-polymer sheets. Second, after the excess areas are removed, the electronics are printed onto any polymer adhesive, including temporary tattoo films. The cutter is programmable, so the size of the patch and pattern can be easily customized.

Deji Akinwande, an associate professor and materials expert in the Cockrell School, believes Lu’s method can be transferred to roll-to-roll manufacturing.

“These initial prototype patches can be adapted to roll-to-roll manufacturing that can reduce the cost significantly for mass production,” Akinwande said. “In this light, Lu’s invention represents a major advancement for the mobile health industry.”

After producing the cut-and-pasted patches, the researchers tested them as part of their study. In each test, the researchers’ newly fabricated patches picked up body signals that were stronger than those taken by existing medical devices, including an ECG/EKG, a tool used to assess the electrical and muscular function of the heart. The team also found that their patch conforms almost perfectly to the skin, minimizing motion-induced false signals or errors.

The UT Austin wearable patches are so sensitive that Lu and her team can envision humans wearing the patches to more easily maneuver a prosthetic hand or limb using muscle signals. For now, Lu said, “We are trying to add more types of sensors including blood pressure and oxygen saturation monitors to the low-cost patch.”

Here’s a link to and a citation for the paper,

“Cut-and-Paste” Manufacture of Multiparametric Epidermal Sensor Systems by Shixuan Yang, Ying-Chen Chen, Luke Nicolini, Praveenkumar Pasupathy, Jacob Sacks, Su Becky, Russell Yang, Sanchez Daniel, Yao-Feng Chang, Pulin Wang, David Schnyer, Dean Neikirk, and Nanshu Lu. Advanced Materials DOI: 10.1002/adma.201502386. First published: 23 September 2015

This paper is behind a paywall.

$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards are for up to five years and range from $500,000 to $1.6 million per year each. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal to link the individual facilities’ websites to provide a unified entry point to the user community of overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.