
How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who’s seen those film shorts from the 1950s and ’60s exuberantly speculating about what the future will bring can attest.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid change, with many existing jobs lost and new ones created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

    This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

    In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

    Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

    “Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

    Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

    • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
    • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
    • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
    • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
    • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Harvard University announced new Center on Nano-safety Research

The nano safety center at Harvard University (Massachusetts, US) is a joint center with the US National Institute of Environmental Health Sciences, according to an Aug. 29, 2016 news item on Nanowerk,

Engineered nanomaterials (ENMs), which are less than 100 nanometers in diameter (a nanometer is one millionth of a millimeter), can make the colors in digital printer inks pop and help sunscreens better protect against radiation, among many other applications in industry and science. They may even help prevent infectious diseases. But as the technology becomes more widespread, questions remain about the potential risks that ENMs may pose to health and the environment.

Researchers at the new Harvard-NIEHS [US National Institute of Environmental Health Sciences] Nanosafety Research Center at Harvard T.H. Chan School of Public Health are working to understand the unique properties of ENMs—both beneficial and harmful—and to ultimately establish safety standards for the field.

An Aug. 16, 2016 Harvard University press release, which originated the news item, provides more detail (Note: Links have been removed),

“We want to help nanotechnology develop as a scientific and economic force while maintaining safeguards for public health,” said Center Director Philip Demokritou, associate professor of aerosol physics at Harvard Chan School. “If you understand the rules of nanobiology, you can design safer nanomaterials.”

ENMs can enter the body through inhalation, ingestion, and skin contact, and toxicological studies have shown that some can penetrate cells and tissues and potentially cause biochemical damage. Because the field of nanoparticle science is relatively new, no standards currently exist for assessing the health risks of exposure to ENMs—or even for how studies of nano-biological interactions should be conducted.

Much of the work of the new Center will focus on building a fundamental understanding of why some ENMs are potentially more harmful than others. The team will also establish a “reference library” of ENMs, each with slightly varied properties, which will be utilized in nanotoxicology research across the country to assess safety. This will allow researchers to pinpoint exactly what aspect of an ENM’s properties may impact health. The researchers will also work to develop standardized methods for nanotoxicology studies evaluating the safety of nanomaterials.

The Center was established last month with a $4 million grant from the National Institute of Environmental Health Sciences (NIEHS), and is the only nanosafety research center to receive NIEHS funding for the next five years. It will also play a coordinating role with existing and future NIEHS nanotoxicology research projects nationwide. Scientists from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), MIT, the University of Maine, and the University of Florida will collaborate on the new effort.

The Center builds on the existing Center for Nanotechnology and Nanotoxicology at Harvard Chan School, established by Demokritou and Joseph Brain, Cecil K. and Philip Drinker Professor of Environmental Physiology, in the School’s Department of Environmental Health in 2010.
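The “reference library” strategy mentioned above, varying one property of a nanomaterial at a time so that any change in a toxicity assay can be attributed to that property, can be sketched schematically. Everything in the snippet below (property names, values, the baseline recipe) is invented for illustration; it is not drawn from the Center’s actual library.

```python
# One-factor-at-a-time variants around a baseline nanomaterial recipe.
# Baseline and variation values here are hypothetical, for illustration only.
baseline = {"diameter_nm": 20, "coating": "none", "surface_charge": "neutral"}

variations = {
    "diameter_nm": [10, 20, 50, 80],
    "coating": ["none", "silica", "PEG"],
    "surface_charge": ["negative", "neutral", "positive"],
}

def reference_library(baseline, variations):
    """Build variants that differ from the baseline in exactly one property,
    so a toxicity difference can be pinned on that one property."""
    library = [dict(baseline)]
    for prop, values in variations.items():
        for value in values:
            if value != baseline[prop]:
                variant = dict(baseline)
                variant[prop] = value
                library.append(variant)
    return library

lib = reference_library(baseline, variations)
print(len(lib))  # 1 baseline + 3 + 2 + 2 = 8 entries
```

With a library like this, a difference in assay outcome between the baseline and a single variant points at the one property that changed.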

A July 5, 2016 Harvard University press release announcing the $4M grant provides more information about which ENMs are to be studied,

The main focus of the new HSPH-NIEHS Center is to bring together scientists from across disciplines – materials science, chemistry, exposure assessment, risk assessment, nanotoxicology and nanobiology – to assess the potential environmental health and safety (EHS) implications of engineered nanomaterials (ENMs).

The $4 million HSPH-based Center, which is the only Nanosafety Research Center to be funded by NIEHS this funding cycle, … The new HSPH-NIEHS Nanosafety Center builds upon the nano-related infrastructure in [the] collaborating Universities, developed over the past 10 years, which includes an inter-disciplinary research group of faculty, research staff and students, as well as state-of-the-art platforms for high throughput synthesis of ENMs, including metal and metal oxides, cutting edge 2D/3D ENMs such as CNTs [carbon nanotubes] and graphene, nanocellulose, and advanced nanocomposites, [emphasis mine] coupled with innovative tools to assess the fate and transport of ENMs in biological systems, statistical and exposure assessment tools, and novel in vitro and in vivo platforms for nanotoxicology research.

“Our mission is to integrate material/exposure/chemical sciences and nanotoxicology-nanobiology to facilitate assessment of potential risks from emerging nanomaterials. In doing so, we are bringing together the material synthesis/applications and nanotoxicology communities and other stakeholders including industry, policy makers and the general public to maximize innovation and growth and minimize environmental and public health risks from nanotechnology,” said Dr. Philip Demokritou, …

This effort certainly falls in line with the current emphasis on interdisciplinary research and creating standards and protocols for researching the toxicology of engineered nanomaterials.

Robots built from living tissue

Biohybrid robots, as they are known, are built from living tissue but not in a Frankenstein kind of way, as Victoria Webster, a PhD candidate at Case Western Reserve University (US), explains in her Aug. 9, 2016 essay on The Conversation (also on phys.org as an Aug. 10, 2016 news item; Note: Links have been removed),

Researchers are increasingly looking for solutions to make robots softer or more compliant – less like rigid machines, more like animals. With traditional actuators – such as motors – this can mean using air muscles or adding springs in parallel with motors. …

But there’s a growing area of research that’s taking a different approach. By combining robotics with tissue engineering, we’re starting to build robots powered by living muscle tissue or cells. These devices can be stimulated electrically or with light to make the cells contract to bend their skeletons, causing the robot to swim or crawl. The resulting biobots can move around and are soft like animals. They’re safer around people and typically less harmful to the environment they work in than a traditional robot might be. And since, like animals, they need nutrients to power their muscles, not batteries, biohybrid robots tend to be lighter too.

Webster explains how these biobots are built,

Researchers fabricate biobots by growing living cells, usually from heart or skeletal muscle of rats or chickens, on scaffolds that are nontoxic to the cells. If the substrate is a polymer, the device created is a biohybrid robot – a hybrid between natural and human-made materials.

If you just place cells on a molded skeleton without any guidance, they wind up in random orientations. That means when researchers apply electricity to make them move, the cells’ contraction forces will be applied in all directions, making the device inefficient at best.

So to better harness the cells’ power, researchers turn to micropatterning. We stamp or print microscale lines on the skeleton made of substances that the cells prefer to attach to. These lines guide the cells so that as they grow, they align along the printed pattern. With the cells all lined up, researchers can direct how their contraction force is applied to the substrate. So rather than just a mess of firing cells, they can all work in unison to move a leg or fin of the device.

Researchers sometimes mimic animals when creating their biobots (Note: Links have been removed),

Others have taken their cues from nature, creating biologically inspired biohybrids. For example, a group led by researchers at California Institute of Technology developed a biohybrid robot inspired by jellyfish. This device, which they call a medusoid, has arms arranged in a circle. Each arm is micropatterned with protein lines so that cells grow in patterns similar to the muscles in a living jellyfish. When the cells contract, the arms bend inwards, propelling the biohybrid robot forward in nutrient-rich liquid.

More recently, researchers have demonstrated how to steer their biohybrid creations. A group at Harvard used genetically modified heart cells to make a biologically inspired manta ray-shaped robot swim. The heart cells were altered to contract in response to specific frequencies of light – one side of the ray had cells that would respond to one frequency, the other side’s cells responded to another.

Amazing, eh? The video of this light-steered ray is quite recent; it was published on YouTube on July 7, 2016.

Webster goes on to describe work designed to make these robots hardier and more durable so they can leave the laboratory,

… Here at Case Western Reserve University, we’ve recently begun to investigate … by turning to the hardy marine sea slug Aplysia californica. Since A. californica lives in the intertidal region, it can experience big changes in temperature and environmental salinity over the course of a day. When the tide goes out, the sea slugs can get trapped in tide pools. As the sun beats down, water can evaporate and the temperature will rise. Conversely in the event of rain, the saltiness of the surrounding water can decrease. When the tide eventually comes in, the sea slugs are freed from the tidal pools. Sea slugs have evolved very hardy cells to endure this changeable habitat.

We’ve been able to use Aplysia tissue to actuate a biohybrid robot, suggesting that we can manufacture tougher biobots using these resilient tissues. The devices are large enough to carry a small payload – approximately 1.5 inches long and one inch wide.

Webster has written a fascinating piece and, if you have time, I encourage you to read it in its entirety.

Vitamin-inspired batteries

Vitamin-inspired batteries from Harvard University? According to a July 18, 2016 news item on ScienceDaily that’s exactly the case,

Harvard researchers have identified a whole new class of high-performing organic molecules, inspired by vitamin B2, that can safely store electricity from intermittent energy sources like solar and wind power in large batteries.

The development builds on previous work in which the team developed a high-capacity flow battery that stored energy in organic molecules called quinones and a food additive called ferrocyanide. That advance was a game-changer, delivering the first high-performance, non-flammable, non-toxic, non-corrosive, and low-cost chemicals that could enable large-scale, inexpensive electricity storage.

While the versatile quinones show great promise for flow batteries, Harvard researchers continued to explore other organic molecules in pursuit of even better performance. But finding that same versatility in other organic systems has been challenging.

“Now, after considering about a million different quinones, we have developed a new class of battery electrolyte material that expands the possibilities of what we can do,” said Kaixiang Lin, a Ph.D. student at Harvard and first author of the paper. “Its simple synthesis means it should be manufacturable on a large scale at a very low cost, which is an important goal of this project.”

A July 18, 2016 Harvard University John A. Paulson School of Engineering and Applied Sciences press release (also on EurekAlert) by Leah Burrows, which originated the news item, expands on the theme,

Flow batteries store energy in solutions in external tanks — the bigger the tanks, the more energy they store. In 2014, Michael J. Aziz, the Gene and Tracy Sykes Professor of Materials and Energy Technologies at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Roy Gordon, the Thomas Dudley Cabot Professor of Chemistry and Professor of Materials Science, Alán Aspuru-Guzik, Professor of Chemistry, and their team at Harvard replaced metal ions used as conventional battery electrolyte materials in acidic electrolytes with quinones, molecules that store energy in plants and animals. In 2015, they developed a quinone that could work in alkaline solutions alongside a common food additive.

In this most recent research, the team found inspiration in vitamin B2, which helps to store energy from food in the body. The key difference between B2 and quinones is that nitrogen atoms, instead of oxygen atoms, are involved in picking up and giving off electrons.

“With only a couple of tweaks to the original B2 molecule, this new group of molecules becomes a good candidate for alkaline flow batteries,” said Aziz.

“They have high stability and solubility and provide high battery voltage and storage capacity. Because vitamins are remarkably easy to make, this molecule could be manufactured on a large scale at a very low cost.”

“We designed these molecules to suit the needs of our battery, but really it was nature that hinted at this way to store energy,” said Gordon, co-senior author of the paper. “Nature came up with similar molecules that are very important in storing energy in our bodies.”

The team will continue to explore quinones, as well as this new universe of molecules, in pursuit of a high-performing, long-lasting and inexpensive flow battery.
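The press release’s opening point, that a flow battery’s stored energy scales with the size of its external tanks, can be made concrete with a back-of-the-envelope estimate. The sketch below is a simplified theoretical calculation, not a figure from the paper; the concentration, cell voltage and electron count are assumptions chosen to be roughly in the range of quinone-type chemistries.

```python
# Rough theoretical energy capacity of a flow battery, as limited by one
# electrolyte tank. Simplified: ignores state-of-charge limits, crossover
# losses, and the capacity of the opposite half-cell.

FARADAY = 96485.0  # coulombs per mole of electrons

def tank_energy_kwh(volume_l, conc_mol_per_l, electrons_per_molecule, cell_voltage_v):
    """Theoretical energy (kWh) stored in one electrolyte tank."""
    charge_coulombs = electrons_per_molecule * FARADAY * conc_mol_per_l * volume_l
    energy_joules = charge_coulombs * cell_voltage_v
    return energy_joules / 3.6e6  # 1 kWh = 3.6e6 J

# Illustrative numbers (assumed, not from the paper): a 1,000-litre tank of a
# 1 M two-electron quinone-like electrolyte at 1.2 V.
print(round(tank_energy_kwh(1000, 1.0, 2, 1.2), 1))  # ~64.3 kWh
```

Doubling the tank volume doubles the stored energy, which is why flow batteries decouple energy capacity (tank size) from power (cell stack size).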

Here’s a link to and a citation for the paper,

A redox-flow battery with an alloxazine-based organic electrolyte by Kaixiang Lin, Rafael Gómez-Bombarelli, Eugene S. Beh, Liuchuan Tong, Qing Chen, Alvaro Valle, Alán Aspuru-Guzik, Michael J. Aziz, & Roy G. Gordon.  Nature Energy 1, Article number: 16102 (2016)  doi:10.1038/nenergy.2016.102 Published online: 18 July 2016

This paper is behind a paywall.

Sutures that can gather data wirelessly

Are sutures which gather data hackable? It’s a little early to start thinking about that issue as this seems to be brand new research. A July 18, 2016 news item on ScienceDaily tells more,

For the first time, researchers led by Tufts University engineers have integrated nano-scale sensors, electronics and microfluidics into threads — ranging from simple cotton to sophisticated synthetics — that can be sutured through multiple layers of tissue to gather diagnostic data wirelessly in real time, according to a paper published online July 18 [2016] in Microsystems & Nanoengineering. The research suggests that the thread-based diagnostic platform could be an effective substrate for a new generation of implantable diagnostic devices and smart wearable systems.

A July 18, 2016 Tufts University news release (also on EurekAlert), which originated the news item, provides more detail,

The researchers used a variety of conductive threads that were dipped in physical and chemical sensing compounds and connected to wireless electronic circuitry to create a flexible platform that they sutured into tissue in rats as well as in vitro. The threads collected data on tissue health (e.g. pressure, stress, strain and temperature), pH and glucose levels that can be used to determine such things as how a wound is healing, whether infection is emerging, or whether the body’s chemistry is out of balance. The results were transmitted wirelessly to a cell phone and computer.

The three-dimensional platform is able to conform to complex structures such as organs, wounds or orthopedic implants.

While more study is needed in a number of areas, including investigation of long-term biocompatibility, researchers said initial results raise the possibility of optimizing patient-specific treatments.

“The ability to suture a thread-based diagnostic device intimately in a tissue or organ environment in three dimensions adds a unique feature that is not available with other flexible diagnostic platforms,” said Sameer Sonkusale, Ph.D., corresponding author on the paper and director of the interdisciplinary Nano Lab in the Department of Electrical and Computer Engineering at Tufts School of Engineering. “We think thread-based devices could potentially be used as smart sutures for surgical implants, smart bandages to monitor wound healing, or integrated with textile or fabric as personalized health monitors and point-of-care diagnostics.”

Until now, the structure of substrates for implantable devices has essentially been two-dimensional, limiting their usefulness to flat tissue such as skin, according to the paper. Additionally, the materials in those substrates are expensive and require specialized processing.
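To make the wireless-diagnostics idea concrete, here is a minimal sketch of how one set of thread-sensor readings might be packaged for transmission to a phone or computer. All field names, units and the use of JSON are hypothetical illustrations; the paper does not specify this format.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical record for one set of thread-based sensor readings.
# Field names and units are illustrative, not from the Tufts paper.
@dataclass
class ThreadReading:
    timestamp_s: float
    pressure_kpa: float
    strain_pct: float
    temperature_c: float
    ph: float
    glucose_mmol_l: float

def to_packet(reading: ThreadReading) -> str:
    """Serialize one reading to JSON for wireless relay to a phone or computer."""
    return json.dumps(asdict(reading))

packet = to_packet(ThreadReading(time.time(), 12.4, 0.8, 36.9, 7.35, 5.1))
print(packet)
```

On the receiving side, the same schema could be decoded with `json.loads` and plotted over time to watch a wound heal or flag an emerging infection.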

Here’s a link to and a citation for the paper,

A toolkit of thread-based microfluidics, sensors, and electronics for 3D tissue embedding for medical diagnostics by Pooria Mostafalu, Mohsen Akbari, Kyle A. Alberti, Qiaobing Xu, Ali Khademhosseini, & Sameer R. Sonkusale. Microsystems & Nanoengineering 2, Article number: 16039 (2016) doi:10.1038/micronano.2016.39 Published online 18 July 2016

This paper is open access.

‘Bionic’ cardiac patch with nanoelectric scaffolds and living cells

A June 27, 2016 news item on Nanowerk announced that Harvard University researchers may have taken us a step closer to bionic cardiac patches for human hearts (Note: A link has been removed),

Scientists and doctors in recent decades have made vast leaps in the treatment of cardiac problems – particularly with the development in recent years of so-called “cardiac patches,” swaths of engineered heart tissue that can replace heart muscle damaged during a heart attack.

Thanks to the work of Charles Lieber and others, the next leap may be in sight.

Lieber, the Mark Hyman, Jr. Professor of Chemistry and Chair of the Department of Chemistry and Chemical Biology, postdoctoral fellow Xiaochuan Dai and other co-authors describe the construction of nanoscale electronic scaffolds that can be seeded with cardiac cells to produce a “bionic” cardiac patch. The study appears in a June 27 [2016] paper published in Nature Nanotechnology (“Three-dimensional mapping and regulation of action potential propagation in nanoelectronics-innervated tissues”).

A June 27, 2016 Harvard University press release on EurekAlert, which originated the news item, provides more information,

“I think one of the biggest impacts would ultimately be in the area that involves replacement of damaged cardiac tissue with pre-formed tissue patches,” Lieber said. “Rather than simply implanting an engineered patch built on a passive scaffold, our work suggests it will be possible to surgically implant an innervated patch that would now be able to monitor and subtly adjust its performance.”

Once implanted, Lieber said, the bionic patch could act much like a pacemaker, delivering electrical shocks to correct arrhythmia – but the possibilities don’t end there.

“In this study, we’ve shown we can change the frequency and direction of signal propagation,” he continued. “We believe it could be very important for controlling arrhythmia and other cardiac conditions.”

Unlike traditional pacemakers, Lieber said, the bionic patch – because its electronic components are integrated throughout the tissue – can detect arrhythmia far sooner, and operate at far lower voltages.

“Even before a person started to go into large-scale arrhythmia that frequently causes irreversible damage or other heart problems, this could detect the early-stage instabilities and intervene sooner,” he said. “It can also continuously monitor the feedback from the tissue and actively respond.”

“And a normal pacemaker, because it’s on the surface, has to use relatively high voltages,” Lieber added.

The patch might also find use, Lieber said, as a tool to monitor tissue responses to cardiac drugs, or to help pharmaceutical companies screen the effectiveness of drugs under development.

Likewise, he added, the bionic cardiac patch could serve as a unique platform for studying how tissue behavior evolves during processes such as aging, ischemia or the differentiation of stem cells into mature cardiac cells.

Although the bionic cardiac patch has not yet been implanted in animals, “we are interested in identifying collaborators already investigating cardiac patch implantation to treat myocardial infarction in a rodent model,” he said. “I don’t think it would be difficult to build this into a simpler, easily implantable system.”

In the long term, Lieber believes, the development of nanoscale tissue scaffolds represents a new paradigm for integrating biology with electronics in a virtually seamless way.

Using the injectable electronics technology he pioneered last year, Lieber even suggested that similar cardiac patches might one day simply be delivered by injection.

“It may actually be that, in the future, this won’t be done with a surgical patch,” he said. “We could simply do a co-injection of cells with the mesh, and it assembles itself inside the body, so it’s less invasive.”

Here’s a link to and a citation for the paper,

Three-dimensional mapping and regulation of action potential propagation in nanoelectronics-innervated tissues by Xiaochuan Dai, Wei Zhou, Teng Gao, Jia Liu & Charles M. Lieber. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.96 Published online 27 June 2016

This paper is behind a paywall.

Dexter Johnson in a June 27, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides more technical detail (Note: Links have been removed),

In research described in the journal Nature Nanotechnology, Lieber and his team employed a bottom-up approach that started with the fabrication of doped p-type silicon nanowires. Lieber has been spearheading the use of silicon nanowires as a scaffold for growing nerve, heart, and muscle tissue for years now.

In this latest work, Lieber and his team fabricated the nanowires, applied them onto a polymer surface, and arranged them into a field-effect transistor (FET). The researchers avoided an increase in the device’s impedance as its dimensions were reduced by adopting this FET approach as opposed to simply configuring the device as an electrode. Each FET, along with its source-drain interconnects, created a 4-micrometer-by-20-micrometer-by-350-nanometer pad. Each of these pads was, in effect, a single recording device.

I recommend reading Dexter’s posting in its entirety as Charles Lieber shares additional technical information not found in the news release.
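To give a feel for what a grid of such recording pads makes possible: activation times measured at neighbouring pads can be turned into a local conduction velocity for the propagating signal. The sketch below is purely illustrative – the pad pitch, conduction speed and plane-wave assumption are mine, not values from the paper.

```python
import math

# Illustrative values (assumed, not from the paper): pads on a regular
# grid, and an action potential crossing the tissue as a plane wave.
PITCH = 200e-6            # pad spacing in metres (assumed)
SPEED = 0.25              # conduction velocity in m/s (typical cardiac range)
ANGLE = math.radians(30)  # propagation direction

def activation_time(x, y, speed=SPEED, angle=ANGLE):
    """Arrival time of a plane wavefront at pad position (x, y)."""
    return (x * math.cos(angle) + y * math.sin(angle)) / speed

def estimate_velocity(t00, t10, t01, pitch=PITCH):
    """Recover speed and direction from three neighbouring pads'
    activation times via finite-difference slowness components."""
    sx = (t10 - t00) / pitch  # slowness along x (s/m)
    sy = (t01 - t00) / pitch  # slowness along y (s/m)
    speed = 1.0 / math.hypot(sx, sy)
    angle = math.atan2(sy, sx)
    return speed, angle

t00 = activation_time(0, 0)
t10 = activation_time(PITCH, 0)
t01 = activation_time(0, PITCH)
speed, angle = estimate_velocity(t00, t10, t01)
print(round(speed, 3), round(math.degrees(angle), 1))  # recovers 0.25 m/s, 30°
```

With three pads and a plane-wave assumption the recovery is exact; real mapping, as in the paper, uses many more recording sites in three dimensions.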

Hologram with nanostructures could improve fraud protection

This research on holograms comes from Harvard University according to a May 13, 2016 news item on ScienceDaily,

Holograms are a ubiquitous part of our lives. They are in our wallets — protecting credit cards, cash and driver’s licenses from fraud — in grocery store scanners and biomedical devices.

Even though holographic technology has been around for decades, researchers still struggle to make compact holograms more efficient, complex and secure.

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have programmed polarization into compact holograms. These holograms use nanostructures that are sensitive to polarization (the direction in which light vibrates) to produce different images depending on the polarization of incident light. This advancement, which works across the spectrum of light, improves anti-fraud holograms as well as those used in entertainment displays.

A May 13, 2016 Harvard University press release (also on EurekAlert) by Leah Burrows, which originated the news item, provides more detail,

“The novelty in this research is that by using nanotechnology, we’ve made holograms that are highly efficient, meaning that very little light is lost to create the image,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering and senior author of the paper. “By using incident polarized light, you can see a far crisper image and can store and retrieve more images. Polarization adds another dimension to holograms that can be used to protect against counterfeiting and in applications like displays.”

Harvard’s Office of Technology Development has filed patents on this and related technologies and is actively pursuing commercial opportunities.

Holograms, like digital photographs, capture a field of light around an object and encode it on a chip. However, photographs only record the intensity of light while holograms also capture the phase of light, which is why holograms appear three-dimensional.

“Our holograms work like any other but the image produced depends on the polarization state of the illuminating light, providing an extra degree of freedom in design for versatile applications,” said Mohammadreza Khorasaninejad, postdoctoral fellow in the Capasso Lab and first author of the paper.

There are several states of polarization. In linearly polarized light the direction of vibration remains constant while in circularly polarized light it rotates clockwise or counterclockwise. The direction of rotation is the chirality.

The team built silicon nanostructured patterns on a glass substrate, which act as superpixels. Each superpixel responds to a certain polarization state of the incident light. Even more information can be encoded in the hologram by designing and arranging the nanofins to respond differently to the chirality of the polarized incident light.

“Being able to encode chirality can have important applications in information security such as anti-counterfeiting,” said Antonio Ambrosio, a research scientist in the Capasso Lab and co-first author. “For example, chiral holograms can be made to display a sequence of certain images only when illuminated with light of specific polarization not known to the forger.”

“By using different nanofin designs in the future, one could store and retrieve far more images by employing light with many states of polarization,” said Capasso.

Because this system is compact, it has application in portable projectors, 3D movies and wearable optics.

“Modern polarization imaging systems require cascading several optical components such as beam splitters, polarizers and wave plates,” said Ambrosio. “Our metasurface can distinguish between incident polarization using a single layer dielectric surface.”

“We have also incorporated in some of the holograms a lens function that has allowed us to produce images at large angles,” said Khorasaninejad. “This functionality, combined with the small footprint and light weight, has significant potential for wearable optics applications.”
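For readers wondering how an element can transmit one chirality of light and block the other, here is a toy Jones-calculus sketch. The matrix below is an idealized circular polarizer – a stand-in for, not a model of, the paper’s nanofin superpixels – and the sign convention for left versus right circular polarization is one of two in common use.

```python
import math

def matvec(M, v):
    """Apply a 2x2 Jones matrix to a Jones vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def intensity(v):
    """Optical intensity of a Jones vector."""
    return abs(v[0]) ** 2 + abs(v[1]) ** 2

s = 1 / math.sqrt(2)
LCP = [s, 1j * s]   # left circular polarization (one sign convention)
RCP = [s, -1j * s]  # right circular polarization

# Idealized chirality-selective element: transmits RCP, blocks LCP.
RCP_PASS = [[0.5, 0.5j],
            [-0.5j, 0.5]]

print(round(intensity(matvec(RCP_PASS, RCP)), 3))  # 1.0 – fully transmitted
print(round(intensity(matvec(RCP_PASS, LCP)), 3))  # 0.0 – blocked
```

Arranging many such elements, each keyed to a different polarization state, is the basic idea behind encoding multiple images in one hologram.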

Here’s a link to and a citation for the paper,

Broadband and chiral binary dielectric meta-holograms by Mohammadreza Khorasaninejad, Antonio Ambrosio, Pritpal Kanhaiya, and Federico Capasso. Science Advances 13 May 2016: Vol. 2, no. 5, e1501258 DOI: 10.1126/sciadv.1501258

This paper is open access.

Printing in midair

Dexter Johnson’s May 16, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) was my first introduction to something wonder-inducing (Note: Links have been removed),

While the growth of 3-D printing has led us to believe we can produce just about any structure with it, the truth is that it still falls somewhat short.

Researchers at Harvard University are looking to realize a more complete range of capabilities for 3-D printing in fabricating both planar and freestanding 3-D structures and do it relatively quickly and on low-cost plastic substrates.

In research published in the journal Proceedings of the National Academy of Sciences (PNAS),  the researchers extruded a silver-nanoparticle ink and annealed it with a laser so quickly that the system let them easily “write” free-standing 3-D structures.

While this may sound humdrum, what really takes one’s breath away with this technique is that it can create 3-D structures seemingly suspended in air without any signs of support as though they were drawn there with a pen.

Laser-assisted direct ink writing allowed this delicate 3D butterfly to be printed without any auxiliary support structure (Image courtesy of the Lewis Lab/Harvard University)


A May 16, 2016 Harvard University press release (also on EurekAlert) provides more detail about the work,

“Flat” and “rigid” are terms typically used to describe electronic devices. But the increasing demand for flexible, wearable electronics, sensors, antennas and biomedical devices has led a team at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS) and Wyss Institute for Biologically Inspired Engineering to develop an eye-popping new way of printing complex metallic architectures that appear to be suspended in midair.

“I am truly excited by this latest advance from our lab, which allows one to 3D print and anneal flexible metal electrodes and complex architectures ‘on-the-fly,’ ” said Lewis [Jennifer Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and Wyss Core Faculty member].

Lewis’ team used an ink composed of silver nanoparticles, sending it through a printing nozzle and then annealing it using a precisely programmed laser that applies just the right amount of energy to drive the ink’s solidification. The printing nozzle moves along x, y, and z axes and is combined with a rotary print stage to enable freeform curvature. In this way, tiny hemispherical shapes, spiral motifs, even a butterfly made of silver wires less than the width of a hair can be printed in free space within seconds. The printed wires exhibit excellent electrical conductivity, almost matching that of bulk silver.

When compared to conventional 3D printing techniques used to fabricate conductive metallic features, laser-assisted direct ink writing is not only superior in its ability to produce curvilinear, complex wire patterns in one step, but also in the sense that localized laser heating enables electrically conductive silver wires to be printed directly on low-cost plastic substrates.

According to the study’s first author, Wyss Institute Postdoctoral Fellow Mark Skylar-Scott, Ph.D., the most challenging aspect of honing the technique was optimizing the nozzle-to-laser separation distance.

“If the laser gets too close to the nozzle during printing, heat is conducted upstream which clogs the nozzle with solidified ink,” said Skylar-Scott. “To address this, we devised a heat transfer model to account for temperature distribution along a given silver wire pattern, allowing us to modulate the printing speed and distance between the nozzle and laser to elegantly control the laser annealing process ‘on the fly.’ ”

The result is that the method can produce not only sweeping curves and spirals but also sharp angular turns and directional changes written into thin air with silver inks, opening up near limitless new potential applications in electronic and biomedical devices that rely on customized metallic architectures.

Seeing is believing, eh?
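Skylar-Scott’s nozzle-clogging point can be captured at an order-of-magnitude level with a simple advection-diffusion balance: laser heat conducts upstream against the printing motion over a length of roughly alpha/v (thermal diffusivity over print speed), so keeping that length shorter than the nozzle-to-laser separation sets a minimum speed. The numbers below are assumed for illustration; this is not the authors’ heat transfer model.

```python
# Order-of-magnitude sketch (not the authors' model) of the minimum
# print speed needed to keep the heat-affected zone away from the nozzle.
ALPHA = 1e-7    # thermal diffusivity of the wet silver ink, m^2/s (assumed)
D_SEP = 100e-6  # nozzle-to-laser separation, m (assumed)

def min_print_speed(alpha=ALPHA, d=D_SEP):
    """Speed below which upstream conduction (length ~ alpha / v)
    reaches the nozzle and solidifies the ink inside it."""
    return alpha / d

v = min_print_speed()
print(f"{v * 1e3:.1f} mm/s")  # 1.0 mm/s with these illustrative numbers
```

The real control problem is harder – the separation and speed both vary along a curved wire path – which is why the team needed a full heat transfer model to modulate them “on the fly.”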

Here’s a link to and a citation for the paper,

Laser-assisted direct ink writing of planar and 3D metal architectures by Mark A. Skylar-Scott, Suman Gunasekaran, and Jennifer A. Lewis. PNAS [Proceedings of the National Academy of Sciences] 2016 doi: 10.1073/pnas.1525131113

I believe this paper is open access.

A question: I wonder what conditions are necessary before you can 3D print something in midair? Much as I’m dying to try this at home, I’m pretty sure that’s not possible.

Will AI ‘artists’ be able to fool a panel judging entries to the Neukom Institute Prizes in Computational Arts?

There’s an intriguing competition taking place at Dartmouth College (US) according to a May 2, 2016 piece on phys.org (Note: Links have been removed),

Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

On May 18 [2016] at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

The piece on phys.org is a crossposting of a May 2, 2016 article by Michael Casey and Daniel N. Rockmore for The Conversation. The article goes on to describe the competitions,

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.

The competitions are open to any and all comers [competition is now closed; the deadline was April 15, 2016]. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.
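As a toy illustration of the ‘seed in, endless variants out’ requirement – nowhere near what a competitive entry would need – even a crude template generator can produce an unbounded stream of distinct lines from a single noun phrase:

```python
import random

# Toy, hypothetical word banks; a real entry would need far richer
# models of metre, rhyme and sense.
ADJS = ["silent", "gilded", "restless", "pale", "wintry"]
VERBS = ["dreams", "wanders", "lingers", "burns", "sleeps"]
SCENES = ["beneath the moon", "at the edge of day",
          "where the rivers end", "in the hollow light"]

def line(noun_phrase, rng):
    """One pseudo-poetic line built around the seed noun phrase."""
    return (f"The {rng.choice(ADJS)} {noun_phrase} "
            f"{rng.choice(VERBS)} {rng.choice(SCENES)}")

def generate(noun_phrase, n, seed=0):
    """Deterministic per RNG seed, unbounded variants across seeds."""
    rng = random.Random(seed)
    return [line(noun_phrase, rng) for _ in range(n)]

for l in generate("cheese grater", 3):
    print(l)
```

The point of the competition, of course, is whether output like this (vastly improved) can pass for human work.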

The authors discuss issues with judging the entries,

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man [Alan Turing].) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.
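A ‘statistically indistinguishable’ verdict of the kind the organizers describe can be made concrete with an exact binomial test: if judges asked to call human-versus-machine do no better than coin-flipping, the entry passes. The judging numbers below are hypothetical, and I have no idea what test the organizers actually plan to use.

```python
import math

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: probability, under chance p,
    of an outcome at least as extreme as k correct calls out of n."""
    pmf = lambda i: math.comb(n, i) * p ** i * (1 - p) ** (n - i)
    threshold = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= threshold + 1e-12)

# Hypothetical outcome: 40 judgements, 23 correctly spotted the machine.
# A p-value above 0.05 means the judges did no better than chance.
p = binom_two_sided_p(23, 40)
print(p > 0.05)  # True – indistinguishable at this sample size
```

Note how forgiving small samples are: 23 of 40 correct still looks like chance, which is one reason evaluation “will not be entirely straightforward.”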

The authors also pose the question: Who is the artist?

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

That’s an interesting question and one I asked in the context of two ‘mashup’ art exhibitions in Vancouver (Canada) in my March 8, 2016 posting.

Getting back to Dartmouth College and its Neukom Institute Prizes in Computational Arts, here’s a list of the competition judges from the competition homepage,

David Cope (Composer, Algorithmic Music Pioneer, UCSC Music Professor)
David Krakauer (President, the Santa Fe Institute)
Louis Menand (Pulitzer Prize winning author and Professor at Harvard University)
Ray Monk (Author, Biographer, Professor of Philosophy)
Lynn Neary (NPR: Correspondent, Arts Desk and Guest Host)
Joe Palca (NPR: Correspondent, Science Desk)
Robert Siegel (NPR: Senior Host, All Things Considered)

The announcements will be made Wednesday, May 18, 2016. I can hardly wait!

Addendum

Martin Robbins has written a rather amusing May 6, 2016 post for the Guardian science blogs on AI and art critics where he also notes that the question: What is art? is unanswerable (Note: Links have been removed),

Jonathan Jones is unhappy about artificial intelligence. It might be hard to tell from a casual glance at the art critic’s recent column, “The digital Rembrandt: a new way to mock art, made by fools,” but if you look carefully the subtle clues are there. His use of the adjectives “horrible, tasteless, insensitive and soulless” in a single sentence, for example.

The source of Jones’s ire is a new piece of software that puts… I’m so sorry… the ‘art’ into ‘artificial intelligence’. By analyzing a subset of Rembrandt paintings that featured ‘bearded white men in their 40s looking to the right’, its algorithms were able to extract the key features that defined the Dutchman’s style. …

Of course an artificial intelligence is the worst possible enemy of a critic, because it has no ego and literally does not give a crap what you think. An arts critic trying to deal with an AI is like an old school mechanic trying to replace the battery in an iPhone – lost, possessing all the wrong tools and ultimately irrelevant. I’m not surprised Jones is angry. If I were in his shoes, a computer painting a Rembrandt would bring me out in hives.

Can a computer really produce art? We can’t answer that without dealing with another question: what exactly is art? …

I wonder what either Robbins or Jones will make of the Dartmouth competition?