Tag Archives: Boston University

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data have made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even for someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Congratulate China on the world’s first quantum communication network

China has some exciting news about the world’s first quantum network; it’s due to open in late August 2017 so you may want to have your congratulations in order for later this month.

An Aug. 4, 2017 news item on phys.org makes the announcement,

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world’s first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

“We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world,” commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain’s Financial Times.

An Aug. 3, 2017 CORDIS (Community Research and Development Information Service [for the European Commission]) press release, which originated the news item, provides more detail about the technology,

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a Quantum Key Distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.

In the Jinan network, some 200 users from China’s military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world’s longest land-based quantum communications network, stretching over 2 000 km.

Also speaking to the ‘Financial Times’, quantum physicist Tim Byrnes, based at New York University’s (NYU) Shanghai campus commented: ‘China has achieved staggering things with quantum research… It’s amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication.’

However, Europe is also determined to also be at the forefront of the ‘quantum revolution’ which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million into quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China’s latest achievement (and a previous one already notched up from July 2017 when its quantum satellite – the world’s first – sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world’s foremost quantum power is well and truly underway…

Prior to this latest announcement, Chinese scientists had published work about quantum satellite communications, a development that makes their imminent terrestrial quantum network possible. Gabriel Popkin wrote about the quantum satellite in a June 15, 2017 article for Science magazine,

Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China’s growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

“It’s a huge, major achievement,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada. “They started with this bold idea and managed to do it.”

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object’s quantum properties occupy multiple states at once: like Schrödinger’s cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as “spooky action at a distance.”

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be “quantum keys” that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. “Quantum repeaters” that rebroadcast quantum information could extend a network’s reach, but they aren’t yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. “Once you have satellites distributing your quantum signals throughout the globe, you’ve done it,” says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. …

Popkin goes on to detail the process for making the discovery in easily accessible (for the most part) writing and in a video and a graphic.
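For readers who want a more concrete sense of why eavesdropping is detectable, here is a minimal, toy simulation of a BB84-style key exchange in Python. It is only a sketch of the principle described above (an interceptor who measures photons in the wrong basis disturbs them, raising the error rate the two users see when they compare a sample of their key); it is not the protocol actually running on the Chinese network or satellite, and all names and numbers in it are invented.

```python
# Toy illustration of why quantum key distribution reveals tampering:
# an interceptor who measures in the wrong basis randomizes the photon,
# so the sender and receiver see errors in the bits they compare.
import random

def send_photon(bit, basis):
    """Alice encodes a bit in a basis ('+' rectilinear or 'x' diagonal)."""
    return (bit, basis)

def measure(photon, basis):
    """Measuring in the matching basis returns the bit; otherwise the result is random."""
    bit, sent_basis = photon
    return bit if basis == sent_basis else random.randint(0, 1)

def run(n_photons=10_000, eavesdropper=False):
    errors, compared = 0, 0
    for _ in range(n_photons):
        a_bit, a_basis = random.randint(0, 1), random.choice('+x')
        photon = send_photon(a_bit, a_basis)
        if eavesdropper:                      # Eve measures and re-sends, disturbing the state
            e_basis = random.choice('+x')
            photon = send_photon(measure(photon, e_basis), e_basis)
        b_basis = random.choice('+x')
        b_bit = measure(photon, b_basis)
        if a_basis == b_basis:                # keep only bits where the bases matched ("sifting")
            compared += 1
            errors += (a_bit != b_bit)
    return errors / compared

print(f"error rate, quiet channel:  {run(eavesdropper=False):.3f}")
print(f"error rate, with intercept: {run(eavesdropper=True):.3f}")
```

Running it typically prints an error rate near zero on the quiet channel and roughly 25 per cent with the interceptor present, which is the signature that tells the users their channel has been compromised.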

Russell Brandom, writing for The Verge in a June 15, 2017 article about the Chinese quantum satellite, adds detail about previous work and teams in other countries also working on the challenge (Note: Links have been removed),

Quantum networking has already shown promise in terrestrial fiber networks, where specialized routing equipment can perform the same trick over conventional fiber-optic cable. The first such network was a DARPA-funded connection established in 2003 between Harvard, Boston University, and a private lab. In the years since, a number of companies have tried to build more ambitious connections. The Swiss company ID Quantique has mapped out a quantum network that would connect many of North America’s largest data centers; in China, a separate team is working on a 2,000-kilometer quantum link between Beijing and Shanghai, which would rely on fiber to span an even greater distance than the satellite link. Still, the nature of fiber places strict limits on how far a single photon can travel.

According to ID Quantique, a reliable satellite link could connect the existing fiber networks into a single globe-spanning quantum network. “This proves the feasibility of quantum communications from space,” ID Quantique CEO Gregoire Ribordy tells The Verge. “The vision is that you have regional quantum key distribution networks over fiber, which can connect to each other through the satellite link.”

China isn’t the only country working on bringing quantum networks to space. A collaboration between the UK’s University of Strathclyde and the National University of Singapore is hoping to produce the same entanglement in cheap, readymade satellites called Cubesats. A Canadian team is also developing a method of producing entangled photons on the ground before sending them into space.

I wonder if there’s going to be an invitational event for scientists around the world to celebrate the launch.

DARPA (US Defense Advanced Research Projects Agency) ‘Atoms to Product’ program launched

It took over a year after announcing the ‘Atoms to Product’ program in 2014 for DARPA (US Defense Advanced Research Projects Agency) to select 10 proponents for three projects. Before moving onto the latest announcement, here’s a description of the ‘Atoms to Product’ program from its Aug. 27, 2014 announcement on Nanowerk,

Many common materials exhibit different and potentially useful characteristics when fabricated at extremely small scales—that is, at dimensions near the size of atoms, or a few ten-billionths of a meter. These “atomic scale” or “nanoscale” properties include quantized electrical characteristics, glueless adhesion, rapid temperature changes, and tunable light absorption and scattering that, if available in human-scale products and systems, could offer potentially revolutionary defense and commercial capabilities. Two as-yet insurmountable technical challenges, however, stand in the way: Lack of knowledge of how to retain nanoscale properties in materials at larger scales, and lack of assembly capabilities for items between nanoscale and 100 microns—slightly wider than a human hair.

DARPA has created the Atoms to Product (A2P) program to help overcome these challenges. The program seeks to develop enhanced technologies for assembling atomic-scale pieces. It also seeks to integrate these components into materials and systems from nanoscale up to product scale in ways that preserve and exploit distinctive nanoscale properties.


A Dec. 29, 2015 news item on Nanowerk features the latest about the project,

DARPA recently selected 10 performers to tackle this challenge: Zyvex Labs, Richardson, Texas; SRI, Menlo Park, California; Boston University, Boston, Massachusetts; University of Notre Dame, South Bend, Indiana; HRL Laboratories, Malibu, California; PARC, Palo Alto, California; Embody, Norfolk, Virginia; Voxtel, Beaverton, Oregon; Harvard University, Cambridge, Massachusetts; and Draper Laboratory, Cambridge, Massachusetts.

A Dec. 29, 2015 DARPA news release, which originated the news item, offers more information and an image illustrating the type of advances already made by one of the successful proponents,

DARPA recently launched its Atoms to Product (A2P) program, with the goal of developing technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. At the heart of that goal was a frustrating reality: Many common materials, when fabricated at nanometer-scale, exhibit unique and attractive “atomic-scale” behaviors including quantized current-voltage behavior, dramatically lower melting points and significantly higher specific heats—but they tend to lose these potentially beneficial traits when they are manufactured at larger “product-scale” dimensions, typically on the order of a few centimeters, for integration into devices and systems.

“The ability to assemble atomic-scale pieces into practical components and products is the key to unlocking the full potential of micromachines,” said John Main, DARPA program manager. “The DARPA Atoms to Product Program aims to bring the benefits of microelectronic-style miniaturization to systems and products that combine mechanical, electrical, and chemical processes.”

The program calls for closing the assembly gap in two steps: From atoms to microns and from microns to millimeters. Performers are tasked with addressing one or both of these steps and have been assigned to one of three working groups, each with a distinct focus area.


Image caption: Microscopic tools such as this nanoscale “atom writer” can be used to fabricate minuscule light-manipulating structures on surfaces. DARPA has selected 10 performers for its Atoms to Product (A2P) program whose goal is to develop technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. (Image credit: Boston University)

Here’s more about the projects and the performers (proponents) from the A2P performers page on the DARPA website,

Nanometer to Millimeter in a Single System – Embody, Draper and Voxtel

Current methods to treat ligament injuries in warfighters [also known as soldiers]—which account for a significant portion of reported injuries—often fail to restore pre-injury performance, due to surgical complexities and an inadequate supply of donor tissue. Embody is developing reinforced collagen nanofibers that mimic natural ligaments and replicate the biological and biomechanical properties of native tissue. Embody aims to create a new standard of care and restore pre-injury performance for warfighters and sports injury patients at a 50% reduction compared to current costs.

Radio Frequency (RF) systems (e.g., cell phones, GPS) have performance limits due to alternating current loss. In lower frequency power systems this is addressed by braiding the wires, but this is not currently possible in cell phones due to an inability to manufacture sufficiently small braided wires. Draper is developing submicron wires that can be braided using DNA self-assembly methods. If successful, portable RF systems will be more power efficient and able to send 10 times more information in a given channel.

For seamless control of structures, physics and surface chemistry—from the atomic-level to the meter-level—Voxtel Inc. and partner Oregon State University are developing an efficient, high-rate, fluid-based manufacturing process designed to imitate nature’s ability to manufacture complex multimaterial products across scales. Historically, challenges relating to the cost of atomic-level control, production speed, and printing capability have been effectively insurmountable. This team’s new process will combine synthesis and delivery of materials into a massively parallel inkjet operation that draws from nature to achieve a DNA-like mediated assembly. The goal is to assemble complex, 3-D multimaterial mixed organic and inorganic products quickly and cost-effectively—directly from atoms.

Optical Metamaterial Assembly – Boston University, University of Notre Dame, HRL and PARC.

Nanoscale devices have demonstrated nearly unlimited power and functionality, but there hasn’t been a general-purpose, high-volume, low-cost method for building them. Boston University is developing an atomic calligraphy technique that can spray paint atoms with nanometer precision to build tunable optical metamaterials for the photonic battlefield. If successful, this capability could enhance the survivability of a wide range of military platforms, providing advanced camouflage and other optical illusions in the visual range much as stealth technology has enabled in the radar range.

The University of Notre Dame is developing massively parallel nanomanufacturing strategies to overcome the requirement today that most optical metamaterials must be fabricated in “one-off” operations. The Notre Dame project aims to design and build optical metamaterials that can be reconfigured to rapidly provide on-demand, customized optical capabilities. The aim is to use holographic traps to produce optical “tiles” that can be assembled into a myriad of functional forms and further customized by single-atom electrochemistry. Integrating these materials on surfaces and within devices could provide both warfighters and platforms with transformational survivability.

HRL Laboratories is working on a fast, scalable and material-agnostic process for improving infrared (IR) reflectivity of materials. Current IR-reflective materials have limited use, because reflectivity is highly dependent on the specific angle at which light hits the material. HRL is developing a technique for allowing tailorable infrared reflectivity across a variety of materials. If successful, the process will enable manufacturable materials with up to 98% IR reflectivity at all incident angles.

PARC is working on building the first digital MicroAssembly Printer, where the “inks” are micrometer-size particles and the “image” outputs are centimeter-scale and larger assemblies. The goal is to print smart materials with the throughput and cost of laser printers, but with the precision and functionality of nanotechnology. If successful, the printer would enable the short-run production of large, engineered, customized microstructures, such as metamaterials with unique responses for secure communications, surveillance and electronic warfare.

Flexible, General Purpose Assembly – Zyvex, SRI, and Harvard.

Zyvex aims to create nano-functional micron-scale devices using customizable and scalable manufacturing that is top-down and atomically precise. These high-performance electronic, optical, and nano-mechanical components would be assembled by SRI micro-robots into fully-functional devices and sub-systems such as ultra-sensitive sensors for threat detection, quantum communication devices, and atomic clocks the size of a grain of sand.

SRI’s Levitated Microfactories will seek to combine the precision of MEMS [micro-electromechanical systems] flexures with the versatility and range of pick-and-place robots and the scalability of swarms [an idea Michael Crichton used in his 2002 novel Prey to induce horror] to assemble and electrically connect micron and millimeter components to build stronger materials, faster electronics, and better sensors.

Many high-impact, minimally invasive surgical techniques are currently performed only by elite surgeons due to the lack of tactile feedback at such small scales relative to what is experienced during conventional surgical procedures. Harvard is developing a new manufacturing paradigm for millimeter-scale surgical tools using low-cost 2D layer-by-layer processes and assembly by folding, resulting in arbitrarily complex meso-scale 3D devices. The goal is for these novel tools to restore the necessary tactile feedback and thereby nurture a new degree of dexterity to perform otherwise demanding micro- and minimally invasive surgeries, and thus expand the availability of life-saving procedures.

Sidebar

‘Sidebar’ is my way of indicating these comments have little to do with the matter at hand but could be interesting factoids for you.

First, Zyvex Labs was last mentioned here in a Sept. 10, 2014 posting titled: OCSiAL will not be acquiring Zyvex. Notice that this announcement was made shortly after DARPA’s A2P program was announced and that OCSiAL is one of RUSNANO’s (a Russian funding agency focused on nanotechnology) portfolio companies (see my Oct. 23, 2015 posting for more).

HRL Laboratories, mentioned here in an April 19, 2012 posting mostly concerned with memristors (nanoscale devices that mimic neural or synaptic plasticity), has its roots in Howard Hughes’s research laboratories as noted in the posting. In 2012, HRL was involved in another DARPA project, SyNAPSE.

Finally and minimally, PARC, also known as Xerox PARC, was made famous by Steve Jobs and Steve Wozniak when they set up their own company (Apple) basing their products on innovations that PARC had rejected. There are other versions of the story, including one by Malcolm Gladwell in the New Yorker’s May 16, 2011 issue, which presents a more complicated and, at times, contradictory version of that particular ‘origins’ story.

Synthesizing spider silk

Most of the research on spider silk and spider webs featured here comes from the Massachusetts Institute of Technology (MIT) and, more specifically, from professor Markus J. Buehler. This May 28, 2015 news item on ScienceDaily, which heralds the development of synthetic spider silk, is no exception,

After years of research decoding the complex structure and production of spider silk, researchers have now succeeded in producing samples of this exceptionally strong and resilient material in the laboratory. The new development could lead to a variety of biomedical materials — from sutures to scaffolding for organ replacements — made from synthesized silk with properties specifically tuned for their intended uses.

The findings are published this week in the journal Nature Communications by MIT professor of civil and environmental engineering (CEE) Markus Buehler, postdocs Shangchao Lin and Seunghwa Ryu, and others at MIT, Tufts University, Boston University, and in Germany, Italy, and the U.K.

The research, which involved a combination of simulations and experiments, paves the way for “creating new fibers with improved characteristics” beyond those of natural silk, says Buehler, who is also the department head in CEE. The work, he says, should make it possible to design fibers with specific characteristics of strength, elasticity, and toughness.

The new synthetic fibers’ proteins — the basic building blocks of the material — were created by genetically modifying bacteria to make the proteins normally produced by spiders. These proteins were then extruded through microfluidic channels designed to mimic the effect of an organ, called a spinneret, that spiders use to produce natural silk fibers.

A May 28, 2015 MIT news release (also on EurekAlert), which originated the news item, describes the work in more detail,

While spider silk has long been recognized as among the strongest known materials, spiders cannot practically be bred to produce harvestable fibers — so this new approach to producing a synthetic, yet spider-like, silk could make such strong and flexible fibers available for biomedical applications. By their nature, spider silks are fully biocompatible and can be used in the body without risk of adverse reactions; they are ultimately simply absorbed by the body.

The researchers’ “spinning” process, in which the constituent proteins dissolved in water are extruded through a tiny opening at a controlled rate, causes the molecules to line up in a way that produces strong fibers. The molecules themselves are a mixture of hydrophobic and hydrophilic compounds, blended so as to naturally align to form fibers much stronger than their constituent parts. “When you spin it, you create very strong bonds in one direction,” Buehler says.

The team found that getting the blend of proteins right was crucial. “We found out that when there was a high proportion of hydrophobic proteins, it would not spin any fibers, it would just make an ugly mass,” says Ryu, who worked on the project as a postdoc at MIT and is now an assistant professor at the Korea Advanced Institute of Science and Technology. “We had to find the right mix” in order to produce strong fibers, he says.

The researchers made use of computational modelling to speed up the process of synthesizing proteins for synthetic spider silk, from the news release,

This project represents the first use of simulations to understand silk production at the molecular level. “Simulation is critical,” Buehler explains: Actually synthesizing a protein can take several months; if that protein doesn’t turn out to have exactly the right properties, the process would have to start all over.

Using simulations makes it possible to “scan through a large range of proteins until we see changes in the fiber stiffness,” and then home in on those compounds, says Lin, who worked on the project as a postdoc at MIT and is now an assistant professor at Florida State University.

Controlling the properties directly could ultimately make it possible to create fibers that are even stronger than natural ones, because engineers can choose characteristics for a particular use. For example, while spiders may need elasticity so their webs can capture insects without breaking, those designing fibers for use as surgical sutures would need more strength and less stretchiness. “Silk doesn’t give us that choice,” Buehler says.

The processing of the material can be done at room temperature using water-based solutions, so scaling up manufacturing should be relatively easy, team members say. So far, the fibers they have made in the lab are not as strong as natural spider silk, but now that the basic process has been established, it should be possible to fine-tune the materials and improve its strength, they say.

“Our goal is to improve the strength, elasticity, and toughness of artificially spun fibers by borrowing bright ideas from nature,” Lin says. This study could inspire the development of new synthetic fibers — or any materials requiring enhanced properties, such as in electrical and thermal transport, in a certain direction.
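As an aside, here is a small, purely illustrative Python sketch of the kind of in-silico screening loop Lin and Buehler describe: score many candidate protein blends with a fast (here, invented) stiffness predictor, then shortlist only the most promising ones for slow, expensive synthesis. The predictor function, the parameter names, and the numbers are hypothetical placeholders, not the team’s actual molecular model.

```python
# Toy "scan, then home in" screening loop: cheap in-silico scoring first,
# costly wet-lab synthesis only for the top-ranked candidates.
import random

def predicted_stiffness(hydrophobic_fraction: float) -> float:
    """Hypothetical stand-in for a slow molecular simulation that estimates fiber stiffness."""
    # Pretend stiffness peaks at an intermediate hydrophobic/hydrophilic balance,
    # echoing the finding that too much hydrophobic protein yields no usable fiber.
    return 1.0 - (hydrophobic_fraction - 0.45) ** 2 + random.gauss(0.0, 0.02)

# Generate many candidate blends (in reality these would be candidate protein designs).
candidates = [{"id": i, "hydrophobic_fraction": random.uniform(0.1, 0.9)} for i in range(500)]

# Score every candidate in silico, then keep a short list for synthesis.
for c in candidates:
    c["stiffness"] = predicted_stiffness(c["hydrophobic_fraction"])

shortlist = sorted(candidates, key=lambda c: c["stiffness"], reverse=True)[:10]
for c in shortlist:
    print(f"candidate {c['id']:3d}: hydrophobic fraction = {c['hydrophobic_fraction']:.2f}, "
          f"predicted stiffness = {c['stiffness']:.3f}")
```

The point of the sketch is the workflow rather than the physics: months of synthesis effort get spent only on the handful of candidates the simulation ranks highest.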

Here’s a link to and a citation for the paper,

Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres by Shangchao Lin, Seunghwa Ryu, Olena Tokareva, Greta Gronau, Matthew M. Jacobsen, Wenwen Huang, Daniel J. Rizzo, David Li, Cristian Staii, Nicola M. Pugno, Joyce Y. Wong, David L. Kaplan, & Markus J. Buehler. Nature Communications 6, Article number: 6892 doi:10.1038/ncomms7892 Published 28 May 2015

This paper is behind a paywall.

My two most recent (before this one) postings about Buehler’s work are an August 5, 2014 piece about structural failures and a June 4, 2014 piece about spiderwebs and music.

Finally, I recognized one of the authors, Nicola Pugno from Italy. He’s been mentioned here more than once in regard to his biomimicry work which has often been focused on geckos and their adhesive qualities as per this April 3, 2014 post announcing his book ‘An Experimental Study on Adhesive or Anti-Adhesive, Bio-Inspired Experimental Nanomaterials‘ (co-authored with Emiliano Lepore).

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber-physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber-physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ re-generation.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies, than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.
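To make the ‘virtual heart plus virtual device’ idea a little more tangible, here is a deliberately simplified Python sketch: a toy heart model that occasionally drops beats is coupled to a toy demand pacemaker, and a safety property (no pause between beats longer than 1.5 seconds) is checked over many simulated runs. The models, parameters, and property are hypothetical illustrations on my part, not the CyberHeart project’s actual formal models or verification techniques.

```python
# Toy closed-loop "virtual heart + virtual device" simulation with a simple safety check.
import random

def simulate(duration_s=60.0, dt=0.01, lri_s=1.0, drop_prob=0.3, paced=True):
    """Return the longest pause between beats (natural or paced) in one simulated run."""
    t, last_beat, next_natural, longest_pause = 0.0, 0.0, 0.8, 0.0
    while t < duration_s:
        beat = False
        if t >= next_natural:
            if random.random() > drop_prob:   # the toy heart sometimes drops a beat
                beat = True
            next_natural += 0.8               # intrinsic rhythm of roughly 75 beats per minute
        if paced and (t - last_beat) >= lri_s:
            beat = True                       # the device paces once its lower-rate interval expires
        if beat:
            longest_pause = max(longest_pause, t - last_beat)
            last_beat = t
        t += dt
    return max(longest_pause, duration_s - last_beat)

worst_without = max(simulate(paced=False) for _ in range(200))
worst_with = max(simulate(paced=True) for _ in range(200))
print(f"longest pause without the device: {worst_without:.2f} s")
print(f"longest pause with the device:    {worst_with:.2f} s  (safety target: <= 1.5 s)")
```

In the real project the checking would rest on rigorous analysis of patient-specific models rather than random simulation, but the closed-loop structure, a device model driving and being driven by a heart model, is the same idea.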

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.

It is fascinating to observe how terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body—microrobots and nanorobots partially derived from synthetic biology research which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed, from my March 19, 2013 post about a poll and synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.

From monitoring glucose in kidneys to climate change in trees

That headline is almost poetic but I admit it’s a bit of a stretch rhyme-wise, kidneys/trees. In any event, a Feb. 6, 2015 news item on Azonano describes research into monitoring the effects of climate change on trees,

Serving as a testament to the far-reaching impact of Governor Andrew M. Cuomo’s commitment to maintaining New York State’s global leadership in nanotechnology innovation, SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE) today announced the National Science Foundation (NSF) has awarded $837,000 to support development of a first of its kind nanoscale sensor to monitor the effects of climate change on trees.

A Feb. 5, 2015 SUNY Poly CNSE news release, which originated the news item, provides more details including information about the sensor’s link to measuring glucose in kidneys,

The NSF grant was generated through the Instrument Development for Biological Research (IDBR) program, which provides funds to develop new classes of devices for bio-related research. The NANAPHID, a novel aphid-like nanosensor, will provide real-time measurements of carbohydrates in live plant tissue. Carbohydrate levels in trees are directly connected to plant productivity, such as maple sap production and survival. The NANAPHID will enable researchers to determine the effects of a variety of environmental changes including temperature, precipitation, carbon dioxide, soil acidity, pests and pathogens. The nanosensor can also provide real-time monitoring of sugar concentration levels, which are of significant importance in maple syrup production and apple and grape farming.

“The technology for the NANAPHID is rooted in a nanoscale sensor SUNY Poly CNSE developed to monitor glucose levels in human kidneys being prepared for transplant. Our team determined that certain adjustments would enable the sensor to provide similar monitoring for plants, and provide a critical insight to the effects of climate change on the environment,” said Dr. James Castracane, professor and head of the Nanobioscience Constellation at SUNY Polytechnic Institute. “This is a perfect example of the cycle of innovation made possible through the ongoing nanotechnology research and development at SUNY Poly CNSE’s NanoTech Complex.”

“This new sensor will be used in several field experiments on measuring sensitivity of boreal forest to climate warming. Questions about forest response to rising air and soil temperatures are extremely important for forecasting future atmospheric carbon dioxide levels, climate change and forest health,” said Dr. Andrei Lapenas, principal investigator and associate professor of climatology at the University at Albany. “At the same time, we already see some potential commercial application for NANAPHID-type sensors in agriculture, food industry and other fields. Our collaboration with SUNY Poly CNSE has been extremely productive and I look forward to continuing our work together.”

The NANAPHID project began in 2014 with a $135,000 SUNY Research Foundation Network of Excellence grant. SUNY Poly CNSE will receive $400,000 of the NSF award for the manufacturing aspects of the sensor array development and testing. The remaining funds will be shared between Dr. Lapenas and researchers Dr. Ruth Yanai (ESF), Dr. Thomas Horton (ESF), and Dr. Pamela Templer (Boston University) for data collection and analysis.

“With current technology, analyzing carbohydrates in plant tissues requires hours in the lab or more than $100 a sample if you want to send them out. And you can’t sample the same tissue twice, the sample is destroyed in the analysis,” said Dr. Yanai. “The implantable device will be cheap to produce and will provide continuous monitoring of sugar concentrations, which is orders of magnitude better in both cost and in the information provided. Research questions we never dreamed of asking before will become possible, like tracking changes in photosynthate over the course of a day or along the stem of a plant, because it’s a nondestructive assay.”

“I see incredible promise for the NANAPHID device in plant ecology. We can use the sensors at the root tip where plants give sugars to symbiotic fungi in exchange for soil nutrients,” said Dr. Horton. “Some fungi are believed to be significant carbon sinks because they produce extensive fungal networks in soils and we can use the sensors to compare the allocation of photosynthate to roots colonized by these fungi versus the allocation to less carbon demanding fungi. Further, the vast majority of these symbiotic fungi cannot be cultured in lab. These sensors will provide valuable insights into plant-microbe interactions under field conditions.”

“The creation of this new sensor will make understanding the effects of a variety of environmental changes, including climate change, on the health and productivity of forests much easier to measure,” said Dr. Templer. “For the first time, we will be able to measure concentrations of carbohydrates in living trees continuously and in real-time, expanding our ability to examine controls on photosynthesis, sap flow, carbon sequestration and other processes in forest ecosystems.”
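
The releases describe the sensor's output as continuous, real-time sugar-concentration readings, but they say nothing about how those readings would be processed. Purely as an illustration of the kind of downstream analysis Yanai hints at (tracking changes over the course of a day), here is a minimal Python sketch; the reading format, units, and threshold are my assumptions, not details from the SUNY Poly CNSE project.

from collections import defaultdict
from statistics import mean

def diurnal_profile(readings):
    """Average continuous (hour_of_day, sugar_concentration) readings by hour."""
    by_hour = defaultdict(list)
    for hour, concentration in readings:
        by_hour[hour].append(concentration)
    return {hour: mean(values) for hour, values in sorted(by_hour.items())}

def flag_low_hours(profile, threshold=0.5):
    """Return hours whose mean concentration falls below an assumed threshold."""
    return [hour for hour, value in profile.items() if value < threshold]

if __name__ == "__main__":
    # Simulated day of readings: concentration rises during daylight hours.
    sample = [(h, 0.4 + 0.3 * (6 <= h <= 18)) for h in range(24)]
    profile = diurnal_profile(sample)
    print(profile)
    print("Hours below threshold:", flag_low_hours(profile))

The point of a sketch like this is simply that a continuous, non-destructive signal lets you ask time-resolved questions (diurnal cycles, responses to a cold snap) that a $100-per-sample destructive assay cannot.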

Fascinating, eh? I wonder who made the connection between human kidneys and plants, and how they made it.

Ferroelectric switching in the lung, heart, and arteries

A June 23, 2014 University of Washington (state) news release (also on EurekAlert) describes how the human body (and other biological tissue) is capable of generating ferroelectricity,

University of Washington researchers have shown that a favorable electrical property is present in a type of protein found in organs that repeatedly stretch and retract, such as the lungs, heart and arteries. These findings are the first that clearly track this phenomenon, called ferroelectricity, occurring at the molecular level in biological tissues.

The news release gives a brief description of ferroelectricity and describes the research team’s latest work with biological tissues,

Ferroelectricity is a response to an electric field in which a molecule switches from having a positive to a negative charge. This switching process in synthetic materials serves as a way to power computer memory chips, display screens and sensors. This property only recently has been discovered in animal tissues and researchers think it may help build and support healthy connective tissues in mammals.

A research team led by Li first discovered ferroelectric properties in biological tissues in 2012, then in 2013 found that glucose can suppress this property in the body’s connective tissues, wherever the protein elastin is present. But while ferroelectricity is a proven entity in synthetic materials and has long been thought to be important in biological functions, its actual existence in biology hasn’t been firmly established.

This study proves that ferroelectric switching happens in the biological protein elastin. When the researchers looked at the base structures within the protein, they saw similar behavior to the unit cells of solid-state materials, where ferroelectricity is well understood.

“When we looked at the smallest structural unit of the biological tissue and how it was organized into a larger protein fiber, we then were able to see similarities to the classic ferroelectric model found in solids,” Li said.

The researchers wanted to establish a more concrete, precise way of verifying ferroelectricity in biological tissues. They used small samples of elastin taken from a pig’s aorta and poled the tissues using an electric field at high temperatures. They then measured the current with the poling field removed and found that the current switched direction when the poling electric field was switched, a sign of ferroelectricity.

They did the same thing at room temperature using a laser as the heat source, and the current also switched directions.

Then, the researchers tested for this behavior on the smallest-possible unit of elastin, called tropoelastin, and again observed the phenomenon. They concluded that this switching property is “intrinsic” to the molecular make-up of elastin.
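
For readers who find the poling description a little abstract, here is a toy Python sketch of the signature the researchers were looking for: the current associated with polarization reversal changes sign when the poling field is reversed. The tanh polarization model and every number in it are illustrative assumptions on my part, not the team's data or analysis.

import math

def remanent_polarization(poling_field, coercive_field=1.0, saturation=1.0):
    """Idealized remanent polarization left behind by a poling field (toy model)."""
    return saturation * math.tanh(poling_field / coercive_field)

def switching_current(p_before, p_after, time_interval=1.0):
    """Switching current is proportional to the change in polarization over time."""
    return (p_after - p_before) / time_interval

if __name__ == "__main__":
    # Pole in one direction, then in the opposite direction.
    p_plus = remanent_polarization(+2.0)
    p_minus = remanent_polarization(-2.0)
    i_forward = switching_current(0.0, p_plus)
    i_reverse = switching_current(p_plus, p_minus)
    # A ferroelectric-like response: the current changes sign with the poling field.
    print("forward current sign:", "+" if i_forward > 0 else "-")
    print("reverse current sign:", "+" if i_reverse > 0 else "-")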

The next step is to understand the biological and physiological significance of this property, Li said. One hypothesis is that if ferroelectricity helps elastin stay flexible and functional in the body, a lack of it could directly affect the hardening of arteries.

“We may be able to use this as a very sensitive technique to detect the initiation of the hardening process at a very early stage when no other imaging technique will be able to see it,” Li said.

The team also is looking at whether this property plays a role in normal biological functions, perhaps in regulating the growth of tissue.

Co-authors are Pradeep Sharma at the University of Houston, Yanhang Zhang at Boston University, and collaborators at Nanjing University and the Chinese Academy of Sciences.

Here’s a link to and a citation for the research paper,

Ferroelectric switching of elastin by Yuanming Liu, Hong-Ling Cai, Matthew Zelisko, Yunjie Wang, Jinglan Sun, Fei Yan, Feiyue Ma, Peiqi Wang, Qian Nataly Chen, Hairong Zheng, Xiangjian Meng, Pradeep Sharma, Yanhang Zhang, and Jiangyu Li. Proceedings of the National Academy of Sciences (PNAS) doi: 10.1073/pnas.1402909111

This paper is behind a paywall.

I think this is a new practice: the paper is accompanied by a paragraph on the significance of the work (follow the link to the paper to see it),

Ferroelectricity has long been speculated to have important biological functions, although its very existence in biology has never been firmly established. Here, we present, to our knowledge, the first macroscopic observation of ferroelectric switching in a biological system, and we elucidate the origin and mechanism underpinning ferroelectric switching of elastin. It is discovered that the polarization in elastin is intrinsic at the monomer level, analogous to the unit cell level polarization in classical perovskite ferroelectrics. Our findings settle a long-standing question on ferroelectric switching in biology and establish ferroelectricity as an important biophysical property of proteins. We believe this is a critical first step toward resolving its physiological significance and pathological implications.

Producing stronger silk musically

Markus Buehler and his interdisciplinary team (my previous posts on their work include Gossamer silk that withstands hurricane force winds and Music, math, and spiderwebs) have synthesized a new material based on spider silk. From the Nov. 28, 2012 news item on ScienceDaily,

Pound for pound, spider silk is one of the strongest materials known: Research by MIT’s [Massachusetts Institute of Technology] Markus Buehler has helped explain that this strength arises from silk’s unusual hierarchical arrangement of protein building blocks.

Now Buehler — together with David Kaplan of Tufts University and Joyce Wong of Boston University — has synthesized new variants on silk’s natural structure, and found a method for making further improvements in the synthetic material.

And an ear for music, it turns out, might be a key to making those structural improvements.

Here’s Buehler describing the work in an MIT video clip,

The Nov. 28, 2012 MIT news release by David Chandler provides more details,

Buehler’s previous research has determined that fibers with a particular structure — highly ordered, layered protein structures alternating with densely packed, tangled clumps of proteins (ABABAB) — help to give silk its exceptional properties. For this initial attempt at synthesizing a new material, the team chose to look instead at patterns in which one of the structures occurred in triplets (AAAB and BBBA).

Making such structures is no simple task. Kaplan, a chemical and biomedical engineer, modified silk-producing genes to produce these new sequences of proteins. Then Wong, a bioengineer and materials scientist, created a microfluidic device that mimicked the spider’s silk-spinning organ, which is called a spinneret.

Even after the detailed computer modeling that went into it, the outcome came as a bit of a surprise, Buehler says. One of the new materials produced very strong protein molecules — but these did not stick together as a thread. The other produced weaker protein molecules that adhered well and formed a good thread. “This taught us that it’s not sufficient to consider the properties of the protein molecules alone,” he says. “Rather, [one must] think about how they can combine to form a well-connected network at a larger scale.”

The different levels of silk’s structure, Buehler says, are analogous to the hierarchical elements that make up a musical composition — including pitch, range, dynamics and tempo. The team enlisted the help of composer John McDonald, a professor of music at Tufts, and MIT postdoc David Spivak, a mathematician who specializes in a field called category theory. Together, using analytical tools derived from category theory to describe the protein structures, the team figured out how to translate the details of the artificial silk’s structure into musical compositions.
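
The news release doesn't spell out the mapping itself, so, purely to illustrate the general idea of turning a block sequence into notes, here is a minimal Python sketch; the pitch and duration choices are my own invented stand-ins for the far more sophisticated category-theory-based translation the team actually used.

# Invented, illustrative mapping from silk block types to (pitch, duration) pairs.
BLOCK_TO_NOTE = {
    "A": ("C4", 1.0),   # ordered, crystal-forming block -> longer, lower note
    "B": ("G5", 0.5),   # disordered, tangled block -> shorter, higher note
}

def sequence_to_notes(block_sequence):
    """Translate a protein block sequence into a list of (pitch, duration) pairs."""
    return [BLOCK_TO_NOTE[block] for block in block_sequence]

if __name__ == "__main__":
    for sequence in ("ABABAB", "AAAB", "BBBA"):
        print(sequence, "->", sequence_to_notes(sequence))

Even a crude mapping like this makes the appeal of the approach clear: two sequences with the same composition but different ordering produce audibly different "music," which is exactly the kind of structural difference the team wanted to hear.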

The differences were quite distinct: The strong but useless protein molecules translated into music that was aggressive and harsh, Buehler says, while the ones that formed usable fibers sounded much softer and more fluid.

Combining materials modeling with mathematical and musical tools, Buehler says, could provide a much faster way of designing new biosynthesized materials, replacing the trial-and-error approach that prevails today. Genetically engineering organisms to produce materials is a long, painstaking process, he says, but this work “has taught us a new approach, a fundamental lesson” in combining experiment, theory and simulation to speed up the discovery process.

Materials produced this way — which can be done under environmentally benign, room-temperature conditions — could lead to new building blocks for tissue engineering or other uses, Buehler says: scaffolds for replacement organs, skin, blood vessels, or even new materials for use in civil engineering.

It may be that the complex structures of music can reveal the underlying complex structures of biomaterials found in nature, Buehler says. “There might be an underlying structural expression in music that tells us more about the proteins that make up our bodies. After all, our organs — including the brain — are made from these building blocks, and humans’ expression of music may inadvertently include more information than we are aware of.”

“Nobody has tapped into this,” he says, adding that with the breadth of his multidisciplinary team, “We could do this — making better bio-inspired materials by using music, and using music to better understand biology.”

At the end of Chandler’s news release there’s a notice about a summer course with Markus Buehler,

For those interested in the work Professor Buehler is doing, you may also be interested to know that he is offering a short course on campus this summer called Materials By Design.

Materials By Design
June 17-20, 2013
shortprograms.mit.edu/mbd

Through lectures and hands-on labs, participants will learn how materials failure, studied from a first principles perspective, can be applied in an effective “learning-from-failure approach” to design and make novel materials. Participants will also learn how superior material properties in nature and biology can be mimicked in bioinspired materials for applications in new technology. This course will be of interest to scientists, engineers, managers, and policy makers working in the area of materials design, development, manufacturing, and testing. [emphasis mine]

I wasn’t expecting to see managers and policy makers as possible students for this course.

By the way, Buehler is not the only scientist to make a connection between music and biology (although he seems to be the only person using the concept for applications); there’s also geneticist and biophysicist Mae Wan Ho and her notion of quantum jazz. From the Quantum Jazz Biology* article by David Reilly in the June 23, 2010 Isis Report,

I use the analogy of ‘quantum jazz’ to express the quantum coherence of the organism. It goes through a fantastic range of space and time scales, from the tiniest atom or subatomic particle to the whole organism and beyond. Organisms communicate with other organisms, and are attuned to natural rhythms, so they have circadian rhythms, annual rhythms, and so on. At the other extreme, you have very fast reactions that take place in femtoseconds. And all these rhythms are coordinated, there is evidence for that.

Purpose in nature (and the universe): even scientists believe

An intriguing research article titled Professional Physical Scientists Display Tenacious Teleological Tendencies: Purpose-Based Reasoning as a Cognitive Default is behind a paywall, making it difficult to do much more than comment on the Oct. 17, 2012 news item (on ScienceDaily),

A team of researchers in Boston University’s Psychology Department has found that, despite years of scientific training, even professional chemists, geologists, and physicists from major universities such as Harvard, MIT, and Yale cannot escape a deep-seated belief that natural phenomena exist for a purpose.

Although purpose-based “teleological” explanations are often found in religion, such as in creationist accounts of Earth’s origins, they are generally discredited in science. When physical scientists have time to ruminate about the reasons why natural objects and events occur, they explicitly reject teleological accounts, instead favoring causal, more mechanical explanations. However, the study by lead author Deborah Kelemen, associate professor of psychology, and collaborators Joshua Rottman and Rebecca Seston finds that when scientists are required to think under time pressure, an underlying tendency to find purpose in nature is revealed.

“It is quite surprising what these studies show,” says Kelemen. “Even though advanced scientific training can reduce acceptance of scientifically inaccurate teleological explanations, it cannot erase a tenacious early-emerging human tendency to find purpose in nature. It seems that our minds may be naturally more geared to religion than science.”

I did find the abstract for the paper,

… In Study 2, we explored this further and found that the teleological tendencies of professional scientists did not differ from those of humanities scholars. Thus, although extended education appears to produce an overall reduction in inaccurate teleological explanation, specialization as a scientist does not, in itself, additionally ameliorate scientifically inaccurate purpose-based theories about the natural world. A religion-consistent default cognitive bias toward teleological explanation tenaciously persists and may have subtle but profound consequences for scientific progress.

Here’s the full citation for the paper if you want to examine it yourself,

Professional Physical Scientists Display Tenacious Teleological Tendencies: Purpose-Based Reasoning as a Cognitive Default by Deborah Kelemen, Joshua Rottman, and Rebecca Seston. Journal of Experimental Psychology: General, Oct. 15, 2012.

What I find particularly intriguing about this work is that it helps to explain a phenomenon I’ve observed at science conferences, in science talks, and in science books: a tendency to ignore a particular set of questions (how did it start? where did it come from?) when discussing nature or, indeed, the universe.

I noticed the tendency again last night (Oct. 16, 2012) at the CBC (Canadian Broadcasting Corporation) Massey Lecture given by Neil Turok, director of Canada’s Perimeter Institute for Theoretical Physics, and held in Vancouver (Canada). The event was mentioned in my Oct. 12, 2012 posting (scroll down 2/3 of the way).

During this third lecture (What Banged?) in a series of five Massey lectures, Turok asked the audience (there were roughly 800 people by my count) to imagine a millimetre ball of light as the starting point for the universe. He never did tell us where this ball of light came from. The entire issue of how it all started (What Banged?) was avoided. Turok’s avoidance is not unusual. Somehow the question is always set aside, while the scientist jumps into the part of the story she or he can or wants to explain.

Interestingly, Turok previously gave the What Banged? talk in 2008 in Waterloo, Ontario. Judging from this description of the 2008 What Banged? talk, he did modify the presentation for last night,

The evidence that the universe emerged 14 billion years ago from an event called ‘the big bang’ is overwhelming. Yet the cause of this event remains deeply mysterious. In the conventional picture, the ‘initial singularity’ is unexplained. It is simply assumed that the universe somehow sprang into existence full of ‘inflationary’ energy, blowing up the universe into the large, smooth state we observe today. While this picture is in excellent agreement with current observations, it is both contrived and incomplete, leading us to suspect that it is not the final word. In this lecture, the standard inflationary picture will be contrasted with a new view of the initial singularity suggested by string and M-theory, in which the bang is a far more normal, albeit violent, event which occurred in a pre-existing universe. [emphasis mine] According to the new picture, a cyclical model of the universe becomes feasible in which one bang is followed by another, in a potentially endless series of cosmic cycles. The presentation will also review exciting recent theoretical developments and forthcoming observational tests which could distinguish between the rival inflationary and cyclical hypotheses.

Even this explanation doesn’t really answer the question. If there is, as suggested, a pre-existing universe, where did that come from? At the end of last night’s lecture, Turok seemed to be suggesting some kind of endless loop where past, present, and future are linked, which still raises the question: where did it all come from?

I can certainly understand how scientists who are trained to avoid teleological explanations (with their religious overtones) would want to avoid or rush over any question that might occasion just such an explanation.

Last night, the whole talk was a physics (and history of physics) lesson for ‘dummies’ that didn’t quite manage to be ‘dumb’ enough for me, and it didn’t really deliver on the promise in this description from the Oct. 16, 2012 posting by Brian Lynch on the Georgia Straight website,

Don’t worry if your grasp of relativistic wave equations isn’t what it once was. The Waterloo, Ontario–based physicist is speaking the language of the general public here. Even though his subject dwarfs pretty much everything else, the focus of the series as a whole is human in scale. Turok sees our species as standing on the brink of a scientific revolution, where we can understand “how our ideas regarding our place in the universe may develop, and how our very nature may change.” [emphasis mine]

Perhaps Turok is building up to a discussion about “our place in the universe” and “how our very nature may change” sometime in the next two lectures.