Tag Archives: UC Berkeley

The wonder of movement in 3D

Shades of Eadweard Muybridge (English photographer who pioneered photographic motion studies)! A September 19, 2018 news item on ScienceDaily describes the latest efforts to ‘capture motion’,

Patriots quarterback Tom Brady has often credited his success to spending countless hours studying his opponent’s movements on film. This understanding of movement is necessary for all living species, whether it’s figuring out what angle to throw a ball at, or perceiving the motion of predators and prey. But simple videos can’t actually give us the full picture.

That’s because traditional videos and photos for studying motion are two-dimensional, and don’t show us the underlying 3-D structure of the person or subject of interest. Without the full geometry, we can’t inspect the small and subtle movements that help us move faster, or make sense of the precision needed to perfect our athletic form.

Recently, though, researchers from MIT’s [Massachusetts Institute of Technology] Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to get a better handle on this understanding of complex motion.

There isn’t a single reference to Muybridge; still, this September 18, 2018 Massachusetts Institute of Technology news release (also on EurekAlert but published September 19, 2018), which originated the news item, delves further into the research,

The new system uses an algorithm that can take 2-D videos and turn them into 3-D printed “motion sculptures” that show how a human body moves through space. In addition to being an intriguing aesthetic visualization of shape and time, the team envisions that their “MoSculp” system could enable a much more detailed study of motion for professional athletes, dancers, or anyone who wants to improve their physical skills.

“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” says PhD student Xiuming Zhang, lead author of a new paper about the system. “You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve.”

Because motion sculptures are 3-D, users can use a computer interface to navigate around the structures and see them from different viewpoints, revealing motion-related information inaccessible from the original viewpoint.

Zhang wrote the paper alongside MIT professors William Freeman and Stefanie Mueller, PhD student Jiajun Wu, Google researchers Qiurui He and Tali Dekel, as well as UC Berkeley postdoc and former CSAIL PhD Andrew Owens.

How it works

Artists and scientists have long struggled to gain better insight into movement, limited by their own camera lens and what it could provide.

Previous work has mostly used so-called “stroboscopic” photography techniques, which look a lot like the images in a flip book stitched together. But since these photos only show snapshots of movement, you wouldn’t be able to see as much of the trajectory of a person’s arm when they’re hitting a golf ball, for example.

What’s more, these photographs also require laborious pre-shoot setup, such as using a clean background and specialized depth cameras and lighting equipment. All MoSculp needs is a video sequence.

Given an input video, the system first automatically detects 2-D key points on the subject’s body, such as the hip, knee, and ankle of a ballerina while she’s doing a complex dance sequence. Then, it takes the best possible poses from those points to be turned into 3-D “skeletons.”

After stitching these skeletons together, the system generates a motion sculpture that can be 3-D printed, showing the smooth, continuous path of movement traced out by the subject. Users can customize their figures to focus on different body parts, assign different materials to distinguish among parts, and even customize lighting.
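
For the technically inclined, here’s a minimal sketch in Python of how I picture the three stages fitting together (2-D keypoint detection, lifting to 3-D skeletons, stitching into a swept point cloud). To be clear: this is not the MoSculp code, which hasn’t been publicly released as far as I know; the synthetic ‘skeleton’ function and every number in it are stand-ins I invented for illustration,

import numpy as np

def synthetic_skeleton(t, n_joints=5):
    """Toy stand-in for 'detect 2-D keypoints, then lift them to 3-D':
    a rigid chain of joints swinging like an arm. A real system would
    run a learned pose estimator on each video frame instead."""
    angle = np.sin(t)  # swing angle at time t
    joints = []
    for j in range(n_joints):
        r = 0.2 * j  # distance of this joint along the limb
        joints.append((r * np.cos(angle), r * np.sin(angle), 0.1 * j))
    return np.array(joints)

def motion_sculpture(times):
    """Stitch per-frame 3-D skeletons into one time-ordered point cloud
    that traces the limb's path through space -- the 'motion sculpture'.
    Keeping the time index lets the sweep be meshed in order for printing."""
    cloud = [(t, *joint) for t in times for joint in synthetic_skeleton(t)]
    return np.array(cloud)  # columns: time, x, y, z

sculpture = motion_sculpture(np.linspace(0, np.pi, 60))
print(sculpture.shape)  # (300, 4): 60 frames x 5 joints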

In user studies, the researchers found that over 75 percent of subjects felt that MoSculp provided a more detailed visualization for studying motion than the standard photography techniques.

“Dance and highly-skilled athletic motions often seem like ‘moving sculptures’ but they only create fleeting and ephemeral shapes,” says Courtney Brigham, communications lead at Adobe. “This work shows how to take motions and turn them into real sculptures with objective visualizations of movement, providing a way for athletes to analyze their movements for training, requiring no more equipment than a mobile camera and some computing time.”

The system works best for larger movements, like throwing a ball or taking a sweeping leap during a dance sequence. It also works for situations that might obstruct or complicate movement, such as people wearing loose clothing or carrying objects.

Currently, the system handles only single-person scenarios, but the team hopes to expand it to multiple people soon. This could open up the potential to study things like social disorders, interpersonal interactions, and team dynamics.

This work will be presented at the User Interface Software and Technology (UIST) symposium in Berlin, Germany, in October 2018, and the team’s paper will be published as part of the proceedings.

As for anyone wondering about the Muybridge comment, here’s an image the MIT researchers have made available,

A new system uses an algorithm that can take 2-D videos and turn them into 3-D-printed “motion sculptures” that show how a human body moves through space. Image courtesy of MIT CSAIL

Contrast that MIT image with some of the images in this video capturing parts of a theatre production, Studies in Motion: The Hauntings of Eadweard Muybridge,

Getting back to MIT, here’s their MoSculp video,

There are some startling similarities, eh? I suppose there are only so many ways one can capture movement, be it in studies by Eadweard Muybridge, a theatre production about his work, or an MIT video showcasing the latest in motion capture technology.

It’s a very ‘carbony’ time: graphene jacket, graphene-skinned airplane, and schwarzite

In August 2018, I stumbled across several stories about graphene-based products and a new form of carbon.

Graphene jacket

The company producing this jacket has as its goal “… creating bionic clothing that is both bulletproof and intelligent.” Well, ‘bionic‘ means biologically-inspired engineering and ‘intelligent‘ usually means there’s some kind of computing capability in the product. This jacket, which is the first step towards the company’s goal, is not bionic, bulletproof, or intelligent. Nonetheless, it represents a very interesting science experiment in which you, the consumer, are part of step two in the company’s R&D (research and development).

Onto Vollebak’s graphene jacket,

Courtesy: Vollebak

From an August 14, 2018 article by Jesus Diaz for Fast Company,

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that have long threatened to revolutionize everything from aerospace engineering to medicine. …

Despite its immense promise, graphene still hasn’t found much use in consumer products, thanks to the fact that it’s hard to manipulate and manufacture in industrial quantities. The process of developing Vollebak’s jacket, according to the company’s cofounders, brothers Steve and Nick Tidball, took years of intensive research, during which the company worked with the same material scientists who built Michael Phelps’ 2008 Olympic Speedo swimsuit (which was famously banned for shattering records at the event).

The jacket is made out of a two-sided material, which the company invented during the extensive R&D process. The graphene side looks gunmetal gray, while the flipside appears matte black. To create it, the scientists turned raw graphite into something called graphene “nanoplatelets,” which are stacks of graphene that were then blended with polyurethane to create a membrane. That, in turn, is bonded to nylon to form the other side of the material, which Vollebak says alters the properties of the nylon itself. “Adding graphene to the nylon fundamentally changes its mechanical and chemical properties–a nylon fabric that couldn’t naturally conduct heat or energy, for instance, now can,” the company claims.

The company says that it’s reversible so you can enjoy graphene’s properties in different ways as the material interacts with either your skin or the world around you. “As physicists at the Max Planck Institute revealed, graphene challenges the fundamental laws of heat conduction, which means your jacket will not only conduct the heat from your body around itself to equalize your skin temperature and increase it, but the jacket can also theoretically store an unlimited amount of heat, which means it can work like a radiator,” Tidball explains.

He means it literally. You can leave the jacket out in the sun, or on another source of warmth, as it absorbs heat. Then, the company explains on its website, “If you then turn it inside out and wear the graphene next to your skin, it acts like a radiator, retaining its heat and spreading it around your body. The effect can be visibly demonstrated by placing your hand on the fabric, taking it away and then shooting the jacket with a thermal imaging camera. The heat of the handprint stays long after the hand has left.”
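
That ‘handprint’ demonstration is, at heart, ordinary heat diffusion. Here’s a toy one-dimensional simulation in Python showing how a warm spot spreads out and evens up along a strip of fabric; the diffusivity and temperatures are arbitrary numbers chosen purely for illustration, and this sketch says nothing about graphene’s actual (and much-debated) thermal behaviour,

import numpy as np

# Toy 1-D heat diffusion along a strip of fabric: a warm 'handprint'
# in the middle spreads out over time. This is plain Fourier conduction,
# not a model of graphene; every number here is arbitrary.
n, steps, alpha = 100, 500, 0.2  # grid points, time steps, diffusivity
temp = np.full(n, 20.0)          # fabric starts at a uniform 20 C
temp[45:55] = 35.0               # warm handprint in the middle

for _ in range(steps):
    # explicit finite-difference update of dT/dt = alpha * d2T/dx2
    temp[1:-1] += alpha * (temp[2:] - 2 * temp[1:-1] + temp[:-2])

print(round(temp.max() - temp.min(), 3))  # the temperature spread shrinks toward zero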

There’s a lot more to the article, although it does feature some hype, and I’m not sure I believe Diaz’s claim (August 14, 2018 article) that ‘graphene-based’ hair dye is perfectly safe (Note: A link has been removed),

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that will one day revolutionize everything from aerospace engineering to medicine. Its diverse uses are seemingly endless: It can stop a bullet if you add enough layers. It can change the color of your hair with no adverse effects. [emphasis mine] It can turn the walls of your home into a giant fire detector. “It’s so strong and so stretchy that the fibers of a spider web coated in graphene could catch a falling plane,” as Vollebak puts it in its marketing materials.

Not unless things have changed greatly since March 2018. My August 2, 2018 posting featured the graphene-based hair dye announcement from March 2018 and a cautionary note from Dr. Andrew Maynard (scroll down about 50% of the way for a longer excerpt of Maynard’s comments),

Northwestern University’s press release proudly announced, “Graphene finds new application as nontoxic, anti-static hair dye.” The announcement spawned headlines like “Enough with the toxic hair dyes. We could use graphene instead,” and “’Miracle material’ graphene used to create the ultimate hair dye.”

From these headlines, you might be forgiven for getting the idea that the safety of graphene-based hair dyes is a done deal. Yet having studied the potential health and environmental impacts of engineered nanomaterials for more years than I care to remember, I find such overly optimistic pronouncements worrying – especially when they’re not backed up by clear evidence.

These studies need to be approached with care, as the precise risks of graphene exposure will depend on how the material is used, how exposure occurs and how much of it is encountered. Yet there’s sufficient evidence to suggest that this substance should be used with caution – especially where there’s a high chance of exposure or that it could be released into the environment.

The full text of Dr. Maynard’s comments about graphene hair dyes and risk can be found here.

Bearing in mind that graphene-based hair dye is an entirely different class of product from the jacket, I wouldn’t necessarily dismiss the risks; I would like to know what kind of risk assessment and safety testing has been done. Due to their understandable enthusiasm, the brothers Tidball have focused all their marketing on the benefits and the opportunity for the consumer to test their product (from the graphene jacket product webpage),

While it’s completely invisible and only a single atom thick, graphene is the lightest, strongest, most conductive material ever discovered, and has the same potential to change life on Earth as stone, bronze and iron once did. But it remains difficult to work with, extremely expensive to produce at scale, and lives mostly in pioneering research labs. So following in the footsteps of the scientists who discovered it through their own highly speculative experiments, we’re releasing graphene-coated jackets into the world as experimental prototypes. Our aim is to open up our R&D and accelerate discovery by getting graphene out of the lab and into the field so that we can harness the collective power of early adopters as a test group. No-one yet knows the true limits of what graphene can do, so the first edition of the Graphene Jacket is fully reversible with one side coated in graphene and the other side not. If you’d like to take part in the next stage of this supermaterial’s history, the experiment is now open. You can now buy it, test it and tell us about it. [emphasis mine]

How maverick experiments won the Nobel Prize

While graphene’s existence was first theorised in the 1940s, it wasn’t until 2004 that two maverick scientists, Andre Geim and Konstantin Novoselov, were able to isolate and test it. Through highly speculative and unfunded experimentation known as their ‘Friday night experiments,’ they peeled layer after layer off a shaving of graphite using Scotch tape until they produced a sample of graphene just one atom thick. After similarly leftfield thinking won Geim the 2000 Ig Nobel prize for levitating frogs using magnets, the pair won the Nobel prize in 2010 for the isolation of graphene.

Should you be interested in beta-testing the jacket, it will cost you $695 (presumably USD); order here. One last thing: Vollebak is based in the UK.

Graphene skinned plane

An August 14, 2018 news item (also published as an August 1, 2018 Haydale press release) by Sue Keighley on Azonano heralds a new technology for airplanes,

Haydale, (AIM: HAYD), the global advanced materials group, notes the announcement made yesterday from the University of Central Lancashire (UCLAN) about the recent unveiling of the world’s first graphene skinned plane at the internationally renowned Farnborough air show.

The prepreg material, developed by Haydale, has potential value for fuselage and wing surfaces in larger scale aero and space applications especially for the rapidly expanding drone market and, in the longer term, the commercial aerospace sector. By incorporating functionalised nanoparticles into epoxy resins, the electrical conductivity of fibre-reinforced composites has been significantly improved for lightning-strike protection, thereby achieving substantial weight saving and removing some manufacturing complexities.

Before getting to the photo, here’s a definition for pre-preg from its Wikipedia entry (Note: Links have been removed),

Pre-preg is “pre-impregnated” composite fibers where a thermoset polymer matrix material, such as epoxy, or a thermoplastic resin is already present. The fibers often take the form of a weave and the matrix is used to bond them together and to other components during manufacture.

Haydale has supplied graphene enhanced prepreg material for Juno, a three-metre wide graphene-enhanced composite skinned aircraft, that was revealed as part of the ‘Futures Day’ at Farnborough Air Show 2018. [downloaded from https://www.azonano.com/news.aspx?newsID=36298]

A July 31, 2018 University of Central Lancashire (UCLan) press release provides a tiny bit more (pun intended) detail,

The University of Central Lancashire (UCLan) has unveiled the world’s first graphene skinned plane at an internationally renowned air show.

Juno, a three-and-a-half-metre wide graphene skinned aircraft, was revealed on the North West Aerospace Alliance (NWAA) stand as part of the ‘Futures Day’ at Farnborough Air Show 2018.

The University’s aerospace engineering team has worked in partnership with the Sheffield Advanced Manufacturing Research Centre (AMRC), the University of Manchester’s National Graphene Institute (NGI), Haydale Graphene Industries (Haydale) and a range of other businesses to develop the unmanned aerial vehicle (UAV), which also includes graphene batteries and 3D printed parts.

Billy Beggs, UCLan’s Engineering Innovation Manager, said: “The industry reaction to Juno at Farnborough was superb with many positive comments about the work we’re doing. Having Juno at one of the world’s biggest air shows demonstrates the great strides we’re making in leading a programme to accelerate the uptake of graphene and other nano-materials into industry.

“The programme supports the objectives of the UK Industrial Strategy and the University’s Engineering Innovation Centre (EIC) to increase industry relevant research and applications linked to key local specialisms. Given that Lancashire represents the fourth largest aerospace cluster in the world, there is perhaps no better place to be developing next generation technologies for the UK aerospace industry.”

Previous graphene developments at UCLan have included the world’s first flight of a graphene skinned wing and the launch of a specially designed graphene-enhanced capsule into near space using high altitude balloons.

UCLan engineering students have been involved in the hands-on project, helping build Juno on the Preston Campus.

Haydale supplied much of the material and all the graphene used in the aircraft. Ray Gibbs, Chief Executive Officer, said: “We are delighted to be part of the project team. Juno has highlighted the capability and benefit of using graphene to meet key issues faced by the market, such as reducing weight to increase range and payload, defeating lightning strike and protecting aircraft skins against ice build-up.”

David Bailey Chief Executive of the North West Aerospace Alliance added: “The North West aerospace cluster contributes over £7 billion to the UK economy, accounting for one quarter of the UK aerospace turnover. It is essential that the sector continues to develop next generation technologies so that it can help the UK retain its competitive advantage. It has been a pleasure to support the Engineering Innovation Centre team at the University in developing the world’s first full graphene skinned aircraft.”

The Juno project team represents the latest phase in a long-term strategic partnership between the University and a range of organisations. The partnership is expected to go from strength to strength following the opening of the £32m EIC facility in February 2019.

The next step is to fly Juno and conduct further tests over the next two months.

Next item, a new carbon material.

Schwarzite

I love watching this gif of a schwarzite,

The three-dimensional cage structure of a schwarzite that was formed inside the pores of a zeolite. (Graphics by Yongjin Lee and Efrem Braun)

An August 13, 2018 news item on Nanowerk announces the new carbon structure,

The discovery of buckyballs [also known as fullerenes, C60, or buckminsterfullerenes] surprised and delighted chemists in the 1980s, nanotubes jazzed physicists in the 1990s, and graphene charged up materials scientists in the 2000s, but one nanoscale carbon structure – a negatively curved surface called a schwarzite – has eluded everyone. Until now.

University of California, Berkeley [UC Berkeley], chemists have proved that three carbon structures recently created by scientists in South Korea and Japan are in fact the long-sought schwarzites, which researchers predict will have unique electrical and storage properties like those now being discovered in buckminsterfullerenes (buckyballs or fullerenes for short), nanotubes and graphene.

An August 13, 2018 UC Berkeley news release by Robert Sanders, which originated the news item, describes how the Berkeley scientists and the members of their international collaboration from Switzerland, China, Germany, Italy, and Russia have contributed to the current state of schwarzite research,

The new structures were built inside the pores of zeolites, crystalline forms of silicon dioxide – sand – more commonly used as water softeners in laundry detergents and to catalytically crack petroleum into gasoline. Called zeolite-templated carbons (ZTC), the structures were being investigated for possible interesting properties, though the creators were unaware of their identity as schwarzites, which theoretical chemists have worked on for decades.

Based on this theoretical work, chemists predict that schwarzites will have unique electronic, magnetic and optical properties that would make them useful as supercapacitors, battery electrodes and catalysts, and with large internal spaces ideal for gas storage and separation.

UC Berkeley postdoctoral fellow Efrem Braun and his colleagues identified these ZTC materials as schwarzites based on their negative curvature, and developed a way to predict which zeolites can be used to make schwarzites and which can’t.

“We now have the recipe for how to make these structures, which is important because, if we can make them, we can explore their behavior, which we are working hard to do now,” said Berend Smit, an adjunct professor of chemical and biomolecular engineering at UC Berkeley and an expert on porous materials such as zeolites and metal-organic frameworks.

Smit, the paper’s corresponding author, Braun and their colleagues in Switzerland, China, Germany, Italy and Russia will report their discovery this week in the journal Proceedings of the National Academy of Sciences. Smit is also a faculty scientist at Lawrence Berkeley National Laboratory.

Playing with carbon

Diamond and graphite are well-known three-dimensional crystalline arrangements of pure carbon, but carbon atoms can also form two-dimensional “crystals” — hexagonal arrangements patterned like chicken wire. Graphene is one such arrangement: a flat sheet of carbon atoms that is not only the strongest material on Earth, but also has a high electrical conductivity that makes it a promising component of electronic devices.

schwarzite carbon cage

The cage structure of a schwarzite that was formed inside the pores of a zeolite. The zeolite is subsequently dissolved to release the new material. (Graphics by Yongjin Lee and Efrem Braun)

Graphene sheets can be wadded up to form soccer ball-shaped fullerenes – spherical carbon cages that can store molecules and are being used today to deliver drugs and genes into the body. Rolling graphene into a cylinder yields fullerenes called nanotubes, which are being explored today as highly conductive wires in electronics and storage vessels for gases like hydrogen and carbon dioxide. All of these are submicroscopic, 10,000 times smaller than the width of a human hair.

To date, however, only positively curved fullerenes and graphene, which has zero curvature, have been synthesized, feats rewarded by Nobel Prizes in 1996 and 2010, respectively.

In the 1880s, German physicist Hermann Schwarz investigated negatively curved structures that resemble soap-bubble surfaces, and when theoretical work on carbon cage molecules ramped up in the 1990s, Schwarz’s name became attached to the hypothetical negatively curved carbon sheets.

“The experimental validation of schwarzites thus completes the triumvirate of possible curvatures to graphene: positively curved, flat, and now negatively curved,” Braun added.

Minimize me

Like soap bubbles on wire frames, schwarzites are topologically minimal surfaces. When made inside a zeolite, a vapor of carbon-containing molecules is injected, allowing the carbon to assemble into a two-dimensional graphene-like sheet lining the walls of the pores in the zeolite. The surface is stretched tautly to minimize its area, which makes all the surfaces curve negatively, like a saddle. The zeolite is then dissolved, leaving behind the schwarzite.
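
For anyone who wants the mathematics, ‘minimal surface’ has a precise meaning: the two principal curvatures cancel at every point, so the mean curvature is zero and the Gaussian curvature is negative wherever the surface is genuinely saddle-shaped. In LaTeX form (and noting that the trigonometric level set below is the standard nodal approximation to Schwarz’s P surface; the exact surface requires elliptic integrals),

% zero mean curvature forces non-positive Gaussian curvature
H = \tfrac{1}{2}(\kappa_1 + \kappa_2) = 0
\quad\Longrightarrow\quad
K = \kappa_1 \kappa_2 = -\kappa_1^2 \le 0

% nodal approximation to Schwarz's P ("primitive") surface
\cos x + \cos y + \cos z = 0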

soap bubble schwarzite structure

A computer-rendered negatively curved soap bubble that exhibits the geometry of a carbon schwarzite. (Felix Knöppel image)

“These negatively-curved carbons have been very hard to synthesize on their own, but it turns out that you can grow the carbon film catalytically at the surface of a zeolite,” Braun said. “But the schwarzites synthesized to date have been made by choosing zeolite templates through trial and error. We provide very simple instructions you can follow to rationally make schwarzites and we show that, by choosing the right zeolite, you can tune schwarzites to optimize the properties you want.”

Researchers should be able to pack unusually large amounts of electrical charge into schwarzites, which would make them better capacitors than conventional ones used today in electronics. Their large interior volume would also allow storage of atoms and molecules, which is also being explored with fullerenes and nanotubes. And their large surface area, equivalent to the surface areas of the zeolites they’re grown in, could make them as versatile as zeolites for catalyzing reactions in the petroleum and natural gas industries.

Braun modeled ZTC structures computationally using the known structures of zeolites, and worked with topological mathematician Senja Barthel of the École Polytechnique Fédérale de Lausanne in Sion, Switzerland, to determine which of the minimal surfaces the structures resembled.

The team determined that, of the approximately 200 zeolites created to date, only 15 can be used as a template to make schwarzites, and only three of them have been used to date to produce schwarzite ZTCs. Over a million zeolite structures have been predicted, however, so there could be many more possible schwarzite carbon structures made using the zeolite-templating method.

Other co-authors of the paper are Yongjin Lee, Seyed Mohamad Moosavi and Barthel of the École Polytechnique Fédérale de Lausanne, Rocio Mercado of UC Berkeley, Igor Baburin of the Technische Universität Dresden in Germany and Davide Proserpio of the Università degli Studi di Milano in Italy and Samara State Technical University in Russia.

Here’s a link to and a citation for the paper,

Generating carbon schwarzites via zeolite-templating by Efrem Braun, Yongjin Lee, Seyed Mohamad Moosavi, Senja Barthel, Rocio Mercado, Igor A. Baburin, Davide M. Proserpio, and Berend Smit. PNAS 201805062; published ahead of print August 14, 2018. https://doi.org/10.1073/pnas.1805062115

This paper appears to be open access.

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost it at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960’s sexual revolution in the US along with mention of a prior sexual revolution in the 1920s in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when, as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black and white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis and you can now find science in art galleries. (Not to mention the movies and television where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what’s called ’emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technology.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with their book and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He also describes a connection to current trends in biotechnology,

De-Extinction

In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit. As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle. But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book: he uses each film as a launching pad for a clear, readable description of the relevant bits of science, so you understand why the premise was likely, unlikely, or pure fantasy, while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions such as: Should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?
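
Maynard’s ‘billion-piece jigsaw’ line deserves a little unpacking, because the core of the puzzle, finding overlaps between fragments and merging them, is easy to sketch and hard to scale. Here’s a minimal greedy-overlap toy in Python; the fragments and the made-up ‘sequence’ are mine, purely for illustration, and real genome assemblers must also cope with sequencing errors, repeats, and billions of reads,

def merge(a, b, min_overlap=3):
    """Return a+b merged if the end of `a` overlaps the start of `b`."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None

def greedy_assemble(fragments):
    """Toy greedy assembly: repeatedly merge the first pair found with a
    sufficient overlap. Real assemblers are vastly more sophisticated."""
    frags = list(fragments)
    merged = True
    while merged and len(frags) > 1:
        merged = False
        for i in range(len(frags)):
            for j in range(len(frags)):
                if i != j:
                    m = merge(frags[i], frags[j])
                    if m:
                        frags = [f for k, f in enumerate(frags) if k not in (i, j)]
                        frags.append(m)
                        merged = True
                        break
            if merged:
                break
    return frags

# Overlapping reads from the invented sequence "GATTACAGATTACA"
print(greedy_assemble(["GATTACAG", "ACAGATTA", "ATTACA"]))  # ['GATTACAGATTACA']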

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional ‘memoirish’ anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media, as one glorious breakthrough after the next.

Maynard does discuss the issues of social inequality, power, and ownership. For example, who owns your transplant or data? Puzzlingly, he doesn’t touch on the current environment where scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the movie’s future environment, deployed both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This, of course, points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time, and I’m hopeful there’ll be a next time, Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or, even, cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live-action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in the White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour but since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ (Chapter Three) was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s an easy corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of the two chapters noting that there are cultural differences.

One area where I had a significant problem was with regard to being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, Sci-fi movies are the secret weapon that could help Silicon Valley grow up and a Nov. 21, 2018 article on slate.com, The True Cost of Stain-Resistant Pants; The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology. Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great percentage of scientists are academic. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

CRISPR-Cas12a as a new diagnostic tool

Cas12a is similar to Cas9 but has an added feature, as noted in this February 15, 2018 news item on ScienceDaily,

Utilizing an unsuspected activity of the CRISPR-Cas12a protein, researchers created a simple diagnostic system called DETECTR to analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance and also diagnose bacterial and viral infections. The scientists discovered that when Cas12a binds its double-stranded DNA target, it indiscriminately chews up all single-stranded DNA. They then created reporter molecules attached to single-stranded DNA to signal when Cas12a finds its target.

A February 15, 2018 University of California at Berkeley (UC Berkeley) news release by Robert Sanders, which originated the news item, provides more detail and history,

CRISPR-Cas12a, one of the DNA-cutting proteins revolutionizing biology today, has an unexpected side effect that makes it an ideal enzyme for simple, rapid and accurate disease diagnostics.

blood in test tube

(iStock)

Cas12a, discovered in 2015 and originally called Cpf1, is like the well-known Cas9 protein that UC Berkeley’s Jennifer Doudna and colleague Emmanuelle Charpentier turned into a powerful gene-editing tool in 2012.

CRISPR-Cas9 has supercharged biological research in a mere six years, speeding up exploration of the causes of disease and sparking many potential new therapies. Cas12a was a major addition to the gene-cutting toolbox, able to cut double-stranded DNA at places that Cas9 can’t, and, because it leaves ragged edges, perhaps easier to use when inserting a new gene at the DNA cut.

But co-first authors Janice Chen, Enbo Ma and Lucas Harrington in Doudna’s lab discovered that when Cas12a binds and cuts a targeted double-stranded DNA sequence, it unexpectedly unleashes indiscriminate cutting of all single-stranded DNA in a test tube.

Most of the DNA in a cell is in the form of a double-stranded helix, so this is not necessarily a problem for gene-editing applications. But it does allow researchers to use a single-stranded “reporter” molecule with the CRISPR-Cas12a protein, which produces an unambiguous fluorescent signal when Cas12a has found its target.

“We continue to be fascinated by the functions of bacterial CRISPR systems and how mechanistic understanding leads to opportunities for new technologies,” said Doudna, a professor of molecular and cell biology and of chemistry and a Howard Hughes Medical Institute investigator.

DETECTR diagnostics

The new DETECTR system based on CRISPR-Cas12a can analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance as well as diagnose bacterial and viral infections. Target DNA is amplified by RPA to make it easier for Cas12a to find it and bind, unleashing indiscriminate cutting of single-stranded DNA, including DNA attached to a fluorescent marker (gold star) that tells researchers that Cas12a has found its target.

The UC Berkeley researchers, along with their colleagues at UC San Francisco, will publish their findings Feb. 15 [2018] via the journal Science’s fast-track service, First Release.

The researchers developed a diagnostic system they dubbed the DNA Endonuclease Targeted CRISPR Trans Reporter, or DETECTR, for quick and easy point-of-care detection of even small amounts of DNA in clinical samples. It involves adding all reagents in a single reaction: CRISPR-Cas12a and its RNA targeting sequence (guide RNA), fluorescent reporter molecule and an isothermal amplification system called recombinase polymerase amplification (RPA), which is similar to polymerase chain reaction (PCR). When warmed to body temperature, RPA rapidly multiplies the number of copies of the target DNA, boosting the chances Cas12a will find one of them, bind and unleash single-strand DNA cutting, resulting in a fluorescent readout.
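
The logic of that single ‘one-pot’ reaction can be captured in a few lines. Here’s a toy Python model of the DETECTR readout; the sequences, the amplification factor, and the fluorescence threshold are all invented for illustration and bear no relation to the real assay’s chemistry,

def detectr_assay(sample_dna, guide, reporters=1000):
    """Toy model of the DETECTR readout logic. If the (RPA-amplified)
    sample contains the guide-matched target, Cas12a activates and
    chews up the single-stranded DNA reporters, releasing fluorophores.
    Every number and sequence here is invented."""
    amplified = sample_dna * 1000           # crude stand-in for RPA amplification
    cas12a_active = guide in amplified      # guide RNA finds its target sequence
    fluorescence = reporters if cas12a_active else 0  # collateral ssDNA cleavage
    return fluorescence > 100               # signal above background => positive

# Invented 'target' and 'no target' samples, for illustration only
print(detectr_assay("ATGGCGCGTAGT", guide="GCGCGT"))  # True  (target present)
print(detectr_assay("ATGGAATTCAGT", guide="GCGCGT"))  # False (no target)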

The UC Berkeley researchers tested this strategy using patient samples containing human papilloma virus (HPV), in collaboration with Joel Palefsky’s lab at UC San Francisco. Using DETECTR, they were able to demonstrate accurate detection of the “high-risk” HPV types 16 and 18 in samples infected with many different HPV types.

“This protein works as a robust tool to detect DNA from a variety of sources,” Chen said. “We want to push the limits of the technology, which is potentially applicable in any point-of-care diagnostic situation where there is a DNA component, including cancer and infectious disease.”

The indiscriminate cutting of all single-stranded DNA, which the researchers discovered holds true for all related Cas12 molecules, but not Cas9, may have unwanted effects in genome editing applications, but more research is needed on this topic, Chen said. During the transcription of genes, for example, the cell briefly creates single strands of DNA that could accidentally be cut by Cas12a.

The activity of the Cas12 proteins is similar to that of another family of CRISPR enzymes, Cas13a, which chew up RNA after binding to a target RNA sequence. Various teams, including Doudna’s, are developing diagnostic tests using Cas13a that could, for example, detect the RNA genome of HIV.

infographic about DETECTR system

(Infographic by the Howard Hughes Medical Institute)

These new tools have been repurposed from their original role in microbes where they serve as adaptive immune systems to fend off viral infections. In these bacteria, Cas proteins store records of past infections and use these “memories” to identify harmful DNA during infections. Cas12a, the protein used in this study, then cuts the invading DNA, saving the bacteria from being taken over by the virus.

The chance discovery of Cas12a’s unusual behavior highlights the importance of basic research, Chen said, since it came from a basic curiosity about the mechanism Cas12a uses to cleave double-stranded DNA.

“It’s cool that, by going after the question of the cleavage mechanism of this protein, we uncovered what we think is a very powerful technology useful in an array of applications,” Chen said.

Here’s a link to and a citation for the paper,

CRISPR-Cas12a target binding unleashes indiscriminate single-stranded DNase activity by Janice S. Chen, Enbo Ma, Lucas B. Harrington, Maria Da Costa, Xinran Tian, Joel M. Palefsky, Jennifer A. Doudna. Science 15 Feb 2018: eaar6245 DOI: 10.1126/science.aar6245

This paper is behind a paywall.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued a news release that didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. This one is written more in the style of a magazine article, and so the details take a while to emerge. From a July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.
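
To make the bandwidth argument concrete, here is a rough back-of-the-envelope sketch. Every number in it is an assumption chosen for illustration (the paper does not report these figures); the point is only that a million short vertical wires running at a modest rate can beat one fast shared bus,

# Rough comparison: shuttling 1 million sensor readings over a single
# off-chip bus versus writing them through dense vertical wires to the
# memory layer directly above. All figures are illustrative assumptions.

N_SENSORS = 1_000_000
BITS_PER_READING = 8

BUS_BITS_PER_SECOND = 64 * 1e9   # assumed 64-bit off-chip bus at 1 GHz
VIA_BITS_PER_SECOND = 1e6        # assumed modest rate per vertical wire
N_VIAS = N_SENSORS               # assumed one wire per sensor

serial_seconds = (N_SENSORS * BITS_PER_READING) / BUS_BITS_PER_SECOND
parallel_seconds = BITS_PER_READING / VIA_BITS_PER_SECOND  # all wires at once

print(f"shared off-chip bus: {serial_seconds * 1e6:7.1f} microseconds")
print(f"layered 3-D wires:   {parallel_seconds * 1e6:7.1f} microseconds")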

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

Artificial intelligence and metaphors

This is a different approach to artificial intelligence. From a June 27, 2017 news item on ScienceDaily,

Ask Siri to find a math tutor to help you “grasp” calculus and she’s likely to respond that your request is beyond her abilities. That’s because metaphors like “grasp” are difficult for Apple’s voice-controlled personal assistant to, well, grasp.

But new UC Berkeley research suggests that Siri and other digital helpers could someday learn the algorithms that humans have used for centuries to create and understand metaphorical language.

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary.

The results, published in the journal Cognitive Psychology, demonstrate how throughout history humans have used language that originally described palpable experiences such as “grasping an object” to describe more intangible concepts such as “grasping an idea.”

Unfortunately, this image is not the best quality,

Scientists have created historical maps showing the evolution of metaphoric language. (Image courtesy of Mahesh Srinivasan)

A June 27, 2017 University of California at Berkeley (or UC Berkeley) news release by Yasmin Anwar, which originated the news item, provides more detail,

“The use of concrete language to talk about abstract ideas may unlock mysteries about how we are able to communicate and conceptualize things we can never see or touch,” said study senior author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley. “Our results may also pave the way for future advances in artificial intelligence.”

The findings provide the first large-scale evidence that the creation of new metaphorical word meanings is systematic, researchers said. They can also inform efforts to design natural language processing systems like Siri to help them understand creativity in human language.

“Although such systems are capable of understanding many words, they are often tripped up by creative uses of words that go beyond their existing, pre-programmed vocabularies,” said study lead author Yang Xu, a postdoctoral researcher in linguistics and cognitive science at UC Berkeley.

“This work brings opportunities toward modeling metaphorical words at a broad scale, ultimately allowing the construction of artificial intelligence systems that are capable of creating and comprehending metaphorical language,” he added.

Srinivasan and Xu conducted the study with Lehigh University psychology professor Barbara Malt.

Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as “water,” were extended to another semantic domain, such as “mind.”

Researchers called the original semantic domain the “source domain” and the domain that the metaphorical meaning was extended to, the “target domain.”

More than 1,400 online participants were recruited to rate semantic domains such as “water” or “mind” according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).

These ratings were fed into computational models that the researchers had developed to predict which semantic domains had been the sources or targets of metaphorical extension.

In comparing their computational predictions against the actual historical record provided by the Metaphor Map of English, researchers found that their models correctly forecast about 75 percent of recorded metaphorical language mappings over the past millennium.

Furthermore, they found that the degree to which a domain is tied to experience in the external world, such as “grasping a rope,” was the primary predictor of how a word would take on a new metaphorical meaning such as “grasping an idea.”

For example, time and again, researchers found that words associated with textiles, digestive organs, wetness, solidity and plants were more likely to provide sources for metaphorical extension, while mental and emotional states, such as excitement, pride and fear were more likely to be the targets of metaphorical extension.
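
As a thought experiment, the prediction task can be boiled down to a few lines of code. The sketch below uses invented ratings and a deliberately crude rule (the more “external” domain donates the metaphor); the published study used richer computational models and the full Metaphor Map data,

# A toy version of the direction-of-mapping prediction. The ratings and the
# decision rule are invented for illustration; the paper's models and the
# Metaphor Map of English data are far richer.

externality = {   # hypothetical mean participant ratings (0 to 1)
    "water": 0.90, "plants": 0.85, "textiles": 0.80,
    "mind": 0.20, "fear": 0.15, "pride": 0.10,
}

def predict(domain_a, domain_b):
    """Predict (source, target): the more external domain donates the metaphor."""
    if externality[domain_a] >= externality[domain_b]:
        return (domain_a, domain_b)
    return (domain_b, domain_a)

# Simplified stand-in for the historical record of (source, target) mappings.
history = [("water", "mind"), ("plants", "pride"), ("textiles", "fear")]

correct = sum(predict(a, b) == (a, b) for a, b in history)
print(f"correctly predicted {correct} of {len(history)} mappings")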

Here’s a link to and a citation for the paper,

Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium by Yang Xu, Barbara C. Malt, Mahesh Srinivasan. Cognitive Psychology Volume 96, August 2017, Pages 41–53 DOI: https://doi.org/10.1016/j.cogpsych.2017.05.005

The early web version of this paper is behind a paywall.

For anyone interested in the ‘Metaphor Map of English’ database mentioned in the news release, you can find it here on the University of Glasgow website. By the way, it also seems to be known as ‘Mapping Metaphor with the Historical Thesaurus‘.

Creating multiferroic material at room temperature

A Sept. 23, 2016 news item on ScienceDaily describes some research from Cornell University (US),

Multiferroics — materials that exhibit both magnetic and electric order — are of interest for next-generation computing but difficult to create because the conditions conducive to each of those states are usually mutually exclusive. And in most multiferroics found to date, their respective properties emerge only at extremely low temperatures.

Two years ago, researchers in the labs of Darrell Schlom, the Herbert Fisk Johnson Professor of Industrial Chemistry in the Department of Materials Science and Engineering, and Dan Ralph, the F.R. Newman Professor in the College of Arts and Sciences, in collaboration with professor Ramamoorthy Ramesh at UC Berkeley, published a paper announcing a breakthrough in multiferroics involving the only known material in which magnetism can be controlled by applying an electric field at room temperature: the multiferroic bismuth ferrite.

Schlom’s group has partnered with David Muller and Craig Fennie, professors of applied and engineering physics, to take that research a step further: The researchers have combined two non-multiferroic materials, using the best attributes of both to create a new room-temperature multiferroic.

Their paper, “Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic,” was published — along with a companion News & Views piece — Sept. 22 [2016] in Nature. …

A Sept. 22, 2016 Cornell University news release by Tom Fleischman, which originated the news item, details more about the work (Note: A link has been removed),

The group engineered thin films of hexagonal lutetium iron oxide (LuFeO3), a material known to be a robust ferroelectric but not strongly magnetic. The LuFeO3 consists of alternating single monolayers of lutetium oxide and iron oxide, and differs from a strong ferrimagnetic oxide (LuFe2O4), which consists of alternating monolayers of lutetium oxide with double monolayers of iron oxide.

The researchers found, however, that they could combine these two materials at the atomic scale to create a new compound that was not only multiferroic but had better properties than either of the individual constituents. In particular, they found they needed to add just one extra monolayer of iron oxide to every 10 atomic repeats of the LuFeO3 to dramatically change the properties of the system.
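
A schematic sketch may help picture the layering. It simply writes out the monolayer sequence described above, with one extra iron oxide monolayer per 10 repeats; it is a cartoon of the stacking, not an actual deposition recipe,

# Cartoon of the engineered superlattice: alternating lutetium oxide and
# iron oxide monolayers, with one extra iron oxide monolayer inserted
# after every 10 repeats (a schematic, not a growth recipe).

def superlattice(repeats, extra_every=10):
    layers = []
    for i in range(1, repeats + 1):
        layers += ["LuO", "FeO"]      # one LuFeO3-like atomic repeat
        if i % extra_every == 0:
            layers.append("FeO")      # extra monolayer -> local LuFe2O4-like block
    return layers

stack = superlattice(20)
print(stack[16:24])   # the double FeO layer appears after the 10th repeat
print(stack.count("LuO"), "LuO monolayers,", stack.count("FeO"), "FeO monolayers")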

That precision engineering was done via molecular-beam epitaxy (MBE), a specialty of the Schlom lab. A technique Schlom likens to “atomic spray painting,” MBE let the researchers design and assemble the two different materials in layers, a single atom at a time.

The combination of the two materials produced a strongly ferrimagnetic layer near room temperature. They then tested the new material at the Lawrence Berkeley National Laboratory (LBNL) Advanced Light Source in collaboration with co-author Ramesh to show that the ferrimagnetic atoms followed the alignment of their ferroelectric neighbors when switched by an electric field.

“It was when our collaborators at LBNL demonstrated electrical control of magnetism in the material that we made that things got super exciting,” Schlom said. “Room-temperature multiferroics are exceedingly rare and only multiferroics that enable electrical control of magnetism are relevant to applications.”

In electronics devices, the advantages of multiferroics include their reversible polarization in response to low-power electric fields – as opposed to heat-generating and power-sapping electrical currents – and their ability to hold their polarized state without the need for continuous power. High-performance memory chips make use of ferroelectric or ferromagnetic materials.

“Our work shows that an entirely different mechanism is active in this new material,” Schlom said, “giving us hope for even better – higher-temperature and stronger – multiferroics for the future.”

Collaborators hailed from the University of Illinois at Urbana-Champaign, the National Institute of Standards and Technology, the University of Michigan and Penn State University.

Here are links to and citations for the paper and its companion piece,

Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic by Julia A. Mundy, Charles M. Brooks, Megan E. Holtz, Jarrett A. Moyer, Hena Das, Alejandro F. Rébola, John T. Heron, James D. Clarkson, Steven M. Disseler, Zhiqi Liu, Alan Farhan, Rainer Held, Robert Hovden, Elliot Padgett, Qingyun Mao, Hanjong Paik, Rajiv Misra, Lena F. Kourkoutis, Elke Arenholz, Andreas Scholl, Julie A. Borchers, William D. Ratcliff, Ramamoorthy Ramesh, Craig J. Fennie, Peter Schiffer et al. Nature 537, 523–527 (22 September 2016) doi:10.1038/nature19343 Published online 21 September 2016

Condensed-matter physics: Multitasking materials from atomic templates by Manfred Fiebig. Nature 537, 499–500  (22 September 2016) doi:10.1038/537499a Published online 21 September 2016

Both the paper and its companion piece are behind a paywall.

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton.[3] It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view.[4] The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016 and as part of the publicity effort the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance: “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

‘Neural dust’ could lead to introduction of electroceuticals

In case anyone is wondering, the woman who’s manipulating a prosthetic arm so she can eat or have a drink of coffee probably has a bulky implant/docking station in her head. Right now that bulky implant is the latest and greatest innovation for tetraplegics (aka, quadriplegics) as it frees, to some extent, people who’ve had no independent movement of any kind. By virtue of the juxtaposition of the footage of the woman with the ‘neural dust’ footage, they seem to be suggesting that neural dust might some day accomplish the same type of connection. At this point, hopes for the ‘neural dust’ are more modest.

An Aug. 3, 2016 news item on ScienceDaily announces the ‘neural dust’,

University of California, Berkeley engineers have built the first dust-sized, wireless sensors that can be implanted in the body, bringing closer the day when a Fitbit-like device could monitor internal nerves, muscles or organs in real time.

Because these batteryless sensors could also be used to stimulate nerves and muscles, the technology also opens the door to “electroceuticals” to treat disorders such as epilepsy or to stimulate the immune system or tamp down inflammation.

An Aug. 3, 2016 University of California at Berkeley news release (also on EurekAlert) by Robert Sanders, which originated the news item, explains further and describes the researchers’ hope that one day the neural dust could be used to control implants and prosthetics,

The so-called neural dust, which the team implanted in the muscles and peripheral nerves of rats, is unique in that ultrasound is used both to power and read out the measurements. Ultrasound technology is already well-developed for hospital use, and ultrasound vibrations can penetrate nearly anywhere in the body, unlike radio waves, the researchers say.

“I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader,” said Michel Maharbiz, an associate professor of electrical engineering and computer sciences and one of the study’s two main authors. “Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.”

Maharbiz, neuroscientist Jose Carmena, a professor of electrical engineering and computer sciences and a member of the Helen Wills Neuroscience Institute, and their colleagues will report their findings in the August 3 [2016] issue of the journal Neuron.

The sensors, which the researchers have already shrunk to a 1 millimeter cube – about the size of a large grain of sand – contain a piezoelectric crystal that converts ultrasound vibrations from outside the body into electricity to power a tiny, on-board transistor that is in contact with a nerve or muscle fiber. A voltage spike in the fiber alters the circuit and the vibration of the crystal, which changes the echo detected by the ultrasound receiver, typically the same device that generates the vibrations. The slight change, called backscatter, allows them to determine the voltage.
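
To illustrate the readout principle, here is a toy simulation of one mote being interrogated every 100 microseconds. The 100-microsecond interval comes from the paper; the echo amplitude, modulation depth, noise level and the action-potential waveform are all invented for the sketch,

import math
import random

# Toy model of the backscatter readout: the echo returned by a mote shifts
# slightly when the nerve fiber beneath it spikes. Every number except the
# interrogation interval is an invented placeholder.

PULSE_INTERVAL_US = 100   # one readout every 100 microseconds (from the paper)
BASE_ECHO = 1.0           # assumed nominal backscatter amplitude
MOD_PER_MV = 0.002        # assumed fractional echo change per millivolt
NOISE = 0.0005            # assumed receiver noise (standard deviation)

def nerve_voltage_mv(t_us):
    """A single toy action potential peaking around t = 800 microseconds."""
    return 30.0 * math.exp(-((t_us - 800.0) / 300.0) ** 2)

for i in range(12):
    t = i * PULSE_INTERVAL_US
    echo = BASE_ECHO * (1.0 + MOD_PER_MV * nerve_voltage_mv(t))
    echo += random.gauss(0.0, NOISE)
    inferred_mv = (echo / BASE_ECHO - 1.0) / MOD_PER_MV   # invert the modulation
    print(f"t = {t:4d} us   echo = {echo:.4f}   inferred voltage ~ {inferred_mv:5.1f} mV")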

Motes sprinkled throughout the body

In their experiment, the UC Berkeley team powered up the passive sensors every 100 microseconds with six 540-nanosecond ultrasound pulses, which gave them a continual, real-time readout. They coated the first-generation motes – 3 millimeters long, 1 millimeter high and 4/5 millimeter thick – with surgical-grade epoxy, but they are currently building motes from biocompatible thin films which would potentially last in the body without degradation for a decade or more.

While the experiments so far have involved the peripheral nervous system and muscles, the neural dust motes could work equally well in the central nervous system and brain to control prosthetics, the researchers say. Today’s implantable electrodes degrade within 1 to 2 years, and all connect to wires that pass through holes in the skull. Wireless sensors – dozens to a hundred – could be sealed in, avoiding infection and unwanted movement of the electrodes.

“The original goal of the neural dust project was to imagine the next generation of brain-machine interfaces, and to make it a viable clinical technology,” said neuroscience graduate student Ryan Neely. “If a paraplegic wants to control a computer or a robotic arm, you would just implant this electrode in the brain and it would last essentially a lifetime.”

In a paper published online in 2013, the researchers estimated that they could shrink the sensors down to a cube 50 microns on a side – about 2 thousandths of an inch, or half the width of a human hair. At that size, the motes could nestle up to just a few nerve axons and continually record their electrical activity.

“The beauty is that now, the sensors are small enough to have a good application in the peripheral nervous system, for bladder control or appetite suppression, for example,” Carmena said. “The technology is not really there yet to get to the 50-micron target size, which we would need for the brain and central nervous system. Once it’s clinically proven, however, neural dust will just replace wire electrodes. This time, once you close up the brain, you’re done.”

The team is working now to miniaturize the device further, find more biocompatible materials and improve the surface transceiver that sends and receives the ultrasounds, ideally using beam-steering technology to focus the sound waves on individual motes. They are now building little backpacks for rats to hold the ultrasound transceiver that will record data from implanted motes.

They’re also working to expand the motes’ ability to detect non-electrical signals, such as oxygen or hormone levels.

“The vision is to implant these neural dust motes anywhere in the body, and have a patch over the implanted site send ultrasonic waves to wake up and receive necessary information from the motes for the desired therapy you want,” said Dongjin Seo, a graduate student in electrical engineering and computer sciences. “Eventually you would use multiple implants and one patch that would ping each implant individually, or all simultaneously.”

Ultrasound vs radio

Maharbiz and Carmena conceived of the idea of neural dust about five years ago, but attempts to power an implantable device and read out the data using radio waves were disappointing. Radio attenuates very quickly with distance in tissue, so communicating with devices deep in the body would be difficult without using potentially damaging high-intensity radiation.

Maharbiz hit on the idea of ultrasound, and in 2013 published a paper with Carmena, Seo and their colleagues describing how such a system might work. “Our first study demonstrated that the fundamental physics of ultrasound allowed for very, very small implants that could record and communicate neural data,” said Maharbiz. He and his students have now created that system.

“Ultrasound is much more efficient when you are targeting devices that are on the millimeter scale or smaller and that are embedded deep in the body,” Seo said. “You can get a lot of power into it and a lot more efficient transfer of energy and communication when using ultrasound as opposed to electromagnetic waves, which has been the go-to method for wirelessly transmitting power to miniature implants.”

“Now that you have a reliable, minimally invasive neural pickup in your body, the technology could become the driver for a whole gamut of applications, things that today don’t even exist,” Carmena said.

Here’s a link to and a citation for the team’s latest paper,

Wireless Recording in the Peripheral Nervous System with Ultrasonic Neural Dust by Dongjin Seo, Ryan M. Neely, Konlin Shen, Utkarsh Singhal, Elad Alon, Jan M. Rabaey, Jose M. Carmena, and Michel M. Maharbiz. Neuron Volume 91, Issue 3, p529–539, 3 August 2016 DOI: http://dx.doi.org/10.1016/j.neuron.2016.06.034

This paper appears to be open access.

A treasure trove of molecule and battery data released to the public

Scientists working on The Materials Project have taken the notion of open science to heart and opened up access to their data, according to a June 9, 2016 news item on Nanowerk,

The Materials Project, a Google-like database of material properties aimed at accelerating innovation, has released an enormous trove of data to the public, giving scientists working on fuel cells, photovoltaics, thermoelectrics, and a host of other advanced materials a powerful tool to explore new research avenues. But it has become a particularly important resource for researchers working on batteries. Co-founded and directed by Lawrence Berkeley National Laboratory (Berkeley Lab) scientist Kristin Persson, the Materials Project uses supercomputers to calculate the properties of materials based on first-principles quantum-mechanical frameworks. It was launched in 2011 by the U.S. Department of Energy’s (DOE) Office of Science.

A June 8, 2016 Berkeley Lab news release, which originated the news item, provides more explanation about The Materials Project,

The idea behind the Materials Project is that it can save researchers time by predicting material properties without needing to synthesize the materials first in the lab. It can also suggest new candidate materials that experimentalists had not previously dreamed up. With a user-friendly web interface, users can look up the calculated properties, such as voltage, capacity, band gap, and density, for tens of thousands of materials.

Two sets of data were released last month: nearly 1,500 compounds investigated for multivalent intercalation electrodes and more than 21,000 organic molecules relevant for liquid electrolytes as well as a host of other research applications. Batteries with multivalent cathodes (which have multiple electrons per mobile ion available for charge transfer) are promising candidates for reducing cost and achieving higher energy density than that available with current lithium-ion technology.
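
The arithmetic behind the multivalent appeal is simple: for the same host lattice and the same number of ion sites, a Mg2+ ion transfers two electrons where Li+ transfers one, doubling the charge stored per formula unit. Here is a quick sketch of that calculation (the host’s molar mass is a made-up placeholder, and the guest ion’s own mass is ignored for simplicity),

# Same host lattice, same number of ion sites: a Mg2+ ion carries twice the
# charge of Li+, doubling the capacity per formula unit. The host molar mass
# is a hypothetical placeholder, not a specific compound.

FARADAY = 96485.0         # Faraday constant, C/mol
HOST_MOLAR_MASS = 150.0   # g/mol, hypothetical cathode host
SITES_PER_FORMULA = 1     # assumed one guest-ion site per formula unit

def capacity_mah_per_g(electrons_per_ion):
    coulombs_per_mol = electrons_per_ion * SITES_PER_FORMULA * FARADAY
    return coulombs_per_mol / 3.6 / HOST_MOLAR_MASS   # C/mol -> mAh/g

print(f"Li+  (1 electron):  {capacity_mah_per_g(1):.0f} mAh/g")
print(f"Mg2+ (2 electrons): {capacity_mah_per_g(2):.0f} mAh/g")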

The sheer volume and scope of the data is unprecedented, said Persson, who is also a professor in UC Berkeley’s Department of Materials Science and Engineering. “As far as the multivalent cathodes, there’s nothing similar in the world that exists,” she said. “To give you an idea, experimentalists are usually able to focus on one of these materials at a time. Using calculations, we’ve added data on 1,500 different compositions.”

While other research groups have made their data publicly available, what makes the Materials Project so useful are the online tools to search all that data. The recent release includes two new web apps—the Molecules Explorer and the Redox Flow Battery Dashboard—plus an add-on to the Battery Explorer web app enabling researchers to work with other ions in addition to lithium.

“Not only do we give the data freely, we also give algorithms and software to interpret or search over the data,” Persson said.
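
In practice, the “algorithms and software” include the open-source pymatgen library, which ships a client for the Materials Project API. The sketch below shows roughly what a programmatic query looks like; it assumes pymatgen is installed and that YOUR_API_KEY is a free key from materialsproject.org, and since the client has evolved over the years, method names may differ in current releases,

# Rough sketch of querying the Materials Project with pymatgen's MPRester
# client. YOUR_API_KEY is a placeholder for a free key from
# materialsproject.org; the interface may differ in current releases.

from pymatgen.ext.matproj import MPRester

with MPRester("YOUR_API_KEY") as mpr:
    # Fetch the computed crystal structure for a known material ID
    # (mp-149 is silicon).
    structure = mpr.get_structure_by_material_id("mp-149")
    print(structure.composition.reduced_formula, structure.lattice.abc)

    # Pull all computed entries in a chemical system, the kind of bulk
    # query used for battery and phase-diagram screening.
    entries = mpr.get_entries_in_chemsys(["Li", "Fe", "O"])
    print(f"{len(entries)} entries in the Li-Fe-O system")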

The Redox Flow Battery app gives scientific parameters as well as techno-economic ones, so battery designers can quickly rule out a molecule that might work well but be prohibitively expensive. The Molecules Explorer app will be useful to researchers far beyond the battery community.

“For multivalent batteries it’s so hard to get good experimental data,” Persson said. “The calculations provide rich and robust benchmarks to assess whether the experiments are actually measuring a valid intercalation process or a side reaction, which is particularly difficult for multivalent energy technology because there are so many problems with testing these batteries.”

Here’s a screen capture from the Battery Explorer app,

The Materials Project’s Battery Explorer app now allows researchers to work with other ions in addition to lithium. Courtesy: The Materials Project

The news release goes on to describe a new discovery made possible by The Materials Project (Note: A link has been removed),

Together with Persson, Berkeley Lab scientist Gerbrand Ceder, postdoctoral associate Miao Liu, and MIT graduate student Ziqin Rong, the Materials Project team investigated some of the more promising materials in detail for high multivalent ion mobility, which is the most difficult property to achieve in these cathodes. This led the team to materials known as thiospinels. One of these thiospinels has double the capacity of the currently known multivalent cathodes and was recently synthesized and tested in the lab by JCESR researcher Linda Nazar of the University of Waterloo, Canada.

“These materials may not work well the first time you make them,” Persson said. “You have to be persistent; for example you may have to make the material very phase pure or smaller than a particular particle size and you have to test them under very controlled conditions. There are people who have actually tried this material before and discarded it because they thought it didn’t work particularly well. The power of the computations and the design metrics we have uncovered with their help is that it gives us the confidence to keep trying.”

The researchers were able to double the energy capacity of what had previously been achieved for this kind of multivalent battery. The study has been published in the journal Energy & Environmental Science in an article titled, “A High Capacity Thiospinel Cathode for Mg Batteries.”

“The new multivalent battery works really well,” Persson said. “It’s a significant advance and an excellent proof-of-concept for computational predictions as a valuable new tool for battery research.”

Here’s a link to and a citation for the paper,

A high capacity thiospinel cathode for Mg batteries by Xiaoqi Sun, Patrick Bonnick, Victor Duffort, Miao Liu, Ziqin Rong, Kristin A. Persson, Gerbrand Ceder and Linda F. Nazar. Energy Environ. Sci., 2016, Advance Article DOI: 10.1039/C6EE00724D First published online 24 May 2016

This paper seems to be behind a paywall.

Getting back to the news release, there’s more about The Materials Project in relationship to its membership,

The Materials Project has attracted more than 20,000 users since launching five years ago. Every day about 20 new users register and 300 to 400 people log in to do research.

One of those users is Dane Morgan, a professor of engineering at the University of Wisconsin-Madison who develops new materials for a wide range of applications, including highly active catalysts for fuel cells, stable low-work function electron emitter cathodes for high-powered microwave devices, and efficient, inexpensive, and environmentally safe solar materials.

“The Materials Project has enabled some of the most exciting research in my group,” said Morgan, who also serves on the Materials Project’s advisory board. “By providing easy access to a huge database, as well as tools to process that data for thermodynamic predictions, the Materials Project has enabled my group to rapidly take on materials design projects that would have been prohibitive just a few years ago.”

More materials are being calculated and added to the database every day. In two years, Persson expects another trove of data to be released to the public.

“This is the way to reach a significant part of the research community, to reach students while they’re still learning material science,” she said. “It’s a teaching tool. It’s a science tool. It’s unprecedented.”

Supercomputing clusters at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility hosted at Berkeley Lab, provide the infrastructure for the Materials Project.

Funding for the Materials Project is provided by the Office of Science (US Department of Energy), including support through JCESR [Joint Center for Energy Storage Research].

Happy researching!