
A graphene ‘camera’ and your beating heart: say cheese

Comparing it to a ‘camera’, even with the quotes, is a bit of a stretch for my taste but I can’t come up with a better comparison. Here’s a video so you can judge for yourself,

Caption: This video repeats three times the graphene camera images of a single beat of an embryonic chicken heart. The images, separated by 5 milliseconds, were measured by a laser bouncing off a graphene sheet lying beneath the heart. The images are about 2 millimeters on a side. Credit: UC Berkeley images by Halleh Balch, Allister McGuire and Jason Horng

A June 16, 2021 news item on ScienceDaily announces the research,

Bay Area [San Francisco, California] scientists have captured the real-time electrical activity of a beating heart, using a sheet of graphene to record an optical image — almost like a video camera — of the faint electric fields generated by the rhythmic firing of the heart’s muscle cells.

A University of California at Berkeley (UC Berkeley) June 16, 2021 news release (also on EurekAlert) by Robert Sanders, which originated the news item, provides more detail,

The graphene camera represents a new type of sensor useful for studying cells and tissues that generate electrical voltages, including groups of neurons or cardiac muscle cells. To date, electrodes or chemical dyes have been used to measure electrical firing in these cells. But electrodes and dyes measure the voltage at one point only; a graphene sheet measures the voltage continuously over all the tissue it touches.

The development, published online last week in the journal Nano Letters, comes from a collaboration between two teams of quantum physicists at the University of California, Berkeley, and physical chemists at Stanford University.

“Because we are imaging all cells simultaneously onto a camera, we don’t have to scan, and we don’t have just a point measurement. We can image the entire network of cells at the same time,” said Halleh Balch, one of three first authors of the paper and a recent Ph.D. recipient in UC Berkeley’s Department of Physics.

While the graphene sensor works without having to label cells with dyes or tracers, it can easily be combined with standard microscopy to image fluorescently labeled nerve or muscle tissue while simultaneously recording the electrical signals the cells use to communicate.

“The ease with which you can image an entire region of a sample could be especially useful in the study of neural networks that have all sorts of cell types involved,” said another first author of the study, Allister McGuire, who recently received a Ph.D. from Stanford. “If you have a fluorescently labeled cell system, you might only be targeting a certain type of neuron. Our system would allow you to capture electrical activity in all neurons and their support cells with very high integrity, which could really impact the way that people do these network level studies.”

Graphene is a one-atom thick sheet of carbon atoms arranged in a two-dimensional hexagonal pattern reminiscent of honeycomb. The 2D structure has captured the interest of physicists for several decades because of its unique electrical properties and robustness and its interesting optical and optoelectronic properties.

“This is maybe the first example where you can use an optical readout of 2D materials to measure biological electrical fields,” said senior author Feng Wang, UC Berkeley professor of physics. “People have used 2D materials to do some sensing with pure electrical readout before, but this is unique in that it works with microscopy so that you can do parallel detection.”

The team calls the tool a critically coupled waveguide-amplified graphene electric field sensor, or CAGE sensor.

“This study is just a preliminary one; we want to showcase to biologists that there is such a tool you can use, and you can do great imaging. It has fast time resolution and great electric field sensitivity,” said the third first author, Jason Horng, a UC Berkeley Ph.D. recipient who is now a postdoctoral fellow at the National Institute of Standards and Technology. “Right now, it is just a prototype, but in the future, I think we can improve the device.”

Graphene is sensitive to electric fields

Ten years ago, Wang discovered that an electric field affects how graphene reflects or absorbs light. Balch and Horng exploited this discovery in designing the graphene camera. They obtained a sheet of graphene, about 1 centimeter on a side, produced by chemical vapor deposition in the lab of UC Berkeley physics professor Michael Crommie, and placed on it a live heart from a chicken embryo, freshly extracted from a fertilized egg. These experiments were performed in the Stanford lab of Bianxiao Cui, who develops nanoscale tools to study electrical signaling in neurons and cardiac cells.

The team showed that when the graphene was tuned properly, the electrical signals that flowed along the surface of the heart during a beat were sufficient to change the reflectance of the graphene sheet.

“When cells contract, they fire action potentials that generate a small electric field outside of the cell,” Balch said. “The absorption of graphene right under that cell is modified, so we will see a change in the amount of light that comes back from that position on the large area of graphene.”

In initial studies, however, Horng found that the change in reflectance was too small to detect easily. An electric field reduces the reflectance of graphene by at most 2%; the much weaker field changes produced when the heart muscle cells fired an action potential had an even smaller effect.

Together, Balch, Horng and Wang found a way to amplify this signal by adding a thin waveguide below graphene, forcing the reflected laser light to bounce internally about 100 times before escaping. This made the change in reflectance detectable by a normal optical video camera.

“One way of thinking about it is that the more times that light bounces off of graphene as it propagates through this little cavity, the more effects that light feels from graphene’s response, and that allows us to obtain very, very high sensitivity to electric fields and voltages down to microvolts,” Balch said.
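A back-of-the-envelope way to see why the bouncing helps (my own illustrative estimate, using round numbers from the news release rather than anything taken from the paper): if each pass across the graphene changes the light's intensity by a small fraction \(\delta\), the modulation compounds over \(N\) passes,

\[
\frac{\Delta I}{I_0} \;=\; 1 - (1 - \delta)^{N} \;\approx\; N\delta \qquad (N\delta \ll 1),
\]

so a per-pass change of, say, \(\delta \sim 10^{-4}\) grows to roughly a 1% change after \(N \approx 100\) bounces, large enough for an ordinary video camera to register.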

The increased amplification necessarily lowers the resolution of the image, but at 10 microns, it is more than enough to study cardiac cells that are several tens of microns across, she said.

Another application, McGuire said, is to test the effect of drug candidates on heart muscle before these drugs go into clinical trials to see whether, for example, they induce an unwanted arrhythmia. To demonstrate this, he and his colleagues observed the beating chicken heart with CAGE and an optical microscope while infusing it with a drug, blebbistatin, that inhibits the muscle protein myosin. They observed the heart stop beating, but CAGE showed that the electrical signals were unaffected.

Because graphene sheets are mechanically tough, they could also be placed directly on the surface of the brain to get a continuous measure of electrical activity — for example, to monitor neuron firing in the brains of those with epilepsy or to study fundamental brain activity. Today’s electrode arrays measure activity at a few hundred points, not continuously over the brain surface.

“One of the things that is amazing to me about this project is that electric fields mediate chemical interactions, mediate biophysical interactions — they mediate all sorts of processes in the natural world — but we never measure them. We measure current, and we measure voltage,” Balch said. “The ability to actually image electric fields gives you a look at a modality that you previously had little insight into.”

Here’s a link to and a citation for the paper,

Graphene Electric Field Sensor Enables Single Shot Label-Free Imaging of Bioelectric Potentials by Halleh B. Balch, Allister F. McGuire, Jason Horng, Hsin-Zon Tsai, Kevin K. Qi, Yi-Shiou Duh, Patrick R. Forrester, Michael F. Crommie, Bianxiao Cui, and Feng Wang. Nano Lett. 2021, XXXX, XXX, XXX-XXX. DOI: https://doi.org/10.1021/acs.nanolett.1c00543 Publication Date: June 8, 2021 © 2021 American Chemical Society

This paper is behind a paywall.

Controlling agricultural pests with CRISPR-based technology

CRISPR (clustered regularly interspaced short palindromic repeats) technology is often touted as being ‘precise’, which, as far as I can tell, is not exactly the case (see my Nov. 28, 2018 posting about the CRISPR babies [scroll down about 30% of the way for the first hint that CRISPR isn’t]). So, it’s a bit odd to see the word ‘precise’ used as part of a new CRISPR-based technology’s name. From a January 8, 2019 news item on ScienceDaily,

Using the CRISPR gene editing tool, Nikolay Kandul, Omar Akbari and their colleagues at UC San Diego [UC is University of California] and UC Berkeley devised a method of altering key genes that control insect sex determination and fertility.

A description of the new “precision-guided sterile insect technique,” [emphasis mine] or pgSIT, is published Jan. 8 [2019] in the journal Nature Communications.

A January 8, 2019 UCSD press release (also on EurekAlert) by Mario Aguilera, which originated the news item, delves further into the research,

When pgSIT-derived eggs are introduced into targeted populations, the researchers report, only adult sterile males emerge, resulting in a novel, environmentally friendly and relatively low-cost method of controlling pest populations in the future.

“CRISPR technology has empowered our team to innovate a new, effective, species-specific, self-limiting, safe and scalable genetic population control technology with remarkable potential to be developed and utilized in a plethora of insect pests and disease vectors,” said Akbari, an assistant professor in UC San Diego’s Division of Biological Sciences. “In the future, we strongly believe this technology will be safely used in the field to suppress and even eradicate target species locally, thereby revolutionizing how insects are managed and controlled going forward.”

Since the 1930s, agricultural researchers have used select methods to release sterile male insects into the wild to control and eradicate pest populations. In the 1950s, a method using irradiated males was implemented in the United States to eliminate the pest species known as the New World Screwworm fly, which consumes animal flesh and causes extensive damage to livestock. Such radiation-based methods were later used in Mexico and parts of Central America and continue today.

Instead of radiation, the new pgSIT (precision-guided sterile insect technique), developed over the past year-and-a-half by Kandul and Akbari in the fruit fly Drosophila, uses CRISPR to simultaneously disrupt key genes that control female viability and male fertility in pest species. pgSIT, the researchers say, results in sterile male progeny with 100 percent efficiency. Because the targeted genes are common to a vast cross-section of insects, the researchers are confident the technology can be applied to a range of insects, including disease-spreading mosquitoes.

The researchers envision a system in which scientists genetically alter and produce eggs of a targeted pest species. The eggs are then shipped to a pest location virtually anywhere in the world, circumventing the need for a production facility on-site. Once the eggs are deployed at the pest location, the researchers say, the newly born sterile males will mate with females in the wild and be incapable of producing offspring, driving down the population.
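The population arithmetic behind this strategy is worth making concrete. Below is a minimal sketch of the classic Knipling-style sterile insect model, in which a female's chance of a fertile mating is simply the fraction of males that are wild; all parameter values are invented for illustration and are not taken from the pgSIT paper.

```python
# Toy Knipling-style model of repeated sterile-male releases.
# Illustrative only; parameter values are invented, not from the pgSIT paper.

def simulate(females=10_000, growth_rate=5.0, sterile_release=50_000,
             generations=10, carrying_capacity=100_000):
    """Track the wild female population across generations."""
    history = [round(females)]
    for _ in range(generations):
        wild_males = females  # assume a 1:1 sex ratio in the wild population
        # A female's chance of a fertile mating is the wild-male fraction.
        fertile_fraction = wild_males / (wild_males + sterile_release)
        # Cap growth so the toy model cannot grow without bound.
        females = min(females * growth_rate * fertile_fraction,
                      carrying_capacity)
        history.append(round(females))
    return history

if __name__ == "__main__":
    # Population collapses once sterile releases swamp the wild males.
    print(simulate())
```

The qualitative point survives the crude assumptions: as long as released sterile males outnumber wild ones, each generation's effective reproduction falls below replacement and the population ratchets downward.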

“This is a novel twist of a very old technology,” said Kandul, an assistant project scientist in UC San Diego’s Division of Biological Sciences. “That novel twist makes it extremely portable from one species to another species to suppress populations of mosquitoes or agricultural pests, for example those that feed on valuable wine grapes.”

The new technology is distinct from continuously self-propagating “gene drive” systems that propagate genetic alterations from generation to generation. Instead, pgSIT is considered a “dead end” since male sterility effectively closes the door on future generations.

“The sterile insect technique is an environmentally safe and proven technology,” [emphasis mine] the researchers note in the paper. “We aimed to develop a novel, safe, controllable, non-invasive genetic CRISPR-based technology that could be transferred across species and implemented worldwide in the short-term to combat wild populations.”

With pgSIT proven in fruit flies, the scientists are hoping to develop the technology in Aedes aegypti, the mosquito species responsible for transmitting dengue fever, Zika, yellow fever and other diseases to millions of people.

“The extension of this work to other insect pests could prove to be a general and very useful strategy to deal with many vector-borne diseases that plague humanity and wreak havoc on agriculture globally,” said Suresh Subramani, global director of the Tata Institute for Genetics and Society.

I have one comment about the ‘safety’ of the sterile insect technique. It’s been safe up until now, but, assuming this technique works as described: what happens as this new and more powerful technique is more widely deployed, possibly eliminating whole species of insects? Might these ‘pests’ have a heretofore unknown beneficial effect somewhere in the food chain or in an ecosystem? Or, there may be other unintended consequences.

Moving on, here’s a link to and a citation for the paper,

Transforming insect population control with precision guided sterile males with demonstration in flies by Nikolay P. Kandul, Junru Liu, Hector M. Sanchez C., Sean L. Wu, John M. Marshall, & Omar S. Akbari. Nature Communications volume 10, Article number: 84 (2019) DOI: https://doi.org/10.1038/s41467-018-07964-7 Published 08 January 2019

This paper is open access.

The researchers have made this illustrative image available,

Caption: This is a schematic of the new precision-guided sterile insect technique (pgSIT), which uses components of the CRISPR/Cas9 system to disrupt key genes that control female viability and male fertility, resulting in sterile male progeny. Credit: Nikolay Kandul, Akbari Lab, UC San Diego

Jiggly jell-o as a new hydrogen fuel catalyst

Jello [uploaded from https://www.organicauthority.com/eco-chic-table/new-jell-o-mold-jiggle-chic-holidays]

I’m quite intrigued by this ‘jell-o’ story. It’s hard to believe a childhood dessert might prove to have an application as a catalyst for producing hydrogen fuel. From a December 14, 2018 news item on Nanowerk,

A cheap and effective new catalyst developed by researchers at the University of California, Berkeley, can generate hydrogen fuel from water just as efficiently as platinum, currently the best — but also most expensive — water-splitting catalyst out there.

The catalyst, which is composed of nanometer-thin sheets of metal carbide, is manufactured using a self-assembly process that relies on a surprising ingredient: gelatin, the material that gives Jell-O its jiggle.

Two-dimensional metal carbides spark a reaction that splits water into oxygen and valuable hydrogen gas. Berkeley researchers have discovered an easy new recipe for cooking up these nanometer-thin sheets that is nearly as simple as making Jell-O from a box. (Xining Zang graphic, copyright Wiley)

A December 13, 2018 University of California at Berkeley (UC Berkeley) news release by Kara Manke (also on EurekAlert but published on Dec. 14, 2018), which originated the news item, provides more technical detail,

“Platinum is expensive, so it would be desirable to find other alternative materials to replace it,” said senior author Liwei Lin, professor of mechanical engineering at UC Berkeley. “We are actually using something similar to the Jell-O that you can eat as the foundation, and mixing it with some of the abundant earth elements to create an inexpensive new material for important catalytic reactions.”

The work appears in the Dec. 13 [2018] print edition of the journal Advanced Materials.

A zap of electricity can break apart the strong bonds that tie water molecules together, creating oxygen and hydrogen gas, the latter of which is an extremely valuable source of energy for powering hydrogen fuel cells. Hydrogen gas can also be used to help store energy from renewable yet intermittent energy sources like solar and wind power, which produce excess electricity when the sun shines or when the wind blows, but which go dormant on rainy or calm days.


When magnified, the two-dimensional metal carbides resemble sheets of cell[o]phane. (Xining Zang photo, copyright Wiley)

But simply sticking an electrode in a glass of water is an extremely inefficient method of generating hydrogen gas. For the past 20 years, scientists have been searching for catalysts that can speed up this reaction, making it practical for large-scale use.
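For reference, the underlying chemistry is ordinary water electrolysis (textbook electrochemistry, not a result of this study). In acidic solution the two half-reactions and the overall reaction are

\[
\text{cathode (hydrogen evolution):}\quad 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2}
\]
\[
\text{anode (oxygen evolution):}\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
\]
\[
\text{overall:}\quad 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2},
\]

and what a catalyst such as platinum, or the metal carbides described below, does is lower the extra voltage (the overpotential) needed to drive the hydrogen-evolution step at a practical rate.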

“The traditional way of using water gas to generate hydrogen still dominates in industry. However, this method produces carbon dioxide as byproduct,” said first author Xining Zang, who conducted the research as a graduate student in mechanical engineering at UC Berkeley. “Electrocatalytic hydrogen generation is growing in the past decade, following the global demand to lower emissions. Developing a highly efficient and low-cost catalyst for electrohydrolysis will bring profound technical, economical and societal benefit.”

To create the catalyst, the researchers followed a recipe nearly as simple as making Jell-O from a box. They mixed gelatin and a metal ion — either molybdenum, tungsten or cobalt — with water, and then let the mixture dry.

“We believe that as gelatin dries, it self-assembles layer by layer,” Lin said. “The metal ion is carried by the gelatin, so when the gelatin self-assembles, your metal ion is also arranged into these flat layers, and these flat sheets are what give Jell-O its characteristic mirror-like surface.”

Heating the mixture to 600 degrees Celsius triggers the metal ion to react with the carbon atoms in the gelatin, forming large, nanometer-thin sheets of metal carbide. The unreacted gelatin burns away.

The researchers tested the efficiency of the catalysts by placing them in water and running an electric current through them. When stacked up against each other, molybdenum carbide split water the most efficiently, followed by tungsten carbide and then cobalt carbide, which didn’t form thin layers as well as the other two. Mixing molybdenum ions with a small amount of cobalt boosted the performance even more.

“It is possible that other forms of carbide may provide even better performance,” Lin said.


Molecules in gelatin naturally self-assemble in flat sheets, carrying the metal ions with them (left). Heating the mixture to 600 degrees Celsius burns off the gelatin, leaving nanometer-thin sheets of metal carbide. (Xining Zang illustration, copyright Wiley)

The two-dimensional shape of the catalyst is one of the reasons why it is so successful. That is because the water has to be in contact with the surface of the catalyst in order to do its job, and the large surface area of the sheets means that the metal carbides are extremely efficient for their weight.

Because the recipe is so simple, it could easily be scaled up to produce large quantities of the catalyst, the researchers say.

“We found that the performance is very close to the best catalyst made of platinum and carbon, which is the gold standard in this area,” Lin said. “This means that we can replace the very expensive platinum with our material, which is made in a very scalable manufacturing process.”

Co-authors on the study are Lujie Yang, Buxuan Li and Minsong Wei of UC Berkeley; J. Nathan Hohman and Chenhui Zhu of Lawrence Berkeley National Lab; Wenshu Chen and Jiajun Gu of Shanghai Jiao Tong University; Xiaolong Zou and Jiaming Liang of the Shenzhen Institute; and Mohan Sanghadasa of the U.S. Army RDECOM AMRDEC.

Here’s a link to and a citation for the paper,

Self‐Assembly of Large‐Area 2D Polycrystalline Transition Metal Carbides for Hydrogen Electrocatalysis by Xining Zang, Wenshu Chen, Xiaolong Zou, J. Nathan Hohman, Lujie Yang, Buxuan Li, Minsong Wei, Chenhui Zhu, Jiaming Liang, Mohan Sanghadasa, Jiajun Gu, Liwei Lin. Advanced Materials Volume 30, Issue 50, December 13, 2018, 1805188. DOI: https://doi.org/10.1002/adma.201805188 First published [online]: 09 October 2018

This paper is behind a paywall.

Unusual appetite for gold

This bacterium (bacteria being the plural) loves gold, which is lucky for anyone trying to develop artificial photosynthesis. From an October 9, 2018 news item on ScienceDaily,

A bacterium named Moorella thermoacetica won’t work for free. But UC Berkeley [University of California at Berkeley] researchers have figured out it has an appetite for gold. And in exchange for this special treat, the bacterium has revealed a more efficient path to producing solar fuels through artificial photosynthesis.

An October 5, 2018 UC Berkeley news release by Theresa Duque (also on EurekAlert but published on October 9, 2018), which originated the news item, expands on the theme,

M. thermoacetica first made its debut as the first non-photosensitive bacterium to carry out artificial photosynthesis in a study led by Peidong Yang, a professor in UC Berkeley’s College of Chemistry. By attaching light-absorbing nanoparticles made of cadmium sulfide (CdS) to the bacterial membrane exterior, the researchers turned M. thermoacetica into a tiny photosynthesis machine, converting sunlight and carbon dioxide into useful chemicals.

Now Yang and his team of researchers have found a better way to entice this CO2-hungry bacterium into being even more productive. By placing light-absorbing gold nanoclusters inside the bacterium, they have created a biohybrid system that produces a higher yield of chemical products than previously demonstrated. The research, funded by the National Institutes of Health, was published on Oct. 1 in Nature Nanotechnology.

For the first hybrid model, M. thermoacetica-CdS, the researchers chose cadmium sulfide as the semiconductor for its ability to absorb visible light. But because cadmium sulfide is toxic to bacteria, the nanoparticles had to be attached to the cell membrane “extracellularly,” or outside the M. thermoacetica-CdS system. Sunlight excites each cadmium-sulfide nanoparticle into generating a charged particle known as an electron. As these light-generated electrons travel through the bacterium, they interact with multiple enzymes in a process known as “CO2 reduction,” triggering a cascade of reactions that eventually turns CO2 into acetate, a valuable chemical for making solar fuels.

But within the extracellular model, the electrons end up interacting with other chemicals that have no part in turning CO2 into acetate. And as a result, some electrons are lost and never reach the enzymes. So to improve what’s known as “quantum efficiency,” or the bacterium’s ability to produce acetate each time it gains an electron, the researchers found another semiconductor: nanoclusters made of 22 gold atoms (Au22), a material that M. thermoacetica took a surprising shine to.


Figure: A single nanocluster of 22 gold atoms – Au22 – is only 1 nanometer in diameter, allowing it to easily slip through the bacterial cell wall.

“We selected Au22 because it’s ideal for absorbing visible light and has the potential for driving the CO2 reduction process, but we weren’t sure whether it would be compatible with the bacteria,” Yang said. “When we inspected them under the microscope, we discovered that the bacteria were loaded with these Au22 clusters – and were still happily alive.”

Imaging of the M. thermoacetica-Au22 system was done at UC Berkeley’s Molecular Imaging Center.

The researchers also selected Au22 – dubbed “magic” gold nanoclusters – for its ultrasmall size: a single Au22 nanocluster is only 1 nanometer in diameter, allowing each nanocluster to easily slip through the bacterial cell wall.

“By feeding bacteria with Au22 nanoclusters, we’ve effectively streamlined the electron transfer process for the CO2 reduction pathway inside the bacteria, as evidenced by a 2.86 percent quantum efficiency – or 33 percent more acetate produced within the M. thermoacetica-Au22 system than the CdS model,” Yang said.
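For context, the net conversion that acetogens such as M. thermoacetica perform via the Wood-Ljungdahl pathway can be summarized as (standard microbiology, not a figure taken from this paper)

\[
2\,\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- \rightarrow \mathrm{CH_3COOH} + 2\,\mathrm{H_2O},
\]

so each acetate molecule consumes eight electrons; any light-generated electrons lost along the way therefore translate directly into a lower quantum efficiency, which is exactly what moving the nanoclusters inside the cell is meant to avoid.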

The magic gold nanocluster is the latest discovery coming out of Yang’s lab, which for the past six years has focused on using biohybrid nanostructures to convert CO2 into useful chemicals as part of an ongoing effort to find affordable, abundant resources for renewable fuels, and potential solutions to thwart the effects of climate change.

“Next, we’d like to find a way to reduce costs, improve the lifetimes for these biohybrid systems, and improve quantum efficiency,” Yang said. “By continuing to look at the fundamental aspect of how gold nanoclusters are being photoactivated, and by following the electron transfer process within the CO2 reduction pathway, we hope to find even better solutions.”

Co-authors with Yang are UC Berkeley graduate student Hao Zhang and former postdoctoral fellow Hao Liu, now at Donghua University in Shanghai, China.

Here’s a link to and a citation for the paper,

Bacteria photosensitized by intracellular gold nanoclusters for solar fuel production by Hao Zhang, Hao Liu, Zhiquan Tian, Dylan Lu, Yi Yu, Stefano Cestellos-Blanco, Kelsey K. Sakimoto, & Peidong Yang. Nature Nanotechnology volume 13, pages 900–905 (2018). DOI: https://doi.org/10.1038/s41565-018-0267-z Published: 01 October 2018

This paper is behind a paywall.

For lovers of animation, the folks at UC Berkeley have produced this piece about the ‘gold-loving’ bacterium,

The wonder of movement in 3D

Shades of Eadweard Muybridge (English photographer who pioneered photographic motion studies)! A September 19, 2018 news item on ScienceDaily describes the latest efforts to ‘capture motion’,

Patriots quarterback Tom Brady has often credited his success to spending countless hours studying his opponent’s movements on film. This understanding of movement is necessary for all living species, whether it’s figuring out what angle to throw a ball at, or perceiving the motion of predators and prey. But simple videos can’t actually give us the full picture.

That’s because traditional videos and photos for studying motion are two-dimensional, and don’t show us the underlying 3-D structure of the person or subject of interest. Without the full geometry, we can’t inspect the small and subtle movements that help us move faster, or make sense of the precision needed to perfect our athletic form.

Recently, though, researchers from MIT’s [Massachusetts Institute of Technology] Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to get a better handle on this understanding of complex motion.

There isn’t a single reference to Muybridge; still, this September 18, 2018 Massachusetts Institute of Technology news release (also on EurekAlert but published September 19, 2018), which originated the news item, delves further into the research,

The new system uses an algorithm that can take 2-D videos and turn them into 3-D printed “motion sculptures” that show how a human body moves through space. In addition to being an intriguing aesthetic visualization of shape and time, the team envisions that their “MoSculp” system could enable a much more detailed study of motion for professional athletes, dancers, or anyone who wants to improve their physical skills.

“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” says PhD student Xiuming Zhang, lead author of a new paper about the system. “You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve.”

Because motion sculptures are 3-D, users can use a computer interface to navigate around the structures and see them from different viewpoints, revealing motion-related information inaccessible from the original viewpoint.

Zhang wrote the paper alongside MIT professors William Freeman and Stefanie Mueller, PhD student Jiajun Wu, Google researchers Qiurui He and Tali Dekel, as well as U.C. Berkeley postdoc and former CSAIL PhD Andrew Owens.

How it works

Artists and scientists have long struggled to gain better insight into movement, limited by their own camera lens and what it could provide.

Previous work has mostly used so-called “stroboscopic” photography techniques, which look a lot like the images in a flip book stitched together. But since these photos only show snapshots of movement, you wouldn’t be able to see as much of the trajectory of a person’s arm when they’re hitting a golf ball, for example.

What’s more, these photographs also require laborious pre-shoot setup, such as using a clean background and specialized depth cameras and lighting equipment. All MoSculp needs is a video sequence.

Given an input video, the system first automatically detects 2-D key points on the subject’s body, such as the hip, knee, and ankle of a ballerina while she’s doing a complex dance sequence. Then, it takes the best possible poses from those points to be turned into 3-D “skeletons.”

After stitching these skeletons together, the system generates a motion sculpture that can be 3-D printed, showing the smooth, continuous path of movement traced out by the subject. Users can customize their figures to focus on different body parts, assign different materials to distinguish among parts, and even customize lighting.
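Based solely on the pipeline described above, here is a minimal sketch of that final sweep step: accumulating a sequence of 3-D skeleton poses into one dense point cloud that traces the motion. The synthetic data and function names are my own illustration, assumed for the sketch; this is not MoSculp's actual code.

```python
import numpy as np

# Illustrative sketch of the "motion sculpture" idea: sweep a sequence of
# 3-D skeleton joint positions through time and accumulate them into a
# single point cloud that traces the motion. Synthetic data; hypothetical
# function names; not MoSculp's implementation.

def synthetic_skeletons(frames=60, joints=3):
    """Fake 3-D skeletons: a few joints swinging along an arc over time."""
    t = np.linspace(0, np.pi, frames)[:, None]        # (frames, 1)
    offsets = np.linspace(0.0, 0.4, joints)[None, :]  # (1, joints)
    x = np.cos(t) + offsets
    y = np.sin(t) + offsets * 0.5
    z = t * 0.1 + np.zeros_like(offsets)
    return np.stack([x, y, z], axis=-1)               # (frames, joints, 3)

def sweep_to_sculpture(skeletons, steps=5):
    """Interpolate between consecutive frames so the swept path is dense,
    then flatten everything into one (N, 3) point cloud."""
    points = []
    for a, b in zip(skeletons[:-1], skeletons[1:]):
        for s in np.linspace(0.0, 1.0, steps, endpoint=False):
            points.append((1 - s) * a + s * b)
    points.append(skeletons[-1])
    return np.concatenate(points, axis=0)

if __name__ == "__main__":
    cloud = sweep_to_sculpture(synthetic_skeletons())
    # A dense cloud tracing the motion, ready for meshing and 3-D printing.
    print(cloud.shape)
```

The real system adds the hard parts this sketch assumes away: detecting the 2-D keypoints in video, lifting them to plausible 3-D poses, and converting the swept cloud into a printable, customizable surface.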

In user studies, the researchers found that over 75 percent of subjects felt that MoSculp provided a more detailed visualization for studying motion than the standard photography techniques.

“Dance and highly-skilled athletic motions often seem like ‘moving sculptures’ but they only create fleeting and ephemeral shapes,” says Courtney Brigham, communications lead at Adobe. “This work shows how to take motions and turn them into real sculptures with objective visualizations of movement, providing a way for athletes to analyze their movements for training, requiring no more equipment than a mobile camera and some computing time.”

The system works best for larger movements, like throwing a ball or taking a sweeping leap during a dance sequence. It also works for situations that might obstruct or complicate movement, such as people wearing loose clothing or carrying objects.

Currently, the system only uses single-person scenarios, but the team soon hopes to expand to multiple people. This could open up the potential to study things like social disorders, interpersonal interactions, and team dynamics.

This work will be presented at the User Interface Software and Technology (UIST) symposium in Berlin, Germany in October 2018 and the team’s paper published as part of the proceedings.

As for anyone wondering about the Muybridge comment, here’s an image the MIT researchers have made available,

A new system uses an algorithm that can take 2-D videos and turn them into 3-D-printed “motion sculptures” that show how a human body moves through space. Image courtesy of MIT CSAIL

Contrast that MIT image with some of the images in this video capturing parts of a theatre production, Studies in Motion: The Hauntings of Eadweard Muybridge,

Getting back to MIT, here’s their MoSculp video,

There are some startling similarities, eh? I suppose there are only so many ways one can capture movement, be it in studies by Eadweard Muybridge, a theatre production about his work, or an MIT video showcasing the latest in motion capture technology.

It’s a very ‘carbony’ time: graphene jacket, graphene-skinned airplane, and schwarzite

In August 2018, I stumbled across several stories about graphene-based products and a new form of carbon.

Graphene jacket

The company producing this jacket has as its goal “… creating bionic clothing that is both bulletproof and intelligent.” Well, ‘bionic’ means biologically-inspired engineering and ‘intelligent’ usually means there’s some kind of computing capability in the product. This jacket, which is the first step towards the company’s goal, is not bionic, bulletproof, or intelligent. Nonetheless, it represents a very interesting science experiment in which you, the consumer, are part of step two in the company’s R&D (research and development).

Onto Vollebak’s graphene jacket,

Courtesy: Vollebak

From an August 14, 2018 article by Jesus Diaz for Fast Company,

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that have long threatened to revolutionize everything from aerospace engineering to medicine. …

Despite its immense promise, graphene still hasn’t found much use in consumer products, thanks to the fact that it’s hard to manipulate and manufacture in industrial quantities. The process of developing Vollebak’s jacket, according to the company’s cofounders, brothers Steve and Nick Tidball, took years of intensive research, during which the company worked with the same material scientists who built Michael Phelps’ 2008 Olympic Speedo swimsuit (which was famously banned for shattering records at the event).

The jacket is made out of a two-sided material, which the company invented during the extensive R&D process. The graphene side looks gunmetal gray, while the flipside appears matte black. To create it, the scientists turned raw graphite into something called graphene “nanoplatelets,” which are stacks of graphene that were then blended with polyurethane to create a membrane. That, in turn, is bonded to nylon to form the other side of the material, which Vollebak says alters the properties of the nylon itself. “Adding graphene to the nylon fundamentally changes its mechanical and chemical properties–a nylon fabric that couldn’t naturally conduct heat or energy, for instance, now can,” the company claims.

The company says that it’s reversible so you can enjoy graphene’s properties in different ways as the material interacts with either your skin or the world around you. “As physicists at the Max Planck Institute revealed, graphene challenges the fundamental laws of heat conduction, which means your jacket will not only conduct the heat from your body around itself to equalize your skin temperature and increase it, but the jacket can also theoretically store an unlimited amount of heat, which means it can work like a radiator,” Tidball explains.

He means it literally. You can leave the jacket out in the sun, or on another source of warmth, as it absorbs heat. Then, the company explains on its website, “If you then turn it inside out and wear the graphene next to your skin, it acts like a radiator, retaining its heat and spreading it around your body. The effect can be visibly demonstrated by placing your hand on the fabric, taking it away and then shooting the jacket with a thermal imaging camera. The heat of the handprint stays long after the hand has left.”

There’s a lot more to the article, although it does feature some hype, and I’m not sure I believe Diaz’s claim (August 14, 2018 article) that ‘graphene-based’ hair dye is perfectly safe (Note: A link has been removed),

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that will one day revolutionize everything from aerospace engineering to medicine. Its diverse uses are seemingly endless: It can stop a bullet if you add enough layers. It can change the color of your hair with no adverse effects. [emphasis mine] It can turn the walls of your home into a giant fire detector. “It’s so strong and so stretchy that the fibers of a spider web coated in graphene could catch a falling plane,” as Vollebak puts it in its marketing materials.

Not unless things have changed greatly since March 2018. My August 2, 2018 posting featured the graphene-based hair dye announcement from March 2018 and a cautionary note from Dr. Andrew Maynard (scroll down about 50% of the way for a longer excerpt of Maynard’s comments),

Northwestern University’s press release proudly announced, “Graphene finds new application as nontoxic, anti-static hair dye.” The announcement spawned headlines like “Enough with the toxic hair dyes. We could use graphene instead,” and “’Miracle material’ graphene used to create the ultimate hair dye.”

From these headlines, you might be forgiven for getting the idea that the safety of graphene-based hair dyes is a done deal. Yet having studied the potential health and environmental impacts of engineered nanomaterials for more years than I care to remember, I find such overly optimistic pronouncements worrying – especially when they’re not backed up by clear evidence.

These studies need to be approached with care, as the precise risks of graphene exposure will depend on how the material is used, how exposure occurs and how much of it is encountered. Yet there’s sufficient evidence to suggest that this substance should be used with caution – especially where there’s a high chance of exposure or that it could be released into the environment.

The full text of Dr. Maynard’s comments about graphene hair dyes and risk can be found here.

Bearing in mind that graphene-based hair dye is an entirely different class of product from the jacket, I wouldn’t necessarily dismiss risks; I would like to know what kind of risk assessment and safety testing has been done. Due to their understandable enthusiasm, the brothers Tidball have focused all their marketing on the benefits and the opportunity for the consumer to test their product (from the graphene jacket product webpage),

While it’s completely invisible and only a single atom thick, graphene is the lightest, strongest, most conductive material ever discovered, and has the same potential to change life on Earth as stone, bronze and iron once did. But it remains difficult to work with, extremely expensive to produce at scale, and lives mostly in pioneering research labs. So following in the footsteps of the scientists who discovered it through their own highly speculative experiments, we’re releasing graphene-coated jackets into the world as experimental prototypes. Our aim is to open up our R&D and accelerate discovery by getting graphene out of the lab and into the field so that we can harness the collective power of early adopters as a test group. No-one yet knows the true limits of what graphene can do, so the first edition of the Graphene Jacket is fully reversible with one side coated in graphene and the other side not. If you’d like to take part in the next stage of this supermaterial’s history, the experiment is now open. You can now buy it, test it and tell us about it. [emphasis mine]

How maverick experiments won the Nobel Prize

While graphene’s existence was first theorised in the 1940s, it wasn’t until 2004 that two maverick scientists, Andre Geim and Konstantin Novoselov, were able to isolate and test it. Through highly speculative and unfunded experimentation known as their ‘Friday night experiments,’ they peeled layer after layer off a shaving of graphite using Scotch tape until they produced a sample of graphene just one atom thick. After similarly leftfield thinking won Geim the 2000 Ig Nobel prize for levitating frogs using magnets, the pair won the Nobel prize in 2010 for the isolation of graphene.

Should you be interested in beta-testing the jacket, it will cost you $695 (presumably USD); order here. One last thing: Vollebak is based in the UK.

Graphene skinned plane

An August 14, 2018 news item (also published as an August 1, 2018 Haydale press release) by Sue Keighley on Azonano heralds a new technology for airplanes,

Haydale, (AIM: HAYD), the global advanced materials group, notes the announcement made yesterday from the University of Central Lancashire (UCLAN) about the recent unveiling of the world’s first graphene skinned plane at the internationally renowned Farnborough air show.

The prepreg material, developed by Haydale, has potential value for fuselage and wing surfaces in larger scale aero and space applications especially for the rapidly expanding drone market and, in the longer term, the commercial aerospace sector. By incorporating functionalised nanoparticles into epoxy resins, the electrical conductivity of fibre-reinforced composites has been significantly improved for lightning-strike protection, thereby achieving substantial weight saving and removing some manufacturing complexities.

Before getting to the photo, here’s a definition for pre-preg from its Wikipedia entry (Note: Links have been removed),

Pre-preg is “pre-impregnated” composite fibers where a thermoset polymer matrix material, such as epoxy, or a thermoplastic resin is already present. The fibers often take the form of a weave and the matrix is used to bond them together and to other components during manufacture.

Haydale has supplied graphene enhanced prepreg material for Juno, a three-metre wide graphene-enhanced composite skinned aircraft, that was revealed as part of the ‘Futures Day’ at Farnborough Air Show 2018. [downloaded from https://www.azonano.com/news.aspx?newsID=36298]

A July 31, 2018 University of Central Lancashire (UCLan) press release provides a tiny bit more (pun intended) detail,

The University of Central Lancashire (UCLan) has unveiled the world’s first graphene skinned plane at an internationally renowned air show.

Juno, a three-and-a-half-metre wide graphene skinned aircraft, was revealed on the North West Aerospace Alliance (NWAA) stand as part of the ‘Futures Day’ at Farnborough Air Show 2018.

The University’s aerospace engineering team has worked in partnership with the Sheffield Advanced Manufacturing Research Centre (AMRC), the University of Manchester’s National Graphene Institute (NGI), Haydale Graphene Industries (Haydale) and a range of other businesses to develop the unmanned aerial vehicle (UAV), which also includes graphene batteries and 3D printed parts.

Billy Beggs, UCLan’s Engineering Innovation Manager, said: “The industry reaction to Juno at Farnborough was superb with many positive comments about the work we’re doing. Having Juno at one of the world’s biggest air shows demonstrates the great strides we’re making in leading a programme to accelerate the uptake of graphene and other nano-materials into industry.

“The programme supports the objectives of the UK Industrial Strategy and the University’s Engineering Innovation Centre (EIC) to increase industry relevant research and applications linked to key local specialisms. Given that Lancashire represents the fourth largest aerospace cluster in the world, there is perhaps no better place to be developing next generation technologies for the UK aerospace industry.”

Previous graphene developments at UCLan have included the world’s first flight of a graphene skinned wing and the launch of a specially designed graphene-enhanced capsule into near space using high altitude balloons.

UCLan engineering students have been involved in the hands-on project, helping build Juno on the Preston Campus.

Haydale supplied much of the material and all the graphene used in the aircraft. Ray Gibbs, Chief Executive Officer, said: “We are delighted to be part of the project team. Juno has highlighted the capability and benefit of using graphene to meet key issues faced by the market, such as reducing weight to increase range and payload, defeating lightning strike and protecting aircraft skins against ice build-up.”

David Bailey Chief Executive of the North West Aerospace Alliance added: “The North West aerospace cluster contributes over £7 billion to the UK economy, accounting for one quarter of the UK aerospace turnover. It is essential that the sector continues to develop next generation technologies so that it can help the UK retain its competitive advantage. It has been a pleasure to support the Engineering Innovation Centre team at the University in developing the world’s first full graphene skinned aircraft.”

The Juno project team represents the latest phase in a long-term strategic partnership between the University and a range of organisations. The partnership is expected to go from strength to strength following the opening of the £32m EIC facility in February 2019.

The next step is to fly Juno and conduct further tests over the next two months.

Next item, a new carbon material.

Schwarzite

I love watching this gif of a schwarzite,

The three-dimensional cage structure of a schwarzite that was formed inside the pores of a zeolite. (Graphics by Yongjin Lee and Efrem Braun)

An August 13, 2018 news item on Nanowerk announces the new carbon structure,

The discovery of buckyballs [also known as fullerenes, C60, or buckminsterfullerenes] surprised and delighted chemists in the 1980s, nanotubes jazzed physicists in the 1990s, and graphene charged up materials scientists in the 2000s, but one nanoscale carbon structure – a negatively curved surface called a schwarzite – has eluded everyone. Until now.

University of California, Berkeley [UC Berkeley], chemists have proved that three carbon structures recently created by scientists in South Korea and Japan are in fact the long-sought schwarzites, which researchers predict will have unique electrical and storage properties like those now being discovered in buckminsterfullerenes (buckyballs or fullerenes for short), nanotubes and graphene.

An August 13, 2018 UC Berkeley news release by Robert Sanders, which originated the news item, describes how the Berkeley scientists and the members of their international collaboration from Switzerland, China, Germany, Italy, and Russia have contributed to the current state of schwarzite research,

The new structures were built inside the pores of zeolites, crystalline forms of silicon dioxide – sand – more commonly used as water softeners in laundry detergents and to catalytically crack petroleum into gasoline. Called zeolite-templated carbons (ZTC), the structures were being investigated for possible interesting properties, though the creators were unaware of their identity as schwarzites, which theoretical chemists have worked on for decades.

Based on this theoretical work, chemists predict that schwarzites will have unique electronic, magnetic and optical properties that would make them useful as supercapacitors, battery electrodes and catalysts, and with large internal spaces ideal for gas storage and separation.

UC Berkeley postdoctoral fellow Efrem Braun and his colleagues identified these ZTC materials as schwarzites based on their negative curvature, and developed a way to predict which zeolites can be used to make schwarzites and which can’t.

“We now have the recipe for how to make these structures, which is important because, if we can make them, we can explore their behavior, which we are working hard to do now,” said Berend Smit, an adjunct professor of chemical and biomolecular engineering at UC Berkeley and an expert on porous materials such as zeolites and metal-organic frameworks.

Smit, the paper’s corresponding author, Braun and their colleagues in Switzerland, China, Germany, Italy and Russia will report their discovery this week in the journal Proceedings of the National Academy of Sciences. Smit is also a faculty scientist at Lawrence Berkeley National Laboratory.

Playing with carbon

Diamond and graphite are well-known three-dimensional crystalline arrangements of pure carbon, but carbon atoms can also form two-dimensional “crystals” — hexagonal arrangements patterned like chicken wire. Graphene is one such arrangement: a flat sheet of carbon atoms that is not only the strongest material on Earth, but also has a high electrical conductivity that makes it a promising component of electronic devices.


The cage structure of a schwarzite that was formed inside the pores of a zeolite. The zeolite is subsequently dissolved to release the new material. (Graphics by Yongjin Lee and Efrem Braun)

Graphene sheets can be wadded up to form soccer ball-shaped fullerenes – spherical carbon cages that can store molecules and are being used today to deliver drugs and genes into the body. Rolling graphene into a cylinder yields fullerenes called nanotubes, which are being explored today as highly conductive wires in electronics and storage vessels for gases like hydrogen and carbon dioxide. All of these are submicroscopic, 10,000 times smaller than the width of a human hair.

To date, however, only positively curved fullerenes and graphene, which has zero curvature, have been synthesized, feats rewarded by Nobel Prizes in 1996 and 2010, respectively.

In the 1880s, German physicist Hermann Schwarz investigated negatively curved structures that resemble soap-bubble surfaces, and when theoretical work on carbon cage molecules ramped up in the 1990s, Schwarz’s name became attached to the hypothetical negatively curved carbon sheets.

“The experimental validation of schwarzites thus completes the triumvirate of possible curvatures to graphene; positively curved, flat, and now negatively curved,” Braun added.
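The geometry here can be stated precisely (standard differential geometry, not anything particular to this paper). At each point of a surface, the two principal curvatures \(\kappa_1\) and \(\kappa_2\) define the mean curvature and the Gaussian curvature:

\[
H = \tfrac{1}{2}(\kappa_1 + \kappa_2), \qquad K = \kappa_1 \kappa_2.
\]

A minimal surface, like a soap film, has \(H = 0\) everywhere, so \(\kappa_2 = -\kappa_1\) and \(K = -\kappa_1^2 \le 0\): every non-flat point is saddle-shaped. That is the negative curvature distinguishing schwarzites from positively curved fullerenes (\(K > 0\)) and flat graphene (\(K = 0\)).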

Minimize me

Like soap bubbles on wire frames, schwarzites are topologically minimal surfaces. When made inside a zeolite, a vapor of carbon-containing molecules is injected, allowing the carbon to assemble into a two-dimensional graphene-like sheet lining the walls of the pores in the zeolite. The surface is stretched tautly to minimize its area, which makes all the surfaces curve negatively, like a saddle. The zeolite is then dissolved, leaving behind the schwarzite.


A computer-rendered negatively curved soap bubble that exhibits the geometry of a carbon schwarzite. (Felix Knöppel image)

“These negatively-curved carbons have been very hard to synthesize on their own, but it turns out that you can grow the carbon film catalytically at the surface of a zeolite,” Braun said. “But the schwarzites synthesized to date have been made by choosing zeolite templates through trial and error. We provide very simple instructions you can follow to rationally make schwarzites and we show that, by choosing the right zeolite, you can tune schwarzites to optimize the properties you want.”

Researchers should be able to pack unusually large amounts of electrical charge into schwarzites, which would make them better capacitors than conventional ones used today in electronics. Their large interior volume would also allow storage of atoms and molecules, which is also being explored with fullerenes and nanotubes. And their large surface area, equivalent to the surface areas of the zeolites they’re grown in, could make them as versatile as zeolites for catalyzing reactions in the petroleum and natural gas industries.

Braun modeled ZTC structures computationally using the known structures of zeolites, and worked with topological mathematician Senja Barthel of the École Polytechnique Fédérale de Lausanne in Sion, Switzerland, to determine which of the minimal surfaces the structures resembled.

The team determined that, of the approximately 200 zeolites created to date, only 15 can be used as a template to make schwarzites, and only three of them have been used to date to produce schwarzite ZTCs. Over a million zeolite structures have been predicted, however, so there could be many more possible schwarzite carbon structures made using the zeolite-templating method.

Other co-authors of the paper are Yongjin Lee, Seyed Mohamad Moosavi and Barthel of the École Polytechnique Fédérale de Lausanne, Rocio Mercado of UC Berkeley, Igor Baburin of the Technische Universität Dresden in Germany and Davide Proserpio of the Università degli Studi di Milano in Italy and Samara State Technical University in Russia.

Here’s a link to and a citation for the paper,

Generating carbon schwarzites via zeolite-templating by Efrem Braun, Yongjin Lee, Seyed Mohamad Moosavi, Senja Barthel, Rocio Mercado, Igor A. Baburin, Davide M. Proserpio, and Berend Smit. PNAS August 14, 2018. 201805062; published ahead of print August 14, 2018. https://doi.org/10.1073/pnas.1805062115

This paper appears to be open access.

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost it at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960’s sexual revolution in the US along with mention of a prior sexual revolution in the 1920s in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation received in front of a black and white television on January 1, 1982 eventually led him to write, “Films from the Future.” (He has a PhD in physics which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis and you can now find science in art galleries. (Not to mention the movies and television where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what’s called ‘emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technology.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He also describes a connection to current trends in biotechnology,

De-Extinction

In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit. As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle. But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book: he uses each film as a launching pad for a clear, readable description of the relevant bits of science, so you understand why the premise was likely, unlikely, or pure fantasy, while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions, such as: Should we revive animals rendered extinct (due to obsolescence or an inability to adapt to new conditions) when we could develop new animals?
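As an aside for technically minded readers, Maynard’s “billion-piece jigsaw puzzle” can be made concrete with a toy. Here’s a minimal, purely illustrative Python sketch (the sequence and fragments are made up, and nothing here resembles real assembler scale) of the greedy overlap-merging idea that genome assembly builds on,

```python
# Toy illustration (not real bioinformatics): greedily merge DNA
# fragments by their longest overlap, without knowing the "final
# picture" the fragments came from.

def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_n, best_i, best_j = 0, 0, 1
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j and overlap(a, b) > best_n:
                    best_n, best_i, best_j = overlap(a, b), i, j
        if best_n == 0:  # no overlaps left; the order is ambiguous
            return "".join(frags)
        merged = frags[best_i] + frags[best_j][best_n:]
        frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
        frags.append(merged)
    return frags[0]

# Three fragments of the (hidden) sequence "ATGGCGTACGTT"
print(greedy_assemble(["ATGGCG", "GCGTAC", "TACGTT"]))  # ATGGCGTACGTT
```

This tidy example works only because the toy sequence has no repeats; real genomes are full of them, which is exactly where greedy stitching goes wrong and why reconstruction remains a Herculean task.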

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional ‘memoirish’ anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media: as one glorious breakthrough after the next.

Maynard does discuss the issues of social inequality, power, and ownership. For example, who owns your transplant or your data? Puzzlingly, he doesn’t touch on the current environment, where scientists in the US and elsewhere are encouraged, even pressured, to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here, scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such initiative in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the film’s future environment, used both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This, of course, points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps, despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages, while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time (I’m hopeful there’ll be a next time), Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or, even, cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada, the US, or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in The White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour, but since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ (Chapter Three) was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s an easy corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of those chapters noting that there are cultural differences.

One area where I had a significant problem was with regard to being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences, and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard, and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, ‘Sci-fi movies are the secret weapon that could help Silicon Valley grow up’, and a Nov. 21, 2018 article on slate.com, ‘The True Cost of Stain-Resistant Pants: The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology’. Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great percentage of scientists are academic. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

CRISPR-Cas12a as a new diagnostic tool

Cas12a is similar to Cas9 but has an added feature, as noted in this February 15, 2018 news item on ScienceDaily,

Utilizing an unsuspected activity of the CRISPR-Cas12a protein, researchers created a simple diagnostic system called DETECTR to analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance and also diagnose bacterial and viral infections. The scientists discovered that when Cas12a binds its double-stranded DNA target, it indiscriminately chews up all single-stranded DNA. They then created reporter molecules attached to single-stranded DNA to signal when Cas12a finds its target.

A February 15, 2018 University of California at Berkeley (UC Berkeley) news release by Robert Sanders, which originated the news item, provides more detail and history,

CRISPR-Cas12a, one of the DNA-cutting proteins revolutionizing biology today, has an unexpected side effect that makes it an ideal enzyme for simple, rapid and accurate disease diagnostics.

Cas12a, discovered in 2015 and originally called Cpf1, is like the well-known Cas9 protein that UC Berkeley’s Jennifer Doudna and colleague Emmanuelle Charpentier turned into a powerful gene-editing tool in 2012.

CRISPR-Cas9 has supercharged biological research in a mere six years, speeding up exploration of the causes of disease and sparking many potential new therapies. Cas12a was a major addition to the gene-cutting toolbox, able to cut double-stranded DNA at places that Cas9 can’t, and, because it leaves ragged edges, perhaps easier to use when inserting a new gene at the DNA cut.

But co-first authors Janice Chen, Enbo Ma and Lucas Harrington in Doudna’s lab discovered that when Cas12a binds and cuts a targeted double-stranded DNA sequence, it unexpectedly unleashes indiscriminate cutting of all single-stranded DNA in a test tube.

Most of the DNA in a cell is in the form of a double-stranded helix, so this is not necessarily a problem for gene-editing applications. But it does allow researchers to use a single-stranded “reporter” molecule with the CRISPR-Cas12a protein, which produces an unambiguous fluorescent signal when Cas12a has found its target.

“We continue to be fascinated by the functions of bacterial CRISPR systems and how mechanistic understanding leads to opportunities for new technologies,” said Doudna, a professor of molecular and cell biology and of chemistry and a Howard Hughes Medical Institute investigator.

DETECTR diagnostics

The new DETECTR system based on CRISPR-Cas12a can analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance as well as diagnose bacterial and viral infections. Target DNA is amplified by RPA to make it easier for Cas12a to find it and bind, unleashing indiscriminate cutting of single-stranded DNA, including DNA attached to a fluorescent marker (gold star) that tells researchers that Cas12a has found its target.

The UC Berkeley researchers, along with their colleagues at UC San Francisco, will publish their findings Feb. 15 [2018] via the journal Science’s fast-track service, First Release.

The researchers developed a diagnostic system they dubbed the DNA Endonuclease Targeted CRISPR Trans Reporter, or DETECTR, for quick and easy point-of-care detection of even small amounts of DNA in clinical samples. It involves adding all reagents in a single reaction: CRISPR-Cas12a and its RNA targeting sequence (guide RNA), fluorescent reporter molecule and an isothermal amplification system called recombinase polymerase amplification (RPA), which is similar to polymerase chain reaction (PCR). When warmed to body temperature, RPA rapidly multiplies the number of copies of the target DNA, boosting the chances Cas12a will find one of them, bind and unleash single-strand DNA cutting, resulting in a fluorescent readout.

The UC Berkeley researchers tested this strategy using patient samples containing human papilloma virus (HPV), in collaboration with Joel Palefsky’s lab at UC San Francisco. Using DETECTR, they were able to demonstrate accurate detection of the “high-risk” HPV types 16 and 18 in samples infected with many different HPV types.

“This protein works as a robust tool to detect DNA from a variety of sources,” Chen said. “We want to push the limits of the technology, which is potentially applicable in any point-of-care diagnostic situation where there is a DNA component, including cancer and infectious disease.”

The indiscriminate cutting of all single-stranded DNA, which the researchers discovered holds true for all related Cas12 molecules, but not Cas9, may have unwanted effects in genome editing applications, but more research is needed on this topic, Chen said. During the transcription of genes, for example, the cell briefly creates single strands of DNA that could accidentally be cut by Cas12a.

The activity of the Cas12 proteins is similar to that of another family of CRISPR enzymes, Cas13a, which chew up RNA after binding to a target RNA sequence. Various teams, including Doudna’s, are developing diagnostic tests using Cas13a that could, for example, detect the RNA genome of HIV.

(Infographic about the DETECTR system by the Howard Hughes Medical Institute)

These new tools have been repurposed from their original role in microbes where they serve as adaptive immune systems to fend off viral infections. In these bacteria, Cas proteins store records of past infections and use these “memories” to identify harmful DNA during infections. Cas12a, the protein used in this study, then cuts the invading DNA, saving the bacteria from being taken over by the virus.

The chance discovery of Cas12a’s unusual behavior highlights the importance of basic research, Chen said, since it came from a basic curiosity about the mechanism Cas12a uses to cleave double-stranded DNA.

“It’s cool that, by going after the question of the cleavage mechanism of this protein, we uncovered what we think is a very powerful technology useful in an array of applications,” Chen said.
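For readers who want the detection logic spelled out, here’s a toy Python sketch of the DETECTR readout as described in the release. The sequences are hypothetical, and booleans stand in for real assay chemistry and kinetics; this is a sketch of the logic, not the assay,

```python
# Toy model of the DETECTR readout logic (illustrative only; real
# assays involve enzyme kinetics and thresholds, not booleans).

GUIDE = "GATTACA"  # hypothetical target sequence for the Cas12a guide RNA

def rpa_amplify(sample, target, copies=1000):
    """Isothermal amplification: multiply any strands containing the target."""
    hits = [s for s in sample if target in s]
    return sample + hits * copies

def detectr(sample, guide, n_reporters=100):
    """Cas12a that binds its double-stranded target switches on
    indiscriminate single-stranded DNA cutting, shredding the ssDNA
    reporter molecules and releasing a fluorescent signal."""
    amplified = rpa_amplify(sample, guide)
    cas12a_active = any(guide in strand for strand in amplified)
    cleaved = n_reporters if cas12a_active else 0
    return cleaved / n_reporters  # 1.0 = bright (target found), 0.0 = dark

print(detectr(["CCGATTACAGG", "TTTTT"], GUIDE))  # 1.0, target present
print(detectr(["CCAGTACAGG", "TTTTT"], GUIDE))   # 0.0, no target
```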

Here’s a link to and a citation for the paper,

CRISPR-Cas12a target binding unleashes indiscriminate single-stranded DNase activity by Janice S. Chen, Enbo Ma, Lucas B. Harrington, Maria Da Costa, Xinran Tian, Joel M. Palefsky, Jennifer A. Doudna. Science 15 Feb 2018: eaar6245 DOI: 10.1126/science.aar6245

This paper is behind a paywall.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued this news release (news item) without following the ‘rules’, i.e., covering as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. It is written more in the style of a magazine article, so the details take a while to emerge. From the July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.
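The communication-bottleneck argument in the release is easy to put rough numbers to. Here’s a crude back-of-envelope Python model; all the figures are hypothetical, chosen only to contrast a narrow off-chip memory bus with the ultradense vertical wiring described above,

```python
# Back-of-envelope model (all numbers hypothetical): moving one
# million 16-bit sensor readings into memory over a narrow off-chip
# bus versus millions of short, slower, but massively parallel
# inter-layer wires in a 3-D stack.

def transfer_time_s(n_bits, wires, bit_rate_hz):
    """Seconds to move n_bits over `wires` parallel links."""
    return n_bits / (wires * bit_rate_hz)

n_bits = 1_000_000 * 16  # 1M carbon-nanotube sensors, 16 bits each

# Conventional 2-D system: 64 bus wires at 2 Gb/s each
t_bus = transfer_time_s(n_bits, wires=64, bit_rate_hz=2e9)

# 3-D stack: 1M vertical wires, each at only 100 Mb/s
t_vias = transfer_time_s(n_bits, wires=1_000_000, bit_rate_hz=1e8)

print(f"off-chip bus: {t_bus * 1e6:.1f} us")   # 125.0 us
print(f"3-D vias:     {t_vias * 1e6:.2f} us")  # 0.16 us
```

The point is not the particular numbers but the shape of the trade-off: per-wire speed matters far less than having memory physically adjacent and massively parallel, which is what the interleaved layers provide.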

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

Artificial intelligence and metaphors

This is a different approach to artificial intelligence. From a June 27, 2017 news item on ScienceDaily,

Ask Siri to find a math tutor to help you “grasp” calculus and she’s likely to respond that your request is beyond her abilities. That’s because metaphors like “grasp” are difficult for Apple’s voice-controlled personal assistant to, well, grasp.

But new UC Berkeley research suggests that Siri and other digital helpers could someday learn the algorithms that humans have used for centuries to create and understand metaphorical language.

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary.

The results, published in the journal Cognitive Psychology, demonstrate how throughout history humans have used language that originally described palpable experiences such as “grasping an object” to describe more intangible concepts such as “grasping an idea.”

Unfortunately, this image is not the best quality,

Scientists have created historical maps showing the evolution of metaphoric language. (Image courtesy of Mahesh Srinivasan)

A June 27, 2017 University of California at Berkeley (or UC Berkeley) news release by Yasmin Anwar, which originated the news item, provides more detail,

“The use of concrete language to talk about abstract ideas may unlock mysteries about how we are able to communicate and conceptualize things we can never see or touch,” said study senior author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley. “Our results may also pave the way for future advances in artificial intelligence.”

The findings provide the first large-scale evidence that the creation of new metaphorical word meanings is systematic, researchers said. They can also inform efforts to design natural language processing systems like Siri to help them understand creativity in human language.

“Although such systems are capable of understanding many words, they are often tripped up by creative uses of words that go beyond their existing, pre-programmed vocabularies,” said study lead author Yang Xu, a postdoctoral researcher in linguistics and cognitive science at UC Berkeley.

“This work brings opportunities toward modeling metaphorical words at a broad scale, ultimately allowing the construction of artificial intelligence systems that are capable of creating and comprehending metaphorical language,” he added.

Srinivasan and Xu conducted the study with Lehigh University psychology professor Barbara Malt.

Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as “water,” were extended to another semantic domain, such as “mind.”

Researchers called the original semantic domain the “source domain” and the domain that the metaphorical meaning was extended to, the “target domain.”

More than 1,400 online participants were recruited to rate semantic domains such as “water” or “mind” according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).

These ratings were fed into computational models that the researchers had developed to predict which semantic domains had been the sources or targets of metaphorical extension.

In comparing their computational predictions against the actual historical record provided by the Metaphor Map of English, researchers found that their models correctly forecast about 75 percent of recorded metaphorical language mappings over the past millennium.

Furthermore, they found that the degree to which a domain is tied to experience in the external world, such as “grasping a rope,” was the primary predictor of how a word would take on a new metaphorical meaning such as “grasping an idea.”

For example, time and again, researchers found that words associated with textiles, digestive organs, wetness, solidity and plants were more likely to provide sources for metaphorical extension, while mental and emotional states, such as excitement, pride and fear were more likely to be the targets of metaphorical extension.
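To make the modelling idea concrete, here’s an illustrative Python sketch; it is emphatically not the authors’ actual model, and the ratings below are invented, but it captures the shape of the reasoning: domains more strongly tied to concrete, external-world experience are predicted to be the sources of metaphorical mappings, and more abstract domains the targets,

```python
# Illustrative sketch (not the study's actual model): predict the
# direction of a metaphorical mapping from participant-style ratings
# of how strongly each semantic domain is tied to the external world.
# All ratings below are invented for the example.

externality = {  # hypothetical 0-1 ratings
    "water": 0.90, "textiles": 0.85, "plants": 0.80,
    "mind": 0.20, "excitement": 0.10, "pride": 0.15,
}

def predict_direction(domain_a, domain_b):
    """Return (source, target): the more concrete, external-world
    domain lends its words to the more abstract one, as in
    'grasping a rope' -> 'grasping an idea'."""
    if externality[domain_a] >= externality[domain_b]:
        return domain_a, domain_b
    return domain_b, domain_a

# Score predictions against some (invented) historical source-target pairs
recorded = [("water", "mind"), ("textiles", "excitement"), ("plants", "pride")]
correct = sum(predict_direction(a, b) == (a, b) for a, b in recorded)
print(f"{correct}/{len(recorded)} mappings predicted correctly")  # 3/3
```

The study’s reported 75 percent accuracy over a millennium of recorded mappings suggests that something like this single concrete-to-abstract dimension does a surprising amount of the predictive work.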

Here’s a link to and a citation for the paper,

Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium by Yang Xu, Barbara C. Malt, Mahesh Srinivasan. Cognitive Psychology Volume 96, August 2017, Pages 41–53 DOI: https://doi.org/10.1016/j.cogpsych.2017.05.005

The early web version of this paper is behind a paywall.

For anyone interested in the ‘Metaphor Map of English’ database mentioned in the news release, you can find it here on the University of Glasgow website. By the way, it also seems to be known as ‘Mapping Metaphor with the Historical Thesaurus‘.