Tag Archives: Japan

It’s a very ‘carbony’ time: graphene jacket, graphene-skinned airplane, and schwarzite

In August 2018, I stumbled across several stories about graphene-based products and a new form of carbon.

Graphene jacket

The company producing this jacket has as its goal “… creating bionic clothing that is both bulletproof and intelligent.” Well, ‘bionic’ means biologically-inspired engineering and ‘intelligent’ usually means there’s some kind of computing capability in the product. This jacket, which is the first step towards the company’s goal, is not bionic, bulletproof, or intelligent. Nonetheless, it represents a very interesting science experiment in which you, the consumer, are part of step two in the company’s R&D (research and development).

Onto Vollebak’s graphene jacket,

Courtesy: Vollebak

From an August 14, 2018 article by Jesus Diaz for Fast Company,

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that have long threatened to revolutionize everything from aerospace engineering to medicine. …

Despite its immense promise, graphene still hasn’t found much use in consumer products, thanks to the fact that it’s hard to manipulate and manufacture in industrial quantities. The process of developing Vollebak’s jacket, according to the company’s cofounders, brothers Steve and Nick Tidball, took years of intensive research, during which the company worked with the same material scientists who built Michael Phelps’ 2008 Olympic Speedo swimsuit (which was famously banned for shattering records at the event).

The jacket is made out of a two-sided material, which the company invented during the extensive R&D process. The graphene side looks gunmetal gray, while the flipside appears matte black. To create it, the scientists turned raw graphite into something called graphene “nanoplatelets,” which are stacks of graphene that were then blended with polyurethane to create a membrane. That, in turn, is bonded to nylon to form the other side of the material, which Vollebak says alters the properties of the nylon itself. “Adding graphene to the nylon fundamentally changes its mechanical and chemical properties–a nylon fabric that couldn’t naturally conduct heat or energy, for instance, now can,” the company claims.

The company says that it’s reversible so you can enjoy graphene’s properties in different ways as the material interacts with either your skin or the world around you. “As physicists at the Max Planck Institute revealed, graphene challenges the fundamental laws of heat conduction, which means your jacket will not only conduct the heat from your body around itself to equalize your skin temperature and increase it, but the jacket can also theoretically store an unlimited amount of heat, which means it can work like a radiator,” Tidball explains.

He means it literally. You can leave the jacket out in the sun, or on another source of warmth, as it absorbs heat. Then, the company explains on its website, “If you then turn it inside out and wear the graphene next to your skin, it acts like a radiator, retaining its heat and spreading it around your body. The effect can be visibly demonstrated by placing your hand on the fabric, taking it away and then shooting the jacket with a thermal imaging camera. The heat of the handprint stays long after the hand has left.”

There’s a lot more to the article, although it does feature some hype, and I’m not sure I believe Diaz’s claim (in the same August 14, 2018 article) that ‘graphene-based’ hair dye is perfectly safe (Note: A link has been removed),

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that will one day revolutionize everything from aerospace engineering to medicine. Its diverse uses are seemingly endless: It can stop a bullet if you add enough layers. It can change the color of your hair with no adverse effects. [emphasis mine] It can turn the walls of your home into a giant fire detector. “It’s so strong and so stretchy that the fibers of a spider web coated in graphene could catch a falling plane,” as Vollebak puts it in its marketing materials.

Not unless things have changed greatly since March 2018. My August 2, 2018 posting featured the graphene-based hair dye announcement from March 2018 and a cautionary note from Dr. Andrew Maynard (scroll down about 50% of the way for a longer excerpt of Maynard’s comments),

Northwestern University’s press release proudly announced, “Graphene finds new application as nontoxic, anti-static hair dye.” The announcement spawned headlines like “Enough with the toxic hair dyes. We could use graphene instead,” and “’Miracle material’ graphene used to create the ultimate hair dye.”

From these headlines, you might be forgiven for getting the idea that the safety of graphene-based hair dyes is a done deal. Yet having studied the potential health and environmental impacts of engineered nanomaterials for more years than I care to remember, I find such overly optimistic pronouncements worrying – especially when they’re not backed up by clear evidence.

These studies need to be approached with care, as the precise risks of graphene exposure will depend on how the material is used, how exposure occurs and how much of it is encountered. Yet there’s sufficient evidence to suggest that this substance should be used with caution – especially where there’s a high chance of exposure or that it could be released into the environment.

The full text of Dr. Maynard’s comments about graphene hair dyes and risk can be found here.

Bearing in mind that graphene-based hair dye is an entirely different class of product from the jacket, I still wouldn’t dismiss the risks out of hand; I would like to know what kind of risk assessment and safety testing has been done. Due to their understandable enthusiasm, the brothers Tidball have focused all their marketing on the benefits and on the opportunity for the consumer to test their product (from the graphene jacket product webpage),

While it’s completely invisible and only a single atom thick, graphene is the lightest, strongest, most conductive material ever discovered, and has the same potential to change life on Earth as stone, bronze and iron once did. But it remains difficult to work with, extremely expensive to produce at scale, and lives mostly in pioneering research labs. So following in the footsteps of the scientists who discovered it through their own highly speculative experiments, we’re releasing graphene-coated jackets into the world as experimental prototypes. Our aim is to open up our R&D and accelerate discovery by getting graphene out of the lab and into the field so that we can harness the collective power of early adopters as a test group. No-one yet knows the true limits of what graphene can do, so the first edition of the Graphene Jacket is fully reversible with one side coated in graphene and the other side not. If you’d like to take part in the next stage of this supermaterial’s history, the experiment is now open. You can now buy it, test it and tell us about it. [emphasis mine]

How maverick experiments won the Nobel Prize

While graphene’s existence was first theorised in the 1940s, it wasn’t until 2004 that two maverick scientists, Andre Geim and Konstantin Novoselov, were able to isolate and test it. Through highly speculative and unfunded experimentation known as their ‘Friday night experiments,’ they peeled layer after layer off a shaving of graphite using Scotch tape until they produced a sample of graphene just one atom thick. After similarly leftfield thinking won Geim the 2000 Ig Nobel prize for levitating frogs using magnets, the pair won the Nobel prize in 2010 for the isolation of graphene.

Should you be interested in beta-testing the jacket, it will cost you $695 (presumably USD); order here. One last thing: Vollebak is based in the UK.

Graphene skinned plane

An August 14, 2018 news item (also published as an August 1, 2018 Haydale press release) by Sue Keighley on Azonano heralds a new technology for airplanes,

Haydale, (AIM: HAYD), the global advanced materials group, notes the announcement made yesterday from the University of Central Lancashire (UCLAN) about the recent unveiling of the world’s first graphene skinned plane at the internationally renowned Farnborough air show.

The prepreg material, developed by Haydale, has potential value for fuselage and wing surfaces in larger scale aero and space applications especially for the rapidly expanding drone market and, in the longer term, the commercial aerospace sector. By incorporating functionalised nanoparticles into epoxy resins, the electrical conductivity of fibre-reinforced composites has been significantly improved for lightning-strike protection, thereby achieving substantial weight saving and removing some manufacturing complexities.

Before getting to the photo, here’s a definition for pre-preg from its Wikipedia entry (Note: Links have been removed),

Pre-preg is “pre-impregnated” composite fibers where a thermoset polymer matrix material, such as epoxy, or a thermoplastic resin is already present. The fibers often take the form of a weave and the matrix is used to bond them together and to other components during manufacture.

Haydale has supplied graphene enhanced prepreg material for Juno, a three-metre wide graphene-enhanced composite skinned aircraft, that was revealed as part of the ‘Futures Day’ at Farnborough Air Show 2018. [downloaded from https://www.azonano.com/news.aspx?newsID=36298]

A July 31, 2018 University of Central Lancashire (UCLan) press release provides a tiny bit more (pun intended) detail,

The University of Central Lancashire (UCLan) has unveiled the world’s first graphene skinned plane at an internationally renowned air show.

Juno, a three-and-a-half-metre wide graphene skinned aircraft, was revealed on the North West Aerospace Alliance (NWAA) stand as part of the ‘Futures Day’ at Farnborough Air Show 2018.

The University’s aerospace engineering team has worked in partnership with the Sheffield Advanced Manufacturing Research Centre (AMRC), the University of Manchester’s National Graphene Institute (NGI), Haydale Graphene Industries (Haydale) and a range of other businesses to develop the unmanned aerial vehicle (UAV), which also includes graphene batteries and 3D printed parts.

Billy Beggs, UCLan’s Engineering Innovation Manager, said: “The industry reaction to Juno at Farnborough was superb with many positive comments about the work we’re doing. Having Juno at one of the world’s biggest air shows demonstrates the great strides we’re making in leading a programme to accelerate the uptake of graphene and other nano-materials into industry.

“The programme supports the objectives of the UK Industrial Strategy and the University’s Engineering Innovation Centre (EIC) to increase industry relevant research and applications linked to key local specialisms. Given that Lancashire represents the fourth largest aerospace cluster in the world, there is perhaps no better place to be developing next generation technologies for the UK aerospace industry.”

Previous graphene developments at UCLan have included the world’s first flight of a graphene skinned wing and the launch of a specially designed graphene-enhanced capsule into near space using high altitude balloons.

UCLan engineering students have been involved in the hands-on project, helping build Juno on the Preston Campus.

Haydale supplied much of the material and all the graphene used in the aircraft. Ray Gibbs, Chief Executive Officer, said: “We are delighted to be part of the project team. Juno has highlighted the capability and benefit of using graphene to meet key issues faced by the market, such as reducing weight to increase range and payload, defeating lightning strike and protecting aircraft skins against ice build-up.”

David Bailey, Chief Executive of the North West Aerospace Alliance, added: “The North West aerospace cluster contributes over £7 billion to the UK economy, accounting for one quarter of the UK aerospace turnover. It is essential that the sector continues to develop next generation technologies so that it can help the UK retain its competitive advantage. It has been a pleasure to support the Engineering Innovation Centre team at the University in developing the world’s first full graphene skinned aircraft.”

The Juno project team represents the latest phase in a long-term strategic partnership between the University and a range of organisations. The partnership is expected to go from strength to strength following the opening of the £32m EIC facility in February 2019.

The next step is to fly Juno and conduct further tests over the next two months.

Next item, a new carbon material.

Schwarzite

I love watching this gif of a schwarzite,

The three-dimensional cage structure of a schwarzite that was formed inside the pores of a zeolite. (Graphics by Yongjin Lee and Efrem Braun)

An August 13, 2018 news item on Nanowerk announces the new carbon structure,

The discovery of buckyballs [also known as fullerenes, C60, or buckminsterfullerenes] surprised and delighted chemists in the 1980s, nanotubes jazzed physicists in the 1990s, and graphene charged up materials scientists in the 2000s, but one nanoscale carbon structure – a negatively curved surface called a schwarzite – has eluded everyone. Until now.

University of California, Berkeley [UC Berkeley], chemists have proved that three carbon structures recently created by scientists in South Korea and Japan are in fact the long-sought schwarzites, which researchers predict will have unique electrical and storage properties like those now being discovered in buckminsterfullerenes (buckyballs or fullerenes for short), nanotubes and graphene.

An August 13, 2018 UC Berkeley news release by Robert Sanders, which originated the news item, describes how the Berkeley scientists and the members of their international collaboration from Germany, Switzerland, Russia, and Italy have contributed to the current state of schwarzite research,

The new structures were built inside the pores of zeolites, crystalline forms of silicon dioxide – sand – more commonly used as water softeners in laundry detergents and to catalytically crack petroleum into gasoline. Called zeolite-templated carbons (ZTC), the structures were being investigated for possible interesting properties, though the creators were unaware of their identity as schwarzites, which theoretical chemists have worked on for decades.

Based on this theoretical work, chemists predict that schwarzites will have unique electronic, magnetic and optical properties that would make them useful as supercapacitors, battery electrodes and catalysts, and with large internal spaces ideal for gas storage and separation.

UC Berkeley postdoctoral fellow Efrem Braun and his colleagues identified these ZTC materials as schwarzites based on their negative curvature, and developed a way to predict which zeolites can be used to make schwarzites and which can’t.

“We now have the recipe for how to make these structures, which is important because, if we can make them, we can explore their behavior, which we are working hard to do now,” said Berend Smit, an adjunct professor of chemical and biomolecular engineering at UC Berkeley and an expert on porous materials such as zeolites and metal-organic frameworks.

Smit, the paper’s corresponding author, Braun and their colleagues in Switzerland, China, Germany, Italy and Russia will report their discovery this week in the journal Proceedings of the National Academy of Sciences. Smit is also a faculty scientist at Lawrence Berkeley National Laboratory.

Playing with carbon

Diamond and graphite are well-known three-dimensional crystalline arrangements of pure carbon, but carbon atoms can also form two-dimensional “crystals” — hexagonal arrangements patterned like chicken wire. Graphene is one such arrangement: a flat sheet of carbon atoms that is not only the strongest material on Earth, but also has a high electrical conductivity that makes it a promising component of electronic devices.


The cage structure of a schwarzite that was formed inside the pores of a zeolite. The zeolite is subsequently dissolved to release the new material. (Graphics by Yongjin Lee and Efrem Braun)

Graphene sheets can be wadded up to form soccer ball-shaped fullerenes – spherical carbon cages that can store molecules and are being used today to deliver drugs and genes into the body. Rolling graphene into a cylinder yields fullerenes called nanotubes, which are being explored today as highly conductive wires in electronics and storage vessels for gases like hydrogen and carbon dioxide. All of these are submicroscopic, 10,000 times smaller than the width of a human hair.

To date, however, only positively curved fullerenes and graphene, which has zero curvature, have been synthesized, feats rewarded by Nobel Prizes in 1996 and 2010, respectively.

In the 1880s, German physicist Hermann Schwarz investigated negatively curved structures that resemble soap-bubble surfaces, and when theoretical work on carbon cage molecules ramped up in the 1990s, Schwarz’s name became attached to the hypothetical negatively curved carbon sheets.

“The experimental validation of schwarzites thus completes the triumvirate of possible curvatures to graphene; positively curved, flat, and now negatively curved,” Braun added.

Minimize me

Like soap bubbles on wire frames, schwarzites are topologically minimal surfaces. When made inside a zeolite, a vapor of carbon-containing molecules is injected, allowing the carbon to assemble into a two-dimensional graphene-like sheet lining the walls of the pores in the zeolite. The surface is stretched tautly to minimize its area, which makes all the surfaces curve negatively, like a saddle. The zeolite is then dissolved, leaving behind the schwarzite.


A computer-rendered negatively curved soap bubble that exhibits the geometry of a carbon schwarzite. (Felix Knöppel image)
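For readers who want to see the ‘negative curvature’ rather than just read about it, here’s a minimal Python sketch (mine, not from the PNAS paper). It uses the standard nodal approximation to the Schwarz P surface, cos x + cos y + cos z = 0, samples points on that surface, and evaluates the Gaussian curvature of the implicit surface; the curvature comes out saddle-like (never positive), which is the defining geometric feature of a schwarzite-type surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points exactly on the nodal surface cos x + cos y + cos z = 0:
# pick x and y at random, then solve for a consistent z.
x = rng.uniform(0.0, 2.0 * np.pi, 200_000)
y = rng.uniform(0.0, 2.0 * np.pi, 200_000)
ok = np.abs(np.cos(x) + np.cos(y)) <= 1.0          # a valid cos(z) must exist
x, y = x[ok], y[ok]
z = np.arccos(-(np.cos(x) + np.cos(y)))

cx, cy, cz = np.cos(x), np.cos(y), np.cos(z)
sx2, sy2, sz2 = np.sin(x) ** 2, np.sin(y) ** 2, np.sin(z) ** 2

# Gaussian curvature of an implicit surface F = 0 (Goldman's formula),
# specialised to F = cos x + cos y + cos z, whose Hessian is diagonal.
numerator = sx2 * cy * cz + sy2 * cx * cz + sz2 * cx * cy
K = numerator / (sx2 + sy2 + sz2) ** 2

print(f"max  K = {K.max():+.2e}  (never positive, up to floating-point rounding)")
print(f"mean K = {K.mean():+.2e}  (strictly negative on average)")
```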

“These negatively-curved carbons have been very hard to synthesize on their own, but it turns out that you can grow the carbon film catalytically at the surface of a zeolite,” Braun said. “But the schwarzites synthesized to date have been made by choosing zeolite templates through trial and error. We provide very simple instructions you can follow to rationally make schwarzites and we show that, by choosing the right zeolite, you can tune schwarzites to optimize the properties you want.”

Researchers should be able to pack unusually large amounts of electrical charge into schwarzites, which would make them better capacitors than conventional ones used today in electronics. Their large interior volume would also allow storage of atoms and molecules, which is also being explored with fullerenes and nanotubes. And their large surface area, equivalent to the surface areas of the zeolites they’re grown in, could make them as versatile as zeolites for catalyzing reactions in the petroleum and natural gas industries.

Braun modeled ZTC structures computationally using the known structures of zeolites, and worked with topological mathematician Senja Barthel of the École Polytechnique Fédérale de Lausanne in Sion, Switzerland, to determine which of the minimal surfaces the structures resembled.

The team determined that, of the approximately 200 zeolites created to date, only 15 can be used as a template to make schwarzites, and only three of them have been used to date to produce schwarzite ZTCs. Over a million zeolite structures have been predicted, however, so there could be many more possible schwarzite carbon structures made using the zeolite-templating method.

Other co-authors of the paper are Yongjin Lee, Seyed Mohamad Moosavi and Barthel of the École Polytechnique Fédérale de Lausanne, Rocio Mercado of UC Berkeley, Igor Baburin of the Technische Universität Dresden in Germany and Davide Proserpio of the Università degli Studi di Milano in Italy and Samara State Technical University in Russia.

Here’s a link to and a citation for the paper,

Generating carbon schwarzites via zeolite-templating by Efrem Braun, Yongjin Lee, Seyed Mohamad Moosavi, Senja Barthel, Rocio Mercado, Igor A. Baburin, Davide M. Proserpio, and Berend Smit. PNAS August 14, 2018. 201805062; published ahead of print August 14, 2018. https://doi.org/10.1073/pnas.1805062115

This paper appears to be open access.

Periodic table of nanomaterials

This charming illustration is the only pictorial representation I’ve seen for Kyoto University’s (Japan) proposed periodic table of nanomaterials. (By the way, 2019 is UNESCO’s [United Nations Educational, Scientific and Cultural Organization] International Year of the Periodic Table of Elements, an event recognizing the table’s 150th anniversary. See my January 8, 2019 posting for information about more events.)

Caption: Molecules interact and align with each other as they self-assemble. This new simulation makes it possible to find which molecules interact best with each other to build nanomaterials, such as materials that work as nano electrical wires.
Credit: Illustration by Izumi Mindy Takamiya

A July 23, 2018 news item on Nanowerk announces the new periodic table (Note: A link has been removed),

The approach was developed by Daniel Packwood of Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS) and Taro Hitosugi of the Tokyo Institute of Technology (Nature Communications, “Materials informatics for self-assembly of functionalized organic precursors on metal surfaces”). It involves connecting the chemical properties of molecules with the nanostructures that form as a result of their interaction. A machine learning technique generates data that is then used to develop a diagram that categorizes different molecules according to the nano-sized shapes they form.

This approach could help materials scientists identify the appropriate molecules to use in order to synthesize target nanomaterials.

A July 23, 2018 Kyoto University press release on EurekAlert, which originated the news item, explains further about the computer simulations run by the scientists in pursuit of their specialized periodic table,

Fabricating nanomaterials using a bottom-up approach requires finding ‘precursor molecules’ that interact and align correctly with each other as they self-assemble. But it’s been a major challenge knowing how precursor molecules will interact and what shapes they will form.

Bottom-up fabrication of graphene nanoribbons is receiving much attention due to their potential use in electronics, tissue engineering, construction, and bio-imaging. One way to synthesise them is by using bianthracene precursor molecules that have bromine ‘functional’ groups attached to them. The bromine groups interact with a copper substrate to form nano-sized chains. When these chains are heated, they turn into graphene nanoribbons.

Packwood and Hitosugi tested their simulator using this method for building graphene nanoribbons.

Data was input into the model about the chemical properties of a variety of molecules that can be attached to bianthracene to ‘functionalize’ it and facilitate its interaction with copper. The data went through a series of processes that ultimately led to the formation of a ‘dendrogram’.

This showed that attaching hydrogen molecules to bianthracene led to the development of strong one-dimensional nano-chains. Fluorine, bromine, chlorine, amidogen, and vinyl functional groups led to the formation of moderately strong nano-chains. Trifluoromethyl and methyl functional groups led to the formation of weak one-dimensional islands of molecules, and hydroxide and aldehyde groups led to the formation of strong two-dimensional tile-shaped islands.

The information produced in the dendrogram changed based on the temperature data provided. The above categories apply when the interactions are conducted at -73°C. The results changed with warmer temperatures. The researchers recommend applying the data at low temperatures, where the effect of the functional groups’ chemical properties on nano-shapes is most clear.
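To make the ‘dendrogram’ idea a little more concrete, here’s a schematic Python sketch. It is not the authors’ code and the numbers are invented placeholders; only the functional-group names come from the press release. The point is simply to show how hierarchical clustering turns a table of molecular descriptors into a tree whose branches group the functional groups expected to produce similar nanostructures.

```python
# Schematic only: the feature values below are invented placeholders,
# not data from the Packwood & Hitosugi study.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

groups = ["H", "F", "Cl", "Br", "NH2", "vinyl", "CF3", "CH3", "OH", "CHO"]

# Hypothetical descriptors per group, e.g. (polarity, steric bulk, H-bonding tendency);
# in the real study such inputs would come from quantum-chemical calculations.
features = np.array([
    [0.1, 0.1, 0.0],  # H
    [0.9, 0.2, 0.1],  # F
    [0.7, 0.4, 0.1],  # Cl
    [0.6, 0.5, 0.1],  # Br
    [0.4, 0.4, 0.7],  # NH2 (amidogen)
    [0.2, 0.5, 0.0],  # vinyl
    [0.8, 0.7, 0.0],  # CF3
    [0.1, 0.4, 0.0],  # CH3
    [0.5, 0.2, 0.9],  # OH
    [0.5, 0.4, 0.8],  # CHO
])

Z = linkage(features, method="ward")              # agglomerative (hierarchical) clustering
labels = fcluster(Z, t=4, criterion="maxclust")   # cut the tree into 4 'shape' classes
for name, lab in zip(groups, labels):
    print(f"{name:>6s} -> cluster {lab}")

# scipy.cluster.hierarchy.dendrogram(Z, labels=groups) would draw the tree itself.
```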

The technique can be applied to other substrates and precursor molecules. The researchers describe their method as analogous to the periodic table of chemical elements, which groups atoms based on how they bond to each other. “However, in order to truly prove that the dendrograms or other informatics-based approaches can be as valuable to materials science as the periodic table, we must incorporate them in a real bottom-up nanomaterial fabrication experiment,” the researchers conclude in their study published in the journal Nature Communications. “We are currently pursuing this direction in our laboratories.”

Here’s a link to and a citation for the paper,

Materials informatics for self-assembly of functionalized organic precursors on metal surfaces by Daniel M. Packwood & Taro Hitosugi. Nature Communications, volume 9, Article number: 2469 (2018). DOI: https://doi.org/10.1038/s41467-018-04940-z Published 25 June 2018

This paper is open access.

Nanoparticle detection with whispers and bubbles

Caption: A magnified photograph of a glass Whispering Gallery Resonator. The bubble is extremely small, less than the width of a human hair. Credit: OIST (Okinawa Institute of Science and Technology Graduate University)

It was the reference to a whispering gallery which attracted my attention; a July 11, 2018 news item on Nanowerk is where I found it,

Technology created by researchers at the Okinawa Institute of Science and Technology Graduate University (OIST) [Japan] is literally shedding light on some of the smallest particles to detect their presence – and it’s made from tiny glass bubbles.

The technology has its roots in a peculiar physical phenomenon known as the “whispering gallery,” described by physicist Lord Rayleigh (John William Strutt) in 1878 and named after an acoustic effect inside the dome of St Paul’s Cathedral in London. Whispers made at one side of the circular gallery could be heard clearly at the opposite side. It happens because sound waves travel along the walls of the dome to the other side, and this effect can be replicated by light in a tiny glass sphere just a hair’s breadth wide called a Whispering Gallery Resonator (WGR).

A July 11, 2018 OIST press release by Andrew Scott (also on EurekAlert), provides more details,

When light is shined into the sphere, it bounces around and around the inner surface, creating an optical carousel. Photons bouncing along the interior of the tiny sphere can end up travelling for long distances, sometimes as far as 100 meters. But each time a photon bounces off the sphere’s surface, a small amount of light escapes. This leaking light creates a sort of aura around the sphere, known as an evanescent light field. When nanoparticles come within range of this field, they distort its wavelength, effectively changing its color. Monitoring these color changes allows scientists to use the WGRs as a sensor; previous research groups have used them to detect individual virus particles in solution, for example. But at OIST’s Light-Matter Interactions Unit, scientists saw they could improve on previous work and create even more sensitive designs. The study is published in Optica.
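As a rough illustration of why a single nanoparticle registers as a colour change, here’s a back-of-envelope Python sketch (my own, not taken from the OIST paper). Light resonates when a whole number m of wavelengths fits around the resonator’s rim, m·λ ≈ π·D·n_eff, so a tiny change in the effective refractive index n_eff caused by a particle sitting in the evanescent field shifts the resonance by Δλ/λ ≈ Δn_eff/n_eff. The diameter, index, and perturbation values below are illustrative guesses, not measurements.

```python
import math

D = 100e-6       # resonator diameter: ~100 microns, as in the press release
n_eff = 1.45     # assumed effective index for a silica shell (illustrative)
target = 780e-9  # probe wavelength near 780 nm (illustrative)

# Resonance condition: an integer number m of wavelengths around the rim.
m = round(math.pi * D * n_eff / target)
lam = math.pi * D * n_eff / m                 # nearest resonant wavelength

dn = 1e-6                                     # hypothetical index change from one bound particle
dlam = lam * dn / n_eff                       # first-order shift of the resonance

print(f"mode number     m ≈ {m}")
print(f"resonance         ≈ {lam * 1e9:.3f} nm")
print(f"resonance shift   ≈ {dlam * 1e15:.0f} fm for a change of {dn:g} in n_eff")
```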

Today, Dr. Jonathan Ward is using WGRs to detect minute particles more efficiently than ever before. The WGRs they have made are hollow glass bubbles rather than balls, explains Dr. Ward. “We heated a small glass tube with a laser and had air blown down it – it’s a lot like traditional glass blowing.” Blowing the air down the heated glass tube creates a spherical chamber that can support the sensitive light field. The most noticeable difference between a blown glass ornament and these precision instruments is the scale: the glass bubbles can be as small as 100 microns, a fraction of a millimeter in width. Their size makes them fragile to handle, but also malleable.

Working from theoretical models, Dr. Ward showed that they could increase the size of the light field by using a thin spherical shell (a bubble, in other words) instead of a solid sphere. A bigger field would increase the range in which particles can be detected, increasing the efficacy of the sensor. “We knew we had the techniques and the materials to fabricate the resonator”, said Dr. Ward. “Next we had to demonstrate that it could outperform the current types used for particle detection”.

To prove their concept, the team came up with a relatively simple test. The new bubble design was filled with a liquid solution containing tiny particles of polystyrene, and light was shined along a glass filament to generate a light field in its liquid interior. As particles passed within range of the light field, they produced noticeable shifts in the wavelength that were much more pronounced than those seen with a standard spherical WGR.

With a more effective tool now at their disposal, the next challenge for the team is to find applications for it. Learning what changes different materials make to the light field would allow Dr Ward to identify and target them, and even control their activity.

Despite their fragility, these new versions of WGRs are easy to manufacture and can be safely transported in custom-made cases. That means these sensors could be used in a wide variety of fields, such as testing for toxic molecules in water to detect pollution, or detecting blood-borne viruses in extremely rural areas where healthcare may be limited.

For Dr. Ward, however, there’s always room for improvement: “We’re always pushing to get even more sensitivity and find the smallest particle this sensor can detect. We want to push our detection to the physical limits.”

Here’s a link to and a citation for the paper,

Nanoparticle sensing beyond evanescent field interaction with a quasi-droplet microcavity by Jonathan M. Ward, Yong Yang, Fuchuan Lei, Xiao-Chong Yu, Yun-Feng Xiao, and Síle Nic Chormaic. Optica Vol. 5, Issue 6, pp. 674-677 (2018) https://doi.org/10.1364/OPTICA.5.000674

This paper is open access.

Nano-saturn

It’s a bit of a stretch but I really appreciate how the nanoscale (specifically a fullerene) is being paired with the second largest planet (the largest is Jupiter) in our solar system. (See Nola Taylor Redd’s November 14, 2012 article on space.com for more about the planet Saturn.)

From a June 8, 2018 news item on ScienceDaily,

Saturn is the second largest planet in our solar system and has a characteristic ring. Japanese researchers have now synthesized a molecular “nano-Saturn.” As the scientists report in the journal Angewandte Chemie, it consists of a spherical C(60) fullerene as the planet and a flat macrocycle made of six anthracene units as the ring. The structure is confirmed by spectroscopic and X-ray analyses.

A June 8, 2018  Wiley Publications press release (also on EurekAlert), which originated the news item, fills in some details,

Nano-Saturn systems with a spherical molecule and a macrocyclic ring have been a fascinating structural motif for researchers. The ring must have a rigid, circular form, and must hold the molecular sphere firmly in its midst. Fullerenes are ideal candidates for the nano-sphere. They are made of carbon atoms linked into a network of rings that form a hollow sphere. The most famous fullerene, C60, consists of 60 carbon atoms arranged into 5- and 6-membered rings like the leather patches of a classic soccer ball. The electrons in their double bonds, known as the π-electrons, are in a kind of “electron cloud”, able to freely move about and have binding interactions with other molecules, such as a macrocycle that also has a “cloud” of π-electrons. The attractive interactions between the electron clouds allow fullerenes to lodge in the cavities of such macrocycles.

A series of such complexes has previously been synthesized. Because of the positions of the electron clouds around the macrocycles, it was previously only possible to make rings that surround the fullerene like a belt or a tire. The ring around Saturn, however, is not like a “belt” or “tire”, it is a very flat disc. Researchers working at the Tokyo Institute of Technology and Okayama University of Science (Japan) wanted to properly imitate this at nanoscale.

Their success resulted from a different type of bonding between the “nano-planet” and its “nano-ring”. Instead of using the attraction between the π-electron clouds of the fullerene and the macrocycle, the team working with Shinji Toyota used the weak attractive interactions between the π-electron cloud of the fullerene and the non-π-electrons of the carbon-hydrogen groups of the macrocycle.

To construct their “Saturn ring”, the researchers chose to use anthracene units, molecules made of three aromatic six-membered carbon rings linked along their edges. They linked six of these units into a macrocycle whose cavity was the perfect size and shape for a C60 fullerene. Eighteen hydrogen atoms of the macrocycle project into the middle of the cavity. In total, their interactions with the fullerene are enough to give the complex enough stability, as shown by computer simulations. By using X-ray analysis and NMR spectroscopy, the team was able to prove experimentally that they had produced Saturn-shaped complexes.

Here’s an illustration of the ‘nano-saturn’,

Courtesy: Wiley Publications

Here’s a link to and a citation for the paper,

Nano‐Saturn: Experimental Evidence of Complex Formation of an Anthracene Cyclic Ring with C60 by Yuta Yamamoto, Dr. Eiji Tsurumaki, Prof. Dr. Kan Wakamatsu, Prof. Dr. Shinji Toyota. Angewandte Chemie https://doi.org/10.1002/anie.201804430 First published: 30 May 2018

This paper is behind a paywall.

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost it at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960’s sexual revolution in the US along with mention of a prior sexual revolution in the 1920s in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when, as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black and white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics, which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed, but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis and you can now find science in art galleries. (Not to mention the movies and television, where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what’s called ’emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technology.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He also describes a connection to current trends in biotechnology,

De-Extinction

In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit. As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle. But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book where he uses each film as a launching pad for a clear, readable description of relevant bits of science so you understand why the premise was likely, unlikely, or pure fantasy while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions such as: Should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional memoirish anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media, as one glorious breakthrough after the next.

Maynard does discuss the issues of social inequality and power and ownership. For example, who owns your transplant or data? Puzzlingly, he doesn’t touch on the current environment where scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here, scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the film’s future environment, with surveillance both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This, of course, points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time, and I’m hopeful there’ll be a next time, Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or, even, cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live-action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in The White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour but, since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’, discussed in Chapter Three, was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s a simple corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of the two chapters noting that there are cultural differences.

One area where I had a significant problem was with regard to being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, Sci-fi movies are the secret weapon that could help Silicon Valley grow up and a Nov. 21, 2018 article on slate.com, The True Cost of Stain-Resistant Pants; The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology. Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great percentage of scientists are academic. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

Robust reverse osmosis membranes made of carbon nanotubes

Caption: SEM images of MWCNT-PA (Multi-Walled Carbon Nanotube-Polyamide) nanocomposite membranes, for plain PA, and PA with 5, 9.5, 12.5, 15.5, 17 and 20 wt.% of MWCNT, where the typical lobe-like structures appear at the surface. Note the tendency towards a flatter membrane surface as the content of MWCNT increases. Scale bar corresponds to 1.0 μm for all the micrographs. Credit: Copyright 2018, Springer Nature, Licensed under CC BY 4.0

It seems unlikely that the image’s resemblance to a Japanese kimono on display is accidental. Either way, nicely done!

An April 12, 2018 news item on phys.org describes a technique that would allow large-scale water desalination,

A research team of Shinshu University, Japan, has developed robust reverse osmosis membranes that can endure large-scale water desalination. The team published their results in early February [2018] in Scientific Reports.

“Since more than 97 percent of the water in the world is saline water, reverse osmosis desalination plants for producing fresh water are increasingly important for providing a safe and consistent supply,” said Morinobu Endo, Ph.D., corresponding author on the paper. Endo is a distinguished professor of Shinshu University and the Honorary Director of the Institute of Carbon Science and Technology. “Even though reverse osmosis membrane technology has been under development for several decades, new threats like global warming and increasing clean water demand in populated urban centers challenge the conventional water supply systems.”

Reverse osmosis membranes typically consist of thin film composite systems, with an active layer of polymer film that restricts undesired substances, such as salt, from passing through a permeable porous substrate. Such membranes can turn seawater into drinkable water, as well as aid in agricultural and landscape irrigation, but they can be costly to operate and consume a large amount of energy.

To meet the demand for potable water at low cost, Endo says more robust membranes capable of withstanding harsh conditions, while remaining chemically stable to tolerate cleaning treatments, are necessary. The key lies in carbon nanotechnology.

An April 11, 2018 Shinshu University press release, which originated the news item, provides more details about the work,

Endo is a pioneer of carbon nanotubes [sic] synthesis by catalytic chemical vapor deposition. In this research, Endo and his team developed a multi-walled carbon nanotube-polyamide nanocomposite membrane, which is resistant to chlorine–one of the main causes of degradation or failure in reverse osmosis membranes. The added carbon nanotubes create a protective effect that stabilizes the linked molecules of the polyamide against chlorine.

“Carbon nanotechnology has been expected to bring benefits, and this is one promising example of the contribution of carbon nanotubes to a very critical application: water purification,” Endo said. “Carbon nanotubes and fibers are already superb reinforcements for other applications in materials science and engineering, and this is yet another field where their exceptional properties can be used for improving conventional technologies.”

The researchers are working to stabilize and expand the production and processing of multi-walled carbon nanotube-polyamide nanocomposite membranes.

“We are currently working on scaling up our method of synthesis, which, in principle, is based on the same method used to prepare current polyamide membranes,” Endo said. He also noted that his team is planning a collaboration to produce commercial membranes.

Here’s a link to and a citation for the paper,

Robust water desalination membranes against degradation using high loads of carbon nanotubes by J. Ortiz-Medina, S. Inukai, T. Araki, A. Morelos-Gomez, R. Cruz-Silva, K. Takeuchi, T. Noguchi, T. Kawaguchi, M. Terrones, & M. Endo. Scientific Reports volume 8, Article number: 2748 (2018) doi:10.1038/s41598-018-21192-5 Published online: 09 February 2018

This paper is open access.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), I’m following up with a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland, held from August 27 – 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with their makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title for Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga but from the pictures I’ve seen the designs are as good and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) is needed.
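
For readers curious about what “training the pix2pix neural net” actually involves, here’s a heavily simplified sketch of the pix2pix recipe. To be clear, this is my own toy illustration, not Barat’s code: a generator learns to turn an input image into an output image, a discriminator judges whether the (input, output) pair looks like a real example, and an L1 penalty keeps the output close to the training target. The tiny networks and the random tensors standing in for lookbook photos are placeholders.

```python
# Toy sketch of the pix2pix training recipe: a conditional GAN with an L1 term.
# The networks are deliberately tiny stand-ins for the real U-Net generator and
# PatchGAN discriminator, and random tensors stand in for paired fashion images.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):          # stand-in for pix2pix's U-Net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):      # stand-in for the PatchGAN critic
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))

    def forward(self, inp, out):         # judge the (input, output) pair jointly
        return self.net(torch.cat([inp, out], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

for step in range(100):                  # one image pair per step, for brevity
    src = torch.rand(1, 3, 64, 64)       # e.g. a garment silhouette or sketch
    target = torch.rand(1, 3, 64, 64)    # the matching lookbook photo

    # Discriminator update: real pair -> 1, generated pair -> 0
    fake = G(src).detach()
    real_pred, fake_pred = D(src, target), D(src, fake)
    d_loss = bce(real_pred, torch.ones_like(real_pred)) + \
             bce(fake_pred, torch.zeros_like(fake_pred))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator while staying close to the target
    fake = G(src)
    pred = D(src, fake)
    g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, target)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```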

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have been featured here many, many times before. The most recent is a March 27, 2017 posting about his and his android’s participation in the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
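
The press release doesn’t spell out how the backchanneling module is implemented, but the three qualities Kawahara lists (timing, lexical form, and prosody) map naturally onto a decision rule. Here’s a purely illustrative sketch, not the Kyoto/Osaka/ATR system; the thresholds, feature names, and canned phrases are invented for the example.

```python
# Illustrative backchannel decision: when to respond (timing), what to say
# (lexical form), and a crude stand-in for how (prosody). Thresholds invented.
import random

BACKCHANNELS = ["uh-huh", "mm-hm", "I see", "right"]

def choose_backchannel(pause_sec, pitch_slope, last_word, engaged=True):
    """Return (delay_sec, utterance) or None if the listener should stay quiet.

    pause_sec   -- silence since the speaker stopped talking (timing)
    pitch_slope -- pitch change over the last phrase; negative = falling (prosody)
    last_word   -- final word of the speaker's phrase (lexical form)
    """
    if pause_sec < 0.3:                  # speaker is probably still mid-phrase
        return None
    if pitch_slope < -0.5 and engaged:
        if random.random() < 0.3:
            # 'Attentive listening': partial repeat of the speaker's last word.
            return (0.2, last_word + "?")
        return (0.2, random.choice(BACKCHANNELS))
    if pause_sec > 1.5:
        # Long silence: ask an elaborating question to keep the dialogue going.
        return (0.0, "And then what happened?")
    return None

# Usage: feed it features extracted from the microphone array, frame by frame.
print(choose_backchannel(pause_sec=0.6, pitch_slope=-0.8, last_word="Kyoto"))
```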

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as a technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robot/artificial intelligence are often used interchangeably and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
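
To make the “keeps on modifying its algorithm based on the information provided” part of Wong’s definition a little more concrete, here’s a bare-bones toy example of machine learning (my own illustration, unrelated to any of the companies or institutes mentioned above): a two-parameter model repeatedly nudges its parameters to shrink its error on the data it’s shown.

```python
# Minimal machine learning: a model with two numbers (slope, intercept)
# adjusts those numbers after every pass over the data to reduce its error.
data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9), (4.0, 9.2)]  # (input, observed output)

slope, intercept = 0.0, 0.0      # the "algorithm" starts out knowing nothing
learning_rate = 0.01

for epoch in range(2000):
    grad_slope = grad_intercept = 0.0
    for x, y in data:
        error = (slope * x + intercept) - y
        grad_slope += error * x       # how the error changes if the slope changes
        grad_intercept += error
    # Modify the model in the direction that reduces the error.
    slope -= learning_rate * grad_slope / len(data)
    intercept -= learning_rate * grad_intercept / len(data)

print(f"learned: y ~ {slope:.2f} * x + {intercept:.2f}")  # roughly y = 2x + 1
```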

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

When nanoparticles collide

The science of collisions at the nanoscale, although it looks more like kissing to me, could lead to some helpful discoveries, according to an April 5, 2018 news item on Nanowerk,

Helmets that do a better job of preventing concussions and other brain injuries. Earphones that protect people from damaging noises. Devices that convert “junk” energy from airport runway vibrations into usable power.

New research on the events that occur when tiny specks of matter called nanoparticles smash into each other could one day inform the development of such technologies.

Before getting to the news release proper, here’s a gif released by the university,

A digital reconstruction shows how individual atoms in two largely spherical nanoparticles react when the nanoparticles collide in a vacuum. In the reconstruction, the atoms turn blue when they are in contact with the opposing nanoparticle. Credit: Yoichi Takato

An April 4, 2018 University at Buffalo news release (also on EurekAlert) by Charlotte Hsu, which originated the news item, fills in some details,

Using supercomputers, scientists led by the University at Buffalo modeled what happens when two nanoparticles collide in a vacuum. The team ran simulations for nanoparticles with three different surface geometries: those that are largely circular (with smooth exteriors); those with crystal facets; and those that possess sharp edges.

“Our goal was to lay out the forces that control energy transport at the nanoscale,” says study co-author Surajit Sen, PhD, professor of physics in UB’s College of Arts and Sciences. “When you have a tiny particle that’s 10, 20 or 50 atoms across, does it still behave the same way as larger particles, or grains? That’s the guts of the question we asked.”

“The guts of the answer,” Sen adds, “is yes and no.”

“Our research is useful because it builds the foundation for designing materials that either transmit or absorb energy in desired ways,” says first author Yoichi Takato, PhD. Takato, a physicist at AGC Asahi Glass and former postdoctoral scholar at the Okinawa Institute of Science and Technology in Japan, completed much of the study as a doctoral candidate in physics at UB. “For example, you could potentially make an ultrathin material that is energy absorbent. You could imagine that this would be practical for use in helmets and head gear that can help to prevent head and combat injuries.”

The study was published on March 21 in Proceedings of the Royal Society A by Takato, Sen and Michael E. Benson, who completed his portion of the work as an undergraduate physics student at UB. The scientists ran their simulations at the Center for Computational Research, UB’s academic supercomputing facility.

What happens when nanoparticles crash

The new research focused on small nanoparticles — those with diameters of 5 to 15 nanometers. The scientists found that in collisions, particles of this size behave differently depending on their shape.

For example, nanoparticles with crystal facets transfer energy well when they crash into each other, making them an ideal component of materials designed to harvest energy. When it comes to energy transport, these particles adhere to scientific norms that govern macroscopic linear systems — including chains of equal-sized masses with springs in between them — that are visible to the naked eye.

In contrast, nanoparticles that are rounder in shape, with amorphous surfaces, adhere to nonlinear force laws. This, in turn, means they may be especially useful for shock mitigation. When two spherical nanoparticles collide, energy dissipates around the initial point of contact on each one instead of propagating all the way through both. The scientists report that at crash velocities of about 30 meters per second, atoms within each particle shift only near the initial point of contact.

Nanoparticles with sharp edges are less predictable: according to the new study, their behavior when it comes to transporting energy varies depending on the sharpness of the edges.
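
For a feel for the linear versus nonlinear force law distinction the researchers draw, here’s a toy one-dimensional sketch (my own illustration, not the UB team’s supercomputer model): two particles collide head-on, and the repulsive force during contact is either proportional to the overlap (the spring-like, linear case) or to the overlap raised to the 3/2 power (a Hertzian, nonlinear contact law often used for rounded grains). All the numbers are arbitrary illustration values.

```python
# Toy 1D head-on collision of two equal-mass particles, integrated with
# velocity Verlet, comparing a linear spring contact with a Hertzian one.
import numpy as np

def collide(force_law, k=1e4, m=1.0, v0=30.0, radius=5.0, dt=1e-5, steps=40000):
    x = np.array([-6.0, 6.0])          # particle centres, just out of contact
    v = np.array([v0, -v0])            # approaching each other (arbitrary units)
    max_overlap = 0.0
    for _ in range(steps):
        overlap = max(0.0, 2 * radius - (x[1] - x[0]))
        f = force_law(k, overlap)      # repulsive force magnitude during contact
        a = np.array([-f, f]) / m
        x += v * dt + 0.5 * a * dt**2
        new_overlap = max(0.0, 2 * radius - (x[1] - x[0]))
        f_new = force_law(k, new_overlap)
        a_new = np.array([-f_new, f_new]) / m
        v += 0.5 * (a + a_new) * dt
        max_overlap = max(max_overlap, new_overlap)
    return max_overlap

linear = lambda k, d: k * d            # spring-like contact: force grows linearly
hertz = lambda k, d: k * d**1.5        # rounded (Hertzian) contact: overlap^(3/2)

print("max overlap, linear contact  :", collide(linear))
print("max overlap, Hertzian contact:", collide(hertz))
```
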
Designing a new generation of materials

“From a very broad perspective, the kind of work we’re doing has very exciting prospects,” Sen says. “It gives engineers fundamental information about nanoparticles that they didn’t have before. If you’re designing a new type of nanoparticle, you can now think about doing it in a way that takes into account what happens when you have very small nanoparticles interacting with each other.”

Though many scientists are working with nanotechnology, the way the tiniest of nanoparticles behave when they crash into each other is largely an open question, Takato says.

“When you’re designing a material, what size do you want the nanoparticle to be? How will you lay out the particles within the material? How compact do you want it to be? Our study can inform these decisions,” Takato says.

Here’s a link to and a citation for the paper,

Small nanoparticles, surface geometry and contact forces by Yoichi Takato, Michael E. Benson, Surajit Sen. Proceedings of the Royal Society A (Mathematical, Physical, and Engineering Sciences) Published 21 March 2018. DOI: 10.1098/rspa.2017.0723

This paper is behind a paywall.

The joys of an electronic ‘pill’: Could Canadian Olympic athletes’ training be hacked?

Lori Ewing (Canadian Press), in an August 3, 2018 article on the Canadian Broadcasting Corporation news website, heralds a new technology intended for the 2020 Olympics in Tokyo (Japan) but being tested now for the 2018 North American, Central American and Caribbean Athletics Association (NACAC) Track & Field Championships, known as Toronto 2018: Track & Field in the 6ix (Aug. 10-12, 2018) competition.

It’s described as a ‘computerized pill’ that will allow athletes to regulate their body temperature during competition or training workouts, from the August 3, 2018 article,

“We can take someone like Evan [Dunfee, a race walker], have him swallow the little pill, do a full four-hour workout, and then come back and download the whole thing, so we get from data core temperature every 30 seconds through that whole workout,” said Trent Stellingwerff, a sport scientist who works with Canada’s Olympic athletes.

“The two biggest factors of core temperature are obviously the outdoor humidex, heat and humidity, but also exercise intensity.”

Bluetooth technology allows Stellingwerff to gather immediate data with a handheld device — think a tricorder in “Star Trek.” The ingestible device also stores measurements for up to 16 hours when away from the monitor which can be wirelessly transmitted when back in range.

“That pill is going to change the way that we understand how the body responds to heat, because we just get so much information that wasn’t possible before,” Dunfee said. “Swallow a pill, after the race or after the training session, Trent will come up, and just hold the phone [emphasis mine] to your stomach and download all the information. It’s pretty crazy.”

First off, it’s probably not a pill or tablet but a gelcap and it sounds like the device is a wireless biosensor. As Ewing notes, the device collects data and transmits it.

Here’s how the French company, BodyCap, which supplies the technology, describes its product, from the company’s e-Celsius Performance webpage (assuming this is the product being used),

Continuous core body temperature measurement

Main applications are:

Risk reduction for people in extreme situations, such as elite athletes. During exercise in a hot environment, thermal stress is amplified by the external temperature and the environment’s humidity. The saturation of the body’s thermoregulation mechanism can quickly cause hyperthermia to levels that may cause nausea, fainting or death.

Performance optimisation for elite athletes. This ingestible pill leaves the user fully mobile. The device keeps a continuous record of temperature during training sessions, competition and during the recovery phase. The data can then be used to correlate thermoregulation with performances. This enables the development of customised training protocols for each athlete.

e-Celsius Performance® can be used for all sports, including water sports. Its application is best suited to sports that are physically intensive like football, rugby, cycling, long distance running, tennis or those that take place in environments with extreme temperature conditions, like diving or skiing.

e-Celsius Performance®, is a miniaturised ingestible electronic pill that wirelessly transmits a continuous measurement of gastrointestinal temperature. [emphasis mine]

The data are stored on a monitor called e-Viewer Performance®. This device [emphases mine] shows alerts if the measurement is outside the desired range. The activation box is used to turn the pill on from standby mode and connect the e-Celsius Performance pill with the monitor for data collection in either real time or by recovery from the internal memory of e-Celsius Performance®. Each monitor can be used with up to three pills at once to enable extended use.

The monitor’s interface allows the user to download data to a PC/ Mac for storage. The pill is safe, non-invasive and easy to use, leaving the gastric system after one or two days, [emphasis mine] depending on individual transit time.
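
BodyCap doesn’t publish its data format, so here’s a purely hypothetical sketch of the kind of post-workout processing Stellingwerff describes: a list of core-temperature readings taken every 30 seconds is downloaded from the monitor, then summarized, with any samples outside a target range flagged, much as the e-Viewer’s alerts are described as doing. The sample data, function names, and the 36.5–39.5 °C window are all invented for the example.

```python
# Hypothetical post-session processing of ingestible-sensor data:
# one core-temperature reading every 30 seconds, as described for e-Celsius.
from datetime import datetime, timedelta
import random

SAMPLE_INTERVAL = timedelta(seconds=30)
ALERT_RANGE = (36.5, 39.5)              # degrees C; illustrative thresholds only

def simulated_download(start, hours=4):
    """Stand-in for pulling stored readings off the monitor over Bluetooth."""
    n = int(hours * 3600 / SAMPLE_INTERVAL.total_seconds())  # 4 h -> 480 samples
    readings, t, temp = [], start, 37.0
    for _ in range(n):
        temp += random.uniform(-0.02, 0.04)   # slow drift upward during exercise
        readings.append((t, round(temp, 2)))
        t += SAMPLE_INTERVAL
    return readings

def summarize(readings):
    temps = [temp for _, temp in readings]
    alerts = [(ts, temp) for ts, temp in readings
              if not ALERT_RANGE[0] <= temp <= ALERT_RANGE[1]]
    return {
        "samples": len(readings),
        "min_C": min(temps),
        "max_C": max(temps),
        "mean_C": round(sum(temps) / len(temps), 2),
        "out_of_range": alerts[:5],     # first few flagged readings, if any
    }

session = simulated_download(datetime(2018, 8, 10, 9, 0))
print(summarize(session))
```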

I found Dunfee’s description mildly confusing but that can be traced to his mention of wireless transmission to a phone. Ewing describes a handheld device which is consistent with the company’s product description. There is no mention of the potential for hacking but I would hope Athletics Canada and BodyCap are keeping up with current concerns over hacking and interference (e.g., Facebook/Cambridge Analytica, Russians and the 2016 US election, Roberto Rocha’s Aug. 3, 2018 article for CBC titled: Data sheds light on how Russian Twitter trolls targeted Canadians, etc.).

Moving on, this type of technology was first featured here in a February 11, 2014 posting (scroll down to the gif where an electronic circuit dissolves in water) and again in a November 23, 2015 posting about wearable and ingestible technologies but this is the first real life application I’ve seen for it.

Coincidentally, an August 2, 2018 Frontiers [Publishing] news release on EurekAlert announced this piece of research (published in June 2018) questioning whether we need this much data and whether these devices work as promoted,

Wearable [and, in the future, ingestible?] devices are increasingly bought to track and measure health and sports performance: [emphasis mine] from the number of steps walked each day to a person’s metabolic efficiency, from the quality of brain function to the quantity of oxygen inhaled while asleep. But the truth is we know very little about how well these sensors and machines work [emphasis mine]– let alone whether they deliver useful information, according to a new review published in Frontiers in Physiology.

“Despite the fact that we live in an era of ‘big data,’ we know surprisingly little about the suitability or effectiveness of these devices,” says lead author Dr Jonathan Peake of the School of Biomedical Sciences and Institute of Health and Biomedical Innovation at the Queensland University of Technology in Australia. “Only five percent of these devices have been formally validated.”

The authors reviewed information on devices used both by everyday people desiring to keep track of their physical and psychological health and by athletes training to achieve certain performance levels. [emphases mine] The devices — ranging from so-called wrist trackers to smart garments and body sensors [emphasis mine] designed to track our body’s vital signs and responses to stress and environmental influences — fall into six categories:

  • devices for monitoring hydration status and metabolism
  • devices, garments and mobile applications for monitoring physical and psychological stress
  • wearable devices that provide physical biofeedback (e.g., muscle stimulation, haptic feedback)
  • devices that provide cognitive feedback and training
  • devices and applications for monitoring and promoting sleep
  • devices and applications for evaluating concussion

The authors investigated key issues, such as: what the technology claims to do; whether the technology has been independently validated against some recognized standards; whether the technology is reliable and what, if any, calibration is needed; and finally, whether the item is commercially available or still under development.

The authors say that technology developed for research purposes generally seems to be more credible than devices created purely for commercial reasons.

“What is critical to understand here is that while most of these technologies are not labeled as ‘medical devices’ per se, their very existence, let alone the accompanying marketing, conveys a sensibility that they can be used to measure a standard of health,” says Peake. “There are ethical issues with this assumption that need to be addressed.” [emphases mine]

For example, self-diagnosis based on self-gathered data could be inconsistent with clinical analysis based on a medical professional’s assessment. And just as body mass index charts of the past really only provided general guidelines and didn’t take into account a person’s genetic predisposition or athletic build, today’s technology is similarly limited.

The authors are particularly concerned about those technologies that seek to confirm or correlate whether someone has sustained or recovered from a concussion, whether from sports or military service.

“We have to be very careful here because there is so much variability,” says Peake. “The technology could be quite useful, but it can’t and should never replace assessment by a trained medical professional.”

Speaking generally again now, Peake says it is important to establish whether using wearable devices affects people’s knowledge and attitude about their own health and whether paying such close attention to our bodies could in fact create a harmful obsession with personal health, either for individuals using the devices, or for family members. Still, self-monitoring may reveal undiagnosed health problems, said Peake, although population data is more likely to point to false positives.

“What we do know is that we need to start studying these devices and the trends they are creating,” says Peake. “This is a booming industry.”

In fact, a March 2018 study by P&S Market Research indicates the wearable market is expected to generate $48.2 billion in revenue by 2023. That’s a mere five years into the future.

The authors highlight a number of areas for investigation in order to develop reasonable consumer policies around this growing industry. These include how rigorously the device/technology has been evaluated and the strength of evidence that the device/technology actually produces the desired outcomes.

“And I’ll add a final question: Is wearing a device that continuously tracks your body’s actions, your brain activity, and your metabolic function — then wirelessly transmits that data to either a cloud-based databank or some other storage — safe, for users? Will it help us improve our health?” asked Peake. “We need to ask these questions and research the answers.”

The authors were not examining ingestible biosensors nor were they examining any issues related to data about core temperatures but it would seem that some of the same issues could apply especially if and when this technology is brought to the consumer market.

Here’s a link to and a citation for the paper,

Critical Review of Consumer Wearables, Mobile Applications, and Equipment for Providing Biofeedback, Monitoring Stress, and Sleep in Physically Active Populations by Jonathan M. Peake, Graham Kerr, and John P. Sullivan. Front. Physiol., 28 June 2018 | https://doi.org/10.3389/fphys.2018.00743

This paper is open access.

A dance with love and fear: the Yoko Ono exhibit and the Takashi Murakami exhibit in Vancouver (Canada)

It seems Japanese artists are ‘having a moment’. There’s a documentary (Kusama—Infinity) about contemporary Japanese female artist, Yayoi Kusama, making the festival rounds this year (2018). Last year (2017), the British Museum mounted a major exhibition of Hokusai’s work (19th century) and, also in 2017, the Metropolitan Museum of Art Costume Institute benefit was inspired by a Japanese fashion designer, “Rei Kawakubo/Comme des Garçons: Art of the In-Between.” (A curator at the Japanese Garden in Portland who had lived in Japan for a number of years mentioned to me during an interview that the Japanese have one word for art. There is no linguistic separation between art and craft.)

More recently, both Yoko Ono and Takashi Murakami have had shows in Vancouver, Canada. Starting with fear as I prefer to end with love, Murakami had a blockbuster show at the Vancouver Art Gallery.

Takashi Murakami: a dance with fear (and money too)

In the introductory notes at the beginning of the exhibit, “Takashi Murakami: The Octopus Eats Its Own Leg,” it was noted that fear is one of Murakami’s themes. The first few pieces in the show had been made to look faded and brownish to the point where you had to work at seeing what was underneath the layers. The images were a little like horror films: something’s a bit awry, then scary, and you don’t know what it is or how to deal with it.

After those images, the show opened up to the bright, bouncy imagery commonly associated with Murakami’s work. However, if you look at them carefully, you’ll see many of these characters have big, pointed teeth. Also featured was a darkened room with two huge warriors. At a guess, I’d say they were 14 feet tall.

It made for a disconcerting show with its darker themes usually concealed in bright, vibrant colour. Here’s an image promoting Murakami’s Vancouver birthday celebration and exhibit opening,

‘Give me the money, now!’ says a gleeful Takashi Murakami, whose expansive show is currently at the Vancouver Art Gallery. Photo by the VAG. [downloaded from https://thetyee.ca/Culture/2018/02/07/Takashi-Murakami-VAG/]

The colours and artwork shown in the marketing materials (I’m including the wrapping on the gallery itself) were exuberant, as was Murakami, who acted as his own marketing material. I’m mentioning the money because it’s very intimately and blatantly linked to Murakami’s art and work. Dorothy Woodend in a Feb. 7, 2018 article for The Tyee puts it this way (Note: Links have been removed),

The close, almost incestuous relationship between art and money is a very old story. [emphasis mine] You might even say it is the only story at the moment.

You can know this, understand it to a certain extent, and still have it rear up and bite you on the bum. [emphasis mine] Such was my experience of attending the exhibition preview of Takashi Murakami’s The Octopus Eats Its Own Leg at the Vancouver Art Gallery.

The show is the first major retrospective of Murakami’s work in Canada, and the VAG has spared no expense in marketing the living hell out of the thing. From the massive cephalopod installed atop the dome of the gallery, to the ocean of smiling cartoon flowers, to the posters papering every inch of downtown Vancouver, it is in a word: huge.

If you don’t know much about Murakami the show is illuminating, in many different ways. Expansive in extremis, the exhibition includes more than 50 works that trace a path through the evolution of Murakami’s style and aesthetic, moving from his early dark textural paintings that blatantly ripped off Anselm Kiefer, to his later pop-art style (Superflat), familiar from Kanye West albums and Louis Vuitton handbags.

make no mistake, money runs underneath the VAG show like an engine [emphasis mine]. You can feel it in the air, thrumming with a strange radioactive current, like a heat mirage coming off the people madly snapping selfies next to the Kanye Bear sculpture.

The artist himself seems particularly aware of how much of a financial edifice surrounds the human impulse to make images. In an on-stage interview with senior VAG [Vancouver Art Gallery] curator Bruce Grenville during a media preview for the show, Murakami spoke plainly about the need for survival (a.k.a. money) [emphasis mine] that has propelled his career.

Even the title of the show speaks to the notion of survival (from Woodend’s article; Note: Links have been removed),

The title of the show takes inspiration from Japanese folklore about a creature that sacrifices part of its own body so that the greater whole might survive. In the natural world, an octopus will chew off its own leg if there is an infection, and then regrow the missing limb. In the art world, the idea pertains to the practice of regurgitating (recycling) old ideas to serve the endless voracious demand for new stuff. “I don’t have the talent to come up with new ideas, so in order to survive, you have to eat your own body,” Murakami explains, citing his need for deadlines, and very bad economic conditions, that lead to a state of almost Dostoyevskyian desperation. “Please give me the money now!” he yells, and the assembled press laughs on cue.

The artist’s responsibility to address larger issues like gender, politics and the environment was the final question posed during the Q&A, before the media were allowed into the gallery to see the work. Murakami took his time before answering, speaking through the nice female translator beside him. “Artists don’t have that much power in the world, but they can speak to the audience of the future, who look at the artwork from a certain era, like Goya paintings, and see not just social commentary, but an artistic point of view. The job of the artist is to dig deep into human beings.”

Which is a nice sentiment to be sure, but increasingly art is about celebrity and profit. Record-breaking shows like Alexander McQueen’s Savage Beauty and Rei Kawakubo/Comme des Garçons: Art of the In-Between demonstrated an easy appeal for both audiences and corporations. One of Murakami’s earlier exhibitions featured a Louis Vuitton pop-up shop as part of the show. Closer to home, the Fight for Beauty exhibit mixed fashion, art and development in a decidedly queasy-making mixture.

There is money to be made in culture of a certain scale, with scale being the operative word. Get big or get out.

Woodend also relates the show and some of the issues it raises to the local scene (Note: Links have been removed),

A recent article in the Vancouver Courier about the Oakridge redevelopment plans highlighted the relationship between development and culture in raw numbers: “1,000,000 square feet of retail, 2,600 homes for 6,000 people, office space for 3,000 workers, a 100,000-square-foot community centre and daycare, the city’s second-largest library, a performing arts academy, a live music venue for 3,000 people and the largest public art program in Vancouver’s history…”

Westbank’s Ian Gillespie [who hosted the Fight for Beauty exhibit] was quoted extensively, outlining the integration between the city and the developer. “The development team will also work with the city’s chief librarian to figure out the future of the library, while the 3,000-seat music venue will create an ‘incredible music scene.’” The term “cultural hub” also pops up so many times it’s almost funny, in a horrifying kind of way.

But bigness often squeezes out artists and musicians who simply can’t compete. Folk who can’t fill a 3,000-seat venue, or pack in thousands of visitors, like the Murakami show, are out of luck.

Vancouver artists, who struggle to survive in the city and have done so for quite some time, were singularly unimpressed with the Oakridge development proposal. Selina Crammond, a local musician and all-around firebrand, summed up the divide in a few eloquent sentences: “I mean really, who is going to make up this ‘incredible music scene’ and fill all of these shiny new venues? Many of my favourite local musicians have already moved away from Vancouver because they just can’t make it work. Who’s going to pay the musicians and workers? Who’s going to pay the large ticket prices to be able to maintain these spaces? I don’t think space is the problem. I think affordability and distribution of wealth and funding are the problems artists and arts workers are facing.”

The stories continue to pop up, the most recent being the possible sale and redevelopment of the Rio Theatre. The news sparked an outpouring of anger, but the story is repeated so often in Vancouver, it has become something of a cliché. You need only to look at the story of the Hollywood Theatre for a likely ending to the saga.

Which brings me back around to the Murakami exhibit. To be perfectly frank, the show is incredible and well-worth visiting. I enjoyed every minute of wandering through it taking in the sheer expanse of mind-boggling, googly-eyed detail. I would urge you to attend, if you can afford it. But there’s the rub. I was there for free, and general admission to the VAG is $22.86. This may not seem like a lot, but in a city where people can barely make rent, culture becomes the purview of them that can afford it.

The City of Vancouver recently launched its Creative Cities initiative to look at issues of affordability, diversity and gentrification.

We shall see if anything real emerges from the process. But in the meantime, Vancouver artists might have to eat their own legs simply to survive. [Tyee]

Survival issues and their intimate companions, fear, are clearly a major focus for Murakami’s art.

For the curious, the Vancouver version of the Murakami retrospective show was held from February 3 – May 6, 2018. There are still some materials about the show available online here.

Yoko Ono and the power of love (and maybe money, too)

More or less concurrently with the Murakami exhibition, the Rennie Museum (formerly Rennie Collection), came back from a several month hiatus to host a show featuring Yoko Ono’s “Mend Piece.”

From a Rennie Museum (undated) press release,

Rennie Museum is pleased to present Yoko Ono’s MEND PIECE, Andrea Rosen Gallery, New York City version (1966/2015). Illustrating Ono’s long standing artistic quest in social activism and world peace, this instructional work will transform the historic Wing Sang building into an intimate space for creative expression and bring people together in an act of collective healing and meditation. The installation will run from March 1 to April 15, 2018.

First conceptualized in 1966, the work immerses the visitor in a dream-like state. Viewers enter into an all-white space and are welcomed to take a seat at the table to reassemble fragments of ceramic coffee cups and saucers using the provided twine, tape, and glue. Akin to the Japanese philosophy of Wabi-sabi, an embracing of the flawed or imperfect, Mend Piece encourages the participant to transform broken fragments into an object that prevails its own violent rupture. The mended pieces are then displayed on shelves installed around the room. The contemplative act of mending is intended to promote reparation starting within one’s self and community, and bridge the gap created by violence, hatred, and war. In the words of Yoko Ono herself, “Mend with wisdom, mend with love. It will mend the earth at the same time.”

The installation of MEND PIECE, Andrea Rosen Gallery, New York City version at Rennie Museum will be accompanied by an espresso bar, furthering the notions of community and togetherness.

Yoko Ono (b. 1933) is a Japanese conceptual artist, musician, and peace activist pioneering feminism and Fluxus art. Her eclectic oeuvre of performance art, paintings, sculptures, films and sound works have been shown at renowned institutions worldwide, with recent exhibitions at The Museum of Modern Art, New York; Copenhagen Contemporary, Copenhagen; Museum of Contemporary Art, Tokyo; and Museo de Arte Latinoamericano de Buenos Aires. She is the recipient of the 2005 IMAJINE Lifetime Achievement Award and the 2009 Venice Biennale Golden Lion for Lifetime Achievement, among other distinctions. She lives and works in New York City.

While most of the shows have taken place over two, three, or four floors, “Mend Piece” was on the main floor only,

Courtesy: Rennie Museum

There was another “Mend Piece” in Canada, located at the Gardiner Museum and part of a larger show titled: “The Riverbed,” which ran from February 22 to June 3, 2018. Here’s an image of one of the Gardiner Museum “Mend” pieces that was featured in a March 7, 2018 article by Sonya Davidson for the Toronto Guardian,

Yoko Ono, Mend Piece, 1966 / 2018, © Yoko Ono. Photo: Tara Fillion Courtesy: Toronto Guardian

Here’s what Davidson had to say about the three-part installation, “The Riverbed,”

I’m sitting  on one of the cushions placed on the floor watching the steady stream of visitors at Yoko Ono’s exhibition The Riverbed at the Gardiner Museum. The room is airy and bright but void of  colours yet it’s vibrant and alive in a calming way. There are three distinct areas in this exhibition: Stone Piece, Line Piece and Mend Piece. From what I’ve experienced in Ono’s previous exhibitions, her work encourages participation and is inclusive of everyone. She has the idea. She encourages us to  go collaborate with her. Her work is describe often as  redirecting our attention to ideas, instead of appearances.

Mend Piece is the one I’m most familiar with. It was part of her exhibition I visited in Reykjavik [Iceland]. Two large communal tables are filled with broken ceramic pieces and mending elements. Think glue, string, and tape.  Instructions from Ono once again are simple but with meaning. Take the pieces that resonate with you and mend them as you desire. You’re encourage [sic] to leave it in the communal space for everyone to experience what you’ve experienced. It reminded me of her work decades ago where she shattered porcelain vases, and people invited people to take a piece with them. But then years later she collected as many back and mended them herself. Part contemporary with a nod to the traditional Japanese art form of Kintsugi – fixing broken pottery with gold and the philosophy of nothing is ever truly broken. The repairs made are part of the history and should be embraced with honour and pride.

The experience at the Rennie was markedly different. I recommend reading Davidson’s piece (which includes many embedded images) in its entirety to get a sense of just how different, along with this April 7, 2018 article by Jenna Moon for The Star regarding the theft of a stone from The Riverbed show at the Gardiner,

A rock bearing Yoko Ono’s handwriting has been stolen from the Gardiner Museum, Toronto police say. The theft reportedly occurred around 5:30 p.m. on March 12.

The rock is part of an art exhibit featuring Ono, where patrons can meditate using several river rocks. The stone is inscribed with black ink, and reads “love yourself” in block letters. It is valued at $17,500 (U.S.), [emphasis mine] Toronto police media officer Gary Long told the Star Friday evening.

As far as I can tell, they still haven’t found the suspect, who was described as a woman between the ages of 55 and 60. However, the question that most interests me is how they arrived at a value for the stone. Was it a case of assigning a value to the part of the installation with the stones and dividing that value by the number of stones? Yoko Ono may focus her art on social activism and peace, but she too needs money to survive. Moving on.

Musings on ‘mend’

Participating in “Mend Piece” at the Rennie Museum was revelatory. It was a direct experience of the “traditional Japanese art form of Kintsugi – fixing broken pottery with gold and the philosophy of nothing is ever truly broken.” So often art is at best a tertiary experience for the viewer. The artist has the primary experience producing the work and the curator has the secondary experience of putting the show together.

For all the talk about interactive installations and pieces, there are few that truly engage the viewer with the piece. I find this rule applies: the more technology, the less interactivity.

“Mend” insisted on interactivity. More or less. I went with a friend and sat beside the one person in the group who didn’t want to talk to anyone. And she wasn’t just quiet; you could feel the “don’t talk to me” vibrations pouring from every one of her body parts.

The mending sessions were about 30 minutes long and, as Davidson notes, you had string, two types of glue, and twine. For someone with any kind of perfectionist tendencies (me) and a lack of crafting skills (me), it proved to be a bit of a challenge, especially with a semi-hostile person beside me. Thank goodness my friend was on the other side.

Adding to my travails was the gallery assistant (a local art student) who got very anxious and hovered over me as I attempted and failed to set my piece on a ledge in the room (twice). She was very nice and happy to share, without being intrusive, information about Yoko Ono and her work while we were constructing our pieces. I’m not sure what she thought was going to happen when I started dropping things, but her hovering brought back memories of my adolescence when shopkeepers would follow me around their store.

Most of my group had finished and even though there was still time in my session, the next group rushed in and took my seat while I failed for the second time to place my piece. I stood for my third (and thankfully successful) repair attempt.

At that point I went to the back where more of the “Mend” communal experience awaited. Unfortunately, the espresso machine at the coffee bar (set up especially for the show) was not working. There was some poetry on the walls and a video highlighting Yoko Ono’s work over the years, and the coffee bar attendant was eager to share (but not intrusively so) some information about Yoko and her work.

As I stated earlier, it was a revelatory experience. First, it turned out my friend had been following Yoko’s work since before the artist had hooked up with John Lennon, and she was able to add details to the attendants’ comments.

Second, what I didn’t expect was a confrontation with the shards of my past and personality. In essence, mending myself and, hopefully, more. There was my perfectionism, rejection by the unfriendly tablemate, my emotional response (unspoken) to the hypervigilant gallery assistant, having my seat taken from me before the time was up, and the disappointment of the coffee bar. There was also a rediscovery of my friend, a friendly tablemate who made a beautiful object (it looked like a bird), the helpfulness of both the gallery assistants, Yoko Ono’s poetry, and a documentary about the remarkable Yoko.

All in all, it was a perfect reflection of imperfection (wabi-sabi), brokenness, and wounding in the context of repair (Kintsugi)/healing.

Thank you, Yoko Ono.

For anyone in Vancouver who feels they missed out on the experience, there are some performances of “Perfect Imperfections: The Art of a Messy Life” (comedy, dance, and live music) at Vancity Culture Lab at The Cultch from June 14 – 16, 2018. You can find out more here.

The moment

It certainly seems as if there’s a great interest in Japanese art, if you live in Vancouver (Canada), anyway. The Murakami show was a huge success for the Vancouver Art Gallery. As for Yoko Ono, the Rennie Museum extended the exhibit dates due to demand. Plus, the 2018 – 2020 version of the Vancouver Biennale is featuring (from a May 29, 2018 Vancouver Biennale news release),

… Yoko Ono with its 2018 Distinguished Artist Award, a recognition that coincides with reissuing the acclaimed artist’s 2007 Biennale installation, “IMAGINE PEACE,” marshalled at this critical time to re-inspire a global consciousness towards unity, harmony, and accord. Yoko Ono’s project exemplifies the Vancouver Biennale’s mission for diverse communities to gain access, visibility and representation.

The British Museum’s show (May 25 – August 13, 2017), “Hokusai’s Great Wave,” was seen in Vancouver at a special preview event in May 2017 at a local movie house, which was packed.

The documentary film festival, DOXA (Vancouver) closed its 2018 iteration with the documentary about Yayoi Kusama. Here’s more about her from a May 9, 2018 article by Janet Smith for the Georgia Straight,

Amid all the dizzying, looped-and-dotted works that American director Heather Lenz has managed to capture in her new documentary Kusama—Infinity, perhaps nothing stands out so much as images of the artist today in her Shinjuku studio.

Interviewed in the film, the 89-year-old Yayoi Kusama sports a signature scarlet bobbed anime wig and hot-pink polka-dotted dress, sitting with her marker at a drawing table, and set against the recent creations on her wall—a sea of black-and-white spots and jaggedy lines.

“The boundary between Yayoi Kusama and her art is not very great,” Lenz tells the Straight from her home in Orange County. “They are one and the same.”

It was as a young student majoring in art history and fine art that Lenz was first drawn to Kusama—who stood out as one of few female artists in her textbooks. She saw an underappreciated talent whose avant-pop works anticipated Andy Warhol and others. And as Lenz dug deeper into the artist’s story, she found a woman whose struggles with a difficult childhood and mental illness made her achievements all the more remarkable.

Today, Kusama is one of the world’s most celebrated female artists, her kaleidoscopic, multiroom show Infinity Mirrors drawing throngs of visitors to galleries like the Art Gallery of Ontario and the Seattle Art Museum over the past year. But when Lenz set out to make her film 17 long years ago, few had ever heard of Kusama.

I am hopeful that this is a sign that the Vancouver art scene is focusing more attention to the west, to Asia. Quite frankly, it’s about time.

As a special treat, here’s a ‘Yoko Ono tribute’ from the Barenaked Ladies,

Dance!