The structural colo(u)r stories I’ve posted previously identify nanostructures as the reason why certain animals and plants display a particular set of optical properties, colours that can’t be obtained by pigment or dye. However, the Steller’s jay structural colour story is a little different.
A research team led by Nagoya University [Japan] mimics the rich color of bird plumage and demonstrates new ways to control how light interacts with materials.
Bright colors in the natural world often result from tiny structures in feathers or wings that change the way light behaves when it’s reflected. So-called “structural color” is responsible for the vivid hues of birds and butterflies. Artificially harnessing this effect could allow us to engineer new materials for applications such as solar cells and chameleon-like adaptive camouflage.
Inspired by the deep blue coloration of the Steller’s jay, a bird native to North America, a team at Nagoya University reproduced the color in their lab, giving rise to a new type of artificial pigment. This development was reported in Advanced Materials.
“The Steller’s jay’s feathers provide an excellent example of angle-independent structural color,” says last author Yukikazu Takeoka. “This color is enhanced by dark materials, which in this case can be attributed to black melanin particles in the feathers.”
In most cases, structural colors appear to change when viewed from different perspectives. For example, imagine the way that the colors on the underside of a CD appear to shift when the disc is viewed from a different angle. What makes the Steller’s jay’s blue different is that the structures that interfere with light sit on top of black particles that absorb part of this light. As a result, the color of the Steller’s jay does not change, however you look at it.
The team used a “layer-by-layer” approach to build up films of fine particles that recreated the microscopic sponge-like texture and black backing particles of the bird’s feathers.
To mimic the feathers, the researchers covered microscopic black core particles with layers of even smaller transparent particles, to make raspberry-like particles. The size of the core and the thickness of the layers controlled the color and saturation of the resulting pigments. Importantly, the color of these particles did not change with viewing angle.
“Our work represents a much more efficient way to design artificially produced angle-independent structural colors,” Takeoka adds. “We still have much to learn from biological systems, but if we can understand and successfully apply these phenomena, a whole range of new metamaterials will be accessible for all kinds of advanced applications where interactions with light are important.”
Ordinarily, I’d expect to see the term ‘nano’ somewhere in the press release or in the abstract, but that’s not the case here. The best I could find was a reference to ‘submicrometer-sized .. particles’ in the abstract. I suppose that could refer to the nanoscale, but given that a Japanese researcher (Norio Taniguchi, in 1974) coined the term ‘nanotechnology’ to describe research at that scale, it seems unlikely that Japanese researchers some forty years later wouldn’t use the term when appropriate.
A May 3, 2017 news item on Nanowerk announces research from Japan on using tumor suppressor proteins to control nanostructures,
A new method combining tumor suppressor protein p53 and biomineralization peptide BMPep successfully created hexagonal silver nanoplates, suggesting an efficient strategy for controlling the nanostructure of inorganic materials.
Precise control of nanostructures is a key factor to form functional nanomaterials. Biomimetic approaches are considered effective for fabricating nanomaterials because biomolecules are able to bind with specific targets, self-assemble, and build complex structures. Oligomerization, or the assembly of biomolecules, is a crucial aspect of natural materials that form higher-ordered structures.
Some peptides are known to bind with a specific inorganic substance, such as silver, and enhance its crystal formation. This phenomenon, called peptide-mediated biomineralization, could be used as a biomimetic approach to create functional inorganic structures. Controlling the spatial orientation of the peptides could yield complex inorganic structures, but this has long been a great challenge.
A team of researchers led by Hokkaido University Professor Kazuyasu Sakaguchi has succeeded in controlling the oligomerization of the silver biomineralization peptide (BMPep) which led to the creation of hexagonal silver nanoplates.
The team utilized the well-known tumor suppressor protein p53 which has been known to form tetramers through its tetramerization domain (p53Tet). “The unique symmetry of the p53 tetramer is an attractive scaffold to be used in controlling the overall oligomerization state of the silver BMPep such as its spatial orientation, geometry, and valency,” says Sakaguchi.
In the experiments, the team successfully created silver BMPep fused with p53Tet. This resulted in the formation of BMPep tetramers which yielded hexagonal silver nanoplates. They also found that the BMPep tetramers have enhanced specificity to the structured silver surface, apparently regulating the direction of crystal growth to form hexagonal nanoplates. Furthermore, the tetrameric peptide acted as a catalyst, controlling the silver’s crystal growth without consuming the peptide.
“Our novel method can be applied to other biomineralization peptides and oligomerization proteins, thus providing an efficient and versatile strategy for controlling nanostructures of various inorganic materials. The production of tailor-made nanomaterials is now more feasible,” Sakaguchi commented.
(Left panels) Schematic illustrations of monomeric and tetrameric biomineralization peptides fused with p53Tet and electron microscopy images of silver nanostructures formed by the biomineralization peptides. Scale bar = 100 nm. (Right) The proposed model in which tetrameric biomineralization peptides regulate the direction of crystal growth and therefore its nanostructure.
The Nanocar Race (which at one point was the NanoCar Race) took place on April 28 -29, 2017 in Toulouse, France. Presumably the fall 2016 race did not take place (as I had reported in my May 26, 2016 posting). A March 23, 2017 news item on ScienceDaily gave the latest news about the race,
Nanocars will compete for the first time ever during an international molecule-car race on April 28-29, 2017 in Toulouse (south-western France). The vehicles, which consist of a few hundred atoms, will be powered by minute electrical pulses during the 36 hours of the race, in which they must navigate a racecourse made of gold atoms, and measuring a maximum of 100 nanometers in length. They will square off beneath the four tips of a unique microscope located at the CNRS’s Centre d’élaboration de matériaux et d’études structurales (CEMES) in Toulouse. The race, which was organized by the CNRS, is first and foremost a scientific and technological challenge, and will be broadcast live on the YouTube Nanocar Race channel. Beyond the competition, the overarching objective is to advance research in the observation and control of molecule-machines.
More than just a competition, the Nanocar Race is an international scientific experiment that will be conducted in real time, with the aim of testing the performance of molecule-machines and the scientific instruments used to control them. The years ahead will probably see the use of such molecular machinery — activated individually or in synchronized fashion — in the manufacture of common machines: atom-by-atom construction of electronic circuits, atom-by-atom deconstruction of industrial waste, capture of energy…The Nanocar Race is therefore a unique opportunity for researchers to implement cutting-edge techniques for the simultaneous observation and independent maneuvering of such nano-machines.
The experiment began in 2013 as part of an overview of nano-machine research for a scientific journal, when the idea for a car race took shape in the minds of CNRS senior researcher Christian Joachim (now the director of the race) and Gwénaël Rapenne, a Professor of chemistry at Université Toulouse III — Paul Sabatier. …
An April 19, 2017 article by Davide Castelvecchi for Nature (magazine) provided more detail about the race (Note: Links have been removed),
The term nanocar is actually a misnomer, because the molecules involved in this race have no motors. (Future races may incorporate them, Joachim says.) And it is not clear whether the molecules will even roll along like wagons: a few designs might, but many lack axles and wheels. Drivers will use electrons from the tip of a scanning tunnelling microscope (STM) to help jolt their molecules along, typically by just 0.3 nano-metres each time — making 100 nanometres “a pretty long distance”, notes physicist Leonhard Grill of the University of Graz, Austria, who co-leads a US–Austrian team in the race.
Contestants are not allowed to directly push on their molecules with the STM tip. Some teams have designed their molecules so that the incoming electrons raise their energy states, causing vibrations or changes to molecular structures that jolt the racers along. Others expect electrostatic repulsion from the electrons to be the main driving force. Waka Nakanishi, an organic chemist at the National Institute for Materials Science in Tsukuba, Japan, has designed a nanocar with two sets of ‘flaps’ that are intended to flutter like butterfly wings when the molecule is energized by the STM tip (see ‘Molecular race’). Part of the reason for entering the race, she says, was to gain access to the Toulouse lab’s state-of-the-art STM to better understand the molecule’s behaviour.
Eric Masson, a chemist at Ohio University in Athens, hopes to find out whether the ‘wheels’ (pumpkin-shaped groups of atoms) of his team’s car will roll on the surface or simply slide. “We want to better understand the nature of the interaction between the molecule and the surface,” says Masson.
Adapted from www.nanocar-race.cnrs.fr
Simply watching the race progress is half the battle. After each attempted jolt, teams will take three minutes to scan their race track with the STM, and after each hour they will produce a short animation that will immediately be posted online. That way, says Joachim, everyone will be able to see the race streamed almost live.
The Toulouse laboratory has an unusual STM with four scanning tips — most have only one — that will allow four teams to race at the same time, each on a different section of the gold surface. Six teams will compete this week to qualify for one of the four spots; the final race will begin on 28 April at 11 a.m. local time. The competitors will face many obstacles during the contest. Individual molecules in the race will often be lost or get stuck, and the trickiest part may be to negotiate the two turns in the track, Joachim says. He thinks the racers may require multiple restarts to cover the distance.
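The figures quoted above invite a rough back-of-the-envelope calculation. This is my own idealized sketch (it assumes every pulse advances the molecule and every attempt is followed by a full three-minute scan), not the organizers’ arithmetic:

```python
import math

# Back-of-the-envelope race arithmetic from the figures quoted above
# (idealized: assumes every electrical pulse advances the molecule).
track_length_nm = 100   # maximum track length
step_nm = 0.3           # typical advance per pulse
race_hours = 36
scan_minutes = 3        # STM scan after each attempted jolt

min_jolts = math.ceil(track_length_nm / step_nm)
max_attempts = race_hours * 60 // scan_minutes

print(f"minimum successful jolts to finish: {min_jolts}")    # 334
print(f"attempts the 3-minute scans allow:  {max_attempts}")  # 720
```

Even in this best case, nearly half of all attempts would have to succeed just to cover the distance, which helps explain why Joachim expects the racers to need multiple restarts.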
For anyone who wants more information, go to the Nanocar Race website. There is also a highlights video,
Published on Apr 29, 2017
The best moments of the first-ever international race of molecule-cars.
An April 3, 2017 news item on Nanowerk describes Japanese research into a new technique for producing MOFs (metal–organic frameworks),
Osaka-based researchers developed a new method to create films of porous metal–organic frameworks fully aligned on inorganic substrates. The method is simple, requiring only that the substrate and an organic linker are mixed under mild conditions, and fast, producing perfectly aligned films within minutes. The films oriented fluorescent dye molecules within their pores, and the fluorescence response of these dyes was switched on or off simply by rotating the material in polarized light.
Metal–organic frameworks, or MOFs, are highly ordered crystalline structures made of metal ion nodes and organic molecule linkers. Many MOFs can take up and store gases, such as carbon dioxide or hydrogen, thanks to their porous, sponge-like structures.
MOFs are also potential chemical sensors. They can be designed to change color or display another optical signal if a particular molecule is taken up into the framework. However, most studies on MOFs are performed on tiny single crystals, which is not practical for the commercial development of these materials.
Chemists have now come a step closer to making commercially viable sensors that contain highly ordered MOFs, thanks to the collaboration of an international team of researchers at Osaka Prefecture University, Osaka University and Graz University of Technology. The method will allow researchers to fabricate large tailor-made MOF films on any substrate of any size, which will vastly improve their prospects for commercial development.
In a study recently published in Nature Materials and highlighted on the cover and in the ‘News and Views’ section of the journal, the Osaka-based researchers report a one-step method to prepare thin MOF films directly on inorganic copper hydroxide substrates. Using this method, the researchers produced large MOF films with areas of more than 1 cm² that were, for the first time, fully aligned with the crystal lattice of the underlying substrate.
Noting that microcrystals of copper hydroxide can be converted into MOFs by adding organic linker molecules under mild conditions, the researchers used the same strategy to create a thin MOF layer on larger copper hydroxide substrates. They carefully chose the carboxylic acid-based linker molecule 1,4-dibenzenedicarboxylic acid because it fit exactly to the spacing between the copper atoms on the substrate surface.
A MOF film began to grow on the copper hydroxide substrate within minutes of mixing it with the linker molecule, making this technique much easier and faster than previous step-wise approaches to building up MOF films. Using microscopy and X-ray diffraction techniques, the researchers found that the film was precisely oriented along the copper hydroxide lattice.
To demonstrate the unique optical behavior of their films, the researchers filled the MOF’s ordered pores with fluorescent molecules, which fluoresce when light is shone on them in a particular direction. When they shone polarized light on the ordered material, the researchers found that they could easily switch the fluorescence response on or off simply by rotating the material.
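The on/off switching with rotation follows from the orientation dependence of absorption: for a dye whose transition dipole is aligned in the ordered pores, absorbed (and hence re-emitted) intensity varies roughly as cos²θ with the angle between the dipole axis and the light’s polarization. A quick sketch of that textbook relation (my illustration, not the paper’s own model):

```python
import math

# Emission from an oriented dye scales roughly as cos^2(theta) with the
# angle between the light's polarization and the dye's transition dipole
# (textbook dipole-absorption relation, not the paper's own model).
def relative_fluorescence(theta_deg):
    return math.cos(math.radians(theta_deg)) ** 2

for angle_deg in (0, 45, 90):
    print(f"{angle_deg:2d} deg -> relative intensity "
          f"{relative_fluorescence(angle_deg):.2f}")
# 0 deg (aligned) gives full intensity ("on");
# 90 deg (crossed) gives essentially zero ("off").
```

In a disordered MOF film the dyes would point every which way and this angular dependence would average out, which is why the full alignment of the film matters.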
Japanese scientists have developed a more precise method for culturing stem cells according to a March 14, 2017 news item on Nanowerk,
A team of researchers in Japan has developed a new platform for culturing human pluripotent stem cells that provides far more control of culture conditions than previous tools by using micro and nanotechnologies.
The Multiplexed Artificial Cellular Microenvironment (MACME) array places nanofibres that mimic cellular matrices into fluid-filled micro-chambers of precise sizes, reproducing extracellular environments.
Caption: The Multiplexed Artificial Cellular Microenvironment (MACME) array, consisting of a microfluidic structure and a nanofibre array for mimicking cellular microenvironments. Credit: Kyoto University iCeMS
Human pluripotent stem cells (hPSCs) hold great promise for tissue engineering, regenerative medicine and cell-based therapies because they can become any type of cell. The environment surrounding the cells plays a major role in determining what tissues they become, whether they replicate into more cells, or die. However, understanding these interactions has been difficult because researchers have lacked tools that work on the appropriate scale.
Often, stem cells are cultured in a cell culture medium in small petri dishes. While factors such as medium pH levels and nutrients can be controlled, the artificial set up is on the macroscopic scale and does not allow for precise control of the physical environment surrounding the cells.
The MACME array miniaturizes this set up, culturing stem cells in rows of micro-chambers of cell culture medium. It also takes it a step further by placing nanofibers in these chambers to mimic the structures found around cells.
Led by Ken-ichiro Kamei of Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS), the team tested a variety of nanofiber materials and densities, micro-chamber heights and initial stem cell densities to determine the best combination that encourages human pluripotent stem cells to replicate.
They stained the cells with several fluorescent markers and used a microscope to see if the cells died, replicated or differentiated into tissues.
Their analysis revealed that gelatin nanofibers and medium-sized chambers seeded at a medium cell density provided the best environment for the stem cells to continue multiplying. The quantity and density of neighboring cells strongly influence cell survival.
The array is an “optimal and powerful approach for understanding how environmental cues regulate cellular functions,” the researchers conclude in a recently published paper in the journal Small.
This array appears to be the first time multiple kinds of extracellular environments can be mounted onto a single device, making it much easier to compare how different environments influence cells.
The MACME array could substantially reduce experiment costs compared to conventional tools, in part because it is low volume and requires less cell culture medium. The array does not require any special equipment and is compatible with both commonly used laboratory pipettes and automated pipette systems for performing high-throughput screening.
Here’s a link to and a citation for the paper,
Microfluidic-Nanofiber Hybrid Array for Screening of Cellular Microenvironments by Ken-ichiro Kamei, Yasumasa Mashimo, Momoko Yoshioka, Yumie Tokunaga, Christopher Fockenberg, Shiho Terada, Yoshie Koyama, Minako Nakajima, Teiko Shibata-Seki, Li Liu, Toshihiro Akaike, Eiry Kobatake, Siew-Eng How, Motonari Uesugi, and Yong Chen. Small DOI: 10.1002/smll.201603104 Version of Record online: 8 MAR 2017
Should you have concerns about exposure to pesticides or chemical warfare agents (timely, given events in Syria as per this April 4, 2017 news item on CBC [Canadian Broadcasting Corporation] online), scientists at the Lomonosov Moscow State University have developed a possible antidote, according to a March 8, 2017 news item on phys.org,
Members of the Faculty of Chemistry of the Lomonosov Moscow State University have developed novel nanosized agents that could be used as efficient protective and antidote modalities against the impact of neurotoxic organophosphorus compounds such as pesticides and chemical warfare agents. …
A group of scientists from the Faculty of Chemistry under the leadership of Prof. Alexander Kabanov has focused their research, supported by a “megagrant,” on the nanoparticle-based delivery to an organism of enzymes capable of destroying toxic organophosphorous compounds. Development of the first nanosized drugs started more than 30 years ago, and the first nanomedicines for cancer treatment entered the market in the 1990s. These early medicines were based on liposomes – spherical vesicles made of lipid bilayers. The new technology, developed by Kabanov and his colleagues, uses an enzyme, synthesized at the Lomonosov Moscow State University, encapsulated in a biodegradable polymer coat based on an amino acid (glutamic acid).
Alexander Kabanov, Doctor of Chemistry, Professor at the Eshelman School of Pharmacy of the University of North Carolina (USA) and the Faculty of Chemistry, M. V. Lomonosov Moscow State University, and one of the authors of the article, explains: “At the end of the 1980s my team (at that time in Moscow) and, independently, Japanese colleagues led by Prof. Kazunori Kataoka in Tokyo began using polymer micelles for small-molecule delivery. Soon the nanomedicine field ‘exploded’. Currently hundreds of laboratories across the globe work in this area, applying a wide variety of approaches to the creation of such nanosized agents. A medicine based on polymeric micelles, developed by the Korean company Samyang Biopharm, was approved for human use in 2006.”
After moving to the USA in 1994, Professor Kabanov’s team focused on the development of polymer micelles that could incorporate biopolymers through electrostatic interactions. Initially the chemists were interested in using micelles for RNA and DNA delivery, but scientists later began actively applying this approach to the delivery of proteins, notably enzymes, to the brain and other organs.
Alexander Kabanov says: “At the time I worked at the University of Nebraska Medical Center in Omaha (USA), and by 2010 we had a lot of results in this area. That’s why, when my colleague from the Chemical Enzymology Department of the Lomonosov Moscow State University, Prof. Natalia Klyachko, suggested that I apply for a megagrant, the research theme of the new laboratory was quite obvious: to use our delivery approach, which we’ve called a ‘nanozyme’, to ‘improve’ enzymes developed by colleagues at the Lomonosov Moscow State University for further medical application.”
The scientists, together with a group of enzymologists from the Lomonosov Moscow State University led by Elena Efremenko, Doctor of Biological Sciences, chose organophosphorus hydrolase as one of the enzymes to deliver. Organophosphorus hydrolase is capable of degrading toxic pesticides and chemical warfare agents at a very high rate. However, it has disadvantages: because of its bacterial origin, delivering it to a mammalian organism provokes an immune response, and it is quickly removed from the body. The chemists solved this problem with a “self-assembly” approach: when organophosphorus hydrolase is included in nanozyme particles, the immune response becomes weaker while, on the contrary, both the storage stability of the enzyme and its lifetime after delivery to an organism considerably increase. Rat experiments proved that such a nanozyme efficiently protects organisms against lethal doses of highly toxic pesticides and even chemical warfare agents, such as the VX nerve agent.
Alexander Kabanov summarizes: “The simplicity of our approach is very important. You can get an organophosphorus hydrolase nanozyme by simply mixing aqueous solutions of an enzyme and a safe biocompatible polymer. The nanozyme self-assembles due to electrostatic interaction between the protein (enzyme) and the polymer.”
According to the scientists, the simplicity and technological effectiveness of the approach, along with the promising results of the animal experiments, raise hope that this modality could also succeed in clinical use.
Members of the Faculty of Chemistry of the Lomonosov Moscow State University, along with scientists from the 27th Central Research Institute of the Ministry of Defense of the Russian Federation, the Eshelman School of Pharmacy of the University of North Carolina at Chapel Hill (USA) and the University of Nebraska Medical Center (UNMC), have taken part in the project.
A new nanofiber-on-microfiber matrix could help produce more and better quality stem cells for disease treatment and regenerative therapies.
A matrix made of gelatin nanofibers on a synthetic polymer microfiber mesh may provide a better way to culture large quantities of healthy human stem cells.
Developed by a team of researchers led by Ken-ichiro Kamei of Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS), the ‘fiber-on-fiber’ (FF) matrix improves on currently available stem cell culturing techniques.
Researchers have been developing 3D culturing systems to allow human pluripotent stem cells (hPSCs) to grow and interact with their surroundings in all three dimensions, as they would inside the human body, rather than in two dimensions, like they do in a petri dish.
Pluripotent stem cells have the ability to differentiate into any type of adult cell and have huge potential for tissue regeneration therapies, treating diseases, and for research purposes.
Most currently reported 3D culturing systems have limitations, and result in low quantities and quality of cultured cells.
Kamei and his colleagues fabricated gelatin nanofibers onto a microfiber sheet made of synthetic, biodegradable polyglycolic acid. Human embryonic stem cells were then seeded onto the matrix in a cell culture medium.
The FF matrix allowed easy exchange of growth factors and supplements from the culture medium to the cells. Also, the stem cells adhered well to the matrix, resulting in robust cell growth: after four days of culture, more than 95% of the cells grew and formed colonies.
The team also scaled up the process by designing a gas-permeable cell culture bag in which multiple cell-loaded, folded FF matrices were placed. The system was designed so that minimal changes were needed to the internal environment, reducing the amount of stress placed on the cells. This newly developed system yielded a larger number of cells compared to conventional 2D and 3D culture methods.
“Our method offers an efficient way to expand hPSCs of high quality within a shorter term,” write the researchers in their study published in the journal Biomaterials. Also, because the use of the FF matrix is not limited to a specific type of culture container, it allows for scaling up production without loss of cell functions. “Additionally, as nanofiber matrices are advantageous for culturing other adherent cells, including hPSC-derived differentiated cells, FF matrix might be applicable to the large-scale production of differentiated functional cells for various applications,” the researchers conclude.
Human stem cells that grew on the ‘fiber-on-fiber’ culturing system
It seems unexpected to stumble across presentations on robots and on artificial intelligence at an entertainment conference such as South by Southwest (SXSW). Here’s why I thought so, from the SXSW Wikipedia entry (Note: Links have been removed),
South by Southwest (abbreviated as SXSW) is an annual conglomerate of film, interactive media, and music festivals and conferences that take place in mid-March in Austin, Texas, United States. It began in 1987, and has continued to grow in both scope and size every year. In 2011, the conference lasted for 10 days with SXSW Interactive lasting for 5 days, Music for 6 days, and Film running concurrently for 9 days.
The 2017 SXSW Interactive featured separate presentations by Japanese roboticist, Hiroshi Ishiguro (mentioned here a few times), and EPFL (École Polytechnique Fédérale de Lausanne; Switzerland) artificial intelligence expert, Marcel Salathé.
Ishiguro’s work is the subject of Harry McCracken’s March 14, 2017 article for Fast Company (Note: Links have been removed),
I’m sitting in the Japan Factory pavilion at SXSW in Austin, Texas, talking to two other attendees about whether human beings are more valuable than robots. I say that I believe human life to be uniquely precious, whereupon one of the others rebuts me by stating that humans allow cars to exist even though they kill humans.
It’s a reasonable point. But my fellow conventioneer has a bias: It’s a robot itself, with an ivory-colored, mask-like face and visible innards. So is the third participant in the conversation, a much more human automaton modeled on a Japanese woman and wearing a black-and-white blouse and a blue scarf.
We’re chatting as part of a demo of technologies developed by the robotics lab of Hiroshi Ishiguro, based at Osaka University, and Japanese telecommunications company NTT. Ishiguro has gained fame in the field by creating increasingly humanlike robots—that is, androids—with the ultimate goal of eliminating the uncanny valley that exists between people and robotic people.
I also caught up with Ishiguro himself at the conference—his second SXSW—to talk about his work. He’s a champion of the notion that people will respond best to robots who simulate humanity, thereby creating “a feeling of presence,” as he describes it. That gives him and his researchers a challenge that encompasses everything from technology to psychology. “Our approach is quite interdisciplinary,” he says, which is what prompted him to bring his work to SXSW.
If you have the time, do read McCracken’s piece in its entirety.
You can find out more about the ‘uncanny valley’ in my March 10, 2011 posting about Ishiguro’s work if you scroll down about 70% of the way to find the ‘uncanny valley’ diagram and Masahiro Mori’s description of the concept he developed.
You can read more about Ishiguro and his colleague, Ryuichiro Higashinaka, on their SXSW biography page.
In the quest for reliable artificial intelligence, EPFL scientist Marcel Salathé argues that AI technology should be openly available. He will be discussing the topic at this year’s edition of South by Southwest on March 14th in Austin, Texas.
Will artificial intelligence (AI) change the nature of work? For EPFL theoretical biologist Marcel Salathé, the answer is invariably yes. To him, a more fundamental question that needs to be addressed is who owns that artificial intelligence?
“We have to hold AI accountable, and the only way to do this is to verify it for biases and make sure there is no deliberate misinformation,” says Salathé. “This is not possible if the AI is privatized.”
AI is both the algorithm and the data
So what exactly is AI? It is generally regarded as “intelligence exhibited by machines”. Today, it is highly task specific, specially designed to beat humans at strategic games like Chess and Go, or diagnose skin disease on par with doctors’ skills.
On a practical level, AI is implemented through what scientists call “machine learning”: using a computer to run specially designed software that can be “trained”, i.e., to process data with the help of algorithms and correctly identify certain features in that data set. Like human cognition, AI learns by trial and error. Unlike humans, however, AI can process and recall large quantities of data, giving it a tremendous advantage over us.
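The trial-and-error loop can be made concrete with a toy example (entirely my own illustration, with made-up data, and nothing like the scale of real systems): a single perceptron, the simplest trainable classifier, nudges its weights every time it misclassifies a training point until it gets them all right.

```python
# Toy illustration of "learning by trial and error": a perceptron
# adjusts its weights each time it misclassifies a training point.
# (Illustrative only -- made-up data, far smaller than real systems.)

# Tiny labelled data set: points above the line y = x are class 1,
# points below it are class 0.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((2.0, 3.0), 1),
        ((1.0, 0.0), 0), ((2.0, 1.0), 0), ((3.0, 2.0), 0)]

w = [0.0, 0.0]   # weights, adjusted by training
b = 0.0          # bias
lr = 0.1         # learning rate: size of each corrective nudge

for epoch in range(20):
    errors = 0
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred            # 0 when correct, +/-1 when wrong
        if err != 0:                  # the "error" in trial and error
            errors += 1
            w[0] += lr * err * x1     # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    if errors == 0:                   # converged: every point classified
        break

print("all points classified correctly:", all(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
    for (x1, x2), y in data))        # -> True
```

Modern deep learning replaces this single neuron with millions of weights and the simple nudge with gradient descent, but the basic cycle of predict, compare, correct is the same.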
Crucial to AI learning, therefore, is the underlying data. For Salathé, AI is defined by both the algorithm and the data, and as such, both should be publicly available.
Deep learning algorithms can be perturbed
Last year, Salathé created an algorithm to recognize plant diseases. With more than 50,000 photos of healthy and diseased plants in the database, the algorithm uses artificial intelligence to diagnose plant diseases with the help of your smartphone. As for human disease, a recent study by a Stanford group showed that AI can be trained to recognize skin cancer slightly better than a group of doctors. The consequences are far-reaching: AI may one day diagnose our diseases instead of doctors. If so, will we really be able to trust its diagnosis?
These diagnostic tools use data sets of images to train and learn. But visual data sets can be perturbed in ways that prevent deep learning algorithms from correctly classifying images. Deep neural networks are highly vulnerable to visual perturbations that are practically impossible to detect with the naked eye, yet cause the AI to misclassify images.
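A minimal sketch of how such a perturbation works (my own toy example with made-up numbers, not the attacks studied in the literature): against a simple linear classifier, shifting every input feature by a tiny amount in the direction of the corresponding weight's sign flips the decision, even though no single feature changes much.

```python
# Toy sketch of an adversarial perturbation against a linear
# classifier. Illustrative only: real attacks target deep networks,
# but the mechanism -- tiny, targeted input changes -- is the same.

# A fixed linear "model": score > 0 means class "diseased".
w = [0.8, -0.5, 0.3, -0.6]   # model weights (made up for the demo)
x = [0.2, 0.4, 0.1, 0.5]     # an input the model classifies correctly

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

print("clean score:", round(score(x), 3))  # negative -> "healthy"

# Perturbation step: move each feature by a small epsilon in the
# direction that raises the score, i.e. along the sign of each weight.
eps = 0.2
sign = lambda t: 1.0 if t > 0 else -1.0
x_adv = [xi + eps * sign(wi) for xi, wi in zip(x, w)]

print("perturbed score:", round(score(x_adv), 3))  # positive -> flipped
print("max change to any one feature:", eps)       # tiny per feature
```

For a deep network the direction of the nudge comes from the network's gradients rather than fixed weights, which is why the resulting image can look identical to the original while the classification flips.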
In future implementations of AI-assisted medical diagnostic tools, these perturbations pose a serious threat. More generally, the perturbations are real and may already be affecting the filtered information that reaches us every day. These vulnerabilities underscore the importance of certifying AI technology and monitoring its reliability.
A February 28, 2017 news item on Nanowerk announces a proposed nerve regeneration technique (Note: A link has been removed),
A research team consisting of Mitsuhiro Ebara, MANA associate principal investigator, Mechanobiology Group, NIMS, and Hiroyuki Tanaka, assistant professor, Orthopaedic Surgery, Osaka University Graduate School of Medicine, developed a mesh which can be wrapped around injured peripheral nerves to facilitate their regeneration and restore their functions (Acta Biomaterialia, “Electrospun nanofiber sheets incorporating methylcobalamin promote nerve regeneration and functional recovery in a rat sciatic nerve crush injury model”).
The mesh, which is very soft and degrades in the body, incorporates vitamin B12, a substance vital to the normal functioning of the nervous system. When the mesh was applied to injured sciatic nerves in rats, it promoted nerve regeneration and the recovery of motor and sensory functions.
Artificial nerve conduits have been developed in the past to treat peripheral nerve injuries, but they merely bridge the injury site and do not actively speed nerve regeneration. Moreover, their application is limited to the relatively few patients suffering a complete loss of nerve continuity. Vitamin B12 has long been known to facilitate nerve regeneration, but oral administration has not proven very effective, and no devices capable of delivering vitamin B12 directly to affected sites had been available. What was needed, therefore, were medical devices that actively promote nerve regeneration in the many patients who suffer nerve injuries but have not lost nerve continuity.
The NIMS-Osaka University joint research team recently developed a special mesh that releases vitamin B12 (methylcobalamin) until the injury heals and can be wrapped around an injured nerve. By developing very fine mesh fibers (several hundred nanometers in diameter) and reducing the crystallinity of the fibers, the team successfully created a very soft mesh that can be wrapped around a nerve. The mesh is made of a biodegradable plastic which, when implanted in animals, is eventually eliminated from the body. In fact, experiments demonstrated that applying the mesh directly to injured sciatic nerves in rats resulted in regeneration of axons and recovery of motor and sensory functions within six weeks.
The team is currently negotiating with a pharmaceutical company and other organizations to jointly study clinical application of the mesh as a medical device to treat peripheral nerve disorders such as carpal tunnel syndrome (CTS).
This study was supported by the JSPS KAKENHI program (Grant Number JP15K10405) and AMED’s Project for Japan Translational and Clinical Research Core Centers (also known as Translational Research Network Program).
Figure 1. Conceptual diagram showing a nanofiber mesh incorporating vitamin B12 and its application to treat a peripheral nerve injury.
8 February – 3 September 2017, Science Museum, London
Admission: £15 adults, £13 concessions (Free entry for under 7s; family tickets available)
Tickets available in the Museum or via sciencemuseum.org.uk/robots
Supported by the Heritage Lottery Fund
Throughout history, artists and scientists have sought to understand what it means to be human. The Science Museum’s new Robots exhibition, opening in February 2017, will explore this very human obsession to recreate ourselves, revealing the remarkable 500-year story of humanoid robots.
Featuring a unique collection of over 100 robots, from a 16th-century mechanical monk to robots from science fiction and modern-day research labs, this exhibition will enable visitors to discover the cultural, historical and technological context of humanoid robots. Visitors will be able to interact with some of the 12 working robots on display. Among many other highlights will be an articulated iron manikin from the 1500s, Cygan, a 2.4m tall 1950s robot with a glamorous past, and one of the first walking bipedal robots.
Robots have been at the heart of popular culture since the word ‘robot’ was first used in 1920, but their fascinating story dates back many centuries. Set in five different periods and places, this exhibition will explore how robots and society have been shaped by religious belief, the industrial revolution, 20th century popular culture and dreams about the future.
The quest to build ever more complex robots has transformed our understanding of the human body, and today robots are becoming increasingly human, learning from mistakes and expressing emotions. In the exhibition, visitors will go behind the scenes to glimpse recent developments from robotics research, exploring how roboticists are building robots that resemble us and interact in human-like ways. The exhibition will end by asking visitors to imagine what a shared future with robots might be like. Robots has been generously supported by the Heritage Lottery Fund, with a £100,000 grant from the Collecting Cultures programme.
Ian Blatchford, Director of the Science Museum Group said: ‘This exhibition explores the uniquely human obsession of recreating ourselves, not through paint or marble but in metal. Seeing robots through the eyes of those who built or gazed in awe at them reveals much about humanity’s hopes, fears and dreams.’
‘The latest in our series of ambitious, blockbuster exhibitions, Robots explores the wondrously rich culture, history and technology of humanoid robotics. Last year we moved gigantic spacecraft from Moscow to the Museum, but this year we will bring a robot back to life.’
Today [May ?, 2016] the Science Museum launched a Kickstarter campaign to rebuild Eric, the UK’s first robot. Originally built in 1928 by Captain Richards & A.H. Reffell, Eric was one of the world’s first robots. Built less than a decade after the word robot was first used, he travelled the globe with his makers and amazed crowds in the UK, US and Europe, before disappearing forever.
Getting back to the exhibition, the Guardian’s Ian Sample has written up a Feb. 7, 2017 preview (Note: Links have been removed),
Eric the robot wowed the crowds. He stood and bowed and answered questions as blue sparks shot from his metallic teeth. The British creation was such a hit he went on tour around the world. When he arrived in New York, in 1929, a theatre nightwatchman was so alarmed he pulled out a gun and shot at him.
The curators at London’s Science Museum hope for a less extreme reaction when they open Robots, their latest exhibition, on Wednesday [Feb. 8, 2017]. The collection of more than 100 objects is a treasure trove of delights: a miniature iron man with moving joints; a robotic swan that enthralled Mark Twain; a tiny metal woman with a wager cup who is propelled by a mechanism hidden up her skirt.
The pieces are striking and must have dazzled in their day. Ben Russell, the lead curator, points out that most people would not have seen a clock when they first clapped eyes on one exhibit, a 16th century automaton of a monk [emphasis mine], who trundled along, moved his lips, and beat his chest in contrition. It was surely mesmerising to the audiences of 1560. “Arthur C Clarke once said that any sufficiently advanced technology is indistinguishable from magic,” Russell says. “Well, this is where it all started.”
In every chapter of the 500-year story, robots have held a mirror to human society. Some of the earliest devices brought the Bible to life. One model of Christ on the cross rolls his head and oozes wooden blood from his side as four figures reach up. The mechanisation of faith must have drawn the congregations as much as any sermon.
But faith was not the only focus. Through clockwork animals and human figurines, model makers explored whether humans were simply conscious machines. They brought order to the universe with orreries and astrolabes. The machines became more lighthearted in the enlightened 18th century, when automatons of a flute player, a writer, and a defecating duck all made an appearance. A century later, the style was downright rowdy, with drunken aristocrats, preening dandies and the disturbing life of a sausage from farm to mouth all being recreated as automata.
That reference to an automaton of a monk reminded me of a July 22, 2009 posting where I excerpted a passage (from another blog) about a robot priest and a robot monk,
Since 1993 Robo-Priest has been on call 24-hours a day at Yokohama Central Cemetery. The bearded robot is programmed to perform funerary rites for several Buddhist sects, as well as for Protestants and Catholics. Meanwhile, Robo-Monk chants sutras, beats a religious drum and welcomes the faithful to Hotoku-ji, a Buddhist temple in Kakogawa city, Hyogo Prefecture. More recently, in 2005, a robot dressed in full samurai armour received blessings at a Shinto shrine on the Japanese island of Kyushu. Kiyomori, named after a famous 12th-century military general, prayed for the souls of all robots in the world before walking quietly out of Munakata Shrine.
Sample’s preview takes the reader up to our own age and contemporary robots. And there is another Guardian article offering a behind-the-scenes look at the then upcoming exhibition, a Jan. 28, 2017 piece by Jonathan Jones,
An android toddler lies on a pallet, its doll-like face staring at the ceiling. On a shelf rests a much more grisly creation that mixes imitation human bones and muscles, with wires instead of arteries and microchips in place of organs. It has no lower body, and a single Cyclopean eye. This store room is an eerie place; it gets creepier still when I glimpse, behind the anatomical robot, a hulking thing staring at me with glowing red eyes. Its plastic skin has been burned off to reveal a metal skeleton with pistons and plates of merciless strength. It is the Terminator, sent back in time by the machines who will rule the future to ensure humanity’s doom.
Backstage at the Science Museum, London, where these real experiments and a full-scale model from the Terminator films are gathered to be installed in the exhibition Robots, it occurs to me that our fascination with mechanical replacements for ourselves is so intense that science struggles to match it. We think of robots as artificial humans that can not only walk and talk but possess digital personalities, even a moral code. In short we accord them agency. Today, the real age of robots is coming, and yet even as these machines promise to transform work or make it obsolete, few possess anything like the charisma of the androids of our dreams and nightmares.
That’s why, although the robotic toddler sleeping in the store room is an impressive piece of tech, my heart leaps in another way at the sight of the Terminator. For this is a bad robot, a scary robot, a robot of remorseless malevolence. It has character, in other words. Its programmed persona (which in later films becomes much more helpful and supportive) is just one of those frightening, funny or touching personalities that science fiction has imagined for robots.
Can the real life – well, real simulated life – robots in the Science Museum’s new exhibition live up to these characters? The most impressively interactive robot in the show will be RoboThespian, who acts as compere for its final gallery displaying the latest advances in robotics. He stands at human height, with a white plastic face and metal arms and legs, and can answer questions about the value of pi and the nature of free will. “I’m a very clever robot,” RoboThespian claims, plausibly, if a little obnoxiously.
Except not quite as clever as all that. A human operator at a computer screen connected with RoboThespian by wifi is looking through its video camera eyes and speaking with its digital voice. The result is huge fun – the droid moves in very lifelike ways as it speaks, and its interactions don’t need a live operator as they can be preprogrammed. But a freethinking, free-acting robot with a mind and personality of its own, RoboThespian is not.
Our fascination with synthetic humans goes back to the human urge to recreate life itself – to reproduce the mystery of our origins. Artists have aspired to simulate human life since ancient times. The ancient Greek myth of Pygmalion, who made a statue so beautiful he fell in love with it and prayed for it to come to life, is a mythic version of Greek artists such as Pheidias and Praxiteles whose statues, with their superb imitation of muscles and movement, seem vividly alive. The sculptures of centaurs carved for the Parthenon in Athens still possess that uncanny lifelike power.
Most of the finest Greek statues were bronze, and mythology tells of metal robots that sound very much like statues come to life, including the bronze giant Talos, who was to become one of cinema’s greatest robotic monsters thanks to the special effects genius of Ray Harryhausen in Jason and the Argonauts.
Renaissance art took the quest to simulate life to new heights, with awed admirers of Michelangelo’s David claiming it even seemed to breathe (as it really does almost appear to when soft daylight casts mobile shadow on superbly sculpted ribs). So it is oddly inevitable that one of the first recorded inventors of robots was Leonardo da Vinci, consummate artist and pioneering engineer. Leonardo apparently made, or at least designed, a robot knight to amuse the court of Milan. It worked with pulleys and was capable of simple movements. Documents of this invention are frustratingly sparse, but there is a reliable eyewitness account of another of Leonardo’s automata. In 1515 he delighted François I, king of France, with a robot lion that walked forward towards the monarch, then released a bunch of lilies, the royal flower, from a panel that opened in its back.
One of the most uncanny androids in the Science Museum show is from Japan, a freakily lifelike female robot called Kodomoroid, the world’s first robot newscaster. With her modest downcast gaze and fine artificial complexion, she has the same fetishised femininity you might see in a Manga comic and appears to reflect a specific social construction of gender. Whether you read that as vulnerability or subservience, presumably the idea is to make us feel we are encountering a robot with real personhood. Here is a robot that combines engineering and art just as Da Vinci dreamed – it has the mechanical genius of his knight and the synthetic humanity of his perfect portrait.