Monthly Archives: February 2019

Artificial intelligence (AI) brings together International Telecommunication Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the situation where chemical testing is concerned in his July 25, 2018 essay (written for The Conversation and republished on phys.org),

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous. Even more likely if many toxic substances are close, harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.
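To make the read-across idea a little more concrete, here’s a minimal Python sketch of the general approach as I understand it: represent each chemical as a set of structural features, measure similarity with a Tanimoto-style index, and copy the toxicity call from the most similar characterised neighbour. Everything below (the fingerprints, the tiny ‘chemical universe’, and the leave-one-out check at the end) is invented for illustration; the real RASAR works with roughly 10 million structures and 74 characteristics, not this toy.

# Toy illustration of a read-across prediction in the spirit of RASAR.
# The fingerprints and chemicals below are invented for the example; the real
# system uses ~10 million structures and dozens of learned characteristics.
from collections import namedtuple

Chemical = namedtuple("Chemical", ["name", "fingerprint", "toxic"])

def tanimoto(a, b):
    """Similarity between two binary structural fingerprints (sets of features)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# A tiny, made-up 'chemical universe' with known toxicity labels.
known = [
    Chemical("A", {1, 2, 3, 4}, True),
    Chemical("B", {1, 2, 3, 5}, True),
    Chemical("C", {6, 7, 8, 9}, False),
    Chemical("D", {6, 7, 8, 10}, False),
]

def predict_toxic(fingerprint, neighbours, k=1):
    """Read-across: vote among the k most similar characterised chemicals."""
    ranked = sorted(neighbours, key=lambda c: tanimoto(fingerprint, c.fingerprint),
                    reverse=True)[:k]
    return sum(c.toxic for c in ranked) > len(ranked) / 2

# An untested substance that sits 'close' to the toxic cluster gets flagged.
print(predict_toxic({1, 2, 4, 5}, known))   # True

# Validation as described in the essay: predict chemicals whose toxicity is
# already known and count how often the toxic ones are caught (sensitivity).
hits = sum(predict_toxic(c.fingerprint, [o for o in known if o is not c])
           for c in known if c.toxic)
print(hits / sum(c.toxic for c in known))   # 1.0 on this toy set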

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century, in this case) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting, skepticism that came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Quantum back action and devil’s play

I always appreciate a reference to James Clerk Maxwell’s demon thought experiment (you can find out about it in the Maxwell’s demon Wikipedia entry). This time it comes from physicist Kater Murch in a July 23, 2018 Washington University in St. Louis (WUSTL) news release (published July 25, 2018 on EurekAlert) written by Brandie Jefferson (offering a good explanation of the thought experiment and more),

Thermodynamics is one of the most human of scientific enterprises, according to Kater Murch, associate professor of physics in Arts & Sciences at Washington University in St. Louis.

“It has to do with our fascination of fire and our laziness,” he said. “How can we get fire” — or heat — “to do work for us?”

Now, Murch and colleagues have taken that most human enterprise down to the intangible quantum scale — that of ultra low temperatures and microscopic systems — and discovered that, as in the macroscopic world, it is possible to use information to extract work.

There is a catch, though: Some information may be lost in the process.

“We’ve experimentally confirmed the connection between information in the classical case and the quantum case,” Murch said, “and we’re seeing this new effect of information loss.”

The results were published in the July 20 [2018] issue of Physical Review Letters.

The international team included Eric Lutz of the University of Stuttgart; J. J. Alonso of the University of Erlangen-Nuremberg; Alessandro Romito of Lancaster University; and Mahdi Naghiloo, a Washington University graduate research assistant in physics.

That we can get energy from information on a macroscopic scale was most famously illustrated in a thought experiment known as Maxwell’s Demon. [emphasis mine] The “demon” presides over a box filled with molecules. The box is divided in half by a wall with a door. If the demon knows the speed and direction of all of the molecules, it can open the door when a fast-moving molecule is moving from the left half of the box to the right side, allowing it to pass. It can do the same for slow particles moving in the opposite direction, opening the door when a slow-moving molecule is approaching from the right, headed left.

After a while, all of the quickly-moving molecules are on the right side of the box. Faster motion corresponds to higher temperature. In this way, the demon has created a temperature imbalance, where one side of the box is hotter. That temperature imbalance can be turned into work — to push on a piston as in a steam engine, for instance. At first the thought experiment seemed to show that it was possible to create a temperature difference without doing any work, and since temperature differences allow you to extract work, one could build a perpetual motion machine — a violation of the second law of thermodynamics.

“Eventually, scientists realized that there’s something about the information that the demon has about the molecules,” Murch said. “It has a physical quality like heat and work and energy.”

His team wanted to know if it would be possible to use information to extract work in this way on a quantum scale, too, but not by sorting fast and slow molecules. If a particle is in an excited state, they could extract work by moving it to a ground state. (If it was in a ground state, they wouldn’t do anything and wouldn’t expend any work).

But they wanted to know what would happen if the quantum particles were in an excited state and a ground state at the same time, analogous to being fast and slow at the same time. In quantum physics, this is known as a superposition.

“Can you get work from information about a superposition of energy states?” Murch asked. “That’s what we wanted to find out.”

There’s a problem, though. On a quantum scale, getting information about particles can be a bit … tricky.

“Every time you measure the system, it changes that system,” Murch said. And if they measured the particle to find out exactly what state it was in, it would revert to one of two states: excited, or ground.

This effect is called quantum backaction. To get around it, when looking at the system, researchers (who were the “demons”) didn’t take a long, hard look at their particle. Instead, they took what was called a “weak observation.” It still influenced the state of the superposition, but not enough to move it all the way to an excited state or a ground state; it was still in a superposition of energy states. This observation was enough, though, to allow the researchers to track, with fairly high accuracy, exactly what superposition the particle was in — and this is important, because the way the work is extracted from the particle depends on what superposition state it is in.

To get information, even using the weak observation method, the researchers still had to take a peek at the particle, which meant they needed light. So they sent some photons in, and observed the photons that came back.

“But the demon misses some photons,” Murch said. “It only gets about half. The other half are lost.” But — and this is the key — even though the researchers didn’t see the other half of the photons, those photons still interacted with the system, which means they still had an effect on it. The researchers had no way of knowing what that effect was.

They took a weak measurement and got some information, but because of quantum backaction, they might end up knowing less than they did before the measurement. On the balance, that’s negative information.

And that’s weird.

“Do the rules of thermodynamics for a macroscopic, classical world still apply when we talk about quantum superposition?” Murch asked. “We found that yes, they hold, except there’s this weird thing. The information can be negative.

“I think this research highlights how difficult it is to build a quantum computer,” Murch said.

“For a normal computer, it just gets hot and we need to cool it. In the quantum computer you are always at risk of losing information.”
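Before getting to the citation, here’s a tiny classical Python toy of the demon’s sorting rule described near the top of the news release. It’s my own illustration, not anything from the paper, and the molecule count, speed distribution, and threshold are arbitrary: both halves of the box start with the same speed distribution, the demon lets fast molecules through to the right and slow ones through to the left, and the mean squared speed (standing in for temperature) ends up higher on the right.

# Classical toy of Maxwell's demon: sort fast molecules to the right, slow ones
# to the left, and watch a 'temperature' imbalance appear. All numbers are
# arbitrary illustration, not values from the Physical Review Letters paper.
import random

random.seed(0)
THRESHOLD = 1.0   # the demon calls anything faster than this 'fast'

# Each molecule: (side, speed); both halves start with the same distribution.
molecules = [(random.choice("LR"), random.expovariate(1.0)) for _ in range(10_000)]

def mean_sq_speed(mols, side):
    """Mean squared speed on one side of the box -- a stand-in for temperature."""
    speeds = [v for s, v in mols if s == side]
    return sum(v * v for v in speeds) / len(speeds)

print("before:", mean_sq_speed(molecules, "L"), mean_sq_speed(molecules, "R"))

# The demon's rule: open the door for fast molecules headed right and for
# slow molecules headed left; leave everything else where it is.
sorted_mols = []
for side, speed in molecules:
    if side == "L" and speed > THRESHOLD:
        side = "R"   # fast molecule allowed through, left -> right
    elif side == "R" and speed <= THRESHOLD:
        side = "L"   # slow molecule allowed through, right -> left
    sorted_mols.append((side, speed))

print("after: ", mean_sq_speed(sorted_mols, "L"), mean_sq_speed(sorted_mols, "R"))
# The right half is now 'hotter' -- an imbalance bought entirely with the
# demon's information about each molecule, which is the point of the story.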

Here’s a link to and a citation for the paper,

Information Gain and Loss for a Quantum Maxwell’s Demon by M. Naghiloo, J. J. Alonso, A. Romito, E. Lutz, and K. W. Murch. Phys. Rev. Lett. 121, 030604 (Vol. 121, Iss. 3 — 20 July 2018) DOI:https://doi.org/10.1103/PhysRevLett.121.030604 Published 17 July 2018

© 2018 American Physical Society

This paper is behind a paywall.

Watch a Physics Nobel Laureate make art on February 26, 2019 at Mobile World Congress 19 in Barcelona, Spain

Konstantin (Kostya) Novoselov (Nobel Prize in Physics 2010) strikes out artistically, again. The last time was in 2018 (see my August 13, 2018 posting about Novoselov’s project with artist Mary Griffiths).

This time around, Novoselov and artist Kate Daudy will be creating an art piece during a demonstration at the Mobile World Congress 19 (MWC 19) in Barcelona, Spain. From a February 21, 2019 news item on Azonano,

Novoselov is most popular for his revolutionary experiments on graphene, which is lightweight, flexible, stronger than steel, and more conductive when compared to copper. Due to this feat, Professors Andre Geim and Kostya Novoselov grabbed the Nobel Prize in Physics in 2010. Moreover, Novoselov is one of the founding principal researchers of the Graphene Flagship, which is a €1 billion research project funded by the European Commission.

At MWC 2019, Novoselov will join hands with British textile artist Kate Daudy, a collaboration which indicates his usual interest in art projects. During the show, the pair will produce a piece of art using materials printed with embedded graphene. The installation will be named “Everything is Connected,” the slogan of the Graphene Flagship and reflective of the themes at MWC 2019.

The demonstration will be held on Tuesday, February 26th, 2019 at 11:30 CET in the Graphene Pavilion, an area devoted to showcasing inventions accomplished by funding from the Graphene Flagship. Apart from the art demonstration, exhibitors in the Graphene Pavilion will demonstrate 26 modern graphene-based prototypes and devices that will revolutionize the future of telecommunications, mobile phones, home technology, and wearables.

A February 20, 2019 University of Manchester press release, which originated the news item, goes on to describe what might be called the real point of this exercise,

Interactive demonstrations include a selection of health-related wearable technologies, which will be exhibited in the ‘wearables of the future’ area. Prototypes in this zone include graphene-enabled pressure sensing insoles, which have been developed by Graphene Flagship researchers at the University of Cambridge to accurately identify problematic walking patterns in wearers.

Another prototype will demonstrate how graphene can be used to reduce heat in mobile phone batteries, thereby prolonging their lifespan. In fact, the material required for this invention is the same that will be used during the art installation demonstration.

Andrea Ferrari, Science and Technology Officer and Chair of the management panel of the Graphene Flagship said: “Graphene and related layered materials have steadily progressed from fundamental to applied research and from the lab to the factory floor. Mobile World Congress is a prime opportunity for the Graphene Flagship to showcase how the European Commission’s investment in research is beginning to create tangible products and advanced prototypes. Outreach is also part of the Graphene Flagship mission and the interplay between graphene, culture and art has been explored by several Flagship initiatives over the years. This unique live exhibition of Kostya is a first for the Flagship and the Mobile World Congress, and I invite everybody to attend.”

More information on the Graphene Pavilion, the prototypes on show and the interactive demonstrations at MWC 2019 can be found on the Graphene Flagship website. Alternatively, contact the Graphene Flagship directly at press@graphene-flagship.eu.

The Novoselov/Daudy project sounds as if the pair have drawn inspiration from performance art practices. In any case, it seems like a creative and fun way to engage the audience. For anyone curious about Kate Daudy’s work,

[downloaded from https://katedaudy.com/]

‘Superconductivity: The Musical!’ wins the 2018 Dance Your Ph.D. competition

I can’t believe that October 24, 2011 was the last time the Dance Your Ph.D. competition was featured here. Time flies, eh? Here’s the 2018 contest winner’s submission, Superconductivity: The Musical! (Note: This video is over 11 mins. long),

A February 17, 2019 CBC (Canadian Broadcasting Corporation) news item introduces the video’s writer, producer, musician, and scientist,

Swing dancing. Songwriting. And theoretical condensed matter physics.

It’s a unique person who can master all three, but a University of Alberta PhD student has done all that and taken it one step further by making a rollicking music video about his academic pursuits — and winning an international competition for his efforts.

Pramodh Senarath Yapa is the winner of the 2018 Dance Your PhD contest, which challenges scientists around the world to explain their research through a jargon-free medium: dance.

The prize is $1,000 and “immortal geek fame.”

Yapa’s video features his friends twirling, swinging and touch-stepping their way through an explanation of his graduate research, called “Non-Local Electrodynamics of Superconducting Wires: Implications for Flux Noise and Inductance.”

Jennifer Ouellette’s February 17, 2019 posting for the ars Technica blog offers more detail (Note: A link has been removed),

Yapa’s research deals with how matter behaves when it’s cooled to very low temperatures, when quantum effects kick in—such as certain metals becoming superconductive, or capable of conducting electricity with zero resistance. That’s useful for any number of practical applications. D-Wave Systems [a company located in metro Vancouver {Canada}], for example, is building quantum computers using loops of superconducting wire. For his thesis, “I had to use the theory of superconductivity to figure out how to build a better quantum computer,” said Yapa.

Condensed matter theory (the precise description of Yapa’s field of research) is a notoriously tricky subfield to make palatable for a non-expert audience. “There isn’t one unifying theory or a single tool that we use,” he said. “Condensed matter theorists study a million different things using a million different techniques.”

His conceptual breakthrough came about when he realized electrons were a bit like “unsociable people” who find joy when they pair up with other electrons. “You can imagine electrons as a free gas, which means they don’t interact with each other,” he said. “The theory of superconductivity says they actually form pairs when cooled below a certain temperature. That was the ‘Eureka!’ moment, when I realized I could totally use swing dancing.”

John Bohannon’s Feb. 15, 2019 article for Science (magazine) offers an update on Yapa’s research interests (it seems that Yapa was dancing his master’s degree) and more information about the contest itself,

…

“I remember hearing about Dance Your Ph.D. many years ago and being amazed at all the entries,” Yapa says. “This is definitely a longtime dream come true.” His research, meanwhile, has evolved from superconductivity—which he pursued at the University of Victoria in Canada, where he completed a master’s degree—to the physics of superfluids, the focus of his Ph.D. research at the University of Alberta.

This is the 11th year of Dance Your Ph.D. hosted by Science and AAAS. The contest challenges scientists around the world to explain their research through the most jargon-free medium available: interpretive dance.

“Most people would not normally think of interpretive dance as a tool for scientific communication,” says artist Alexa Meade, one of the judges of the contest. “However, the body can express conceptual thoughts through movement in ways that words and data tables cannot. The results are both artfully poetic and scientifically profound.”

Getting back to the February 17, 2019 CBC news item,

Yapa describes his video, filmed in Victoria where he earned his master’s degree, as a “three act, mini-musical.”

“I envisioned it as talking about the social lives of electrons,” he said. “The electrons start out in a normal metal, at normal temperatures…. We say these electrons are non-interacting. They don’t talk to each other. Electrons ignore each other and are very unsociable.”

The electrons — represented by dancers wearing saddle oxfords, poodle skirts, vests and suspenders — shuffle up the dance floor by themselves.

In the second act, the metal is cooled.

“The electrons become very unhappy about being alone. They want to find a partner, some companionship for the cold times,” he said.

That’s when the electrons join up into something called Cooper pairs.

The dancers join together, moving to lyrics like, “If we peek/the Coopers are cheek-to-cheek.”

In the final act, Yapa gets his dancers to demonstrate what happens when the Cooper pairs meet the impurities of the materials they’re moving in. All of a sudden, a group of black-leather-clad thugs move onto the dance floor.

“The Cooper pairs come dancing near these impurities and they’re like these crotchety old people yelling and shaking their fists at these young dancers,” Yapa explained.

Yapa’s entry to the annual contest swept past 49 other contestants to earn him the win. The competition is sponsored by Science magazine and the American Association for the Advancement of Science.

Congratulations to Pramodh Senarath Yapa.

An artistic feud over the blackest black (a coating material)

This artistic feud has its roots in a nanotechnology-enabled coating material known as Vantablack. Surrey NanoSystems in the UK sent me an announcement which I featured here in a March 14, 2016 posting. About one month later (in an April 16, 2016 posting regarding risks and an artistic controversy), I recounted the story of the controversy, which resulted from the company’s exclusive deal with artist Sir Anish Kapoor (scroll down the post about 60% of the way to ‘Anish Kapoor and his exclusive rights to Vantablack’).

Apparently, the controversy led to an artistic feud between artists Stuart Semple and Kapoor. Outraged by the notion that only Kapoor could have access to the world’s blackest black, Semple created the world’s pinkest pink and stipulated that any artist in the world could have access to this colour—except Anish Kapoor.

Kapoor’s response can be seen in a January 30, 2019 article by Sarah Cascone for artnet.com,

… Semple started selling what he called “the world’s pinkest pink, available to anyone who wasn’t Kapoor.”

“I wanted to make a point about elitism and self-expression and the fact that everybody should be able to make art,” Semple said. But within weeks, “tragedy struck. Anish Kapoor got our pink! And he dipped his middle finger in it and put a picture on Instagram!”

[downloaded from http://www.artlyst.com/wp-content/uploads/2016/10/anish-kapoor-pink-1200x600_c.jpg]

Cascone’s article, which explores the history of the feud in greater detail, also announces the latest installment (Note: Links have been removed),

In the battle over artistic access to the world’s blackest blacks, Stuart Semple isn’t backing down. The British artist, who took exception to Anish Kapoor’s exclusive contract to use Vantablack, the world’s blackest black substance, just launched a Kickstarter to produce a super dark paint of his own—and it has now been fully funded.

Jesus Diaz’s February 1, 2019 article for Fast Company provides some general technical details (Note: A link has been removed),

… Semple decided to team up with paint makers and about 1,000 artists to develop and test a competitor to Vantablack. His first version, Black 2.0, wasn’t quite as black as Vantablack, since it only absorbed 95% of the visible light (Vantablack absorbs about 99%).

Now, Black 3.0 is out and available on Kickstarter for about $32 per 150ml tube. According to Semple, it is the blackest, mattest, flattest acrylic paint available on the planet, capturing up to 99% of all the visible spectrum radiation. The paint is based on a new pigment called Black Magick, whose exact composition they aren’t disclosing. Black 3.0 is made up of this pigment, combined with a custom acrylic polymer. Semple and his colleagues claim that the polymer “is special because it has more available bonds than any other acrylic polymer being used in paints,” allowing more pigment density. The paint is then finished with what they claim are new “nano-mattifiers,” which remove any shine from the paint. Unlike Vantablack, the resulting paint is soluble in water and nontoxic. [emphasis mine]

I wonder what a ‘nano-mattifier’ might be. Regardless, I’m glad to see this new black is (with a nod to my April 16, 2016 posting about risks and this artistic controversy) nontoxic.

Semple’s ‘blackest black paint’ Kickstarter campaign can be found here. It ends on March 22, 2019 at 1:01 am PDT. The goal is $42,755 in Canadian dollars (CAD) and, as I write this, they currently have $473,062 CAD in pledges.

I don’t usually embed videos that run over 5 mins. but Stuart Semple is very appealing in at least two senses of the word,

Thin-film electronic stickers for the Internet of Things (IoT)

This research from Purdue University (Indiana, US) and the University of Virginia (US) increases and improves the interactivity between objects in what’s called the Internet of Things (IoT).

Caption: Electronic stickers can turn ordinary toy blocks into high-tech sensors within the ‘internet of things.’ Credit: Purdue University image/Chi Hwan Lee

From a July 16, 2018 news item on ScienceDaily,

Billions of objects ranging from smartphones and watches to buildings, machine parts and medical devices have become wireless sensors of their environments, expanding a network called the “internet of things.”

As society moves toward connecting all objects to the internet — even furniture and office supplies — the technology that enables these objects to communicate and sense each other will need to scale up.

Researchers at Purdue University and the University of Virginia have developed a new fabrication method that makes tiny, thin-film electronic circuits peelable from a surface. The technique not only eliminates several manufacturing steps and the associated costs, but also allows any object to sense its environment or be controlled through the application of a high-tech sticker.

Eventually, these stickers could also facilitate wireless communication. …

A July 16, 2018 Purdue University news release (also on EurekAlert), which originated the news item, explains more,

“We could customize a sensor, stick it onto a drone, and send the drone to dangerous areas to detect gas leaks, for example,” said Chi Hwan Lee, Purdue assistant professor of biomedical engineering and mechanical engineering.

Most of today’s electronic circuits are individually built on their own silicon “wafer,” a flat and rigid substrate. The silicon wafer can then withstand the high temperatures and chemical etching that are used to remove the circuits from the wafer.

But high temperatures and etching damage the silicon wafer, forcing the manufacturing process to accommodate an entirely new wafer each time.

Lee’s new fabrication technique, called “transfer printing,” cuts down manufacturing costs by using a single wafer to build a nearly infinite number of thin films holding electronic circuits. Instead of high temperatures and chemicals, the film can peel off at room temperature with the energy-saving help of simply water.

“It’s like the red paint on San Francisco’s Golden Gate Bridge – paint peels because the environment is very wet,” Lee said. “So in our case, submerging the wafer and completed circuit in water significantly reduces the mechanical peeling stress and is environmentally-friendly.”

A ductile metal layer, such as nickel, inserted between the electronic film and the silicon wafer, makes the peeling possible in water. These thin-film electronics can then be trimmed and pasted onto any surface, granting that object electronic features.

Putting one of the stickers on a flower pot, for example, made that flower pot capable of sensing temperature changes that could affect the plant’s growth.

Lee’s lab also demonstrated that the components of electronic integrated circuits work just as well before and after they were made into a thin film peeled from a silicon wafer. The researchers used one film to turn on and off an LED light display.

“We’ve optimized this process so that we can delaminate electronic films from wafers in a defect-free manner,” Lee said.

This technology holds a non-provisional U.S. patent. The work was supported by the Purdue Research Foundation, the Air Force Research Laboratory (AFRL-S-114-054-002), the National Science Foundation (NSF-CMMI-1728149) and the University of Virginia.

The researchers have provided a video,

Here’s a link to and a citation for the paper,

Wafer-recyclable, environment-friendly transfer printing for large-scale thin-film nanoelectronics by Dae Seung Wie, Yue Zhang, Min Ku Kim, Bongjoong Kim, Sangwook Park, Young-Joon Kim, Pedro P. Irazoqui, Xiaolin Zheng, Baoxing Xu, and Chi Hwan Lee.
PNAS July 16, 2018 201806640 DOI: https://doi.org/10.1073/pnas.1806640115
published ahead of print July 16, 2018

This paper is behind a paywall.

Dexter Johnson provides some context in his July 25, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: A link has been removed,

The Internet of Things (IoT), the interconnection of billions of objects and devices that will be communicating with each other, has been the topic of many futurists’ projections. However, getting the engineering sorted out with the aim of fully realizing the myriad visions for IoT is another story. One key issue to address: How do you get the electronics onto these devices efficiently and economically?

A team of researchers from Purdue University and the University of Virginia has developed a new manufacturing process that could make equipping a device with all the sensors and other electronics that will make it Internet capable as easily as putting a piece of tape on it.

… this new approach makes use of a water environment at room temperature to control the interfacial debonding process. This allows clean, intact delamination of prefabricated thin film devices when they’re pulled away from the original wafer.

The use of mechanical peeling in water rather than etching solution provides a number of benefits in the manufacturing scheme. Among them are simplicity, controllability, and cost effectiveness, says Chi Hwan Lee, assistant professor at Purdue University and coauthor of the paper chronicling the research.

If you have the time, do read Dexter’s piece. He always adds something that seems obvious in retrospect but wasn’t until he wrote it.

If only AI had a brain (a Wizard of Oz reference?)

The title, which I’ve borrowed from the news release, is the only Wizard of Oz reference that I can find but it works so well, you don’t really need anything more.

Moving onto the news, a July 23, 2018 news item on phys.org announces new work on developing an artificial synapse (Note: A link has been removed),

Digital computation has rendered nearly all forms of analog computation obsolete since as far back as the 1950s. However, there is one major exception that rivals the computational power of the most advanced digital devices: the human brain.

The human brain is a dense network of neurons. Each neuron is connected to tens of thousands of others, and they use synapses to fire information back and forth constantly. With each exchange, the brain modulates these connections to create efficient pathways in direct response to the surrounding environment. Digital computers live in a world of ones and zeros. They perform tasks sequentially, following each step of their algorithms in a fixed order.

A team of researchers from Pitt’s [University of Pittsburgh] Swanson School of Engineering have developed an “artificial synapse” that does not process information like a digital computer but rather mimics the analog way the human brain completes tasks. Led by Feng Xiong, assistant professor of electrical and computer engineering, the researchers published their results in the recent issue of the journal Advanced Materials (DOI: 10.1002/adma.201802353). His Pitt co-authors include Mohammad Sharbati (first author), Yanhao Du, Jorge Torres, Nolan Ardolino, and Minhee Yun.

A July 23, 2018 University of Pittsburgh Swanson School of Engineering news release (also on EurekAlert), which originated the news item, provides further information,

“The analog nature and massive parallelism of the brain are partly why humans can outperform even the most powerful computers when it comes to higher order cognitive functions such as voice recognition or pattern recognition in complex and varied data sets,” explains Dr. Xiong.

An emerging field called “neuromorphic computing” focuses on the design of computational hardware inspired by the human brain. Dr. Xiong and his team built graphene-based artificial synapses in a two-dimensional honeycomb configuration of carbon atoms. Graphene’s conductive properties allowed the researchers to finely tune its electrical conductance, which is the strength of the synaptic connection or the synaptic weight. The graphene synapse demonstrated excellent energy efficiency, just like biological synapses.

In the recent resurgence of artificial intelligence, computers can already replicate the brain in certain ways, but it takes about a dozen digital devices to mimic one analog synapse. The human brain has hundreds of trillions of synapses for transmitting information, so building a brain with digital devices is seemingly impossible, or at the very least, not scalable. Xiong Lab’s approach provides a possible route for the hardware implementation of large-scale artificial neural networks.

According to Dr. Xiong, artificial neural networks based on the current CMOS (complementary metal-oxide semiconductor) technology will always have limited functionality in terms of energy efficiency, scalability, and packing density. “It is really important we develop new device concepts for synaptic electronics that are analog in nature, energy-efficient, scalable, and suitable for large-scale integrations,” he says. “Our graphene synapse seems to check all the boxes on these requirements so far.”

With graphene’s inherent flexibility and excellent mechanical properties, these graphene-based neural networks can be employed in flexible and wearable electronics to enable computation at the “edge of the internet”–places where computing devices such as sensors make contact with the physical world.

“By empowering even a rudimentary level of intelligence in wearable electronics and sensors, we can track our health with smart sensors, provide preventive care and timely diagnostics, monitor plant growth and identify possible pest issues, and regulate and optimize the manufacturing process–significantly improving the overall productivity and quality of life in our society,” Dr. Xiong says.

The development of an artificial brain that functions like the analog human brain still requires a number of breakthroughs. Researchers need to find the right configurations to optimize these new artificial synapses. They will need to make them compatible with an array of other devices to form neural networks, and they will need to ensure that all of the artificial synapses in a large-scale neural network behave in the same exact manner. Despite the challenges, Dr. Xiong says he’s optimistic about the direction they’re headed.

“We are pretty excited about this progress since it can potentially lead to the energy-efficient, hardware implementation of neuromorphic computing, which is currently carried out in power-intensive GPU clusters. The low-power trait of our artificial synapse and its flexible nature make it a suitable candidate for any kind of A.I. device, which would revolutionize our lives, perhaps even more than the digital revolution we’ve seen over the past few decades,” Dr. Xiong says.
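The ‘conductance as synaptic weight’ idea in the news release is easier to picture with a toy sketch. In an analog synapse array, each input voltage is multiplied by a conductance (Ohm’s law) and the resulting currents simply add on a shared wire (Kirchhoff’s law), so a whole weighted sum happens in one physical step; ‘learning’ is then just nudging the conductances up or down. The Python below is my own illustration with made-up numbers and a made-up tuning rule, not the Pitt group’s device model.

# Toy picture of an analog synapse column: conductance = synaptic weight.
# Currents into a shared wire add, so one column computes a weighted sum in a
# single physical step instead of many digital multiply-adds. The values and
# the tuning rule are illustrative, not the graphene device from the paper.

def column_current(voltages, conductances):
    """Ohm's law per synapse (I = G * V), Kirchhoff's law for the sum."""
    return sum(g * v for g, v in zip(conductances, voltages))

def tune(conductance, delta, g_min=0.0, g_max=1.0):
    """Nudge a synaptic weight up or down, clipped to the device's range."""
    return min(g_max, max(g_min, conductance + delta))

inputs = [0.2, 0.8, 0.5]    # input voltages ('pre-synaptic activity')
weights = [0.1, 0.9, 0.4]   # conductances ('synaptic weights')

print(column_current(inputs, weights))   # one analog multiply-accumulate

# 'Learning' is then just electrochemically tuning each conductance a little.
weights = [tune(g, +0.05) for g in weights]
print(column_current(inputs, weights))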

There is a visual representation of this artificial synapse,

Caption: Pitt engineers built a graphene-based artificial synapse in a two-dimensional, honeycomb configuration of carbon atoms that demonstrated excellent energy efficiency comparable to biological synapses. Credit: Swanson School of Engineering

Here’s a link to and a citation for the paper,

Low‐Power, Electrochemically Tunable Graphene Synapses for Neuromorphic Computing by Mohammad Taghi Sharbati, Yanhao Du, Jorge Torres, Nolan D. Ardolino, Minhee Yun, Feng Xiong. Advanced Materials DOI: https://doi.org/10.1002/adma.201802353 First published [online]: 23 July 2018

This paper is behind a paywall.

I did look at the paper and if I understand it rightly, this approach is different from the memristor-based approaches that I have so often featured here. More than that I cannot say.

Finally, the Wizard of Oz song ‘If I Only Had a Brain’,

AI (artificial intelligence) text generator, too dangerous to release?

Could this latest version of OpenAI’s text generator be so good that it would fool you? And, following on that thought, could the concomitant reluctance to release the research be real or is it a publicity stunt? Here’s a sample of the text from the GPT2 AI model from a February 15, 2019 article by Mark Frauenfelder for Boing Boing,

Recycling is good for the world.
NO! YOU COULD NOT BE MORE WRONG!!
MODEL COMPLETION (MACHINE-WRITTEN, 25 TRIES)
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials. Everything from the raw materials (wood, cardboard, paper, etc.), to the reagents (dyes, solvents, etc.) to the printing equipment (chemicals, glue, paper, ink, etc.), to the packaging, to the packaging materials (mercury, chemicals, etc.) to the processing equipment (heating, cooling, etc.), to the packaging materials, to the packaging materials that are shipped overseas and to the packaging materials that are used in the United States. Each step along the way creates tons of waste that we constantly have to clean up. The process of making a paper product is a very wasteful one. But the end result is something that all of us need to consume. And if we want to keep the recycling process running efficiently, then we really need to think about each and every step that goes into making a paper product.

The first few sentences don’t work for me but once the discussion turns to making paper products, then it becomes more convincing to me. As to whether the company’s reluctance to release the research is genuine or a publicity stunt, I don’t know. However, there was a fair degree of interest in GPT2 after the decision.

From a February 14, 2019 article by Alex Hern for the Guardian,

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: …
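GPT2 itself wasn’t public when these articles appeared, but OpenAI later released smaller versions of the model. For anyone curious what ‘feed it an opening line’ looks like in code, here’s a minimal Python sketch that assumes the later-released small GPT-2 checkpoint available through the Hugging Face transformers library; the sampling settings are arbitrary and the continuation will differ on every run.

# Sketch only: prompts the small GPT-2 model that OpenAI released after this was
# written, via the Hugging Face 'transformers' library (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("It was a bright cold day in April, "
          "and the clocks were striking thirteen")
inputs = tokenizer(prompt, return_tensors="pt")

# Ask the model to continue the passage for a few dozen tokens.
output_ids = model.generate(
    **inputs,
    max_length=80,                         # prompt plus continuation, in tokens
    do_sample=True,                        # sample rather than always taking the likeliest word
    top_k=50,                              # restrict sampling to the 50 most likely next tokens
    pad_token_id=tokenizer.eos_token_id,   # silences a padding warning for GPT-2
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))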

Sean Gallagher’s February 15, 2019 posting on the ars Technica blog provides some insight that’s partially written in a style sometimes associated with gossip (Note: Links have been removed),

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal “mafia”—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. [emphasis mine] Brockman now serves as OpenAI’s CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for “fake news” publishers and potentially spread misinformation and undermine public debate, GPT-2’s output certainly qualifies as concerning. Unlike other text generation “bot” models, such as those based on Markov chain algorithms, the GPT-2 “bot” did not lose track of what it was writing about as it generated output, keeping everything in context.

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student’s report on the causes of the US Civil War.

Each matched the style of the genre from the writing prompt, including manufacturing quotes from sources. In other samples, GPT-2 generated a rant about why recycling is bad, a speech written by John F. Kennedy’s brain transplanted into a robot (complete with footnotes about the feat itself), and a rewrite of a scene from The Lord of the Rings.

While the model required multiple tries to get a good sample, GPT-2 generated “good” results based on “how familiar the model is with the context,” the researchers wrote. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”

There were some weak spots encountered in GPT-2’s word modeling—for example, the researchers noted it sometimes “writes about fires happening under water.” But the model could be fine-tuned to specific tasks and perform much better. “We can fine-tune GPT-2 on the Amazon Reviews dataset and use this to let us write reviews conditioned on things like star rating and category,” the authors explained.

James Vincent’s February 14, 2019 article for The Verge offers a deeper dive into the world of AI text agents and what makes GPT2 so special (Note: Links have been removed),

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs …

To put this work into context, it’s important to understand how challenging the task of language modeling really is. If I asked you to predict the next word in a given sentence — say, “My trip to the beach was cut short by bad __” — your answer would draw upon a range of knowledge. You’d consider the grammar of the sentence and its tone but also your general understanding of the world. What sorts of bad things are likely to ruin a day at the beach? Would it be bad fruit, bad dogs, or bad weather? (Probably the latter.)

Despite this, programs that perform text prediction are quite common. You’ve probably encountered one today, in fact, whether that’s Google’s AutoComplete feature or the Predictive Text function in iOS. But these systems are drawing on relatively simple types of language modeling, while algorithms like GPT-2 encode the same information in more complex ways.

The difference between these two approaches is technically arcane, but it can be summed up in a single word: depth. Older methods record information about words in only their most obvious contexts, while newer methods dig deeper into their multiple meanings.

So while a system like Predictive Text only knows that the word “sunny” is used to describe the weather, newer algorithms know when “sunny” is referring to someone’s character or mood, when “Sunny” is a person, or when “Sunny” means the 1976 smash hit by Boney M.

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.
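To make the contrast with the ‘older methods’ concrete, here is about the shallowest language model you can write: a bigram table that predicts the next word purely from the single word before it. The toy corpus below is invented for illustration; a deep model like GPT-2 replaces these counts with a neural network conditioned on far more context, which is what lets it tell weather-sunny from a person named Sunny.

# A deliberately shallow 'older method': a bigram model that predicts the next
# word from the one word before it. The toy corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "my trip to the beach was cut short by bad weather . "
    "the beach was sunny . "
    "sunny weather makes a good trip ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
tokens = corpus.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("bad"))     # 'weather'
print(predict_next("beach"))   # 'was' -- the only follower ever seen here
# This model has no deeper context: 'sunny' is just a word that sometimes
# follows 'was', with no way to tell the weather from a person named Sunny.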

Hern’s article for the Guardian (February 14, 2019 article) acts as a good overview, while Gallagher’s ars Technica* posting (February 15, 2019 posting) and Vincent’s article (February 14, 2019 article) for The Verge take you progressively deeper into the world of AI text agents.

For anyone who wants to dig down even further, there’s a February 14, 2019 posting on OpenAI’s blog.

*’ars Technical’ corrected to read ‘ars Technica’ on February 18, 2021.

Ethiopia’s new species of puddle frog and an update on Romeo, the last Sehuencas water frog

It seems to be my week for being a day late. Here’s my Valentine’s Day (February 14, 2019) celebration posting. I’ve got two frog stories, news of a dating app for animals, and a bonus (not a frog story) at the end.

Ethiopia

For the last few years I’ve been getting stories about new frog species in Central and South America. This one marks a change of geography. From a February 12, 2019 news item on ScienceDaily,

A new species of puddle frog (order: Anura, family: Phrynobatrachidae, genus: Phrynobatrachus), has just been discovered at the unexplored and isolated Bibita Mountain in southwestern Ethiopia. The research team named the new species Phrynobatrachus bibita sp. nov., or Bibita Mountain dwarf puddle frog, inspired by its home.

A new species of puddle frog (female Phrynobatrachus bibita sp. nov.) from an unexplored mountain in southwestern Ethiopia. Credit: Courtesy NYU Abu Dhabi researchers S. Goutte and J. Reyes-Velasco.

Here’s more from a February 13, 2019 New York University Abu Dhabi press release (also on EurekAlert), which originated the news item (Note: I have reformatted parts of the following press release),

In summer 2018, NYU Abu Dhabi Postdoctoral Associates Sandra Goutte and Jacobo Reyes-Velasco explored an isolated mountain in southwestern Ethiopia where some of the last primary forest of the country remains. Bibita Mountain was under the radars of the team for several years due to its isolation and because no other zoologist had ever explored it before.

“Untouched, isolated, and unexplored”

“It had all the elements to spike our interest,” says Dr. Reyes-Velasco, who initiated the exploration of the mountain. “We tried to reach Bibita in a previous expedition in 2016 without success. Last summer, we used a different route that brought us to higher elevation,” he added.

Their paper, published in the journal ZooKeys, reports that the new, tiny frog (17 mm for males and 20 mm for females) is unique among Ethiopian puddle frogs. Among other morphological features, a slender body with long legs, elongated fingers and toes, and a golden coloration set this frog apart from its closest relatives. “When we looked at the frogs, it was obvious that we had found a new species, they look so different from any Ethiopian species we had ever seen before!” explains Dr. Goutte.

Back in NYU Abu Dhabi, the research team sequenced tissue samples from the new species and discovered that Phrynobatrachus bibita sp. nov. is genetically different from any frog species in the region.

“The discovery of such a genetically distinct species in only a couple of days in this mountain is the perfect demonstration of how important it is to assess the biodiversity of this type of places. The Bibita Mountain probably has many more unknown species that await our discovery; it is essential for biologists to discover them in order to protect them and their habitat properly,” explains NYU Abu Dhabi Program Head of Biology and the paper’s lead researcher Stéphane Boissinot, who has been working on Ethiopian frogs since 2010.

About NYU Abu Dhabi

NYU Abu Dhabi is the first comprehensive liberal arts and science campus in the Middle East to be operated abroad by a major American research university. NYU Abu Dhabi has integrated a highly-selective liberal arts, engineering and science curriculum with a world center for advanced research and scholarship enabling its students to succeed in an increasingly interdependent world and advance cooperation and progress on humanity’s shared challenges. NYU Abu Dhabi’s high-achieving students have come from 120 nations and speak over 120 languages. Together, NYU’s campuses in New York, Abu Dhabi, and Shanghai form the backbone of a unique global university, giving faculty and students opportunities to experience varied learning environments and immersion in other cultures at one or more of the numerous study-abroad sites NYU maintains on six continents.

These are very small frogs, with males growing to about 17 mm (0.6 inches) and females growing up to 20 mm (0.8 inches).

Here’s a link to and a citation for the paper,

A new species of puddle frog from an unexplored mountain in southwestern Ethiopia (Anura, Phrynobatrachidae, Phrynobatrachus) by Sandra Goutte, Jacobo Reyes-Velasco, and Stéphane Boissinot. ZooKeys, 2019; 824: 53-70. DOI: 10.3897/zookeys.824.31570 (12 Feb 2019)

This paper appears to be open access.

Bolivia

First, here’s some background information. I wrote about Romeo, the Sehuencas water frog, last year in my July 26, 2018 posting: ‘Emergency!!! Lonely heart looking for love: Female. Stocky build. Height of 2 – 3 inches,’

“(Matias Careaga) [downloaded from https://www.smithsonianmag.com/smart-news/scientists-made-matchcom-profile-bolivias-loneliest-frog-180968140/]

That is a very soulful look. How could any female Sehuencas water frog resist it? Sadly, that’s the problem. They haven’t found any female Sehuencas water frogs yet.

It’s not for want of trying. Back in February 2018, worldwide interest was raised when scientists at the Cochabamba Natural History Museum (Bolivia) started a campaign to find a mate and raise funds for a search. …”

Happily, I stumbled on this January 17, 2019 New York Times article by JoAnna Klein for the latest about Romeo,

Romeo was made for love, as all animals are. But for years he couldn’t find it. It’s not like there was anything wrong with Romeo. Sure he’s shy, eats worms, lacks eyelashes and is 10 years old, at least. But he’s aged well, and he’s kind of a special guy.

Romeo is a Sehuencas water frog, once thought to be the last one on the planet. He lives alone in a tank at the Museo de Historia Natural Alcide d’Orbigny in Bolivia.

A deadly fungal disease threatens his species and other frogs in the cloud forest where he was found a decade ago. When researchers brought him to the museum’s conservation breeding center, they expected to find another frog he could mate with and save the species from extinction. But they searched stream after stream, and nothing.


He needed a match before he croaked, so last year conservation groups partnered to create a Match.com profile for him. People related to Romeo’s romantic struggles, and on Valentine’s Day last year, the company and his fans raised $25,000 to send an expedition team out to the cloud forest to find his Juliet.

And for all the lonely lovers searching for that special someone, Teresa Camacho Badani, a herpetologist at the museum who found Juliet [emphasis mine], has another message: “Never give up searching for that happy ending.”

Here is Juliet,

Photo of Juliet by Robin Moore, Global Wildlife Conservation [downloaded from https://www.globalwildlife.org/press-room/lonely-no-more-romeo-the-sehuencas-water-frog-finds-love/]

If you don’t have much time, Klein’s article offers an engaging look at the successful expedition. For anyone who might like to keep digging, I have more. First, a video,


Global Wildlife Conservation has a January 15, 2019 posting (where I found the video) by Lindsay Renick Mayer which offers more detail via a Q&A (questions and answers) interview with Teresa Camacho Badani, the herpetologist who found Juliet. Here’s an excerpt to whet your appetite,

Q. What was the habitat like where you found the frogs?
A. It is a well-preserved cloud forest where the climate is rainy, foggy and humid because of the streams, which are less than a meter in width with currents that form waterfalls, and ponds that are not very deep. Other biologists had looked here for the frog, even last year, with no success. We selected this spot after months of doing an analysis of historic records of where the species had originally been found—most of which have since been destroyed. Field evidence suggests that the frog is very, very rare and there are likely few left in the wild. And because it was clear that the threats to the frogs were so close in proximity—the streams around us were empty—we decided to rescue all five of these individuals for the conservation breeding program.


Q. What happens to these five frogs next?
A. Right now they’re in quarantine at the K’ayara Center at the museum, where they are starting to acclimate to their new home. We’ll make sure they have the same quality of water and temperature as in the field. After they are used to their new habitat and they’re eating well, we will give them a preventive treatment for the deadly infectious disease, chytridiomycosis. We do not want Romeo to get sick on his first date! [emphasis mine] When the treatment is finished, we can finally give Romeo what we hope is a romantic encounter with his Juliet.

The Global Wildlife Conservation’s January 15, 2019 press release offers still more information,

“It is an incredible feeling to know that thanks to everyone who believes in true love and donated for Valentine’s Day last year [2018], we have already found a mate for Romeo and can establish a conservation breeding program with more than a single pair,” said Teresa Camacho Badani, the museum’s chief of herpetology and the expedition leader. “Now the real work begins—we know how to successfully care for this species in captivity, but now we will learn about its reproduction, while also getting back into the field to better understand if any more frogs may be left and if so, how many, where they are, and more about the threats they face. With this knowledge we can develop strategies to mitigate the threats to the species’ habitat, while working on a long-term plan to return Romeo’s future babies to their wild home, preventing the extinction of the Sehuencas water frog.”

These are the first Sehuencas water frogs that biologists have seen in the wild in a decade, though over the years (including in 2018) scientists had searched this area for the species with no success. This team, which had done careful analysis ahead of time to determine the best places to look for the frogs, still didn’t encounter the Sehuencas water frog until after failing for a few long days to find any frogs of any species in what seemed like perfect amphibian habitat—a well-protected stream in the Bolivian wilderness. …

The scientists are hoping for more money (from Global Wildlife Conservation’s January 15, 2019 press release),

Romeo became an international celebrity on Valentine’s Day in 2018 with a dating profile on Match, the world’s largest dating company. Now he is a powerful flagship for conservation in Bolivia. These expeditions were made possible by the individuals in more than 32 countries who made donations last year that were matched by Match for a total of $25,000.

“Our entire Match community rallied behind Romeo and his search for love last year,” said Hesam Hosseini, CEO of Match. “We’re thrilled with this outcome for Romeo and his species. He now joins the list of millions of ‘members’ who have found meaningful relationships on Match.”

Romeo’s followers can continue to cheer on him and his species by making a donation to support these conservation efforts. They can also stay up to date on these expeditions and other news about the most eligible bachelor through GWC’s blog, mailing list and social media platforms (Facebook, Twitter and Instagram) and the Alcide d’Orbigny Natural History Museum’s Facebook page. Romeo has also now taken to Twitter to share his thoughts on dating, love and romance.

Animal dating apps

Do check out Romeo’s Twitter feed. You may find something appealing such as this link to a February 14, 2019 news item on the News for Kids blog which discusses dating apps for animals. Romeo’s story is recounted and then there’s this about an app for farm animals,

In the United Kingdom a company called Hectare has come up with “Tudder” – an unusual way for farm animals to find partners.

Tudder is a “dating” app which allows farmers to easily find mates for their cows and bulls. Farmers can post pictures of their animals to the app, and swipe through pictures and descriptions to see other animals in need of a mate.

Tudder may sound a bit silly, but farmers say it saves them time and money because they don’t have to travel with their animals to find them a mate.

Funny thing is, I was wondering about Romeo just the other day, so thanks are owed to the Beakerhead Twitter feed, where I stumbled across the Romeo update. Thank you.

Bonus

I have two furry bonuses. First, the cats,

The excerpt is from the CBC’s (Canadian Broadcasting Corporation) February 15, 2019 article by Devon Murphy about ‘Catwalk: Tales From The Cat Show Circuit’, a CBC documentary,

Her hair is perfect, freshly washed, blow-dried, and combed, and her eyes are shining. She’s ready to compete and is calm as the judge approaches. Then, he takes a feather and twitches it in front of her face, and she turns on her back, furry stomach exposed, and bats at it with her immaculate paws.

Now for the pièce de résistance. Thank you to LaineyGossip (fifth paragraph) for this moment of “pure joy”,

That dog knows she’s a champion, whether or not she’s the fastest on the course. On February 10, 2019, she was a furry streak of lightning … in the 8″ division of the Westminster Dog Show’s Masters Agility Championship competition. Belated Happy Valentine’s Day.

Emotional robots

This is some very intriguing work,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

A July 16, 2018 Cornell University news release on EurekAlert offers more insight into the work,

Cornell University researchers have developed a prototype of a robot that can express “emotions” through changes in its outer surface. The robot’s skin covers a grid of texture units whose shapes change based on the robot’s feelings.

Assistant professor of mechanical and aerospace engineering Guy Hoffman, who has given a TEDx talk on “Robots with ‘soul'” said the inspiration for designing a robot that gives off nonverbal cues through its outer skin comes from the animal world, based on the idea that robots shouldn’t be thought of in human terms.

“I’ve always felt that robots shouldn’t just be modeled after humans or be copies of humans,” he said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

Their work is detailed in a paper, “Soft Skin Texture Modulation for Social Robots,” presented at the International Conference on Soft Robotics in Livorno, Italy. Doctoral student Yuhan Hu was lead author; the paper was featured in IEEE Spectrum, a publication of the Institute of Electrical and Electronics Engineers.

Hoffman and Hu’s design features an array of two shapes, goosebumps and spikes, which map to different emotional states. The actuation units for both shapes are integrated into texture modules, with fluidic chambers connecting bumps of the same kind.

The team tried two different actuation control systems, with minimizing size and noise level a driving factor in both designs. “One of the challenges,” Hoffman said, “is that a lot of shape-changing technologies are quite loud, due to the pumps involved, and these make them also quite bulky.”

Hoffman does not have a specific application for his robot with texture-changing skin mapped to its emotional state. At this point, just proving that this can be done is a sizable first step. “It’s really just giving us another way to think about how robots could be designed,” he said.

Future challenges include scaling the technology to fit into a self-contained robot – whatever shape that robot takes – and making the technology more responsive to the robot’s immediate emotional changes.

“At the moment, most social robots express [their] internal state only by using facial expressions and gestures,” the paper concludes. “We believe that the integration of a texture-changing skin, combining both haptic [feel] and visual modalities, can thus significantly enhance the expressive spectrum of robots for social interaction.”
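The mapping from internal state to skin texture is the part that is easiest to picture in software. Here is a purely illustrative sketch in Python; the state names and actuation parameters are hypothetical and are not taken from Hoffman and Hu’s paper, which describes the pneumatic hardware rather than any particular control code.

from dataclasses import dataclass
from enum import Enum

class Texture(Enum):
    GOOSEBUMPS = "goosebumps"
    SPIKES = "spikes"

@dataclass
class SkinCommand:
    texture: Texture
    inflation: float  # 0.0 (flat skin) to 1.0 (fully raised)
    pulse_hz: float   # how quickly the texture rises and falls

# Hypothetical mapping of internal states to expressive skin textures
EXPRESSION_MAP = {
    "calm":    SkinCommand(Texture.GOOSEBUMPS, inflation=0.2, pulse_hz=0.2),
    "excited": SkinCommand(Texture.GOOSEBUMPS, inflation=0.8, pulse_hz=1.5),
    "angry":   SkinCommand(Texture.SPIKES, inflation=1.0, pulse_hz=0.5),
}

def express(state: str) -> SkinCommand:
    # Fall back to a flat, neutral skin for states the map doesn't cover
    return EXPRESSION_MAP.get(state, SkinCommand(Texture.GOOSEBUMPS, 0.0, 0.0))

print(express("angry"))

The appeal of keeping the mapping this explicit is that the texture becomes another output channel, like a facial expression or a gesture, that can be tuned independently of whatever is generating the robot’s internal state.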

A video helps to explain the work,

I don’t consider ‘sleepy’ to be an emotional state but, as noted earlier, this is intriguing. You can find out more in a July 9, 2018 article by Tom Fleischman for the Cornell Chronicle (Note: the news release was fashioned from this article, so you will find some redundancy should you read it in its entirety),

In 1872, Charles Darwin published his third major work on evolutionary theory, “The Expression of the Emotions in Man and Animals,” which explores the biological aspects of emotional life.

In it, Darwin writes: “Hardly any expressive movement is so general as the involuntary erection of the hairs, feathers and other dermal appendages … it is common throughout three of the great vertebrate classes.” Nearly 150 years later, the field of robotics is starting to draw inspiration from those words.

“The aspect of touch has not been explored much in human-robot interaction, but I often thought that people and animals do have this change in their skin that expresses their internal state,” said Guy Hoffman, assistant professor and Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering (MAE).

Inspired by this idea, Hoffman and students in his Human-Robot Collaboration and Companionship Lab have developed a prototype of a robot that can express “emotions” through changes in its outer surface. …

Part of our relationship with other species is our understanding of the nonverbal cues animals give off – like the raising of fur on a dog’s back or a cat’s neck, or the ruffling of a bird’s feathers. Those are unmistakable signals that the animal is somehow aroused or angered; the fact that they can be both seen and felt strengthens the message.

“Yuhan put it very nicely: She said that humans are part of the family of species, they are not disconnected,” Hoffman said. “Animals communicate this way, and we do have a sensitivity to this kind of behavior.”

You can find the paper presented at the International Conference on Soft Robotics in Livorno, Italy, ‘Soft Skin Texture Modulation for Social Robotics’ by Yuhan Hu, Zhengnan Zhao, Abheek Vimal, and Guy Hoffman, here.