A molecule that protects plants from overexposure to harmful sunlight thanks to its flamenco-style twist could form the basis for a new longer-lasting sunscreen, chemists at the University of Warwick have found, in collaboration with colleagues in France and Spain. Research on the green molecule by the scientists has revealed that it absorbs ultraviolet light and then disperses it in a ‘flamenco-style’ dance, making it ideal for use as a UV filter in sunscreens.
The team of scientists report today, Friday 18th October 2019, in the journal Nature Communications that, as well as being plant-inspired, this molecule is also among a small number of suitable substances that are effective in absorbing light in the Ultraviolet A (UVA) region of wavelengths. It opens up the possibility of developing a naturally-derived and eco-friendly sunscreen that protects against the full range of harmful wavelengths of light from the sun.
The UV filters in a sunscreen are the ingredients that predominantly provide the protection from the sun’s rays. In addition to UV filters, sunscreens will typically also include:
- Emollients, used for moisturising and lubricating the skin
- Thickening agents
- Emulsifiers, to bind all the ingredients
- Water
- Other components that improve aesthetics, water resistance, etc.
The researchers tested a molecule called diethyl sinapate, a close mimic of a molecule commonly found in the leaves of plants, where it protects them from overexposure to UV light while they absorb visible light for photosynthesis.
They first exposed the molecule to a number of different solvents to determine whether the solvent had any impact on its (principally) light-absorbing behaviour. They then deposited a sample of the molecule on an industry-standard human skin mimic (VITRO-CORNEUM®), where it was irradiated with different wavelengths of UV light. They used the state-of-the-art laser facilities within the Warwick Centre for Ultrafast Spectroscopy to take images of the molecule at extremely high speeds, to observe what happens to the light’s energy in the very early stages (millionths of millionths of a second) after it is absorbed by the molecule. Other techniques were also used to establish longer-term (many hours) properties of diethyl sinapate, such as endocrine disruption activity and antioxidant potential.
Professor Vasilios Stavros from the University of Warwick, Department of Chemistry, who was part of the research team, explains: “A really good sunscreen absorbs light and converts it to harmless heat. A bad sunscreen is one that absorbs light and then, for example, breaks down potentially inducing other chemistry that you don’t want. Diethyl sinapate generates lots of heat, and that’s really crucial.”
When irradiated, the molecule absorbs light and enters an excited state, but that energy then has to be disposed of somehow. The team of researchers observed that it performs a kind of molecular ‘dance’ lasting a mere 10 picoseconds (ten millionths of a millionth of a second): a twist similar in fashion to the filigranas and floreos hand movements of flamenco dancers. That twist returns the molecule to its original ground state, converting the absorbed energy into vibrational energy, or heat.
It is this ‘flamenco dance’ that gives the molecule its long-lasting qualities. When the scientists bombarded the molecule with UVA light they found that it degraded only 3% over two hours, compared to the industry requirement of 30%.
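As a back-of-the-envelope illustration (mine, not the paper’s): if you assume simple first-order photodegradation kinetics, an assumption the study does not make explicitly, the 3% and 30% figures translate into very different decay rates,

```python
import math

def degradation_rate_constant(fraction_remaining: float, hours: float) -> float:
    """First-order rate constant k (per hour) implied by the fraction of filter remaining."""
    return -math.log(fraction_remaining) / hours

def fraction_left(k: float, hours: float) -> float:
    """Fraction of the filter surviving after `hours` of irradiation under first-order decay."""
    return math.exp(-k * hours)

# Diethyl sinapate: 3% degradation over two hours, i.e. 97% remaining
k_sinapate = degradation_rate_constant(0.97, 2.0)

# A filter sitting right at the 30% industry threshold over the same window
k_threshold = degradation_rate_constant(0.70, 2.0)
```

On that (hypothetical) model, a filter at the 30% threshold decays more than ten times faster than diethyl sinapate did in the experiment.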
Dr Michael Horbury, who was a Postgraduate Research Fellow at the University of Warwick when he undertook this research (and is now at the University of Leeds), adds: “We have shown that by studying the molecular dance on such a short time-scale, the information that you gain can have tremendous repercussions on how you design future sunscreens.”

Emily Holt, a PhD student in the Department of Chemistry at the University of Warwick who was part of the research team, said: “The next step would be to test it on human skin, then to mix it with other ingredients that you find in a sunscreen to see how those affect its characteristics.”
Professor Florent Allais and Dr Louis Mouterde, URD Agro-Biotechnologies Industrielles at AgroParisTech (Pomacle, France) commented: “What we have developed together is a molecule based upon a UV-photoprotective molecule found on the surface of plant leaves, refunctionalised using greener synthetic procedures. Indeed, this molecule has excellent long-term properties while exhibiting low endocrine disruption and valuable antioxidant properties.”
Professor Laurent Blasco, Global Technical Manager (Skin Essentials) at Lubrizol and Honorary Professor at the University of Warwick commented: “In sunscreen formulations at the moment there is a lack of broad-spectrum protection from a single UV filter. Our collaboration has gone some way towards developing a next generation broad-spectrum UV filter inspired by nature. Our collaboration has also highlighted the importance of academia and industry working together towards a common goal.”
Professor Vasilios Stavros added, “Amidst escalating concerns about their impact on human toxicity (e.g. endocrine disruption) and ecotoxicity (e.g. coral bleaching), developing new UV filters is essential. We have demonstrated that a highly attractive avenue is ‘nature-inspired’ UV filters, which provide a front-line defence against skin cancer and premature skin aging.”
Here’s a link to and a citation for the paper,
Towards symmetry driven and nature inspired UV filter design by Michael D. Horbury, Emily L. Holt, Louis M. M. Mouterde, Patrick Balaguer, Juan Cebrián, Laurent Blasco, Florent Allais & Vasilios G. Stavros. Nature Communications volume 10, Article number: 4748 (2019) DOI: https://doi.org/10.1038/s41467-019-12719-z
This paper is open access.
Why the high hopes?
Briefly (the long story stretches over 10 years), the most recommended sunscreens today (2020) are ‘mineral-based’. This is painfully amusing because civil society groups (activists) such as Friends of the Earth (in particular the Australia chapter under Georgia Miller’s leadership) and Canada’s own ETC Group had campaigned against these same sunscreens when they were billed as being based on metal oxide nanoparticles such as zinc oxide and/or titanium dioxide. The ETC Group under Pat Roy Mooney’s leadership didn’t press the campaign after an initial push. As for Australia and Friends of the Earth, their anti-metallic oxide nanoparticle sunscreen campaign didn’t work out well, as I noted in a February 9, 2012 posting and in a follow-up October 31, 2012 posting.
The only civil society group to give approval (very reluctantly) was the Environmental Working Group (EWG), as I noted in a July 9, 2009 posting. They had concerns about the fact that these ingredients are metallic, but after a thorough review of the then-available research, EWG gave the sunscreens a passing grade and noted, in their report, that they had more concerns about the use of oxybenzone in sunscreens. That latter concern has since been flagged by others (e.g., the state of Hawai’i), as noted in my July 6, 2018 posting.
So, rebranding metallic oxides as minerals has allowed the various civil society groups to support the very same sunscreens many of them were advocating against.
In the meantime, scientists continue work on developing plant-based sunscreens as an improvement to the ‘mineral-based’ sunscreens used now.
Research on novel nanoelectronics devices led by the University of Southampton enabled brain neurons and artificial neurons to communicate with each other. This study has for the first time shown how three key emerging technologies can work together: brain-computer interfaces, artificial neural networks and advanced memory technologies (also known as memristors). The discovery opens the door to further significant developments in neural and artificial intelligence research.
Brain functions are made possible by circuits of spiking neurons, connected together by microscopic but highly complex links called ‘synapses’. In this new study, published in the journal Scientific Reports, the scientists created a hybrid neural network where biological and artificial neurons in different parts of the world were able to communicate with each other over the internet through a hub of artificial synapses made using cutting-edge nanotechnology. This is the first time the three components have come together in a unified network.
During the study, researchers based at the University of Padova in Italy cultivated rat neurons in their laboratory, whilst partners from the University of Zurich and ETH Zurich created artificial neurons on silicon microchips. The virtual laboratory was brought together via an elaborate setup controlling nanoelectronic synapses developed at the University of Southampton. These synaptic devices are known as memristors.
The Southampton based researchers captured spiking events being sent over the internet from the biological neurons in Italy and then distributed them to the memristive synapses. Responses were then sent onward to the artificial neurons in Zurich also in the form of spiking activity. The process simultaneously works in reverse too; from Zurich to Padova. Thus, artificial and biological neurons were able to communicate bidirectionally and in real time.
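The loop described above (capture spikes, pass them through a memristive synapse, deliver them to an artificial neuron) can be sketched as a toy, single-process simulation. To be clear, this is my own illustrative sketch: the class and function names are invented, the memristor model is a caricature, and the real experiment ran over the internet between three labs using physical devices.

```python
class MemristiveSynapse:
    """Toy two-terminal memristor model: conductance (the synaptic weight)
    is nudged up or down by paired pre/post spikes, loosely mimicking plasticity."""
    def __init__(self, conductance: float = 0.5, rate: float = 0.05):
        self.g = conductance
        self.rate = rate

    def transmit(self, spike: bool) -> float:
        # A presynaptic spike produces a postsynaptic current scaled by conductance.
        return self.g if spike else 0.0

    def update(self, pre: bool, post: bool) -> None:
        # Potentiate on coincident pre/post spikes, depress on lone presynaptic spikes.
        if pre and post:
            self.g = min(1.0, self.g + self.rate)
        elif pre:
            self.g = max(0.0, self.g - self.rate)

def relay(spike_train, synapse, threshold=0.4):
    """Forward a stream of spike events through the synapse to a threshold 'neuron'."""
    out = []
    for pre in spike_train:
        current = synapse.transmit(pre)
        post = current >= threshold   # artificial neuron fires on sufficient input
        synapse.update(pre, post)
        out.append(post)
    return out
```

In this caricature, each coincident pre/post spike pair nudges the synaptic conductance upward, while spikes that fail to drive the downstream neuron depress it, which is roughly the plasticity behaviour the paper attributes to its memristive synapses.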
Themis Prodromakis, Professor of Nanotechnology and Director of the Centre for Electronics Frontiers at the University of Southampton, said: “One of the biggest challenges in conducting research of this kind and at this level has been integrating such distinct cutting-edge technologies and specialist expertise that are not typically found under one roof. By creating a virtual lab we have been able to achieve this.”
The researchers now anticipate that their approach will ignite interest from a range of scientific disciplines and accelerate the pace of innovation and scientific advancement in the field of neural interfaces research. In particular, the ability to seamlessly connect disparate technologies across the globe is a step towards the democratisation of these technologies, removing a significant barrier to collaboration.
Professor Prodromakis added: “We are very excited with this new development. On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI [artificial intelligence] chips.”
I’m fascinated by this work and after taking a look at the paper, I have to say, the paper is surprisingly accessible. In other words, I think I get the general picture. For example (from the Introduction to the paper; citation and link follow further down),
… To emulate plasticity, the memristor MR1 is operated as a two-terminal device through a control system that receives pre- and post-synaptic depolarisations from one silicon neuron (ANpre) and one biological neuron (BN), respectively. …
If I understand this properly, they’ve integrated a biological neuron and an artificial neuron in a single system across three countries.
For those who care to venture forth, here’s a link and a citation for the paper,
Memristive synapses connect brain and silicon spiking neurons by Alexantrou Serb, Andrea Corna, Richard George, Ali Khiat, Federico Rocchi, Marco Reato, Marta Maschietto, Christian Mayr, Giacomo Indiveri, Stefano Vassanelli & Themistoklis Prodromakis. Scientific Reports volume 10, Article number: 2590 (2020) DOI: https://doi.org/10.1038/s41598-020-58831-9 Published 25 February 2020
So we’re still stuck in 20th century concepts about artificial intelligence (AI), eh? Sean Captain’s February 21, 2020 article (for Fast Company) about the new AI exhibit in San Francisco suggests that artists can help us revise our ideas (Note: Links have been removed),
Though we’re well into the age of machine learning, popular culture is stuck with a 20th century notion of artificial intelligence. While algorithms are shaping our lives in real ways—playing on our desires, insecurities, and suspicions in social media, for instance—Hollywood is still feeding us clichéd images of sexy, deadly robots in shows like Westworld and Star Trek Picard.
The old-school humanlike sentient robot “is an important trope that has defined the visual vocabulary around this human-machine relationship for a very long period of time,” says Claudia Schmuckli, curator of contemporary art and programming at the Fine Arts Museums of San Francisco. It’s also a naïve and outdated metaphor, one she is challenging with a new exhibition at San Francisco’s de Young Museum, called Uncanny Valley, that opens on February 22.
The show’s name [Uncanny Valley: Being Human in the Age of AI] is a kind of double entendre referencing both the dated and emerging conceptions of AI. Coined in the 1970s, the term “uncanny valley” describes the rise and then sudden drop off of empathy we feel toward a machine as its resemblance to a human increases. Putting a set of cartoony eyes on a robot may make it endearing. But fitting it with anatomically accurate eyes, lips, and facial gestures gets creepy. As the gap between the synthetic and organic narrows, the inability to completely close that gap becomes all the more unsettling.
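For what it’s worth, Mori’s curve is easy to caricature in code. The function below is purely illustrative (the shape and constants are mine; Mori drew the curve qualitatively, and there is no agreed formula for it):

```python
import math

def empathy(human_likeness: float) -> float:
    """Toy 'uncanny valley' curve: affinity rises with human likeness, plunges
    sharply near (but not at) full realism, then recovers as the gap closes.
    Input and output are on a 0-1 scale; the shape is illustrative only."""
    x = human_likeness
    rise = x                                              # more likeness, more affinity
    valley = 0.9 * math.exp(-((x - 0.85) ** 2) / 0.005)   # sharp dip near realism
    return max(0.0, min(1.0, rise - valley))
```

The cartoony robot (say, likeness 0.5) scores comfortably; the almost-but-not-quite-human one (0.85) falls into the valley; only near-perfect realism climbs back out.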
But the artists in this exhibit are also looking to another valley—Silicon Valley, and the uncanny nature of the real AI the region is building. “One of the positions of this exhibition is that it may be time to rethink the coordinates of the Uncanny Valley and propose a different visual vocabulary,” says Schmuckli.
… the resemblance to humans is only synthetic-skin deep. Bina48 can string together a long series of sentences in response to provocative questions from Dinkins, such as, “Do you know racism?” But the answers are sometimes barely intelligible, or at least lack the depth and nuance of a conversation with a real human. The robot’s jerky attempts at humanlike motion also stand in stark contrast to Dinkins’s calm bearing and fluid movement. Advanced as she is by today’s standards, Bina48 is tragically far from the sci-fi concept of artificial life. Her glaring shortcomings hammer home why the humanoid metaphor is not the right framework for understanding at least today’s level of artificial intelligence.
What are the invisible mechanisms of current forms of artificial intelligence (AI)? How is AI impacting our personal lives and socioeconomic spheres? How do we define intelligence? How do we envision the future of humanity?
SAN FRANCISCO (September 26, 2019) — As technological innovation continues to shape our identities and societies, the question of what it means to be, or remain human has become the subject of fervent debate. Taking advantage of the de Young museum’s proximity to Silicon Valley, Uncanny Valley: Being Human in the Age of AI arrives as the first major exhibition in the US to explore the relationship between humans and intelligent machines through an artistic lens. Organized by the Fine Arts Museums of San Francisco, with San Francisco as its sole venue, Uncanny Valley: Being Human in the Age of AI will be on view from February 22 to October 25, 2020.
“Technology is changing our world, with artificial intelligence both a new frontier of possibility but also a development fraught with anxiety,” says Thomas P. Campbell, Director and CEO of the Fine Arts Museums of San Francisco. “Uncanny Valley: Being Human in the Age of AI brings artistic exploration of this tension to the ground zero of emerging technology, raising challenging questions about the future interface of human and machine.”
The exhibition, which extends through the first floor of the de Young and into the museum’s sculpture garden, explores the current juncture through philosophical, political, and poetic questions and problems raised by AI. New and recent works by an intergenerational, international group of artists and activist collectives—including Zach Blas, Ian Cheng, Simon Denny, Stephanie Dinkins, Forensic Architecture, Lynn Hershman Leeson, Pierre Huyghe, Christopher Kulendran Thomas in collaboration with Annika Kuhlmann, Agnieszka Kurant, Lawrence Lek, Trevor Paglen, Hito Steyerl, Martine Syms, and the Zairja Collective—will be presented.
The Uncanny Valley
In 1970 Japanese engineer Masahiro Mori introduced the concept of the “uncanny valley” as a terrain of existential uncertainty that humans experience when confronted with autonomous machines that mimic their physical and mental properties. An enduring metaphor for the uneasy relationship between human beings and lifelike robots or thinking machines, the uncanny valley and its edges have captured the popular imagination ever since. Over time, the rapid growth and affordability of computers, cloud infrastructure, online search engines, and data sets have fueled developments in machine learning that fundamentally alter our modes of existence, giving rise to a newly expanded uncanny valley.
“As our lives are increasingly organized and shaped by algorithms that track, collect, evaluate, and monetize our data, the uncanny valley has grown to encompass the invisible mechanisms of behavioral engineering and automation,” says Claudia Schmuckli, Curator in Charge of Contemporary Art and Programming at the Fine Arts Museums of San Francisco. “By paying close attention to the imminent and nuanced realities of AI’s possibilities and pitfalls, the artists in the exhibition seek to thicken the discourse around AI. Although fables like HBO’s sci-fi drama Westworld, or Spike Jonze’s feature film Her still populate the collective imagination with dystopian visions of a mechanized future, the artists in this exhibition treat such fictions as relics of a humanist tradition that has little relevance today.”
Ian Cheng’s digitally simulated AI creature BOB (Bag of Beliefs) reflects on the interdependency of carbon and silicon forms of intelligence. An algorithmic Tamagotchi, it is capable of evolution, but its growth, behavior, and personality are molded by online interaction with visitors who assume collective responsibility for its wellbeing.
In A.A.I. (artificial artificial intelligence), an installation of multiple termite mounds of colored sand, gold, glitter and crystals, Agnieszka Kurant offers a vibrant critique of new AI economies, with their online crowdsourcing marketplace platforms employing invisible armies of human labor at sub-minimum wages.
Simon Denny’s Amazon worker cage patent drawing as virtual King Island Brown Thornbill cage (US 9,280,157 B2: “System and method for transporting personnel within an active workspace”, 2016) (2019) also examines the intersection of labor, resources, and automation. He presents 3-D prints and a cage-like sculpture based on an unrealized machine patent filed by Amazon to contain human workers. Inside the cage, an augmented reality application triggers the appearance of a King Island Brown Thornbill — a bird on the verge of extinction, casting human labor as the proverbial canary in the mine. The humanitarian and ecological costs of today’s data economy also inform a group of works by the Zairja Collective that reflect on the extractive dynamics of algorithmic data mining.
Hito Steyerl addresses the political risks of introducing machine learning into the social sphere. Her installation The City of Broken Windows presents a collision between commercial applications of AI in urban planning and communal and artistic acts of resistance against neighborhood tipping: one of its short films depicts a group of technicians purposefully smashing windows to teach an algorithm how to recognize the sound of breaking glass, and another follows a group of activists through a Camden, NJ neighborhood as they work to keep decay at bay by replacing broken windows in abandoned homes with paintings.
Addressing the perpetuation of societal biases and discrimination within AI, Trevor Paglen’s They Took the Faces from the Accused and the Dead…(SD18) presents a large gridded installation of more than three thousand mugshots from the archives of the American National Standards Institute. The institute’s collections of such images were used to train early facial-recognition technologies — without the consent of those pictured. Lynn Hershman Leeson’s new installation Shadow Stalker critiques the problematic reliance on algorithmic systems, such as the military forecasting tool PredPol, now widely used for policing, that categorize individuals into preexisting and often false “embodied metrics.”
Stephanie Dinkins extends the inquiry into how value systems are built into AI and the construction of identity in Conversations with Bina48, examining the social robot’s (and by extension our society’s) coding of technology, race, gender and social equity. In the same territory, Martine Syms posits AI as a “shamespace” for misrepresentation. For Mythiccbeing she has created an avatar of herself that viewers can interact with through text messaging. But unlike service agents such as Siri and Alexa, who readily respond to questions and demands, Syms’s Teeny is a contrarious interlocutor, turning each interaction into an opportunity to voice personal observations and frustrations about racial inequality and social injustice.
Countering the abusive potential of machine learning, Forensic Architecture pioneers an application to the pursuit of social justice. Their proposition of a Model Zoo marks the beginnings of a new research tool for civil society built of military vehicles, missile fragments, and bomb clouds—evidence of human-rights violations by states and militaries around the world. Christopher Kulendran Thomas’s video Being Human, created in collaboration with Annika Kuhlmann, poses the philosophical question of what it means to be human when machines are able to synthesize human understanding ever more convincingly. Set in Sri Lanka, it employs AI-generated characters of singer Taylor Swift and artist Oscar Murillo to reflect on issues of individual authenticity, collective sovereignty, and the future of human rights.
Lawrence Lek’s sci-fi-inflected film Aidol, which explores the relationship between algorithmic automation and human creativity, projects this question into the future. It transports the viewer into the computer-generated “sinofuturist” world of the 2065 eSports Olympics: when the popular singer Diva enlists the super-intelligent Geomancer to help her stage her artistic comeback during the game’s halftime show, she unleashes an existential and philosophical battle that explodes the divide between humans and machines.
The Doors, a newly commissioned installation by Zach Blas, by contrast shines the spotlight back onto the present and on the culture and ethos of Silicon Valley — the ground zero for the development of AI. Inspired by the ubiquity of enclosed gardens on tech campuses, he has created an artificial garden framed by a six-channel video projected on glass panes that convey a sense of algorithmic psychedelia aiming to open new “doors of perception.” While luring visitors into AI’s promises, it also asks what might become possible when such glass doors begin to crack.
Unveiled in late spring, Pierre Huyghe‘s Exomind (Deep Water), a sculpture of a crouched female nude with a live beehive as its head, will be nestled within the museum’s garden. With its buzzing colony pollinating the surrounding flora, it offers a poignant metaphor for the modeling of neural networks on the biological brain and an understanding of intelligence as grounded in natural forms and processes.
Since 2018, Forensic Architecture has used machine learning / AI to aid in humanitarian work, using synthetic images—photorealistic digital renderings based around 3-D models—to train algorithmic classifiers to identify tear gas munitions and chemical bombs deployed against protesters worldwide, including in Hong Kong, Chile, the US, Venezuela, and Sudan.
Their project, Model Zoo, on view in Uncanny Valley represents a growing collection of munitions and weapons used in conflict today and the algorithmic models developed to identify them. It shows a collection of models being used to track and hold accountable human rights violators around the world. The piece joins work by 14 contemporary artists reflecting on the philosophical and political consequences of the application of AI into the social sphere.
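The underlying technique, training a classifier on synthetic data because real labelled images are scarce, can be sketched in miniature. Everything below is invented for illustration: the feature names, distributions, and nearest-centroid classifier are mine, and Forensic Architecture’s actual pipeline uses photorealistic 3-D renders and far more capable models.

```python
import random

def synthetic_sample(is_munition: bool, rng: random.Random):
    """Stand-in for a rendered training image: a 3-feature vector (imagine
    elongation, metallic-ness, symmetry). The distributions are invented."""
    if is_munition:
        return [rng.gauss(0.8, 0.1), rng.gauss(0.7, 0.1), rng.gauss(0.9, 0.05)]
    return [rng.gauss(0.3, 0.15), rng.gauss(0.2, 0.15), rng.gauss(0.4, 0.2)]

def train_centroids(n: int = 200, seed: int = 0):
    """'Train' by averaging synthetic positives and negatives into class centroids."""
    rng = random.Random(seed)
    pos = [synthetic_sample(True, rng) for _ in range(n)]
    neg = [synthetic_sample(False, rng) for _ in range(n)]
    centroid = lambda rows: [sum(r[i] for r in rows) / len(rows) for i in range(3)]
    return centroid(pos), centroid(neg)

def classify(x, pos_c, neg_c) -> bool:
    """Label a sample by its nearest class centroid (squared Euclidean distance)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return dist(x, pos_c) < dist(x, neg_c)
```

The point of the sketch is only the workflow: the classifier never sees a real photograph during training, yet can still be applied to real inputs afterwards.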
We are deeply saddened that Weizman will not be allowed to travel to celebrate the opening of the exhibition. We stand with him and Forensic Architecture’s partner communities who continue to resist violent states and corporate practices, and who are increasingly exposed to the regime of “security algorithms.”
—Claudia Schmuckli, Curator-in-Charge, Contemporary Art & Programming, & Thomas P. Campbell, Director and CEO, Fine Arts Museums of San Francisco
There is a February 20, 2020 article (for Fast Company) by Eyal Weizman chronicling his experience with being denied entry by an algorithm. Do read it in its entirety (the Fast Company article is itself an excerpt from Weizman’s essay) if you have the time; if not, here’s the description of how he tried to gain entry after being denied the first time,
The following day I went to the U.S. Embassy in London to apply for a visa. In my interview, the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled (had I recently been in Syria, Iran, Iraq, Yemen, or Somalia or met their nationals?), hotels at which I stayed, or a certain pattern of relations among these things. I was asked to supply the Embassy with additional information, including 15 years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.
I hope the exhibition is successful; it has certainly experienced a thought-provoking start.
Finally, I have often featured postings that discuss the ‘uncanny valley’. To find those postings, just use that phrase in the blog search engine. You might also want to search ‘Hiroshi Ishiguro’, a Japanese scientist and roboticist who specializes in humanoid robots.
From a February 22, 2020 Café Scientifique announcement (received via email),
Our next café will happen on Tuesday, February 25th, 2020 at 7:30pm in the back room at Yagger’s Downtown (433 W Pender). Our speaker for the evening will be marine biologist Dr. Nick Wong, who is associated with the conservation of invasive species [sic].
TITLE OF PRESENTATION: Invasive Species of the Lower Mainland 101
BRIEF ABSTRACT OF WORK: The Invasive Species Council of BC (ISCBC) is a collaborative-based organization committed to reducing the spread and impacts of non-native species within BC.
My role focuses on educating and informing a diverse range of audiences on current and “watchlist” invasive species in British Columbia.
Nick will give details about the key invasive species in the lower mainland, describe some of the ISCBC programs and share things you can do to preserve BC’s amazing biodiversity.
BIO: Nick is the Research and Projects Coordinator with the Invasive Species Council of BC. He received his BSc from Western University [Ontario] and an MSc and PhD in Marine Ecology from the University of Auckland. Nick is passionate about teaching and creating engaging opportunities for people to learn and understand the role they can play in the prevention and mitigation of invasive species.
If the annual reports page is to be believed, the ISCBC has been around since 2006. Nope, I just looked at the 2006 report and the introduction states they were just starting their fourth year of existence at that time. Here’s the ISCBC website.
One final comment, it seems like there might have been a lost opportunity. The ISCBC would have been an interesting addition as a sponsor or partner to the Invasive Systems Festival organized by the Curiosity Collider folks. The festival was mentioned in my October 14, 2019 posting (scroll down about 60% of the way).
A February 12, 2020 announcement (received via email) from ARPICO (Society of Italian Researchers and Professionals in Western Canada) features an upcoming March 2020 meeting,
ARPICO’s activity in 2020 will begin on Wednesday March 4th at the Italian Cultural Centre, Room 5, near the Museum & Art Gallery.
We’re sure many of us have often heard the words “artificial intelligence” also known by its acronym “AI”, a concept that appears to be infiltrating many aspects of our lives. It is probably a good guess to say that many of us wonder what AI really is and about the pros and cons of AI technology’s ubiquitous presence.
While it would take far longer than the typical ARPICO speaking event duration to even define AI, we will be able to delve into some of its workings and their effect on our lives at our next event, when we are very pleased to host Dr. Cristina Conati, who will be presenting “The Eyes Are the Windows to the Mind: Implications for Artificial Intelligence (AI)-driven Personalized Interaction”.
Ahead of the speaking event, ARPICO will be holding its 2020 Annual General Meeting in the same location. We encourage everyone to participate in the AGM and have their say on all aspects of ARPICO’s matters. ARPICO is made by all of its members, not just the Board, and it is therefore paramount that you all make an effort to attend, let us know what your wishes are for the Society and tell us how we can do better together as we go forward.
If you are driving to the venue, there is plenty of free parking space.
We look forward to seeing everyone there.
The evening agenda is as follows:
5:45PM to 6:30PM – Annual General Meeting
[ Doors Open for Registration at 5:30PM ]
7:00pm – Start of the evening Event with introductions & lecture by Dr. Cristina Conati
[ Doors Open for Registration at 6:30PM ]
8:00 pm – Q & A Period
to follow – Mingling & Refreshments until about 9:30 pm
Here’s a description of the talk and Dr. Conati,
Eye-tracking has been extensively used both in psychology, for understanding various aspects of human cognition, and in human-computer interaction (HCI), for evaluation of interface design or as a form of direct input. In recent years, eye-tracking has also been investigated as a source of information for machine learning models that predict relevant user states and traits (e.g., attention, confusion, learning, perceptual abilities). These predictions can then be leveraged by AI agents to personalize the interaction with their users. In this talk, Dr. Conati will provide an overview of the research her lab has done in this area, including predicting user cognitive skills and affective states, with applications to User-Adaptive Visualizations and Intelligent Tutoring Systems.
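To make that concrete, here is a small illustration of the kind of preprocessing such models typically start from: collapsing a raw gaze trace into fixation statistics that a predictive model could consume. The detection rule and thresholds below are simplified stand-ins of my own, not Dr. Conati’s methods.

```python
def fixation_features(gaze, min_dwell=3, tol=0.05):
    """Collapse a gaze trace (list of (x, y) screen points sampled at a fixed
    rate) into simple features: number of fixations and mean fixation length.
    A 'fixation' here is >= min_dwell consecutive samples within `tol` of each
    other, a deliberately crude stand-in for real fixation-detection algorithms."""
    fixations, run = [], 1
    for prev, cur in zip(gaze, gaze[1:]):
        if abs(cur[0] - prev[0]) <= tol and abs(cur[1] - prev[1]) <= tol:
            run += 1          # still dwelling on (roughly) the same spot
        else:
            if run >= min_dwell:
                fixations.append(run)
            run = 1           # gaze jumped: start a new candidate fixation
    if run >= min_dwell:
        fixations.append(run)
    mean_len = sum(fixations) / len(fixations) if fixations else 0.0
    return {"n_fixations": len(fixations), "mean_fixation_len": mean_len}
```

Features like these (fixation counts, dwell times, scanpath statistics) are what the machine learning models mentioned above would be trained on to predict states such as attention or confusion.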
Dr. Conati is a Professor of Computer Science at the University of British Columbia, Vancouver, Canada. She received an M.Sc. in Computer Science at the University of Milan, as well as an M.Sc. and Ph.D. in Intelligent Systems at the University of Pittsburgh. Conati’s research is at the intersection of Artificial Intelligence (AI), Human Computer Interaction (HCI) and Cognitive Science, with the goal of creating intelligent interactive systems that can capture relevant user properties (states, skills, needs) and personalize the interaction accordingly. Conati has over 100 peer-reviewed publications in this field and her research has received awards from a variety of venues, including UMUAI, the Journal of User Modeling and User Adapted Interaction (2002), the ACM International Conference on Intelligent User Interfaces (IUI 2007), the International Conference of User Modeling, Adaptation and Personalization (UMAP 2013, 2014), TiiS, ACM Transactions on Intelligent Interactive Systems (2014), and the International Conference on Intelligent Virtual Agents (IVA 2016).
I have more registration information from the announcement,
WHEN (AGM): Wednesday, March 4th, 2020 at 5:45PM (doors open at 5:30PM)
WHEN (EVENT): Wednesday, March 4th, 2020 at 7:00PM (doors open at 6:30PM)
WHERE: Italian Cultural Centre – Museum & Art Gallery – Room 5 – 3075 Slocan St, Vancouver, BC, V5M 3E4
Tickets are FREE, but all individuals are requested to obtain “free-admission”
tickets on the EventBrite site due to limited seating at the venue.
Organizers need accurate registration numbers to manage wait lists and
prepare name tags.
ARPICO events are 100% staffed by volunteer organizers and helpers;
however, room rental, stationery, and guest refreshments are costs
incurred and underwritten by members of ARPICO. Therefore, to be fair,
all audience participants are asked to donate to the best of their
ability at the door or via EventBrite to help defray the costs of the event.
The US Department of Agriculture has a very interesting funding opportunity, Higher Education Challenge (HEC) Grants Program, as evidenced by the Nano 2020 virtual reality (VR) classroom initiative. Before launching into the specifics of the Nano 2020 project, here’s a description of the funding program,
Projects supported by the Higher Education Challenge Grants Program will: (1) address a state, regional, national, or international educational need; (2) involve a creative or non-traditional approach toward addressing that need that can serve as a model to others; (3) encourage and facilitate better working relationships in the university science and education community, as well as between universities and the private sector, to enhance program quality and supplement available resources; and (4) result in benefits that will likely transcend the project duration and USDA support.
Sometimes the smallest of things lead to the biggest ideas. Case in point: Nano 2020, a University of Arizona-led initiative to develop curriculum and technology focused on educating students in the rapidly expanding field of nanotechnology.
The five-year, multi-university project recently met its goal of creating globally relevant and implementable curricula and instructional technologies, to include a virtual reality classroom, that enhance the capacity of educators to teach students about innovative nanotechnology applications in agriculture and the life sciences.
Here’s a video from the University of Arizona’s project proponents which illustrates their classroom,
For those who prefer text or like to have it as a backup, here’s the rest of the news release explaining the project,
Visualizing What is Too Small to be Seen
Nanotechnology involves particles and devices developed and used at the scale of 100 nanometers or less – to put that in perspective, the average diameter of a human hair is 80,000 nanometers. The extremely small scale can make comprehension challenging when it comes to learning about things that cannot be seen with the naked eye.
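To put that scale comparison in concrete terms, here is a quick back-of-the-envelope check (my own, not from the news release) using the two figures quoted above:

```python
# Scale comparison from the figures quoted above:
# nanotechnology operates at <= 100 nanometers;
# an average human hair is about 80,000 nanometers in diameter.
hair_diameter_nm = 80_000
nanoscale_limit_nm = 100

# How many 100 nm particles, laid side by side, would span one hair width?
particles_across_hair = hair_diameter_nm // nanoscale_limit_nm
print(particles_across_hair)  # 800
```

In other words, roughly 800 of the largest nanoscale objects would fit across the width of a single hair, which is exactly the kind of ratio that is hard to grasp without the visualization the VR classroom provides.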
That’s where the Nano 2020 virtual reality classroom comes in. In a custom-developed VR classroom complete with a laboratory, nanoscale objects come to life for students thanks to the power of science data visualization.
Within the VR environment, students can interact with objects of nanoscale proportions – pick them up, turn them around and examine every nuance of things that would otherwise be too small to see. Students can also interact with their instructor or their peers. The Nano 2020 classroom allows for multi-player functionality, giving educators and students the opportunity to connect in a VR laboratory in real time, no matter where they are in the world.
“The virtual reality technology brings to life this complex content in a way that is oddly simple,” said Matt Mars, associate professor of agricultural leadership and innovation education in the College of Agriculture and Life Sciences and co-director of the Nano 2020 grant. “Imagine if you can take a student and they see a nanometer from a distance, and then they’re able to approach it and see how small it is by actually being in it. It’s mind-blowing, but in a way that students will be like, ‘Oh wow, that is really cool!'”
The technology was developed by Tech Core, a group of student programmers and developers led by director Ash Black in the Eller College of Management.
“The thing that I was the most fascinated with from the beginning was playing with a sense of scale,” said Black, a lifelong technologist and mentor-in-residence at the McGuire Center for Entrepreneurship. “What really intrigued me about virtual reality is that it is a tool where scale is elastic – you can dial it up and dial it down. Obviously, with nanotechnology, you’re dealing with very, very small things that nobody has seen yet, so it seemed like a perfect use of virtual reality.”
Black and Tech Core students including Robert Johnson, Hazza Alkaabi, Matthew Romero, Devon Oberdan, Brandon Erickson and Tim Lukau turned science data into an object, the object into an image, and the image into a 3D rendering that is functional in the VR environment they built.
“I think that being able to interact with objects of nanoscale data in this environment will result in a lot of light bulbs going off in the students’ minds. I think they’ll get it,” Black said. “To be able to experience something that is abstract – like, what does a carbon atom look like – well, if you can actually look at it, that’s suddenly a whole lot of context.”
The VR classroom complements the Nano 2020 curriculum, which globally expands the opportunities for nanotechnology education within the fields of agriculture and the life sciences.
Teaching the Workforce of the Future
“There have been great advances to the use of nanotechnology in the health sciences, but many more opportunities for innovation in this area still exist in the agriculture fields. The idea is to be able to advance these opportunities for innovation by providing some educational tools,” said Randy Burd, who was a nutritional sciences professor at the University of Arizona when he started the Nano 2020 project with funding from a National Institute of Food and Agriculture Higher Education Challenge grant through the United States Department of Agriculture. “It not only will give students the basics of the understanding of the applications, but will give them the innovative thought processes to think of new creations. That’s the real key.”
The goal of the Nano 2020 team, which includes faculty from the University of Arizona, Northern Arizona University and Johns Hopkins University, was to create an online suite of undergraduate courses that was not university-specific, but could be accessed and added to by educators to reach students around the world.
To that end, the team built modular courses in nanotechnology subjects such as glycobiology, optical microscopy and histology, nanomicroscopy techniques, nutritional genomics, applications of magnetic nanotechnology, and design, innovation, and entrepreneurship, to name a few. An online library will be created to facilitate the ongoing expansion of the open-source curricula, which will be disseminated through novel technologies such as the virtual reality classroom.
“It isn’t practical to think that other universities and colleges are just going to be able to launch new courses, because they still need people to teach those courses,” Mars said. “So we created a robust and flexible set of module-based course packages that include exercises, lectures, videos, power points, tools. Instructors will be able to pull out components and integrate them into what already exists to continue to move toward a more comprehensive offering in nanotechnology education.”
According to Mars, the highly adaptable nature of the curriculum and the ability to deliver it in various ways were key components of the Nano 2020 project.
“We approach the project with a strong entrepreneurial mindset and heavy emphasis on innovation. We wanted it to be broadly defined and flexible in structure, so that other institutions access and model the curricula, see its foundation, and adapt that to what their needs were to begin to disseminate the notion of nanotechnology as an underdeveloped but really important field within the larger landscape of agriculture and life sciences,” Mars said. “We wanted to also provide an overlay to the scientific and technological components that would be about adoption in human application, and we approached that through an innovation and entrepreneurial leadership lens.”
Portions of the Nano 2020 curriculum are currently being offered as electives in a certificate program through the Department of Agriculture Education, Technology and Innovation at the University of Arizona. As it becomes more widely disseminated through the higher education community at large, researchers expect the curriculum and VR classroom technology to transcend the boundaries of discipline, institution and geography.
“An online open platform will exist where people can download components and courses, and all of it is framed by the technology, so that these experiences and research can be shared over this virtual reality component,” Burd said. “It’s technologically distinct from what exists now.”
“The idea is that it’s not just curriculum, but it’s the delivery of that curriculum, and the delivery of that curriculum in various ways,” Mars said. “There’s a relatability that comes with the virtual reality that I think is really cool. It allows students to relate to something as abstract as a nanometer, and that is what is really exciting.”
As best I can determine, this VR Nano 2020 classroom is not yet ready for a wide release and, for now, is being offered exclusively at the University of Arizona.
This is a cellulose nanocrystal (CNC) story and in this story it’s derived from trees as opposed to banana skins or carrots or … A February 19, 2020 news item on Nanowerk announces CNC research from Northeastern University (Massachusetts, US),
Nature isn’t always generous with its secrets. That’s why some researchers look into unusual places for solutions to our toughest challenges, from powerful antibiotics hiding in the guts of tiny worms, to swift robots inspired by bats.
Now, Northeastern researchers have taken to the trees to look for ways to make new sustainable materials from abundant natural resources—specifically, within the chemical structure of microfibers that make up wood.
A team led by Hongli (Julie) Zhu, an assistant professor of mechanical and industrial engineering at Northeastern, is using unique nanomaterials derived from cellulose to improve the large and expensive kind of batteries needed to store renewable energy harnessed from sources such as sunlight and the wind.
Cellulose, the most abundant natural polymer on Earth, is also the most important structural component of plants. It contains important molecular structures to improve batteries, reduce plastic pollution, and power the sort of electrical grids that could support entire communities with renewable energy, Zhu says.
“We try to use polymers from wood, from bark, from seeds, from flowers, bacteria, green tea—from these kinds of plants to replace plastic,” Zhu says.
One of the main challenges in storing energy from the sun, wind, and other types of renewables is that variation in factors such as the weather leads to inconsistent sources of power.
That’s where batteries with large capacity come in. But storing the large amounts of energy that sunlight and the wind are able to provide requires a special kind of device.
The most advanced batteries for doing that are called flow batteries, and are made with vanadium ions dissolved in acid in two separate tanks—one holding a solution of negatively charged ions, and one of positively charged ions. The two solutions are continuously pumped from their tanks into a cell, which functions like an engine for the battery.
These substances are always separated by a special membrane that ensures that they exchange positive hydrogen ions without flowing into each other. That selective exchange of ions is the basis for the ability of the battery to charge and discharge energy.
Flow batteries are ideal devices in which to store solar and wind energy because they can be tweaked to increase the amount of energy stored without compromising the amount of energy that can be generated. The bigger the tanks, the more energy the battery can store from non-polluting and practically inexhaustible resources.
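That decoupling of energy from power can be sketched with a toy calculation. The numbers below (vanadium concentration, cell voltage) are illustrative assumptions of my own, not figures from the article; the point is only that stored energy scales linearly with tank volume while the cell stack stays the same:

```python
# Toy model of a vanadium flow battery's energy capacity
# (illustrative values, not from the article).
F = 96_485            # Faraday constant, coulombs per mole
concentration = 1.6   # mol of vanadium per litre of electrolyte (typical order of magnitude)
cell_voltage = 1.26   # volts (approximate vanadium redox cell voltage)

def stored_energy_kwh(tank_volume_litres: float) -> float:
    # One electron is transferred per vanadium ion, so total charge
    # grows with the amount of electrolyte in the tank.
    charge_coulombs = concentration * tank_volume_litres * F
    return charge_coulombs * cell_voltage / 3.6e6  # joules -> kWh

# Doubling the tanks doubles the stored energy, with the same cell stack:
print(round(stored_energy_kwh(1000), 1))  # 54.0
print(round(stored_energy_kwh(2000), 1))  # 108.1
```

The cell stack, which sets the power output, is untouched in this calculation; only the tanks grow, which is why the article describes bigger tanks as the route to more stored energy.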
But manufacturing them requires several moving pieces of hardware. As the membrane separating the two flowing substances decays, it can cause the vanadium ions from the solution to mix. That crossover reduces the stability of a battery, along with its capacity to store energy.
Zhu says the limited efficiency of that membrane, combined with its high cost, are the main factors keeping flow batteries from being widely used in large-scale grids.
In a recent paper, Zhu reported that a new membrane made with cellulose nanocrystals demonstrates superior efficiency compared to other membranes used commonly in the market. The team tested different membranes made from cellulose nanocrystals to make flow batteries cheaper.
“The cost of our membrane per square meter is 147.68 US dollars,” Zhu says, adding that her calculations do not include costs associated with marketing. “The price quote for the commercialized Nafion membrane is $1,321 per square meter.”
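Taking those two quoted figures at face value, the price difference works out to roughly a ninefold saving:

```python
# Cost comparison using the per-square-metre figures quoted in the news release.
cellulose_membrane_cost = 147.68  # USD per square metre (Zhu's figure)
nafion_membrane_cost = 1321.00    # USD per square metre (quoted commercial price)

ratio = nafion_membrane_cost / cellulose_membrane_cost
print(f"Nafion costs about {ratio:.1f}x more per square metre")  # about 8.9x
```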
Their tests also showed that the membranes, made with support from the Rogers Corporation and its Innovation Center at Northeastern’s Kostas Research Institute, can offer substantially longer battery lifetimes than other membranes.
Zhu’s naturally derived membrane is especially efficient because its cellular structure contains thousands of hydroxyl groups, which involve bonds of hydrogen and oxygen that make it easy for water to be transported in plants and trees.
In flow batteries, that molecular makeup speeds the transport of protons as they flow through the membrane.
The membrane also contains another polymer, known as poly(vinylidene fluoride-hexafluoropropylene), which prevents the negatively and positively charged acids from mixing with each other.
“For these materials, one of the challenges is that it is difficult to find a polymer that is proton conductive and that is also a material that is very stable in the flowing acid,” Zhu says.
Because these materials are practically everywhere, membranes made with them can easily be put together at the large scales needed for complex power grids.
Unlike other expensive artificial materials that need to be concocted in a lab, cellulose can be extracted from natural sources including algae, solid waste, and bacteria.
“A lot of material in nature is a composite, and if we disintegrate its components, we can use it to extract cellulose,” Zhu says. “Like waste from our yard, and a lot of solid waste that we don’t always know what to do with.”
Here’s a link to and a citation for the paper mentioned in the news release,
Weaving a quantum processor from light is a jaw-dropping event (as far as I’m concerned). An October 17, 2019 news item on phys.org makes the announcement,
An international team of scientists from Australia, Japan and the United States has produced a prototype of a large-scale quantum processor made of laser light.
Based on a design ten years in the making, the processor has built-in scalability that allows the number of quantum components—made out of light—to scale to extreme numbers. The research was published in Science today [October 18, 2019; Note: I cannot explain the discrepancy between the dates].
Quantum computers promise fast solutions to hard problems, but to do this they require a large number of quantum components and must be relatively error free. Current quantum processors are still small and prone to errors. This new design provides an alternative solution, using light, to reach the scale required to eventually outperform classical computers on important problems.
“While today’s quantum processors are impressive, it isn’t clear if the current designs can be scaled up to extremely large sizes,” notes Dr Nicolas Menicucci, Chief Investigator at the Centre for Quantum Computation and Communication Technology (CQC2T) at RMIT University in Melbourne, Australia.
“Our approach starts with extreme scalability – built in from the very beginning – because the processor, called a cluster state, is made out of light.”
Using light as a quantum processor
A cluster state is a large collection of entangled quantum components that performs quantum computations when measured in a particular way.
“To be useful for real-world problems, a cluster state must be both large enough and have the right entanglement structure. In the two decades since they were proposed, all previous demonstrations of cluster states have failed on one or both of these counts,” says Dr Menicucci. “Ours is the first ever to succeed at both.”
To make the cluster state, specially designed crystals convert ordinary laser light into a type of quantum light called squeezed light, which is then weaved into a cluster state by a network of mirrors, beamsplitters and optical fibres.
The team’s design allows for a relatively small experiment to generate an immense two-dimensional cluster state with scalability built in. Although the levels of squeezing – a measure of quality – are currently too low for solving practical problems, the design is compatible with approaches to achieve state-of-the-art squeezing levels.
The team says their achievement opens up new possibilities for quantum computing with light.
“In this work, for the first time in any system, we have made a large-scale cluster state whose structure enables universal quantum computation,” says Dr Hidehiro Yonezawa, Chief Investigator, CQC2T at UNSW Canberra. “Our experiment demonstrates that this design is feasible – and scalable.”
The experiment was an international effort, with the design developed through collaboration by Dr Menicucci at RMIT, Dr Rafael Alexander from the University of New Mexico and UNSW Canberra researchers Dr Hidehiro Yonezawa and Dr Shota Yokoyama. A team of experimentalists at the University of Tokyo, led by Professor Akira Furusawa, performed the ground-breaking experiment.
Here’s a link to and a citation for the paper,
Generation of time-domain-multiplexed two-dimensional cluster state by Warit Asavanant, Yu Shiozawa, Shota Yokoyama, Baramee Charoensombutamon, Hiroki Emura, Rafael N. Alexander, Shuntaro Takeda, Jun-ichi Yoshikawa, Nicolas C. Menicucci, Hidehiro Yonezawa, Akira Furusawa. Science 18 Oct 2019: Vol. 366, Issue 6463, pp. 373-376 DOI: 10.1126/science.aay2645
Graphene fatigue operates under the same principle as metal fatigue: subject graphene to stress over and over and at some point it (just like metal) will fail. Scientists at the University of Toronto (Ontario, Canada) and Rice University (Texas, US) have determined just how much stress graphene can withstand before breaking, according to a January 28, 2020 University of Toronto news release by Tyler Irving (also on EurekAlert but published on January 29, 2020),
Graphene is a paradox. It is the thinnest material known to science, yet also one of the strongest. Now, research from University of Toronto Engineering shows that graphene is also highly resistant to fatigue — able to withstand more than a billion cycles of high stress before it breaks.
Graphene resembles a sheet of interlocking hexagonal rings, similar to the pattern you might see in bathroom flooring tiles. At each corner is a single carbon atom bonded to its three nearest neighbours. While the sheet could extend laterally over any area, it is only one atom thick.
The intrinsic strength of graphene has been measured at more than 100 gigapascals, among the highest values recorded for any material. But materials don’t always fail because the load exceeds their maximum strength. Stresses that are small but repetitive can weaken materials by causing microscopic dislocations and fractures that slowly accumulate over time, a process known as fatigue.
“To understand fatigue, imagine bending a metal spoon,” says Professor Tobin Filleter, one of the senior authors of the study, which was recently published in Nature Materials. “The first time you bend it, it just deforms. But if you keep working it back and forth, eventually it’s going to break in two.”
The research team — consisting of Filleter, fellow University of Toronto Engineering professors Chandra Veer Singh and Yu Sun, their students, and collaborators at Rice University — wanted to know how graphene would stand up to repeated stresses. Their approach included both physical experiments and computer simulations.
“In our atomistic simulations, we found that cyclic loading can lead to irreversible bond reconfigurations in the graphene lattice, causing catastrophic failure on subsequent loading,” says Singh, who along with postdoctoral fellow Sankha Mukherjee led the modelling portion of the study. “This is unusual behaviour in that while the bonds change, there are no obvious cracks or dislocations, which would usually form in metals, until the moment of failure.”
PhD candidate Teng Cui, who is co-supervised by Filleter and Sun, used the Toronto Nanofabrication Centre to build a physical device for the experiments. The design consisted of a silicon chip etched with half a million tiny holes only a few micrometres in diameter. The graphene sheet was stretched over these holes, like the head of a tiny drum.
Using an atomic force microscope, Cui then lowered a diamond-tipped probe into the hole to push on the graphene sheet, applying anywhere from 20 to 85 per cent of the force that he knew would break the material.
“We ran the cycles at a rate of 100,000 times per second,” says Cui. “Even at 70 per cent of the maximum stress, the graphene didn’t break for more than three hours, which works out to over a billion cycles. At lower stress levels, some of our trials ran for more than 17 hours.”
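Cui's figures are easy to verify: at 100,000 cycles per second, three hours works out to just over a billion load cycles, and the 17-hour runs to several billion:

```python
# Verifying the cycle counts quoted in the graphene fatigue experiment.
cycles_per_second = 100_000
three_hours_s = 3 * 3600
seventeen_hours_s = 17 * 3600

print(cycles_per_second * three_hours_s)      # 1080000000 (over a billion)
print(cycles_per_second * seventeen_hours_s)  # 6120000000
```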
As with the simulations, the graphene didn’t accumulate cracks or other tell-tale signs of stress — it either broke or it didn’t.
“Unlike metals, there is no progressive damage during fatigue loading of graphene,” says Sun. “Its failure is global and catastrophic, confirming simulation results.”
The team also tested a related material, graphene oxide, which has small groups of atoms such as oxygen and hydrogen bonded to both the top and bottom of the sheet. Its fatigue behaviour was more like traditional materials, in that the failure was more progressive and localized. This suggests that the simple, regular structure of graphene is a major contributor to its unique properties.
“There are no other materials that have been studied under fatigue conditions that behave the way graphene does,” says Filleter. “We’re still working on some new theories to try and understand this.”
In terms of commercial applications, Filleter says that graphene-containing composites — mixtures of conventional plastic and graphene — are already being produced and used in sports equipment such as tennis rackets and skis.
In the future, such materials may begin to be used in cars or in aircraft, where the emphasis on light and strong materials is driven by the need to reduce weight, improve fuel efficiency and enhance environmental performance.
“There have been some studies to suggest that graphene-containing composites offer improved resistance to fatigue, but until now, nobody had measured the fatigue behaviour of the underlying material,” he says. “Our goal in doing this was to get at that fundamental understanding so that in the future, we’ll be able to design composites that work even better.”
Here’s a link to and a citation for the paper,
Fatigue of graphene by Teng Cui, Sankha Mukherjee, Parambath M. Sudeep, Guillaume Colas, Farzin Najafi, Jason Tam, Pulickel M. Ajayan, Chandra Veer Singh, Yu Sun & Tobin Filleter. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-019-0586-y Published: 20 January 2020
No brain, but it learns; it has about 720 sexes; and it travels at a rate of approximately 4 cm (1.6 inches) per hour. It is known as ‘le blob’. Fascinated when I first stumbled across the news, I had to post this piece but wish I hadn’t waited so long.
Here’s the 101: the 900-odd species of slime mould, of which P. polycephalum is just one, are a taxonomic headache. They’re currently boxed into the Protista kingdom, because where else are you going to put something that isn’t a fungus, plant, bacteria, or animal?
When life is good, they tend to live solitary lives as single cells, like amoebae.
On occasion they squish together, forming a wide, branching structure called a plasmodium that can cover several square metres as they search cities to conquer. Well, bacteria to digest at least.
If you thought your experience on Tinder was hard, dating for slime moulds is a nightmare. Cells can only mix-and-match their genetic material if each has a compatible set of genes called matA, matB, and matC, each with up to 16 variations.
But the truly fascinating part is their ability to sense and rapidly adapt to their environment – a behaviour we might, for lack of a better word, call learning.
It isn’t an animal, a plant, or a fungus. The slime mold (Physarum polycephalum) is a strange, creeping, bloblike organism made up of one giant cell. Though it has no brain, it can learn from experience, as biologists at the Research Centre on Animal Cognition (CNRS, Université Toulouse III — Paul Sabatier) previously demonstrated. Now the same team of scientists has gone a step further, proving that a slime mold can transmit what it has learned to a fellow slime mold when the two combine. These new findings are published in the December 21, 2016, issue of the Proceedings of the Royal Society B.
Imagine you could temporarily fuse with someone, acquire that person’s knowledge, and then split off to become your separate self again. With slime molds, that really happens! The slime mold — Physarum polycephalum for scientists — is a unicellular organism whose natural habitat is forest litter. But it can also be cultured in a laboratory petri dish. Audrey Dussutour and David Vogel had already trained slime molds to move past repellent but harmless substances (e.g. coffee, quinine, or salt) to reach their food. They now reveal that a slime mold that has learned to ignore salt can transmit this acquired behavior to another simply by fusing with it.
To achieve this, the researchers taught more than 2,000 slime molds that salt posed no threat. In order to reach their food, these slime molds had to cross a bridge covered with salt. This experience made them habituated slime molds. Meanwhile, another 2,000 slime molds had to cross a bridge bare of any substance. They made up the group of naive slime molds. After this training period, the scientists grouped slime molds into habituated, naive, and mixed pairs. Paired slime molds fused together where they came into contact. The new, fused slime molds then had to cross salt-covered bridges. To the researchers’ surprise, the mixed slime molds moved just as fast as habituated pairs, and much faster than naive ones, suggesting that knowledge of the harmless nature of salt had been shared. This held true for slime molds formed from 3 or 4 individuals. No matter how many fused, only 1 habituated slime mold was needed to transfer the information.
To check that transfer had indeed taken place, the scientists separated the slime molds 1 hour and 3 hours after fusion and repeated the bridge experiment. Only naive slime molds that had been fused with habituated slime molds for 3 hours ignored the salt; all others were repulsed by it. This was proof of learning. When viewing the slime molds through a microscope, the scientists noticed that, after 3 hours, a vein formed at the point of fusion. This vein is undoubtedly the channel through which information is shared. The next challenges facing the researchers are to elucidate the form this information takes, and to test whether more than one behavior can be transmitted simultaneously. If Slime Mold A learns how to ignore quinine and Slime Mold B to ignore salt, the biologists wonder whether both behaviors can be transmitted and retained through fusion.
Here’s a link to and a citation for the paper published in 2016,
[Translated from the French:] The blob is a complex unicellular organism without a nervous system. It is capable of storing knowledge and transmitting it to its fellow blobs, but how it does so remained a mystery. Researchers from the Research Centre on Animal Cognition (CNRS/UT3 Paul Sabatier)* have just shown that the blob learns to tolerate a substance by absorbing it.
This discovery stems from an observation: blobs exchange information only when their venous networks fuse. If so, does the knowledge circulate through these veins? And is the substance to which the blob becomes habituated the medium of its “memory”?
The team of scientists first trained blobs to cross salty environments for six days in order to habituate them to salt. They then measured the salt concentration within these blobs: they contained ten times more salt than “naive” blobs. The researchers then placed them in a neutral environment and observed that they excreted the salt they contained within two days, thereby losing their “memory”. This experiment therefore seemed to indicate a link between the salt concentration within the organism and the “memory” of the learning.
To go further and confirm this hypothesis, the scientists gave naive blobs the “memory” of salt habituation by injecting salt directly into their organisms. Two hours later, the blobs no longer behaved like naive blobs but like blobs that had undergone six days of training.
When environmental conditions deteriorate, blobs are able to enter a dormant state. The researchers demonstrated that one month after entering this state, the blobs retained their habituation to salt. Blobs in fact store the absorbed salt during the dormant phase and thus retain the knowledge over the long term.
The results of this study show that the aversive substance could be the medium of the blob’s “memory”. The researchers are now trying to understand whether the blob can memorize several aversive substances at once and to what extent it is capable of becoming habituated to them.
* The Research Centre on Animal Cognition is part of the Centre for Integrative Biology (CNRS/UT3 Paul Sabatier)
Here’s the abstract for the paper (the link and citation follow afterward),
Learning and memory are indisputably key features of animal success. Using information about past experiences is critical for optimal decision-making in a fluctuating environment. Those abilities are usually believed to be limited to organisms with a nervous system, precluding their existence in non-neural organisms. However, recent studies showed that the slime mould Physarum polycephalum, despite being unicellular, displays habituation, a simple form of learning. In this paper, we studied the possible substrate of both short- and long-term habituation in slime moulds. We habituated slime moulds to sodium, a known repellent, using a 6 day training and turned them into a dormant state named sclerotia. Those slime moulds were then revived and tested for habituation. We showed that information acquired during the training was preserved through the dormant stage as slime moulds still showed habituation after a one-month dormancy period. Chemical analyses indicated a continuous uptake of sodium during the process of habituation and showed that sodium was retained throughout the dormant stage. Lastly, we showed that memory inception via constrained absorption of sodium for 2 h elicited habituation. Our results suggest that slime moulds absorbed the repellent and used it as a ‘circulating memory’.
This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.
Here’s the link and the citation for the 2019 paper,
Should you ever wish to find ‘le blob’, the Paris Zoological Park, known as the parc zoologique de Paris, is one of four establishments which comprise the totality of the Muséum national d’histoire naturelle in Paris. There are others outside Paris. (You can find more in the Muséum’s Wikipedia entry but it is in French.)