The most recent post here about preserving stone monuments and buildings (though not the most recent research) is a December 23, 2019 piece titled: Good for your bones and good for art conservation: calcium. Spanish researchers (who seem particularly active in this research niche) are investigating a more refined approach to preserving stone monuments with calcium, according to a May 8, 2020 news item on Nanowerk,
The fluorescence emitted by tiny zinc oxide quantum dots can be used to determine the penetration depth of certain substances used in the restoration of historical buildings. Researchers from Pablo de Olavide University (Spain) have tested this with samples collected from historical quarries in Cadiz, where the stone was used to build the city hall and cathedral of Seville.
One of the main problems in the preservation of historic buildings is the loss of cohesion of their building materials. Restorers use consolidating substances to make them more resistant, such as lime (calcium hydroxide), which has long been used because of its great durability and high compatibility with the carbonate stone substrate.
Now, researchers at Pablo de Olavide University, in Seville, have developed and patented calcium hydroxide nanoparticles doped with quantum dots that are more effective as a consolidant and make it possible to distinguish the restored material from the original, as is recommended for the conservation and restoration of historical heritage.
“The tiny quantum dots, which are smaller than 10 nanometres, are made of zinc oxide and are semiconductors, which gives them very interesting properties (different from those of larger particles due to quantum mechanics), such as fluorescence, which is the one we use,” explains Javier Becerra, one of the authors.
“Thanks to the fluorescence of these quantum dots, we can evaluate the suitability of the treatment for a monument,” he adds. “We only need to illuminate with ultraviolet light a cross-section of the treated material to determine how far the consolidating matter has penetrated.”
In addition, the product, which the authors have named Nanorepair UV, acts as a consolidant due to the presence of the lime nanoparticles. Consolidation is a procedure that increases the degree of cohesion of a material, reinforcing and hardening the parts that have suffered some deterioration, which is frequent in historical buildings.
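To make the measurement concrete: the penetration depth can be read off the point where the UV-excited fluorescence fades in a cross-section. Here is a minimal sketch of that idea; the intensity profile, sampling step, and threshold are invented for illustration and are not values from the study.

```python
# Hypothetical sketch: estimating consolidant penetration depth from a
# fluorescence intensity profile measured across a treated cross-section.
# All numbers here are illustrative assumptions, not data from the paper.

def penetration_depth_mm(intensities, step_mm, threshold):
    """Return the depth at which fluorescence first drops below threshold."""
    for i, value in enumerate(intensities):
        if value < threshold:
            return i * step_mm
    return len(intensities) * step_mm  # consolidant detected throughout

# Simulated UV fluorescence readings, sampled every 0.5 mm from the surface.
profile = [92, 88, 75, 60, 41, 18, 6, 4, 3]
print(penetration_depth_mm(profile, step_mm=0.5, threshold=20))  # 2.5
```

The same thresholding could be run per pixel row of a UV photograph to map penetration across an entire sample face.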
The researchers have successfully applied their technique to samples collected in the historic quarries of El Puerto de Santa María and Espera (Cadiz), from where the stone used to build such iconic monuments as Seville Cathedral, a World Heritage Site since 1987, or the town’s city hall, was extracted.
“In the laboratory, we thus obtain an approximation of how the treatment will behave when it is actually applied to the monuments,” says Becerra, who together with the rest of the team, is currently also testing mortars from the Italica and Medina Azahara archaeological sites.
Oddly, this work was not published all that recently. In any event, here’s a link to and a citation for the paper,
Robot comedian is not my first thought on seeing that image; ventriloquist’s dummy is what came to mind. However, it’s not the first time I’ve been wrong about something. A May 19, 2020 news item on ScienceDaily reveals the truth about Jon, a comedian in robot form,
Standup comedian Jon the Robot likes to tell his audiences that he does lots of auditions but has a hard time getting bookings.
“They always think I’m too robotic,” he deadpans.
If raucous laughter follows, he comes back with, “Please tell the booking agents how funny that joke was.”
If it doesn’t, he follows up with, “Sorry about that. I think I got caught in a loop. Please tell the booking agents that you like me … that you like me … that you like me … that you like me.”
Jon the Robot, with assistance from Oregon State University researcher Naomi Fitter, recently wrapped up a 32-show tour of comedy clubs in greater Los Angeles and in Oregon, generating guffaws and, more importantly, data that scientists and engineers can use to help robots and people relate more effectively with one another via humor.
“Social robots and autonomous social agents are becoming more and more ingrained in our everyday lives,” said Fitter, assistant professor of robotics in the OSU College of Engineering. “Lots of them tell jokes to engage users – most people understand that humor, especially nuanced humor, is essential to relationship building. But it’s challenging to develop entertaining jokes for robots that are funny beyond the novelty level.”
Live comedy performances are a way for robots to learn “in the wild” which jokes and which deliveries work and which ones don’t, Fitter said, just like human comedians do.
The comedy tour comprised two studies and included assistance from a team of Southern California comedians in coming up with material true to, and appropriate for, a robot comedian.
The first study, consisting of 22 performances in the Los Angeles area, demonstrated that audiences found a robot comic with good timing – giving the audience the right amounts of time to react, etc. – to be significantly funnier than one without good timing.
The second study, based on 10 routines in Oregon, determined that an “adaptive performance” – delivering post-joke “tags” that acknowledge an audience’s reaction to the joke – wasn’t necessarily funnier overall, but the adaptations almost always improved the audience’s perception of individual jokes. In the second study, all performances featured appropriate timing.
“In bad-timing mode, the robot always waited a full five seconds after each joke, regardless of audience response,” Fitter said. “In appropriate-timing mode, the robot used timing strategies to pause for laughter and continue when it subsided, just like an effective human comedian would. Overall, joke response ratings were higher when the jokes were delivered with appropriate timing.”
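The two timing modes Fitter describes can be sketched as a simple loop over audio-level samples. This is a hypothetical reconstruction, not the team’s actual software; the one-sample-per-second rate and the laughter threshold are assumptions.

```python
# Sketch of the two timing modes described above, assuming one audio-level
# sample per second. Threshold and sample values are hypothetical, not
# taken from Fitter's implementation.

def wait_seconds(audio_levels, mode, fixed_pause=5, laugh_threshold=0.3):
    """Return how long the robot pauses after delivering a joke."""
    if mode == "bad_timing":
        return fixed_pause  # always wait a full five seconds
    # appropriate timing: keep pausing while laughter stays above threshold
    pause = 0
    for level in audio_levels:
        if level < laugh_threshold:
            break  # laughter has subsided; continue the routine
        pause += 1
    return pause

big_laugh = [0.9, 0.8, 0.6, 0.4, 0.1]  # laughter subsides after 4 seconds
silence = [0.05, 0.04]

print(wait_seconds(big_laugh, "appropriate"))  # 4
print(wait_seconds(silence, "appropriate"))    # 0
print(wait_seconds(big_laugh, "bad_timing"))   # 5
```

The second study’s post-joke “tags” would hang off the same measurement: a strong response triggers one follow-up line, silence triggers another.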
The number of performances, given to audiences of 10 to 20, provided enough data to identify significant differences between distinct modes of robot comedy performance, and the research helped to answer key questions about comedic social interaction, Fitter said.
“Audience size, social context, cultural context, the microphone-holding human presence and the novelty of a robot comedian may have influenced crowd responses,” Fitter said. “The current software does not account for differences in laughter profiles, but future work can account for these differences using a baseline response measurement. The only sensing we used to evaluate joke success was audio readings. Future work might benefit from incorporating additional types of sensing.”
Still, the studies have key implications for artificial intelligence efforts to understand group responses to dynamic, entertaining social robots in real-world environments, she said.
“Also, possible advances in comedy from this work could include improved techniques for isolating and studying the effects of comedic techniques and better strategies to help comedians assess the success of a joke or routine,” she said. “The findings will guide our next steps toward giving autonomous social agents improved humor capabilities.”
The studies were published by the Association for Computing Machinery [ACM]/Institute of Electrical and Electronics Engineering’s [IEEE] International Conference on Human-Robot Interaction [HRI].
Here’s another link to the two studies, published in a single paper first presented at the 2020 International Conference on Human-Robot Interaction [HRI], along with a citation for the published presentation,
More and more, this resembles a public relations campaign. First, CRISPR (clustered regularly interspaced short palindromic repeats) gene editing is going to be helpful with COVID-19 and now it can help us to deal with conservation issues. (See my May 26, 2020 posting about the latest CRISPR doings as of May 7, 2020; included is a brief description of the patent dispute between Broad Institute and UC Berkeley and musings about a public relations campaign.)
The gene-editing technology CRISPR has been used for a variety of agricultural and public health purposes — from growing disease-resistant crops to, more recently, a diagnostic test for the virus that causes COVID-19. Now a study involving fish that look nearly identical to the endangered Delta smelt finds that CRISPR can be a conservation and resource management tool, as well. The researchers think its ability to rapidly detect and differentiate among species could revolutionize environmental monitoring.
The study, published in the journal Molecular Ecology Resources, was led by scientists at the University of California, Davis, and the California Department of Water Resources in collaboration with MIT Broad Institute [emphasis mine].
As a proof of concept, it found that the CRISPR-based detection platform SHERLOCK (Specific High-sensitivity Enzymatic Reporter Unlocking) [emphasis mine] was able to genetically distinguish threatened fish species from similar-looking nonnative species in nearly real time, with no need to extract DNA.
“CRISPR can do a lot more than edit genomes,” said co-author Andrea Schreier, an adjunct assistant professor in the UC Davis animal science department. “It can be used for some really cool ecological applications, and we’re just now exploring that.”
WHEN GETTING IT WRONG IS A BIG DEAL
The scientists focused on three fish species of management concern in the San Francisco Estuary: the U.S. threatened and California endangered Delta smelt, the California threatened longfin smelt and the nonnative wakasagi. These three species are notoriously difficult to visually identify, particularly in their younger stages.
Hundreds of thousands of Delta smelt once lived in the Sacramento-San Joaquin Delta before the population crashed in the 1980s. Only a few thousand are estimated to remain in the wild.
“When you’re trying to identify an endangered species, getting it wrong is a big deal,” said lead author Melinda Baerwald, a project scientist at UC Davis at the time the study was conceived and currently an environmental program manager with California Department of Water Resources.
For example, state and federal water pumping projects have to reduce water exports if enough endangered species, like Delta smelt or winter-run chinook salmon, get sucked into the pumps. Rapid identification makes real-time decision making about water operations feasible.
FROM HOURS TO MINUTES
Typically to accurately identify the species, researchers rub a swab over the fish to collect a mucus sample or take a fin clip for a tissue sample. Then they drive or ship it to a lab for a genetic identification test and await the results. Not counting travel time, that can take, at best, about four hours.
SHERLOCK shortens this process from hours to minutes. Researchers can identify the species within about 20 minutes, at remote locations, noninvasively, with no specialized lab equipment. Instead, they use either a handheld fluorescence reader or a flow strip that works much like a pregnancy test — a band on the strip shows if the target species is present.
“Anyone working anywhere could use this tool to quickly come up with a species identification,” Schreier said.
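A field readout of this kind ultimately reduces to thresholding reporter signals. The sketch below shows that logic; the species names come from the study, but the signal values, the threshold, and the function itself are invented for illustration.

```python
# Illustrative sketch of calling a species from SHERLOCK-style fluorescence
# readouts: one reporter per candidate species, with a positive call when
# the signal exceeds background. Readings and threshold are invented.

def identify_species(readings, threshold=1000):
    """Return the single species whose reporter signal exceeds background."""
    hits = [name for name, signal in readings.items() if signal > threshold]
    return hits[0] if len(hits) == 1 else None  # ambiguous or no call

sample = {"Delta smelt": 18500, "longfin smelt": 240, "wakasagi": 310}
print(identify_species(sample))  # Delta smelt
```

On a flow strip the same decision is made by eye: a visible band on the strip corresponds to a signal above the detection threshold.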
OTHER CRYPTIC CRITTERS
While the three fish species were the only animals tested for this study, the researchers expect the method could be used for other species, though more research is needed to confirm. If so, this sort of onsite, real-time capability may be useful for confirming species at crime scenes, in the animal trade at border crossings, for monitoring poaching, and for other animal and human health applications.
“There are a lot of cryptic species we can’t accurately identify with our naked eye,” Baerwald said. “Our partners at MIT are really interested in pathogen detection for humans. We’re interested in pathogen detection for animals as well as using the tool for other conservation issues.”
SHERLOCK is an evolution of CRISPR technology, which others use to make precise edits in genetic code. SHERLOCK can detect the unique genetic fingerprints of virtually any DNA or RNA sequence in any organism or pathogen. Developed by our founders and licensed exclusively from the Broad Institute, SHERLOCK is a method for single molecule detection of nucleic acid targets and stands for Specific High Sensitivity Enzymatic Reporter unLOCKing. It works by amplifying genetic sequences and programming a CRISPR molecule to detect the presence of a specific genetic signature in a sample, which can also be quantified. When it finds those signatures, the CRISPR enzyme is activated and releases a robust signal. This signal can be adapted to work on a simple paper strip test, in laboratory equipment, or to provide an electrochemical readout that can be read with a mobile phone.
Ensuring the SHERLOCK diagnostic platform is easily accessible, especially in the developing world, where the need for inexpensive, reliable, field-based diagnostics is the most urgent
SHERLOCK (Specific High-sensitivity Enzymatic Reporter unLOCKing) is a CRISPR-based diagnostic tool that is rapid, inexpensive, and highly sensitive, with the potential to have a transformative effect on research and global public health. The SHERLOCK platform can detect viruses, bacteria, or other targets in clinical samples such as urine or blood, and reveal results on a paper strip — without the need for extensive specialized equipment. This technology could potentially be used to aid the response to infectious disease outbreaks, monitor antibiotic resistance, detect cancer, and more. SHERLOCK tools are freely available [emphasis mine] for academic research worldwide, and the Broad Institute’s licensing framework [emphasis mine] ensures that the SHERLOCK diagnostic platform is easily accessible in the developing world, where inexpensive, reliable, field-based diagnostics are urgently needed.
Here’s what I suspect: as stated, the Broad Institute offers free SHERLOCK licenses to academic institutions and not-for-profit organizations, but Sherlock Biosciences, a Broad Institute spinoff company, is for-profit and has trademarked SHERLOCK for commercial purposes.
This looks like a relatively subtle campaign to influence public perceptions. Genetic modification or genetic engineering as exemplified by the CRISPR gene editing technique is a force for the good of all. It will help us in our hour of need (COVID-19 pandemic) and it can help us save various species and better manage our resources.
This contrasts greatly with the publicity generated by the CRISPR twins situation where a scientist claimed to have successfully edited the germline for twins, Lulu and Nana. This was done despite a voluntary, worldwide moratorium on germline editing of viable embryos. (Search the terms [either here or on a standard search engine] ‘CRISPR twins’, ‘Lulu and Nana’, and/or ‘He Jiankui’ for details about the scandal.)
In addition to presenting CRISPR as beneficial in the short term rather than the distant future, this publicity also subtly positions the Broad Institute as CRISPR’s owner.
Clustered regularly interspaced short palindromic repeats (CRISPR) gene editing has been largely confined to laboratory use or tested in agricultural trials. I believe that is true worldwide excepting the CRISPR twin scandal. (There are numerous postings about the CRISPR twins here including a Nov. 28, 2018 post, a May 17, 2019 post, and a June 20, 2019 post. Update: It was reported (3rd para.) in December 2019 that He had been sentenced to three years jail time.)
Connie Lin in a May 7, 2020 article for Fast Company reports on this surprising decision by the US Food and Drug Administration (FDA) (Note: A link has been removed),
The U.S. Food and Drug Administration has granted Emergency Use Authorization to a COVID-19 test that uses controversial gene-editing technology CRISPR.
This marks the first time CRISPR has been authorized by the FDA, although only for the purpose of detecting the coronavirus, and not for its far more contentious applications. The new test kit, developed by Cambridge, Massachusetts-based Sherlock Biosciences, will be deployed in laboratories certified to carry out high-complexity procedures and is “rapid,” returning results in about an hour as opposed to those that rely on the standard polymerase chain reaction method, which typically requires six hours.
The announcement was made in the FDA’s Coronavirus (COVID-19) Update: May 7, 2020 Daily Roundup (4th item in the bulleted list). Or, you can read the May 6, 2020 letter (PDF) sent to John Vozella of Sherlock Biosciences by the FDA.
Sherlock Biosciences, an Engineering Biology company dedicated to making diagnostic testing better, faster and more affordable, today announced the company has received Emergency Use Authorization (EUA) from the U.S. Food and Drug Administration (FDA) for its Sherlock™ CRISPR SARS-CoV-2 kit for the detection of the virus that causes COVID-19, providing results in approximately one hour.
“While it has only been a little over a year since the launch of Sherlock Biosciences, today we have made history with the very first FDA-authorized use of CRISPR technology, which will be used to rapidly identify the virus that causes COVID-19,” said Rahul Dhanda, co-founder, president and CEO of Sherlock Biosciences. “We are committed to providing this initial wave of testing kits to physicians, laboratory experts and researchers worldwide to enable them to assist frontline workers leading the charge against this pandemic.”
The Sherlock™ CRISPR SARS-CoV-2 test kit is designed for use in laboratories certified under the Clinical Laboratory Improvement Amendments of 1988 (CLIA), 42 U.S.C. §263a, to perform high complexity tests. Based on the SHERLOCK method, which stands for Specific High-sensitivity Enzymatic Reporter unLOCKing, the kit works by programming a CRISPR molecule to detect the presence of a specific genetic signature – in this case, the genetic signature for SARS-CoV-2 – in a nasal swab, nasopharyngeal swab, oropharyngeal swab or bronchoalveolar lavage (BAL) specimen. When the signature is found, the CRISPR enzyme is activated and releases a detectable signal. In addition to SHERLOCK, the company is also developing its INSPECTR™ platform to create an instrument-free, handheld test – similar to that of an at-home pregnancy test – that utilizes Sherlock Biosciences’ Synthetic Biology platform to provide rapid detection of a genetic match of the SARS-CoV-2 virus.
“When our lab collaborated with Dr. Feng Zhang’s team to develop SHERLOCK, we believed that this CRISPR-based diagnostic method would have a significant impact on global health,” said James J. Collins, co-founder and board member of Sherlock Biosciences and Termeer Professor of Medical Engineering and Science for MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering. “During what is a major healthcare crisis across the globe, we are heartened that the first FDA-authorized use of CRISPR will aid in the fight against this global COVID-19 pandemic.”
Access to rapid diagnostics is critical for combating this pandemic and is a primary focus for Sherlock Biosciences co-founder and board member, David R. Walt, Ph.D., who co-leads the Mass [Massachusetts] General Brigham Center for COVID Innovation.
“SHERLOCK enables rapid identification of a single alteration in a DNA or RNA sequence in a single molecule,” said Dr. Walt. “That precision, coupled with its capability to be deployed to multiplex over 100 targets or as a simple point-of-care system, will make it a critical addition to the arsenal of rapid diagnostics already being used to detect COVID-19.”
This development is particularly interesting since there was a major intellectual property dispute over CRISPR between the Broad Institute (a Harvard University and Massachusetts Institute of Technology [MIT] joint initiative), and the University of California at Berkeley (UC Berkeley). The Broad Institute mostly won in the first round of the patent fight, as I noted in a March 15, 2017 post but, as far as I’m aware, UC Berkeley is still disputing that decision.
In the period before receiving authorization, it appears that Sherlock Biosciences was doing a little public relations and ‘consciousness raising’ work. Here’s a sample from a May 5, 2020 article by Sharon Begley for STAT (Note: Links have been removed),
The revolutionary genetic technique better known for its potential to cure thousands of inherited diseases could also solve the challenge of Covid-19 diagnostic testing, scientists announced on Tuesday. A team headed by biologist Feng Zhang of the McGovern Institute at MIT and the Broad Institute has repurposed the genome-editing tool CRISPR into a test able to quickly detect as few as 100 coronavirus particles in a swab or saliva sample.
Crucially, the technique, dubbed a “one pot” protocol, works in a single test tube and does not require the many specialty chemicals, or reagents, whose shortage has hampered the rollout of widespread Covid-19 testing in the U.S. It takes about an hour to get results, requires minimal handling, and in preliminary studies has been highly accurate, Zhang told STAT. He and his colleagues, led by the McGovern’s Jonathan Gootenberg and Omar Abudayyeh, released the protocol on their STOPCovid.science website.
Because the test has not been approved by the Food and Drug Administration, it is only for research purposes for now. But minutes before speaking to STAT on Monday, Zhang and his colleagues were on a conference call with FDA officials about what they needed to do to receive an “emergency use authorization” that would allow clinical use of the test. The FDA has used EUAs to fast-track Covid-19 diagnostics as well as experimental therapies, including remdesivir, after less extensive testing than usually required.
For an EUA, the agency will require the scientists to validate the test, which they call STOPCovid, on dozens to hundreds of samples. Although “it is still early in the process,” Zhang said, he and his colleagues are confident enough in its accuracy that they are conferring with potential commercial partners who could turn the test into a cartridge-like device, similar to a pregnancy test, enabling Covid-19 testing at doctor offices and other point-of-care sites.
“It could potentially even be used at home or at workplaces,” Zhang said. “It’s inexpensive, does not require a lab, and can return results within an hour using a paper strip, not unlike a pregnancy test. This helps address the urgent need for widespread, accurate, inexpensive, and accessible Covid-19 testing.” Public health experts say the availability of such a test is one of the keys to safely reopening society, which will require widespread testing, and then tracing and possibly isolating the contacts of those who test positive.
A May 15, 2020 news item on Nanowerk provides context for an announcement of a research breakthrough on quantum entanglement,
Quantum entanglement is a process by which microscopic objects like electrons or atoms lose their individuality to become better coordinated with each other. Entanglement is at the heart of quantum technologies that promise large advances in computing, communications and sensing, for example detecting gravitational waves.
Entangled states are famously fragile: in most cases even a tiny disturbance will undo the entanglement. For this reason, current quantum technologies take great pains to isolate the microscopic systems they work with, and typically operate at temperatures close to absolute zero.
The ICFO [Institute of Photonic Sciences; Spain] team, in contrast, heated a collection of atoms to 450 Kelvin, millions of times hotter than most atoms used for quantum technology. Moreover, the individual atoms were anything but isolated; they collided with each other every few microseconds, and each collision set their electrons spinning in random directions.
The researchers used a laser to monitor the magnetization of this hot, chaotic gas. The magnetization is caused by the spinning electrons in the atoms, and provides a way to study the effect of the collisions and to detect entanglement. What the researchers observed was an enormous number of entangled atoms – about 100 times more than ever before observed. They also saw that the entanglement is non-local – it involves atoms that are not close to each other. Between any two entangled atoms there are thousands of other atoms, many of which are entangled with still other atoms, in a giant, hot and messy entangled state.
What they also saw, as Jia Kong, first author of the study, recalls, “is that if we stop the measurement, the entanglement remains for about 1 millisecond, which means that 1000 times per second a new batch of 15 trillion atoms is being entangled. And you must think that 1 ms is a very long time for the atoms, long enough for about fifty random collisions to occur. This clearly shows that the entanglement is not destroyed by these random events. This is maybe the most surprising result of the work”.
The observation of this hot and messy entangled state paves the way for ultra-sensitive magnetic field detection. For example, in magnetoencephalography (magnetic brain imaging), a new generation of sensors uses these same hot, high-density atomic gases to detect the magnetic fields produced by brain activity. The new results show that entanglement can improve the sensitivity of this technique, which has applications in fundamental brain science and neurosurgery.
As ICREA [Catalan Institution for Research and Advanced Studies] Prof. at ICFO Morgan Mitchell states, “this result is surprising, a real departure from what everyone expects of entanglement.” He adds, “we hope that this kind of giant entangled state will lead to better sensor performance in applications ranging from brain imaging to self-driving cars to searches for dark matter.”
A Spin Singlet and QND
A spin singlet is one form of entanglement where the multiple particles’ spins–their intrinsic angular momentum–add up to 0, meaning the system has zero total angular momentum. In this study, the researchers applied quantum non-demolition (QND) measurement to extract the information of the spin of trillions of atoms. The technique passes laser photons with a specific energy through the gas of atoms. These photons with this precise energy do not excite the atoms but they themselves are affected by the encounter. The atoms’ spins act as magnets to rotate the polarization of the light. By measuring how much the photons’ polarization has changed after passing through the cloud, the researchers are able to determine the total spin of the gas of atoms.
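The singlet condition and the QND readout described above can be written compactly. This is the generic textbook formulation, not necessarily the paper’s exact notation:

```latex
% Two-spin singlet: anti-aligned spins with zero total angular momentum
\left|S\right\rangle = \frac{1}{\sqrt{2}}
  \bigl(\left|\uparrow\downarrow\right\rangle
      - \left|\downarrow\uparrow\right\rangle\bigr)

% Many-atom case: the collective spin of N atoms nearly cancels
\hat{\mathbf{F}} = \sum_{i=1}^{N} \hat{\mathbf{f}}_i ,
\qquad \langle \hat{\mathbf{F}}^{2} \rangle \approx 0

% QND readout: the probe light's polarization rotates by an angle
% proportional to the collective spin component along the beam,
% with a coupling constant G set by the light-atom interaction
\varphi \propto G \, \langle \hat{F}_z \rangle
```

Because the probe measures only the collective component, it can certify entanglement among trillions of atoms without disturbing (demolishing) the spin state it reads out.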
The SERF regime
Current magnetometers operate in a regime that is called SERF, far away from the near absolute zero temperatures that researchers typically employ to study entangled atoms. In this regime, any atom experiences many random collisions with other neighbouring atoms, making collisions the most important effect on the state of the atom. In addition, because they are in a hot medium rather than an ultracold one, the collisions rapidly randomize the spin of the electrons in any given atom. The experiment shows, surprisingly, that this kind of disturbance does not break the entangled states, it merely passes the entanglement from one atom to another.
Just to solve a puzzle or play a game, artificial intelligence can require software running on thousands of computers, a workload that can consume the energy three nuclear plants produce in one hour.
A team of engineers has created hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed for using AI in more advanced applications such as self-driving cars or discovering drugs.
“Software is taking on most of the challenges in AI. If you could incorporate intelligence into the circuit components in addition to what is happening in software, you could do things that simply cannot be done today,” said Shriram Ramanathan, a professor of materials engineering at Purdue University.
AI hardware development is still in early research stages. Researchers have demonstrated AI in pieces of potential hardware, but haven’t yet addressed AI’s large energy demand.
As AI penetrates more of daily life, a heavy reliance on software with massive energy needs is not sustainable, Ramanathan said. If hardware and software could share intelligence features, an area of silicon might be able to achieve more with a given input of energy.
Ramanathan’s team is the first to demonstrate artificial “tree-like” memory in a piece of potential hardware at room temperature. Researchers in the past have only been able to observe this kind of memory in hardware at temperatures that are too low for electronic devices.
The results of this study are published in the journal Nature Communications.
The hardware that Ramanathan’s team developed is made of a so-called quantum material. These materials are known for having properties that cannot be explained by classical physics. Ramanathan’s lab has been working to better understand these materials and how they might be used to solve problems in electronics.
Software uses tree-like memory to organize information into various “branches,” making that information easier to retrieve when learning new skills or tasks.
The strategy is inspired by how the human brain categorizes information and makes decisions.
“Humans memorize things in a tree structure of categories. We memorize ‘apple’ under the category of ‘fruit’ and ‘elephant’ under the category of ‘animal,’ for example,” said Hai-Tian Zhang, a Lillian Gilbreth postdoctoral fellow in Purdue’s College of Engineering. “Mimicking these features in hardware is potentially interesting for brain-inspired computing.”
The team introduced a proton to a quantum material called neodymium nickel oxide. They discovered that applying an electric pulse to the material moves around the proton. Each new position of the proton creates a different resistance state, which creates an information storage site called a memory state. Multiple electric pulses create a branch made up of memory states.
“We can build up many thousands of memory states in the material by taking advantage of quantum mechanical effects. The material stays the same. We are simply shuffling around protons,” Ramanathan said.
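The branch-building described above can be caricatured in a few lines of code: each pulse adds a new resistance state under a category branch. The class, resistance values, and API below are purely illustrative, not from the paper.

```python
# Toy sketch of "tree-like" memory built from pulse-addressed resistance
# states: each electric pulse moves the proton to a new position, and each
# position is a distinct resistance (memory) state stored under a branch.
# Everything here is illustrative, not the paper's actual device physics.

class MemoryTree:
    def __init__(self):
        self.tree = {}  # branch (category) -> list of resistance states

    def pulse(self, category, resistance_ohms):
        """Apply a pulse: record a new memory state under a branch."""
        self.tree.setdefault(category, []).append(resistance_ohms)

    def recall(self, category):
        """Retrieve every memory state stored under a branch."""
        return self.tree.get(category, [])

m = MemoryTree()
m.pulse("fruit", 1.2e3)   # 'apple' stored as one resistance state
m.pulse("fruit", 1.5e3)   # a second state on the same branch
m.pulse("animal", 2.1e3)  # 'elephant' under a different branch
print(len(m.recall("fruit")), len(m.recall("animal")))  # 2 1
```

The point of the hardware result is that the material itself holds this branch structure in proton positions, rather than a program holding it in software.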
Through simulations of the properties discovered in this material, the team showed that the material is capable of learning the numbers 0 through 9. The ability to learn numbers is a baseline test of artificial intelligence.
The demonstration of these trees at room temperature in a material is a step toward showing that hardware could offload tasks from software.
“This discovery opens up new frontiers for AI that have been largely ignored because implementing this kind of intelligence into electronic hardware didn’t exist,” Ramanathan said.
The material might also help create a way for humans to more naturally communicate with AI.
“Protons also are natural information transporters in human beings. A device enabled by proton transport may be a key component for eventually achieving direct communication with organisms, such as through a brain implant,” Zhang said.
Here’s a link to and a citation for the published study,
Perovskite neural trees by Hai-Tian Zhang, Tae Joon Park, Shriram Ramanathan. Nature Communications volume 11, Article number: 2245 (2020) DOI: https://doi.org/10.1038/s41467-020-16105-y Published: 07 May 2020
I went down a rabbit hole while trying to figure out the difference between ‘organic’ memristors and standard memristors. I have put the results of my investigation at the end of this post. First, there’s the news.
An April 21, 2020 news item on ScienceDaily explains why researchers are so focused on memristors and brainlike computing,
The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics and bring forth the fourth Industrial Revolution. The pressing question for many researchers is how to handle this technological revolution.
“It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets,” said Thirumalai Venkatesan, one of the authors of a paper published in Applied Physics Reviews, from AIP Publishing.
“Today’s computing is way too energy-intensive to handle big data. We need to rethink our approaches to computation on all levels: materials, devices and architecture that can enable ultralow energy computing.”
Brain-inspired electronics with organic memristors could offer a functionally promising and cost-effective platform, according to Venkatesan. Memristive devices are electronic devices with an inherent memory that are capable of both storing data and performing computation. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.
Until now, oxides have been the leading candidate as the optimal material for memristors. Other material systems have been proposed, but none has been successful so far.
“Over the last 20 years, there have been several attempts to come up with organic memristors, but none of those have shown any promise,” said Sreetosh Goswami, lead author on the paper. “The primary reason behind this failure is their lack of stability, reproducibility and ambiguity in mechanistic understanding. At a device level, we are now able to solve most of these problems.”
This new generation of organic memristors is based on metal azo complex devices, the brainchild of Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science in Kolkata and another author on the paper.
“In thin films, the molecules are so robust and stable that these devices can eventually be the right choice for many wearable and implantable technologies or a body net, because these could be bendable and stretchable,” said Sreebrata Goswami. A body net is a series of wireless sensors that stick to the skin and track health.
The next challenge will be to produce these organic memristors at scale, said Venkatesan.
“Now we are making individual devices in the laboratory. We need to make circuits for large-scale functional implementation of these devices.”
This undated article on Nanowerk provides a relatively complete and technical description of memristors in general (Note: A link has been removed),
A memristor (named as a portmanteau of memory and resistor) is a non-volatile electronic memory device that was first theorized by Leon Ong Chua in 1971 as the fourth fundamental two-terminal circuit element following the resistor, the capacitor, and the inductor (IEEE Transactions on Circuit Theory, “Memristor-The missing circuit element”).
Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function). Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if the device loses power.
However, it was only almost 40 years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behavior. …
The article on Nanowerk includes an embedded video presentation on memristors given by Stanley Williams (also known as R. Stanley Williams).
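For readers who want a feel for how a memristor "remembers," here is a minimal Python sketch of the linear-drift model associated with the 2008 HP Labs device (Strukov et al.): the resistance is a mix of a doped (low-resistance) and undoped (high-resistance) region, and current flow shifts the boundary between them. The parameter values below are illustrative, not taken from any real device.

```python
# Minimal linear-drift memristor sketch: dw/dt = mu_v * R_ON / D * i(t),
# R(w) = R_ON * (w/D) + R_OFF * (1 - w/D). Illustrative parameters only.

R_ON, R_OFF = 100.0, 16_000.0   # resistance of doped / undoped regions (ohms)
D = 10e-9                        # total film thickness (m)
MU_V = 1e-14                     # dopant drift mobility (m^2 s^-1 V^-1)

def simulate(currents, dt=1e-3, w=0.5 * D):
    """Integrate the state variable w and return the resistance after each step."""
    history = []
    for i in currents:
        w += MU_V * R_ON / D * i * dt
        w = min(max(w, 0.0), D)                      # boundary stays inside the film
        x = w / D
        history.append(R_ON * x + R_OFF * (1 - x))   # series combination of regions
    return history

rs = simulate([1e-6] * 100)      # a steady positive current drives resistance down
print(rs[0] > rs[-1])            # True: and with no current, w (the memory) persists
```

The "memory" is simply that `w` stays where the last current left it, which is why the device retains its resistance state without power.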
The memristor is composed of the transition metal ruthenium complexed with “azo-aromatic ligands.” [emphasis mine] The theoretical work enabling this material was performed at Yale, and the organic molecules were synthesized at the Indian Association for the Cultivation of Sciences. …
I highlighted ‘ligands’ because that appears to be the key difference. However, Wikipedia lists more than one type of ligand:
Ligand, an atom, ion, or functional group that donates one or more of its electrons through a coordinate covalent bond to one or more central atoms or ions
Ligand (biochemistry), a substance that binds to a protein
a ‘guest’ in host–guest chemistry
I did take a look at the paper and did not see any references to proteins or other biomolecules that I could recognize as such. I’m not sure why the researchers describe their device as an ‘organic’ memristor, but this may reflect a shortcoming in the definitions I found, or in my reading of the paper, rather than an error on their part.
Hopefully, more research will be forthcoming and it will be possible to better understand the terminology.
A May 12, 2020 news item on Nanowerk announces new work from scientists at Duke University on making point-of-care diagnostics easier to use by making the readouts brighter,
Engineers at Duke University [North Carolina, US] have shown that nanosized silver cubes can make diagnostic tests that rely on fluorescence easier to read by making them more than 150 times brighter. Combined with an emerging point-of-care diagnostic platform already shown capable of detecting small traces of viruses and other biomarkers, the approach could allow such tests to become much cheaper and more widespread.
Plasmonics is a field of study in which energy is trapped in electron oscillations, called plasmons, on the surface of metal nanostructures such as silver nanocubes. When fluorescent molecules are sandwiched between one of these nanocubes and a metal surface, the interaction between their electromagnetic fields causes the molecules to emit light much more vigorously. Maiken Mikkelsen, the James N. and Elizabeth H. Barton Associate Professor of Electrical and Computer Engineering at Duke, and her laboratory have spent nearly a decade using plasmonics to create new types of hyperspectral cameras and superfast optical signals.
At the same time, researchers in the laboratory of Ashutosh Chilkoti, the Alan L. Kaganov Distinguished Professor of Biomedical Engineering, have been working on a self-contained, point-of-care diagnostic test that can pick out trace amounts of specific biomarkers from biomedical fluids such as blood. But because the tests rely on fluorescent markers to indicate the presence of the biomarkers, seeing the faint light of a barely positive test requires expensive and bulky equipment.
“Our research has already shown that plasmonics can enhance the brightness of fluorescent molecules tens of thousands of times over,” said Mikkelsen. “Using it to enhance diagnostic assays that are limited by their fluorescence was clearly a very exciting idea.”
“There are not a lot of examples of people using plasmon-enhanced fluorescence for point-of-care diagnostics, and the few that exist have not been yet implemented into clinical practice,” added Daria Semeniak, a graduate student in Chilkoti’s laboratory. “It’s taken us a couple of years, but we think we’ve developed a system that can work.”
In the new paper, researchers from the Chilkoti lab build their super-sensitive diagnostic platform called the D4 Assay onto a thin film of gold, the preferred yin to the plasmonic silver nanocube’s yang. The platform starts with a thin layer of polymer brush coating, which stops anything from sticking to the gold surface that the researchers don’t want to stick there. The researchers then use an ink-jet printer to attach two groups of molecules tailored to latch on to the biomarker that the test is trying to detect. One set is attached permanently to the gold surface and catches one part of the biomarker. The other is washed off of the surface once the test begins, attaches itself to another piece of the biomarker, and flashes light to indicate it’s found its target.
After several minutes pass to allow the reactions to occur, the rest of the sample is washed away, leaving behind only the molecules that have managed to find their biomarker matches, floating like fluorescent beacons tethered to a golden floor.
“The real significance of the assay is the polymer brush coating,” said Chilkoti. “The polymer brush allows us to store all of the tools we need on the chip while maintaining a simple design.”
While the D4 Assay is very good at grabbing small traces of specific biomarkers, if there are only trace amounts, the fluorescent beacons can be difficult to see. The challenge for Mikkelsen and her colleagues was to place their plasmonic silver nanocubes above the beacons in such a way that they supercharged the beacons’ fluorescence.
But as is usually the case, this was easier said than done.
“The distance between the silver nanocubes and the gold film dictates how much brighter the fluorescent molecule becomes,” said Daniela Cruz, a graduate student working in Mikkelsen’s laboratory. “Our challenge was to make the polymer brush coating thick enough to capture the biomarkers–and only the biomarkers of interest–but thin enough to still enhance the diagnostic lights.”
The researchers attempted two approaches to solve this Goldilocks riddle. They first added an electrostatic layer that binds to the detector molecules that carry the fluorescent proteins, creating a sort of “second floor” that the silver nanocubes could sit on top of. They also tried functionalizing the silver nanocubes so that they would stick directly to individual detector molecules on a one-on-one basis.
While both approaches succeeded in boosting the amount of light coming from the beacons, the former showed the greater improvement, increasing fluorescence by more than 150 times. However, that method also requires the extra step of creating a “second floor,” an added hurdle to engineering a commercial point-of-care diagnostic rather than a laboratory demonstration. And while the fluorescence didn’t improve as much with the second approach, the test’s accuracy did.
“Building microfluidic lab-on-a-chip devices through either approach would take time and resources, but they’re both doable in theory,” said Cassio Fontes, a graduate student in the Chilkoti laboratory. “That’s what the D4 Assay is moving toward.”
And the project is moving forward. Earlier in the year, the researchers used preliminary results from this research to secure a five-year, $3.4 million R01 research award from the National Heart, Lung, and Blood Institute. The collaborators will be working to optimize these fluorescence enhancements while integrating wells, microfluidic channels and other low-cost solutions into a single-step diagnostic device that can run through all of these steps automatically and be read by a common smartphone camera in a low-cost device.
“One of the big challenges in point-of-care tests is the ability to read out results, which usually requires very expensive detectors,” said Mikkelsen. “That’s a major roadblock to having disposable tests to allow patients to monitor chronic diseases at home or for use in low-resource settings. We see this technology not only as a way to get around that bottleneck, but also as a way to enhance the accuracy and threshold of these diagnostic devices.”
My reference point for date and time is almost always Pacific Time (PT). Depending on which time zone you live in, the day and date I’ve listed here may be incorrect. For anyone who has difficulty figuring out which day and time the event will take place where they live, a search for ‘time zone converter’ on one of the search engines should prove helpful.
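Alternatively, a few lines of Python (3.9 or later, using the standard-library zoneinfo module) will do the conversion. As a sanity check, the 7:30 pm UK Q&A time mentioned below does come out to 11:30 am Pacific:

```python
# Convert an event time between time zones with the stdlib zoneinfo module
# (Python 3.9+). Example: 7:30 pm UK time on 20 May 2020 in Pacific Time.

from datetime import datetime
from zoneinfo import ZoneInfo

uk_time = datetime(2020, 5, 20, 19, 30, tzinfo=ZoneInfo("Europe/London"))
pt_time = uk_time.astimezone(ZoneInfo("America/Los_Angeles"))
print(pt_time.strftime("%Y-%m-%d %H:%M %Z"))  # 2020-05-20 11:30 PDT
```

Note that zoneinfo handles daylight saving automatically (BST and PDT in May), which is exactly the detail that trips people up when converting by hand.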
May 20, 2020 at 7:30 pm (UK time): Complicité’s The Encounter
I received this May 19, 2020 announcement from The Space via email,
Over 80,000 people have watched Complicité’s award-winning production of The Encounter online and now the recording has been made available again – for one week only – in this revival, supported by The Space. You can watch online via the website or YouTube channel [from 15 May until 22 May 2020].
🎧 Enjoy the binaural sound – Make sure you wear headphones to experience the show’s impressive binaural sound design – any headphones will work, but playing it through computer speakers will not give the same effect.
Join in a live Q&A – 20 May – A live discussion event and public Q&A will take place on Wednesday 20 May at 7:30pm (11:30 am PT) with Simon McBurney and guests including filmmaker Takumã Kuikuro (via a link to the Xingu region of the Amazon). Register to join the discussion.
In The Encounter, director-performer Simon McBurney brings Petru Popescu’s book Amazon Beaming to life on stage.
The show follows the journey of Loren McIntyre, a photographer who got lost in Brazil’s remote Javari Valley in 1969.
It uses live and recorded 3D sound, video projections and loop pedals to recreate the intense atmosphere of the rainforest.
In the first live-streamed production ever to use 3D sound, viewers got the chance to experience the atmosphere of one of the strangest and most beautiful places on Earth – all through their headphones.
Complicité is a UK-based touring theatre company known for its imaginative original productions and adaptations of classic books and plays, and its groundbreaking use of technology. The Encounter is directed and performed by Simon McBurney, with Kirsty Housley as co-director.
Saturday, May 23, 2020 from 12 pm – 1:30 pm ET: Pandemic Encounters ::: being [together] in the deep third space
This May 19, 2020 announcement was received via email from the ArtSci Salon, one of the participants in this ‘encounter’, Note: I have made some changes to the formatting,
LEONARDO/ISAST and The Third Space Network announce the first Global LASER: Pandemic Encounters ::: being [together] in the deep third space on Saturday, May 23, 12-1:30pm EDT. This online performance installation is a creation of pioneering telematic artist Paul Sermon in collaboration with Randall Packer, Gregory Kuhn and the Third Space Network. (Locate your time zone)
Pandemic Encounters explores the implications of the migratory transition to the virtual space we are all experiencing. Even when we return to the so-called normal, we will be changed: social interaction, human engagement, and being together will have undergone a radical transformation. In this new work, Paul Sermon performs as a live chroma-figure in a deep third space audio-visual networked environment, encountering pandemic spaces and action-performers from around the world – artists, musicians, dancers, media practitioners and scientists – in a collective response to a global pandemic that has triggered an unfolding metamorphosis of the human condition.
action-performers: Annie Abrahams (France), Clarissa Ribeiro (Brazil), Roberta Buiani (Canada), Andrew Denton (New Zealand), Bhavani Esapathi (UK), Tania Fraga (Brazil), Satinder Gill (US), Birgitta Hosea (UK), Charles Lane (US), Ng Wen Lei (Singapore), Marilene Oliver (Canada), Serena Pang (Singapore), Daniel Pinheiro (Portugal), Olga Remneva (Russia), Toni Sant (UK), Rejane Spitz (Brazil), Atau Tanaka (UK)
The Third Space Network, created by Randall Packer, is an artist-driven Internet platform for staging creative dialogue, live performance and uncategorizable activisms: social empowerment through the act of becoming our own broadcast media.
Those are fabulous toes. Geckos and the fine hairs on their toes have long interested researchers looking to improve adhesion for all kinds of purposes, including climbing robots. The latest foray into this research suggests that it’s not just the fine hairs on gecko toes that are important.
Robots with toes? Experiments suggest that climbing robots could benefit from having flexible, hairy toes, like those of geckos, that can adjust quickly to accommodate shifting weight and slippery surfaces.
Biologists from the University of California, Berkeley, and Nanjing University of Aeronautics and Astronautics observed geckos running horizontally along walls to learn how they use their five toes to compensate for different types of surfaces without slowing down.
“The research helped answer a fundamental question: Why have many toes?” said Robert Full, UC Berkeley professor of integrative biology.
As his previous research showed, geckos’ toes can stick to the smoothest surfaces through the use of intermolecular forces, and uncurl and peel in milliseconds. Their toes have up to 15,000 hairs per foot, and each hair has “an awful case of split ends, with as many as a thousand nano-sized tips that allow close surface contact,” he said.
These discoveries have spawned research on new types of adhesives that use intermolecular forces, or van der Waals forces, to stick almost anywhere, even underwater.
One puzzle, he said, is that gecko toes only stick in one direction. They grab when pulled in one direction, but release when peeled in the opposite direction. Yet, geckos move agilely in any orientation.
To determine how geckos have learned to deal with shifting forces as they move on different surfaces, Yi Song, a UC Berkeley visiting student from Nanjing, China, ran geckos sideways along a vertical wall while making high-speed video recordings to show the orientation of their toes. The sideways movement allowed him to distinguish downward gravity from forward running forces to best test the idea of toe compensation.
Using a technique called frustrated total internal reflection, Song also measured the contact area of each toe. The technique made the toes light up when they touched a surface.
To the researchers’ surprise, geckos ran sideways just as fast as they climbed upward, easily and quickly realigning their toes against gravity. During sideways wall-running, the toes of the top front and hind feet shifted upward and acted just like the toes of the front feet during climbing.
To further explore the value of adjustable toes, researchers added slippery patches and strips, as well as irregular surfaces. To deal with these hazards, geckos took advantage of having multiple, soft toes. The redundancy allowed toes that still had contact with the surface to reorient and distribute the load, while the softness let them conform to rough surfaces.
“Toes allowed agile locomotion by distributing control among multiple, compliant, redundant structures that mitigate the risks of moving on challenging terrain,” Full said. “Distributed control shows how biological adhesion can be deployed more effectively and offers design ideas for new robot feet, novel grippers and unique manipulators.”
The team, which also includes Zhendong Dai and Zhouyi Wang of the College of Mechanical and Electrical Engineering at Nanjing University of Aeronautics and Astronautics, published its findings this week in the journal Proceedings of the Royal Society B.